Author: Anna Chen

  • W1. The Cat System – Anna & Kyra

    Kyra and I are both cat people. We came up with this interaction system based on what happens when we come home: the cat welcoming us!

    Imagine coming home after a day of work or study:

    1. The cat sits at home, waiting for you to come back
    2. You come home; at first the cat hangs back in the background
    3. The cat approaches you
    4. As you pet your cat, the cat seeks more attention

    We chose this system because we both have cats. There are multiple ways to interact with our cats, including petting, feeding, speaking, and smelling, and the cat interacts with us in return; these interactions are mutual between the person and the cat. Since people have special bonds with their pets (not just cats), interacting with them provides more than just momentary reactions and emotions. While these interactions are fleeting in physical space, the emotional memory stays for much longer. When petting a cat, for example, the experience goes beyond the physical sensation of soft fur under the hand. It elicits a sense of comfort, warmth, and connection that resonates deeply, creating a calming and fulfilling emotional response.

  • Final Project Proposal

    For my final project, I wanted to combine timbre studies with harmonic studies. Continuing with the cat and cat language from the timbre study, I wanted to turn that into a more logical and systematic conversational mini-game.

    Timbre Study
    Harmony Study

    By using the ChatGPT API and the Tone.js sampler, you can “communicate” with the characters in the game in the form of meows. In addition, since this is a music class, I also wanted to incorporate some musical elements, such as having our main character, a cat, play different musical notes as it walks around, and letting you ask a game character to write a song for you in the dialog.
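    A minimal sketch of how the meow side could work, assuming the reply text has already come back from the ChatGPT API. The pitch set, spacing, and sample file name are all invented for illustration; only the text-to-note mapping is shown.

```javascript
// Sketch: turn a chat reply into a sequence of timed "meow" notes.
const MEOW_PITCHES = ["C4", "D4", "E4", "G4", "A4"]; // assumed pentatonic set

function replyToMeows(reply) {
  return reply
    .split(/\s+/)
    .filter(Boolean)
    .map((word, i) => ({
      note: MEOW_PITCHES[word.length % MEOW_PITCHES.length], // pitch picked from word length
      time: i * 0.3, // seconds between meows (assumed spacing)
    }));
}

// In the browser, a Tone.Sampler loaded with one meow recording could play them:
//   const sampler = new Tone.Sampler({ urls: { C4: "meow.mp3" } }).toDestination();
//   replyToMeows(replyText).forEach(({ note, time }) =>
//     sampler.triggerAttackRelease(note, 0.25, Tone.now() + time));
```

    The sampler repitches a single recording, which is why one meow file is enough to cover the whole pitch set.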
    Regarding the timeline of this project, I plan to finish debugging the API and the basic game code (controlling the main cat as it walks around, interacting with game characters, and holding a conversation) by December 1st, write all the program classes by December 3rd, and spend the last week improving the interface and other details.

    Sketched Interface
  • 6.3 Harmony Study—Line’s a Pirate!

    Based on the examples and assignments from class, I set out to use code to play the song I picked last week, He’s a Pirate. So first I went and found some sheet music.

    Then I wrote them out as notes and chords in a format Tone.js would recognize.
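    One way the transcription can be laid out for Tone.js is as an array of events in its "bars:beats:sixteenths" time format. The notes below are placeholders, not the actual transcription:

```javascript
// Sketch: a melody as timed events that a Tone.Part can schedule.
const melody = [
  { time: "0:0:0", note: "D4", dur: "8n" }, // bar 0, beat 0
  { time: "0:0:2", note: "D4", dur: "8n" }, // bar 0, beat 0 + two sixteenths
  { time: "0:1:0", note: "E4", dur: "4n" }, // bar 0, beat 1
];

// In the p5.js sketch these would feed a Tone.Part:
//   const part = new Tone.Part((t, ev) => {
//     synth.triggerAttackRelease(ev.note, ev.dur, t);
//   }, melody).start(0);
//   Tone.Transport.start();
```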

    After that, I started incorporating the chords and rhythms into p5.js. It worked at first, but something went wrong with the Transport timeline: even though I set the time signature to 6/8, certain parts of the timeline still couldn’t be read. I assumed that the first number indicated the bar and the second the beat, so that with a 6/8 time signature there would be six beats per bar, but it turns out Tone.js doesn’t interpret it that way. I couldn’t find any more information about the Transport timeline either, so I’ll just have to live with it.
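    For reference, my best guess at what went wrong: Tone.js appears to normalize the time signature to quarter-note beats per measure, so 6/8 becomes 3, and the middle number in its bars:beats:sixteenths times counts quarter notes rather than eighths. A tiny sketch of that conversion (my reading of Tone.js’s behavior, so treat it as an assumption, not official API documentation):

```javascript
// Tone.js seems to store the time signature as quarter-note beats per measure,
// e.g. Tone.Transport.timeSignature = [6, 8] behaves like 3 beats per bar.
function quarterNoteBeatsPerMeasure(numerator, denominator) {
  return numerator / (denominator / 4);
}

// Under this rule a 6/8 bar is only 3 quarter-note beats long, so a transport
// time like "0:4:0" is already past the end of the bar instead of being the
// fifth eighth note, which would explain the unreadable parts of the timeline.
quarterNoteBeatsPerMeasure(6, 8); // → 3
quarterNoteBeatsPerMeasure(4, 4); // → 4
```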

    After completing the music section, I reused the consonant/dissonant visuals in this project. As a result, it really complements the music visually.

  • Class 20 | Metaverse: Architecture and Governance

    The piece that struck me the most was how We Feel Fine and Listening Post managed to turn cold, impersonal data into something that felt strangely alive and deeply human. We Feel Fine pulled me into a swirling mass of emotions, each one plucked from the online confessions of strangers—moments of joy, heartbreak, hope, and loneliness, all there for me to explore. At first, it felt like a secret window into the soul of the internet, but as I clicked and scrolled, I started to wonder whether the act of cataloging these feelings stripped away some of their meaning, turning them into nothing more than colorful dots on a screen, something to look at but not quite touch. The connection it offered felt fragile, as if it could dissolve the moment I looked too closely, leaving me questioning whether human emotions really belong in the rigid structures of data.

    Listening Post, on the other hand, didn’t try to offer meaning or connection—it gave me chaos. Standing in front of its glowing screens, watching snippets of live internet chatter flash before my eyes while robotic voices read them aloud, I felt like I was sinking into the noise of the digital world, the same noise that surrounds me every day but made visible, made impossible to ignore. There was no tidy narrative, no promise of understanding; it was just a flood of disjointed, fleeting thoughts, some mundane, some profound, all overlapping in a way that felt overwhelming yet honest. And while I stood there, trying to make sense of it all, I realized that maybe this was the point: to confront the sheer volume of our digital lives, to sit with the mess and accept that not everything is meant to be understood.

    Both works stayed with me long after I left, lingering like questions I couldn’t quite answer. They made me think about the systems we rely on to organize our world, systems that feel so clean and efficient but often hide the messy realities underneath. How much do we give up when we trust data to tell us who we are, and how much of ourselves do we lose when we try to fit into its tidy categories? Maybe it’s not about rejecting data entirely, but about remembering that it’s never the full picture, never the final answer—it’s only what we choose to make of it.

  • Class 14 | Networks

    Avatar created by Ready Player Me

    As a loyal RPG fan, I’ve had countless digital avatars over the years. It’s crazy to think about how fast the trend of digital avatars has grown in the past decades. Now, it feels like every game has some kind of marketplace for character skins or outfits, and despite being nothing more than data, some of these skins sell for anywhere from ten to fifty bucks. If you told someone twenty years ago that they’d be spending $10 just to change how their character looks in a game, they’d probably laugh and think it was ridiculous. But here we are, and more and more people are willing to spend money to make their characters look unique or even stand out from the crowd.

    World of Warcraft (picture found online)

    This phenomenon isn’t just a trend—it’s become a major part of the gaming industry. The roots of this can be traced back to games like World of Warcraft and Final Fantasy, where people started to value not only their characters’ skills and abilities but also how they appeared in the game. Fast forward to today, and we have platforms like VRChat, where entire social interactions revolve around your avatar’s appearance. This surge in digital avatars has opened up whole new industries and business opportunities. Just like in the real world, things that people buy and use in the physical space can be sold again in the virtual world—whether it’s clothing, accessories, or even real estate in some metaverse-like platforms. It’s become a new frontier for people to express themselves, make connections, and even build virtual businesses. 

    Final Fantasy XIV (picture found online)

    And I think one of the coolest things about this digital world is that it’s not limited by geography. In VRChat, for example, you can meet people from all over the world who speak all kinds of languages. It’s a space where people can come together without the usual physical boundaries, making it a powerful platform for global interaction, allowing people to break away from their real-world limitations. In these virtual spaces, you can become anything you want. You can choose to represent yourself however you see fit, which can be incredibly liberating for a lot of people. Whether it’s a reflection of your personality or an escape from your everyday life, digital avatars give you the freedom to create your ideal self.

    VRChat (picture found online)

    As for my own digital avatars, I’ve spent plenty of time crafting them over the years. The screenshots below are my digital avatars in the fantasy RPGs I play. I tend to make my avatars as flashy as possible, and I don’t limit myself to one digital gender. I’ve created both male and female avatars, but I find myself choosing female avatars more often. It feels more fitting and more aligned with who I am in these worlds, like a reflection of how I see myself in these digital spaces, where I can express a side of me that feels more authentic. For other people, it might be the complete opposite: they’d rather make their avatars look nothing like their real-world selves. This evolution of digital avatars has been fascinating to watch, and I’m excited to see where the trend goes next.

    Pictures of a Chinese RPG I play and screenshots of my digital avatars

  • 4.3 + 4.4 Melody Study

    When I first conceptualized this assignment, I wanted to create a real-time, gesture-based interactive study where you could compose a melody. However, after looking at the class resources on pitch detection, I thought that would be fun too, so I chose option 2. My idea was to have a dandelion blown into the air, with the goal of helping it fly as far as possible. The player’s voice would be analyzed with pitch detection, ideally tracking the exact pitch. The game was inspired by Flappy Bird, but with sound controlling the dandelion.

    At first, I wanted to use pitch detection to control the game, but I ran into issues with essentia.js, as the detected pitch fluctuated too much. For example, one moment it would be around 300 Hz, and the next, it would spike to over 4000 Hz, making it impossible to map correctly. Because of this, I had to use volume instead to control the dandelion’s vertical position.
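    One common way to tame spikes like that is a small median filter over the last few pitch readings, so a single jump from ~300 Hz to 4000 Hz never reaches the mapping. A sketch (the window size is an arbitrary choice, and the essentia.js wiring is omitted):

```javascript
// Median filter over the last N readings: a lone outlier frame is ignored
// because the median of the window is taken, not the latest value.
class MedianFilter {
  constructor(size = 5) {
    this.size = size;
    this.buf = [];
  }
  push(value) {
    this.buf.push(value);
    if (this.buf.length > this.size) this.buf.shift(); // keep a sliding window
    const sorted = [...this.buf].sort((a, b) => a - b);
    return sorted[Math.floor(sorted.length / 2)]; // middle of the window
  }
}

const filter = new MedianFilter(5);
// A 4000 Hz spike surrounded by ~300 Hz readings never shows up in the output.
const smoothed = [300, 310, 4000, 305, 298].map((hz) => filter.push(hz));
```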

    Since this is a MELODY assignment, I also wanted to integrate sound into it, and I thought of playing different pitches based on the y-value of the flower. But the flower’s position changes really fast, sometimes spiking to the top in a split second, so I discarded that option. Then I wondered if I could use essentia.js to detect the pitch of my voice and play a note of the same pitch as I flew past each tree: for example, if I’m singing a C, then when the flower flies over a tree, the tree would also play a C. But this didn’t work either, because the notes detected by essentia.js weren’t accurate enough, and the approach was too complicated for me to implement.
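    For what it’s worth, the match-my-singing idea can be sketched with the standard frequency-to-MIDI conversion: round the detected frequency to the nearest equal-tempered note before using it (the essentia.js side is omitted; only the snapping step is shown):

```javascript
// Snap a detected frequency (Hz) to the nearest equal-tempered note name,
// using the standard MIDI formula: midi = 69 + 12 * log2(f / 440).
function hzToNote(freq) {
  const NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"];
  const midi = Math.round(69 + 12 * Math.log2(freq / 440));
  return NAMES[midi % 12] + (Math.floor(midi / 12) - 1); // name + octave
}

hzToNote(440);    // "A4" — concert A
hzToNote(261.63); // "C4" — middle C
```

    Snapping like this absorbs small detection errors, though it can’t fix readings that are off by more than half a semitone.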

    And this is what I have for the final outcome.

    PS: This project was really hard on my voice because I had to howl constantly. If possible, please don’t watch my video, it’s so embarrassing—play it and experience it yourself!

    Click the image and start playing