Project Hyperessay #1 – Swappie 2.0


In Internet Art & Culture in Semester I, I felt like I had merely dipped my toes into the idea before the semester ended, and unfortunately, so did my endeavour with face-swapping. In Swappie, I swapped the faces of my friends on the Internet, compiled them on this Tumblr site and in the Swappie Facebook Group, and had so much devious fun reading and watching the reactions of friends and mutual friends online. The responses I received were mostly positive, with people requesting more and even submitting their own photos for me to swap. Thanks to the blanket of anonymity the Internet provides, I was able to create some mystery as to who was grabbing and posting all these photos, and it became a talking point in ADM. I joined the Media & Performance class in Semester II in hopes of taking the idea of Swappie a step further.

I personally feel that it is what we experience through the senses that makes life meaningful. In fact, it can be said that what is experienced through the senses is life itself. Two senses, seeing and hearing, form the basic fundamentals of life and living, and the impressions obtained through these two senses are, in my opinion, essential to a performance. To get my point across, please enjoy this video of Charlie Chaplin in his silent comedy The Lion's Cage. The use of orchestral music creates the dramatic effect of him being stuck in a cage with a dangerous animal.

Now, watch the video below, where the sound has been edited to feature sound effects and song cuts that have nothing to do with the clip (e.g. when the lion is shown sleeping we hear "soft kitty, warm kitty, little ball of fur"). This invokes in us a different impression of, and feeling about, being stuck in the cage with the animal. This isn't the best example, but I chose this clip for the sake of explanation 🙂

The idea of a performance piece is to pre-plan the experience; to arrange the sound effects and the music that accompany the piece. However, in Swappie 2.0: Sound Edition (haha!), the idea is to pair a video with an incongruous sound that alters the viewer's experience and impression of the video altogether. You won't be able to predict the sound that might accompany the clip (although your mind already knows what it expects to hear, i.e. a gun goes bang).

In the same way, in my Pixel8 Disembodied Max video project, you react differently to the stretch from 0:01–0:05, when there is no sound, than to the part after, when sound is introduced. As the Super Mario game sounds play, your mind associates my video with a 16-bit game. Without the addition of sound, the video would not have made "as much sense". This is because sight and sound usually go together, in media and performance as well as in TV and Internet culture. Thus, I would be thrilled to embark on a journey of sourcing short video clips and editing the sound that goes with them into a compilation – probably through Vimeo, Tumblr or Vine.

Micro-project III: Disembodied – Pixel8


My Disembodied Micro-project is entitled Pixel8, or rather Pixellate. The reason: when I was fiddling around with the Max 7 patch, I realised that it could duplicate my main image into many smaller copies while still retaining the basic outline of my figure, and that piqued my interest.

I started thinking about old-school arcade games and computers whose displays were simple images in the form of pixels (quite unlike our HD videos today). Then I thought about how I loved playing Super Mario on my GameBoy when I was younger, and how the lo-fi graphics did not bother me (as much as they would today; I'd be pissed if graphics were low resolution now).

The pixels were formed as a product of many vertical and horizontal repetitions of my image, and the "loading screen-esque" movement was achieved by simply waving my arms in front of my webcam like a maniac. I performed a short skit of what a Super Ruzana game would look like. I do realise that my game doesn't look very fun to play compared to Super Mario 😛 but I did have fun creating the poses and experimenting with the different elements in Max that created this chain of effects.
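For anyone curious how the repeated-image effect works outside Max, here is a rough analogue in Python/NumPy. This is my own sketch, not the actual patch: the function name, the grid size and the crude striding downsample are all assumptions, but the idea is the same: shrink the frame, then tile the small copy back out to full size.

```python
import numpy as np

def repeat_grid(frame, rows, cols):
    """Shrink a frame and tile it into a rows x cols grid,
    mimicking the duplicate-and-repeat effect from the Max patch."""
    h, w = frame.shape[:2]
    # naive downsample by striding (keeps the basic outline of the figure)
    small = frame[::rows, ::cols]
    # tile the small copy so the output matches the original frame size
    tiled = np.tile(small, (rows, cols) + (1,) * (frame.ndim - 2))
    return tiled[:h, :w]

# a fake 8x8 greyscale "webcam frame"
frame = np.arange(64, dtype=np.uint8).reshape(8, 8)
out = repeat_grid(frame, 4, 4)
print(out.shape)  # (8, 8): same size, but made of 4x4 repeated copies
```

Waving your arms just changes every small copy at once, which is what gives the whole grid that "loading screen" shimmer.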

Micro-Project: Max Net Appropriation


I’m the kind of person who likes to idly scroll my Facebook news feed or Instagram feed when I’m bored. This way, I can see what my friends are up to and how life is treating them. Most of my friends are avid posters of selfies, so I get to see what my friends look like without actually being with them in real life. I wanted to explore this concept further in MAX, which let me see even more selfies (even of people I do not know) by searching the Flickr stream for photos tagged “selfie” or “wefie” and seeing how many of these are posted. Thank you August Black and Randall Packer for creating a MAX patch that a software n00b like me can easily digest 😛


This video is based on August’s original Flickr Search patch that was e-mailed to us. I then followed the instructions for audio and visual manipulation.

[Screenshot: the original, unedited MAX patch]

[Screenshot]
First, I entered the two search queries: “selfie” and “wefie”.

[Screenshot]

The “movie” was moving at a speed too fast for me to screenshot, so I adjusted the value from 150 to 250:

[Screenshot]


[Screenshot]

Changing the presets made the video look like it had filters applied, like the ones you would get on Instagram.
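Under the hood, these preset “filters” boil down to colour transformations of the image. Here is a minimal sketch of the idea in Python/NumPy, using a classic sepia colour matrix; this is my own illustration, not one of the actual Max presets.

```python
import numpy as np

# a common sepia approximation (my example, not a preset from the patch)
SEPIA = np.array([[0.393, 0.769, 0.189],
                  [0.349, 0.686, 0.168],
                  [0.272, 0.534, 0.131]])

def apply_filter(rgb, matrix):
    """Apply a 3x3 colour matrix to an H x W x 3 image, like a filter preset."""
    out = rgb.astype(float) @ matrix.T  # mix each pixel's R, G, B channels
    return np.clip(out, 0, 255).astype(np.uint8)

pixel = np.array([[[100, 150, 200]]], dtype=np.uint8)  # one blue-ish pixel
filtered = apply_filter(pixel, SEPIA)
print(filtered)  # the pixel shifted towards warm sepia tones
```

Cycling through different matrices over time would give the gradual filter changes I recorded.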

[Screenshot]

I also renamed the patch to Ruzana’s Flickr Search before turning on the audio.

[Screenshot]
The final MAX Patch Product…

[Video: my MAX working file]

[Video: the final video]

My concept for this video is inspired by selfies/wefies on Instagram. On Instagram, you’re able to choose from a vast array of image filters to pimp your selfie. I adjusted various parameters in the Max file to create a gradual change of filter and recorded it. The video starts off slow, representing the beginning of selfie culture, and then gradually becomes faster, with the filters so extreme that you can’t even see the faces in the images anymore. The accompanying sound also stops and sounds “glitched” at some points, then becomes really, really fast. This is to represent how something as innocent as taking a picture of yourself and uploading it onto social media may erupt and evolve into something bigger: an issue of self-objectification.

I find the idea of appropriating real-time images by searching for them on the Internet very interesting. Prior to this, I had no idea you could generate audio based on visual information and transpose data from one media type to another. I’ve learnt a lot from this exercise, though it proved rather challenging at first 🙂 I’m happy I pulled through!
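As I understand it, “transposing” visual data into audio just means mapping pixel values onto sound parameters. Here is a toy sketch of that idea in Python; the brightness-to-pitch mapping and all the numbers are my own assumptions, not how August’s patch actually works.

```python
import numpy as np

SAMPLE_RATE = 8000  # Hz, kept low so the example stays tiny

def sonify(pixels, note_len=0.05):
    """Map each pixel's brightness (0-255) to a sine-tone pitch
    between 200 Hz and 2000 Hz, and string the tones together."""
    t = np.linspace(0, note_len, int(SAMPLE_RATE * note_len), endpoint=False)
    tones = []
    for value in np.asarray(pixels).ravel():
        freq = 200 + (value / 255) * 1800  # brighter pixel -> higher pitch
        tones.append(np.sin(2 * np.pi * freq * t))
    return np.concatenate(tones)

# a tiny 2x2 "image": dark, mid, mid, bright
audio = sonify([[0, 128], [128, 255]])
print(audio.shape)  # (1600,): four 0.05 s tones at 8 kHz
```

A stream of Flickr selfies fed through something like this would glitch and speed up exactly the way the patch’s audio did.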