Real Time Aggregation.

 

15 min of randomness in ADM

Posted by ZiFeng Ong on Thursday, 17 August 2017

Before the Live Video:

This was the first time I did a Facebook Live video, and I felt like a caveman who didn’t know how it even works. I didn’t even know how to set my profile to public, since I had set it to private years ago. I had, however, taken part as an audience in my friends’ Live videos and kind of liked it, because people reacted almost instantly to the comments I wrote. But the idea of going live on Facebook was really scary for me, as I did not have the confidence to produce a quality video that people would like. What if there were no viewers at all? There are also people in my friend list with whom I am not very well acquainted, and I didn’t feel good putting up a live video where they would probably not get what was going on. But oh well~ what needs to be done must be done.
*clicks the red button*

During the Live Video:

At the start of the live video, I was really lost and did not know what to do, so I followed my classmates out of the classroom. It felt weird because my friends were all talking happily to their phones while I had nothing to say, so I just filmed them in silence. The anxiety of putting myself live on the web slowly dwindled as soon as I saw our IM juniors. Since they are from IM, it felt like there was a connection with them, just because I knew they would be doing this the next year, just like how I saw our senior Nathanael do it last year.

 

And then, I was surprised that the first viewer of this live video was also Senior Nathanael. I was touched that at least someone was watching, so I went to say hi to him in the video. At that point, I did not know about the function to switch cameras (because I’m a caveman), so I physically turned the phone around to show my face. I had no intention of showing my face at all, as I planned to just film what was happening without showing myself, but it just happened naturally.

After exposing myself in the video (2:23), it felt OK, since I had done it once and broken the barrier of filming in the third person. Suddenly my phone became something I was interacting with: instead of me using my phone to record and tell a story, it developed into me talking to the phone, and the phone extended itself from being “part of me” into “an entity separate from me”. Which is really bizarre, as all I did was show myself in the video, yet my perspective changed so much.

After going around looking at what the second years were doing, semi-introducing what I was doing and filming others, it came to a point where I started to do random things and became numb to the fact that people would judge me while I was live (my anxiety before the start of the Live video). I think this was partly because:
1) People around me were doing the same thing.
2) I had done it for a couple of minutes and was slowly getting used to it.
3) The pace of the live video was fast and there wasn’t time to pre-think what to do.
4) The viewers were commenting on random events, which gave me a feeling that I was not alone.
5) I felt like I had some responsibility to entertain the viewers.

Therefore I did some random (disturbing) acts just because I could. (Just to clarify, I don’t go around squeezing people’s butts in real life.)

I thought it was funny, and looking back now, it’s still kind of funny, albeit with the creepy movements. Just what the heck was I doing. LOLLL

And then I proceeded to do normal activities like walking around, talking to people and trying to introduce ADM to the viewers who are not from ADM, like my ex-classmates and such.


I also really liked the part where I was filming what CherSee was filming: he was on his front camera while I was on my back camera, so I could see myself on his screen, and I was recording myself recording myself, like a video inception (left). But in the video he uploaded (right), it is only a one-way shot of my face trying to get into the centre of the frame while filming him. This is really mind-boggling and cool in its own way; with both of us live-videoing each other, it felt like real-time magic or something.

At the end of the 15 minutes, while I was walking back, I returned to the IM juniors’ table, and I only noticed the difference when I rewatched the video: I was filming myself!! Compared to when I had just started this live video and was only filming the juniors, now I was in the video, showing myself to the public, and talking much more. (At the start, I was just waving to them without saying hello and only briefly replying in short sentences about the live video; towards the end, I was just talking aimlessly.)
*Thanks Bridgel, Tisya and Sylvester for entertaining me.*

At the end of the video, I had a slight panic when I stopped it, as I was afraid I would do something wrong and accidentally delete the footage. Since it was live, there is no way I could replicate it if it somehow got deleted, so a live video felt more precious to me than any other video I have taken.

After the Live Video:

Prof Randall showed us the 4×3 grid video wall with 12 of our videos playing at the same time. I loved the way everything overlapped, but none of the videos’ time frames were in parallel (the same exact event happened at different timings across the video wall) and it was totally uncoordinated. In a sense, it was chaos, but there was a rhythm within the chaotic mass, which made it much more interesting than looking at a single video at a time. The visuals and audio of all 12 videos were overwhelming, and just when I was focusing on a single video within the wall, I would suddenly hear something interesting happening and glance around to find which video produced the sound, since there must be something interesting happening in it. This gave me a sense of treasure hunting.

In Conclusion

I love the way our individual 15-minute efforts were placed together to form a “something greater than yourself” piece of work. These 15 minutes of live video were really fun and enriching; they somehow changed me and gave me some confidence for the future Live videos that I will be doing. At least I won’t start the next Live video with anxiety and get lost right after clicking the red button.

 

Note to self: I should talk more in the live video.

Year 2 sem 2 – Interactive 2 – Exercise 4, Four Eye Monster

In this exercise, I used the location from the face tracker, cropped it to only the eye area, scaled it accordingly, and delayed the video of the eyes using jit.matrixset. After that, I mapped it back onto the original location of the face, creating the trippy effect of a four-eyed monster.
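Since a Max patch is hard to show as text, here is a rough Python sketch of the same idea (just a mock of the logic, not the actual patch; the buffer length, the eye-band proportions and the paste position are my own made-up assumptions):

from collections import deque
import numpy as np

DELAY_FRAMES = 15                        # how far behind the extra eyes lag (made up)
eye_buffer = deque(maxlen=DELAY_FRAMES)  # stands in for the frame delay I get from jit.matrixset

def four_eyes(frame, face_box):
    """frame: H x W x 3 uint8 array; face_box: (x, y, w, h) from a face tracker."""
    x, y, w, h = face_box
    # Assume the eyes sit roughly in the upper-middle band of the face box.
    ex, ey, ew, eh = x, y + h // 4, w, h // 4
    eye_buffer.append(frame[ey:ey + eh, ex:ex + ew].copy())

    out = frame.copy()
    delayed = eye_buffer[0]              # oldest strip = the delayed pair of eyes
    ty = ey + eh                         # paste just below the real eyes
    ph = min(delayed.shape[0], out.shape[0] - ty)
    pw = min(delayed.shape[1], out.shape[1] - ex)
    out[ty:ty + ph, ex:ex + pw] = delayed[:ph, :pw]
    return out

# Quick test with a fake grey frame and a made-up face box.
frame = np.full((120, 160, 3), 128, dtype=np.uint8)
print(four_eyes(frame, (40, 20, 60, 80)).shape)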

There are a few things that I was unable to solve:

1 – the alpha mask gives a rectangular box, and I’m unable to remove it even with blur.

2 – the additional eyes are too sensitive and they look like they’re vibrating; maybe I could “line” them afterwards. I added the eye tracking from the values of the cv.jit.faces box, and while doing it, I realised it looks really cool as an effect just using cv.jit.faces with srcdimstart and srcdimend. It feels like a really old way of filming a drunken scene.

 

Year 2 sem 2 – Interactive 2 – Exercise 3, Photo Booth

This was the hardest Max patch we have written up till now, as it is logic-based. LPD taught us about the functions of the different commands:

Clocker (used in the patch to count down the 3, 2, 1)
Timer
Speedlim (used in this patch to prevent the sound from looping endlessly)
Pipe
Select
Split (used in this patch to split zero from all other numbers, hence when there is no value = 0 = do nothing)
Route
Trigger
Gate (I tried to use this, but didn’t fully understand it, so I used an if statement instead)
Onebang (my favourite) (makes sure there is only one bang until it is reset)
Counter
Line
LoadBang
Loadmess
Scale
Expr (to put in a mathematical expression, e.g. (x1 + x2)/2)
Pack and Pak

All of these are basic tools, and if one can use them to their fullest by chaining different objects together, a lot of cool stuff can be done with just these objects.

This is the main patcher in my patch; the rest of the patch is very similar to the previous exercise. In summary, the flow of this patcher is:

[screenshot: 2017-02-20-19-26-31]

  • get the square value from the face tracker and unpack it
  • find the middle of the square in terms of its X value and Y value
  • split into 5 zones:
  •  1) Middle Zone (MZ) (the zone where the X and Y values are around the centre; for my patch it is 60 < X < 80 and 50 < Y < 70)
  •  2) X higher than MZ
  •  3) X lower than MZ
  •  4) Y higher than MZ
  •  5) Y lower than MZ
  • For zones 2 to 5, send the number straight to the audio player.
  • For zone 1, send the number to the audio player, and if the value stays in the zone for 3 seconds without X or Y going out of it (using counter and if $i1 >= 3000), take a photo; else stop the timer (and when the timer resumes, it will automatically start from 0).
  • Lastly, I used a single playlist for the audio player just because I don’t want the patch to play more than one sound clip at any given moment. The only way for a different sound clip to play is either:
    1) when one clip finishes playing and the condition is still the same, it plays back the same sound clip after 2000 ms, or
    2) when the condition changes while a sound clip is playing, it stops the currently playing sound clip and plays another one instantly.

 

I hope the flow is relatively clear. Through this exercise, I learnt mainly about logic flow, which I think is the most important thing in every kind of coding.
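To make the flow above a bit more concrete, here is a rough Python mock of the same logic (not the actual Max patch; only the middle-zone bounds and the 3-second hold come from the patch, while the zone priorities, names and reset behaviour are my own assumptions, and the 2000 ms replay of the same clip is left out):

import time

def zone_of(cx, cy):
    # Middle-zone bounds are from the patch: 60 < X < 80 and 50 < Y < 70.
    if 60 < cx < 80 and 50 < cy < 70:
        return "middle"
    if cx >= 80:
        return "x_high"
    if cx <= 60:
        return "x_low"
    return "y_high" if cy >= 70 else "y_low"

class PhotoBooth:
    HOLD_MS = 3000                      # stay in the middle zone this long to snap a photo

    def __init__(self):
        self.current_zone = None
        self.middle_since = None

    def on_face(self, face_box):
        x, y, w, h = face_box
        cx, cy = x + w / 2, y + h / 2   # middle of the face square
        zone = zone_of(cx, cy)

        if zone != self.current_zone:   # condition changed: switch sound clips instantly
            self.current_zone = zone
            self.play_clip(zone)
            self.middle_since = time.monotonic() if zone == "middle" else None
        elif zone == "middle":          # still holding steady in the middle zone
            held_ms = (time.monotonic() - self.middle_since) * 1000
            if held_ms >= self.HOLD_MS:
                self.take_photo()       # roughly what the counter + onebang gate does in the patch
                self.middle_since = time.monotonic()

    def play_clip(self, zone):
        print("play clip for zone:", zone)

    def take_photo(self):
        print("*click* photo taken")

booth = PhotoBooth()
booth.on_face((55, 45, 20, 20))         # centre (65, 55) falls in the middle zone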

Year 2 sem 2 – Interactive 2 – Exercise 2, Le Peeping Tom

Featured image warning: “sorry I retard I laughed at myself LOLLLOLL”

This exercise was built on top of the previous one: we could reuse the majority of exercise 1’s code and add the playback of another video on top, playing the frame corresponding to the x position of the detected face.

I also added an “if” statement which makes the video “look” at the middle when no face is detected, and “line”-d it so that the face turns gradually (only somewhat, as I do not have enough frames in my clip to make it really smooth).
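Roughly, the mapping looks like this in Python (just a sketch of the idea, not the Max patch; the frame count, camera width and smoothing factor are made-up numbers):

FRAME_COUNT = 60        # frames in the head-turning clip (made up)
CAM_WIDTH = 160         # width of the tracker image (made up)
SMOOTHING = 0.2         # how quickly we glide towards the target frame

current_frame = FRAME_COUNT / 2          # start off looking at the middle

def target_frame(face_x):
    """Map the face x position to a frame index; None means no face detected."""
    if face_x is None:                   # the "if" branch: look straight ahead
        return FRAME_COUNT / 2
    return face_x / CAM_WIDTH * (FRAME_COUNT - 1)   # what scale does in the patch

def tick(face_x):
    """One update step; the line-style smoothing makes the head turn gradually."""
    global current_frame
    current_frame += (target_frame(face_x) - current_frame) * SMOOTHING
    return int(round(current_frame))     # the frame to jump the playback to

for x in (None, 20, 80, 150, None):      # a face drifting across the camera, then gone
    print(tick(x))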

 

In this exercise, I learnt how to:

  • play back a certain part of a video
  • use the “if”, “then” and “else” statements properly in Max

[screenshots: 2017-02-06-19-13-25, 2017-02-06-19-13-39, 2017-02-06-19-13-50, 2017-02-06-19-14-22]

Year 2 sem 2 – Interactive 2 – Exercise 1, Mirror Mirror on the.. screen.

This is the first Max exercise and we are all new to this program (we were kind of used to script coding and this is really new! YAY TO NO MORE SCRIPT WRITING!! hopefully).

“your face is so ugly that the mirror blacks out when you are too near”

Principles behind this project (a rough sketch in code follows the list):
1 – turn the camera on
2 – detect a face using the camera
3 – work out how big (near) the face is in relation to the image
4 – dim the image more the bigger (nearer) the face is in relation to the image
5 – when there is no face, set the brightness back to the normal value, i.e. 1
6 – in the event of rapid transitions between jumping numbers (face and no face), gradually dim or brighten the image
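Here is that sketch as rough Python, covering principles 3 to 6 (a mock of the logic only, not the Max patch; the face-width range and smoothing factor are made-up numbers):

MIN_FACE_W, MAX_FACE_W = 20, 140   # face widths mapped to full / zero brightness (made up)
SMOOTHING = 0.1                    # gradual change, like the line object

brightness = 1.0

def target_brightness(face_box):
    """A bigger (nearer) face means a darker image; no face means normal brightness of 1."""
    if face_box is None:
        return 1.0
    _, _, w, _ = face_box
    t = (w - MIN_FACE_W) / (MAX_FACE_W - MIN_FACE_W)   # what scale does in the patch
    return 1.0 - min(max(t, 0.0), 1.0)

def tick(face_box):
    """Smooth the jumpy face / no-face values so the mirror dims and brightens gradually."""
    global brightness
    brightness += (target_brightness(face_box) - brightness) * SMOOTHING
    return brightness                  # multiply each video frame by this value

for box in (None, (10, 10, 120, 120), None):   # no face, a big (near) face, no face again
    print(round(tick(box), 2))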

I personally think Max is relatively easy to pick up but hard to master, as it is logic-based (at least I think it is, for now). The biggest problems with Max are that 1 – the documentation is hard to find compared to other coding languages, and 2 – you have to know the syntax of an object before you can do anything with it, and *read number 1 again*.

There are a few differences between Max and Processing. I love that Max is multi-threaded, which means it can do many different tasks at a single moment, and that makes sense visually (things happening at the same time can be put side by side).

Some really useful stuff I learnt in this exercise:

  • jit. – a useful library
  • cv.jit. – a library that needs to be downloaded
  • unpack – takes a package of many numbers and splits them into individual numbers
  • Split – to be honest, I don’t really know how this came about in my patch; I copied it from somewhere, put it in and it works (shall find out next time)
  • scale – to map the input and scale it to the range of the output
  • line – to “smoothen” the transition from one number to another

Below is the system I used. The patcher “p unpack&split” is the math I used to calculate where the face is and the best, smoothest transition between faces.
[screenshots: 2017-02-06-19-15-34, 2017-02-06-19-15-54, 2017-02-06-19-16-07, 2017-02-06-19-16-38]