[Aura Voir] Interactive II: Final

Initial Concept

In this project we initially wanted to create a smart mirror that combines a wardrobe assistant with additional functions like a mood board. The user would rely on the mirror as a daily helper for styling their outfit. By offering different themes and genres to suit the user's mood and outfit requirements, the mirror can display specific types of clothes that match the user's chosen mood, each with its own colour palette. Both genders can fully utilise the mirror, as it also provides gender-specific fashion advice.

Creating

Our project works by incorporating a partial (one-way) mirror film, a camera to detect motion, and speakers for ambient music. These functions are built in to maximise the user's experience with the mirror. For this project, we are using Max/MSP to create motion regions that detect pixel change for the selection of options.

Problems/ Brainstorming

During the creation of this project, we encountered several problems, such as weak design factors and the inability to incorporate a more user-friendly interface. We wanted to make this project more elaborate and unique so that users could experience something that does not yet exist in the current market. However, our initial concept closely resembled many smart mirrors already on the market, so we wanted to deviate from the norm.

Idea Development

Instead of creating something technically challenging and hard to attain, we decided to create something funky and eccentric that would attract viewers while still being achievable for us in Max/MSP. A typical personal wardrobe mirror could become a quirky mirror that displays outfit stereotypes from across the globe, giving the user freedom of choice.

Concept

We wanted something more personalised for the user, which is why the mirror offers gender-specific outfits for users to choose from. On top of that, it has additional functions such as a world selection page and music specific to the country selected, maximising the connection between the user and the mirror. The mirror also shows the time and date for users who are rushing, and has a screenshot button that lets the user receive the screenshot, allowing them to recall their previous selection.

Technical Process

Okay so here comes the boring part…

FIRST: MOTION REGIONS

The first thing that came to mind was to use motion region detection to trigger an image.

So… we tried some sampling, using the motion region patch to trigger different images.

After which, we had to plan what images to trigger. As much as we wanted to create a ‘menu’ with buttons and smooth transitions between images, we figured we would treat each option as one image. With this, we had to think of all the different options we wanted and then create the respective number of screens.

So while Yi Jie helped to create the screens, Felicia went ahead to create more paths to hold the additional screens.

So the flow of decision making for the user will be as such:

After gender, the next selection is one of 8 countries, and each country will show an outfit.

With this, we realised that since the country selection is a single shared screen, we would have a problem: the system would not know whether the user chose Boy or Girl.

Hence, we decided that we needed to duplicate the country selection screen, one for each gender.
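For anyone curious, the logic of the whole thing is really just a lookup from "which region was waved at" to "which screen to show next". Here is a rough sketch of that idea written out in Python (the real thing is a Max patch, and the screen names and region numbers below are made up). Note how duplicating the country screens per gender is what lets the system "remember" the Boy/Girl choice:

```python
# Illustrative sketch only; the real project is a Max/MSP patch.
# Screen names and region indices here are invented for the example.

SCREENS = {
    "home":          {0: "boy_countries", 1: "girl_countries"},
    # Duplicated country screens: same layout, different outfit sets,
    # so "which gender was chosen" is encoded in the screen itself.
    "boy_countries":  {i: f"boy_outfit_{i}" for i in range(8)},
    "girl_countries": {i: f"girl_outfit_{i}" for i in range(8)},
}

current = "home"

def on_region_triggered(region_index: int) -> None:
    """Jump to the next screen when a motion region fires."""
    global current
    next_screen = SCREENS.get(current, {}).get(region_index)
    if next_screen is not None:
        current = next_screen
        print("now showing:", current)

# Example: user waves over region 1 (Girl), then region 3 (the 4th country).
on_region_triggered(1)
on_region_triggered(3)
```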

With this, we started to coordinate the region boxes.

The placement of the region boxes matters. We realised it would be best if the region boxes are away from the user's body, so that the user has to stretch out his/her arm to reach the motion region.

This way, the motion regions would not detect the user’s body movements.

After testing with a few images, we realised that we needed to reset the threshold every time the user chooses an option. This is so that the next screen's motion regions are refreshed.
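Conceptually, what the reset does is throw away the stale "previous frame" so that leftover movement from the last selection doesn't immediately fire a region on the new screen. A rough Python sketch of that idea (the actual detection is the motion region patch in Max; the threshold value here is arbitrary):

```python
# Rough sketch (not the actual Max patch) of why we reset on every screen change:
# motion is measured as pixel change against the previous frame, so right after
# a screen switch the lingering hand still counts as "change".
import numpy as np

class Region:
    def __init__(self, x, y, w, h, threshold=25.0):
        self.x, self.y, self.w, self.h = x, y, w, h
        self.threshold = threshold
        self.prev = None  # previous frame crop

    def reset(self):
        """Called on every screen change so stale motion is forgotten."""
        self.prev = None

    def triggered(self, frame: np.ndarray) -> bool:
        crop = frame[self.y:self.y + self.h, self.x:self.x + self.w].astype(float)
        if self.prev is None:        # first frame after a reset: just store it
            self.prev = crop
            return False
        change = np.abs(crop - self.prev).mean()
        self.prev = crop
        return change > self.threshold
```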

We also realised that we would have a problem with the ‘Back’ button. Because the ‘Back’ button was in the same place on every screen, its motion region would be triggered again by the user's hand still resting in that region, creating a “double” back…

We tried to delay the motion regions, but that did not seem to work well. Hence, we decided to just change the location of the ‘Back’ button on each screen.
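What the delay attempt amounted to, roughly, was ignoring all regions for a short window right after a screen switch: something like the sketch below (the lockout duration is a made-up number). It never felt reliable, which is why we settled for moving the ‘Back’ button instead.

```python
import time

LOCKOUT_SECONDS = 1.0   # made-up value: how long to ignore regions after a switch
last_switch = 0.0

def allow_trigger() -> bool:
    """Ignore all region hits for a short window after a screen change."""
    return (time.monotonic() - last_switch) > LOCKOUT_SECONDS

def on_screen_changed() -> None:
    global last_switch
    last_switch = time.monotonic()
```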

SECOND: MUSIC

Of course music has to be in here! Since the start of conceptualisation, we had already decided to add music, as we feel that music plays an important part in setting someone's mood. But since we changed to a more ‘global’ theme, we had to find music that best represents each country.

New York – Jazz

Paris – Romantic

London – Mr Bean

Japan – Your Name

China – Traditional Chinese drama OST

Egypt – Mysterious

India – Traditional Indian music

Singapore – Ah Boys to Men!

When we tested out the music, it played upon a ‘change’ in the motion regions; hence, we needed to trigger it again when the user left the screen in order for the music to stop.
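The rule we ended up wanting is simple: start a country's track when its screen appears, and stop it when the user leaves that screen. A small sketch of that rule (the Player class and the track filenames are placeholders, not our actual files):

```python
# Sketch of the start/stop rule. Player is a stand-in for whatever plays the
# audio (in the patch it is just a playback object); filenames are placeholders.
COUNTRY_TRACKS = {
    "new_york": "jazz.mp3", "paris": "romantic.mp3", "london": "mr_bean.mp3",
    "japan": "your_name.mp3", "china": "chinese_drama_ost.mp3",
    "egypt": "mysterious.mp3", "india": "traditional_indian.mp3",
    "singapore": "ah_boys_to_men.mp3",
}

class Player:
    def play(self, track): print("playing", track)
    def stop(self): print("stopped")

def on_screen_changed(old_screen, new_screen, player):
    if old_screen in COUNTRY_TRACKS:
        player.stop()                             # user left a country: stop its music
    if new_screen in COUNTRY_TRACKS:
        player.play(COUNTRY_TRACKS[new_screen])   # user entered a country: start its track

on_screen_changed("girl_countries", "japan", Player())   # -> playing your_name.mp3
```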

THIRD: SWITCHING TO PORTRAIT 

To be honest, we did not expect this to be difficult at all. When we did our motion regions, we did it in landscape mode, matching the iMac's orientation. However, when we switched to portrait, the camera view and motion regions were off!

First, we had to rotate the camera view, which we did with jit.dimmap @map. However, the moment we did that, the camera view was ‘squashed’ and the motion regions weren't as accurate anymore. After figuring it out for a while, we realised it had to do with the video input pixel dimensions. By default it was 240 by 320, so we tried switching over to 320 by 240 and it worked! Whew. We were quite stuck for a while…
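In hindsight the squashing makes sense: rotating the frame swaps its width and height, so the grab dimensions have to be swapped to match. A tiny illustration of the same effect outside Max (using numpy's rot90 as a stand-in for what jit.dimmap @map was doing for us):

```python
# Why the view got squashed: rotating a frame swaps width and height,
# so the camera grab has to use the swapped dimensions as well.
import numpy as np

landscape = np.zeros((240, 320, 3), dtype=np.uint8)  # 320x240 camera frame (h, w, channels)
portrait = np.rot90(landscape)                        # rotate for the portrait mirror
print(portrait.shape)  # (320, 240, 3): height and width swapped, so the grab dims must match
```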

FOURTH: ADDING SCREEN SHOT AND TIME&DATE

We knew it would be cool if we could add a ‘view history’ function, like all the other smart mirrors; if not, it wouldn't really be a smart mirror, right? While it was easy to find out how to take a screenshot, we were stuck on how to recall the screenshot automatically and display it on screen… First, it would mean we somehow needed a database to store the screenshots. Second, it would mean we needed to merge two images: the live screen and the screenshot. Because it was too complicated for us at this point, we decided to create another interface for our mirror instead.
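For the record, here is roughly what the abandoned ‘view history’ would have needed: a folder of timestamped screenshots acting as the ‘database’, plus pasting the latest one back onto the live screen. This is only a sketch of the idea (folder name, thumbnail size and so on are invented), not something we built:

```python
# Hypothetical sketch of the 'view history' idea we dropped.
import os, time
import numpy as np

HISTORY_DIR = "screenshots"   # invented folder name

def save_screenshot(frame: np.ndarray) -> str:
    os.makedirs(HISTORY_DIR, exist_ok=True)
    path = os.path.join(HISTORY_DIR, f"shot_{int(time.time())}.npy")
    np.save(path, frame)                 # the "database" is just a folder of files
    return path

def overlay_latest(screen: np.ndarray) -> np.ndarray:
    files = sorted(os.listdir(HISTORY_DIR)) if os.path.isdir(HISTORY_DIR) else []
    if not files:
        return screen
    shot = np.load(os.path.join(HISTORY_DIR, files[-1]))
    thumb = shot[::4, ::4]               # crude thumbnail: keep every 4th pixel
    out = screen.copy()
    out[:thumb.shape[0], :thumb.shape[1]] = thumb   # paste into the top-left corner
    return out

screen = np.zeros((1920, 1080, 3), dtype=np.uint8)  # example portrait screen
save_screenshot(screen)
merged = overlay_latest(screen)
```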

FIFTH: OPTIMISING

Before troubleshooting any further, we tried to test it with our one-way mirror film.

 

Not too bad!

We wanted the outfits to fit the user when he or she is further away from the screen. However, we realised that that way, the motion regions would not work as well because the user is too far away… Hence, we decided to place the camera nearer to the user.

Although it's ugly… it's better for optimisation…

Even so, the images do not fit the user very well… Also, it is hard for users to accurately trigger the motion regions, hence we added another feature where the camera view is shown on the screen as well.

References used:

The images of the outfits were found online.

The music was from YouTube.

The clips used for the video were from YouTube as well.

Outcome/ Final

And… That’s it! We finally completed everything to the best of our abilities and presented Aura Voir in 3 different interfaces.

Project description:

Aura Voir is your personal secret; she makes sure you look the best that you can, even if it means travelling the world! Aura Voir gives you a sneak peek of your preferred travel destination. This way, you'd be more prepared and enjoy the best of the country when you blend in with the locals! Be careful of what you see on Aura Voir, because you might actually be too local to be true.

password: auravoir

Reflection

Overall, we felt that the user interaction could be enhanced through smoother transitions between screens and additional functions and features, like more outfits and more countries and genres. If possible, we would also like to incorporate our project into an actual mirror instead of using a desktop.

Interactive II Process II By Felicia and Yi Jie

Continuing from the previous milestone, we managed to link different motion regions to different images, so that when motion is detected on a page, the page jumps to the corresponding image with new motion regions. However, the images were very pixelated, and when we tried to make our images fullscreen, a dark screen appeared and all the motion regions failed to work.

Even after checking through all the motion regions, there wasn't any problem, and we couldn't figure out why. However, lpd managed to help us solve it after we consulted him: the scaling of the screen caused the motion regions to be scaled to different areas too.
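In other words, the region boxes were still defined in camera pixels while the output had been scaled up to fullscreen, so they ended up over the wrong spots. The fix is just to scale the box coordinates by the same factors, roughly like this (the camera and screen sizes here are examples, not our exact setup):

```python
# Region boxes are defined in camera pixels; scale them to screen pixels
# so they stay over the same spot once the output goes fullscreen.
CAM_W, CAM_H = 240, 320          # example portrait camera resolution
SCREEN_W, SCREEN_H = 1080, 1920  # example portrait fullscreen size

def region_on_screen(x, y, w, h):
    sx, sy = SCREEN_W / CAM_W, SCREEN_H / CAM_H
    return (x * sx, y * sy, w * sx, h * sy)

print(region_on_screen(10, 20, 60, 60))
```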

Moving forward, we realised that some images had the same motion regions, and whenever the screen switches image, its regions change immediately too, making it hard to interact with the mirror at all. We tried delaying the detection, but we felt the transition could have been done better.

Also, we tried different mirror films; most of them do not act like a real mirror, and the projection through the films was quite dim. If we cannot find a workable film, we might scrap the mirror film idea.

Hopefully by this week, we can set up our mirror better and add more interesting features along the way.