This week, I’m getting all the things that need dynamic simulation out.

I spent the better part of a day just figuring out a simulation for a hospital blanket that I'm happy with. This is the end result.

It helped having first-hand experience of the weight and texture of the blankets, so knowing precisely what I was aiming for helped. That doesn't make the dynamics go any faster though, especially with fabrics: all the self-colliding, geometry potentially getting snagged on itself, all the settings I had to pore through. I guess what I'm trying to say is that it took me a while to get right and I want the world to know. I'll save its use in the scene for the actual video, but here's a single frame.


There are a lot of things that went into this. Firstly, thank goodness Cinema 4D has options for fracturing. Secondly, getting the interior of the ball not to explode from the intersecting geometry of the cuts (bless the settings). Getting the inner faces of the fragments to look more akin to actual broken glass, rather than just flat cuts, also took some finessing. Another thing I was going for was tinier fragments at the front and back ends when it fractures, and thankfully there are settings to dial that look in. Bless the souls at Maxon for creating such technical finesse.

Also, since this scene is going to be in slow motion, here's a slow-mo version at 100 fps. I do worry for this scene as it'll probably be one of the longer ones to render, considering the high frame rate and the glass material, so everything should be good to go before I hit render.

In the next few days I still need to get the alternative portion of the scene, the merging of the two balls (hah), to work out. That's using something called metaballs, which I haven't figured out yet, but that's the next bridge to cross. As I might have mentioned in a previous post, I really should have all the scenes rendering by the end of next week. I've already started rendering some whenever I leave the house, and I'm going to start with overnight renders too. Till next time!

Oh, I actually forgot I had more pictures. Getting the glass material right was also fun to do; it's pretty rewarding to hit render and find a result that makes me think I might actually have something here.

Getting the scene layout is one thing, but lighting the scene is another; and on top of that, working the camera for the scene, its focal length and its movement, is another aspect altogether. Here's an example of how Scene 4 has progressed over time.

Before any real work had been done, there was already a concept of how the light would propagate through the room.

My initial idea was that the light would come from the back of the room, shining into the audience's face. Shafts of light/god rays would stream past the objects and through fog to create this grand image.

However, that did not work out the way I planned. For one, the lights kept leaving hard dark lines wherever they intersected with geometry, mainly the room. You can even see it in the image above. This made me venture into IES lights, which (surprisingly) removed that problem, but I still could not achieve the look I wanted. From this point on, IES lights became my go-to method for lighting. In short, IES lights use real-world bulb/light falloffs to replicate specific lamps or bulbs. Mainly used in architectural renders for bulb-specific accuracy, they've gone on to help other CG artists achieve better realism in their images, largely because the light falloff isn't as linear as that of default CG lights.
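As a side note, the falloff difference is easy to see with some made-up numbers. This little Python sketch (purely illustrative, nothing to do with C4D's actual internals or any real IES profile) compares a linear falloff, which default CG lights often use, with the physically based inverse-square behaviour that measured IES data follows:

```python
# Purely illustrative: comparing a linear light falloff against the
# physically based inverse-square falloff. The numbers are made up,
# not taken from any real IES profile.

def linear_falloff(distance, radius):
    """Linear: intensity drops in a straight line, reaching 0 at `radius`."""
    return max(0.0, 1.0 - distance / radius)

def inverse_square_falloff(distance):
    """Physically based: intensity ~ 1/d^2, normalised so d=1 gives 1.0."""
    return 1.0 / (distance * distance)

for d in (1.0, 2.0, 4.0):
    print(f"d={d}: linear={linear_falloff(d, 5.0):.3f}  "
          f"inverse-square={inverse_square_falloff(d):.3f}")
```

The inverse-square curve drops steeply near the source and tails off gently, which is a big part of why real bulbs feel softer than a default light with a linear falloff.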

I tried to make this lovely scene work for some time: camera angles weren't exactly working, and the lighting wasn't where I wished it was. I didn't even like the way the objects in the scene were laid out (especially those damn chairs). But it still had parts that I liked; for example, the light that bounces off the walls onto the middle bed looked good. The image composition was still rather poor, however; it was sorely lacking a foreground element. Hence my next move was to slap another bed into the scene. I also had to relight the new portion of the room, but this time I knew that I wanted a sort of rim light to catch the foot end of the bed frame; and by miracle of miracles, the scene just started to look better.

Additionally I started to explore other techniques such as focal drift. I wouldn’t set the focus exactly on the object, I’d push it a bit back or forward; trying to achieve that organic look. Though I’m still wondering how I’d do that with enough control in more static shots.

I also improved the lighting for a closeup shot for the beds behind. Going from this style below.

To this~ Which I'd say is a pretty good improvement. I'm also pretty proud of the visible material difference between the bed frame and the mattress fabric. Hopefully these material details are still texturally noticeable after the post-processing of Red and Blue. Additionally, I sculpted better pillows using the cloth simulator in C4D and then smoothed them out with a brush.

Sometimes I worry that I'm spending too much time on superfluous details like these, but I do think they're noticeable, even if not completely consciously, during the sort of quick viewing that happens at an FYP show.

Next up I’ll talk about the other dynamic simulations.

Sometimes concepts don't work out. One of them was to have various objects suspended from the ceiling of the room, swaying gently with very minimal movement. I had a lot of trouble getting that right: either the simulation would stop abruptly or the movement would be way too over-pronounced. Additionally, the dynamics would slowly drift apart over time (the string connector and the object).

As seen above, the object is connected by a red line to the string it's attached to. After running the simulations a few times, I ran into problems like these. Additional problems included the length of the string sometimes changing. But for me the biggest deal-breaker was that I could not get the subtle swaying I really wanted, especially considering I wanted a room full of these at varying lengths. This room didn't seem like it was going to go great and would probably take more time than it was worth.

You can even see the slack in the line when it drops. The coiled spring around is just a visual representation of the constraint between the two dynamic objects.

So I scrapped that entire sequence (there were a few other things going on in the scene too). But perhaps it was more of a rework, because the end result would be the same.

I ended up with this.

Just for a visual idea: the scalpels are floating in the air. I plan to include other objects like pills and IV drips (which I am in the process of cobbling together).

I also tried fogging it up, which puts a huge strain on render time, so that's still up in the air a bit, but it does look great.

16,000 samples to clean up the noise, this being the crux of adding nice atmospheric fog.


Trying another method of fog, but with lower sample counts, so it's reaaaallly noisy.

Trying to get that glare/flare into the camera.


Speaking of glares and flares, I came across this guy's work, a short he made in Cinema 4D, talking about his personal project 'Keys'. One of his mindsets going into the project was to do as much as possible in camera. I'm trying to do that too: 1. because I think it's better than compositing in particles later (tho that can really work well), but also 2. on a more practical level, I can spend that time working on other aspects of the project. (Which is kind of counterproductive, cause it'll probably take more time to get it "right" in C4D; pros and cons, pros and cons.)

As for a more current update on the project: this week is simulations week, getting things like fabric simulations out, modelling objects using simulations, the fracturing glass ball at the start, and the alternative ball being absorbed into the fetus sphere. Next week is animations week (which I hope and think will go much faster, as I'm not at the mercy of simulations). Till then!

Oh, I also have at least something in for every scene I want in the video. I'm also using my Design in Motion module to add additional animations on top of my video, all in the theme of hospital UIs. I hope to be done with all the visual stuff by the end of the first week of April. My acrylic order should be in sometime next week as well. For my next post, I'll show a frame of every scene I have at the moment (OR MAYBE the work I'm doing each day).

Finally on to the gimmick of this project!

To start, I set out into Illustrator to test out the limits of red and blue’s hue, saturation and brightness.

(Feel free to patronize this Carousell dealer for R/B glasses.)

After that, I started to apply the same principles to the files I already had rendered, first with stills and then with video.

Something that was really interesting to me was that some layers just worked better as red or blue. Case in point: the last 3 pictures show that when the blue ball is tinted red instead, it disappears very well when viewed through the coloured filter (as compared to when it was blue and the highlights still peeked through).

Here’s a video example of the effect.

Lastly, here's the draft shown at the semester presentation in early/mid November.

I’m not satisfied with the panning shots at 0:20 at all as they seem very flat (as previously mentioned in my other post).

Going forward, video-wise, I'm going to shoot some footage, make an edit, and mash it up with my rendered footage to see if it works or not. I also plan to get a closer version of what the edit would be like in the final video.

Next up, AUDIO.


I'm exploring HDRIs as a way of lighting my scenes. HDRIs are 360° panoramic photos with multiple exposures baked into one image with a large dynamic range; hence the term HDRI: 'high-dynamic-range image'. These files are used as a means of lighting and are often used for integrating 3D into live footage. The 3D software uses the image to calculate lighting as per the environment and provides reflections of the environment. You can see what they typically look like in the image below. HDRIs typically come in the .hdr file format and are 32-bit, which holds a far wider range of colour information than a standard 8-bit .jpg like the example picture below.
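A toy Python sketch of why that bit depth matters for lighting (the intensity values here are hypothetical, purely to show the clipping):

```python
# Hypothetical intensities showing why 8-bit images make poor light
# sources: anything brighter than pure white clips to 255, while
# 32-bit float data keeps the true ratio between sun and wall.

def to_8bit(value):
    """Quantise a linear intensity to 0-255, clipping anything above 1.0."""
    return min(255, round(value * 255))

sun = 500.0   # made-up intensity of the sun in an HDRI
wall = 1.0    # a plain white wall

print(to_8bit(sun), to_8bit(wall))  # both clip to 255: the sun is "lost"
print(sun / wall)                   # the float data still knows the 500x gap
```

In an 8-bit image the renderer can't tell the sun apart from a white wall, so everything lights the scene equally dimly; the float data keeps the huge intensity ratio that makes HDRI lighting look real.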

HDRI example

This is mapped to a sphere in the 3D software of choice.

As you can see, it really provides a great way to realistically light a scene. The downsides are that they're inflexible; you can always add more lights, but for the most part you're stuck with what you're given. Another downside is that a lot of my scenes are indoors and/or dimly lit. There are workarounds for indoors, but as for dimly lit, most HDRIs are shot during the day or are much brighter than what I'd need. The two main issues for me are the inflexibility and the limited variety I can find. I'm considering taking my own, and if I do, I'll probably also try object integration with footage that I've shot.


One look that I'm going for is the wet floor look. I see it used very often in numerous CG artists' work to great effect; it adds some much-needed depth and complexity to the scene, and it's that added depth and textural look that I think will be very useful. This pairs well with the limitations the red/blue colour processing imposes. In the example below, you can see the floor has this variance in reflectiveness, and that's similar to what I'm trying to achieve.

A post shared by beeple (@beeple_crap) on Instagram.

I've thus far managed to replicate the effect to a degree that I'm happy with, and I'm able to tweak its look on a surface to some degree. It also scales well as it's procedural, and it saves me having to work with dirt maps to create matte-glossy textures on a surface. Below is the first time I got it to work; it's lit with warm and cool lights (something I only realized disappears with the red/blue post-processing later on, and I'll touch more on that in the section where I delve into my experiments with it).



Another aspect that adds to an aesthetically pleasing CG image is depth of field; though not necessary, it really can help give depth to a scene. Much like motion blur, it has been very finicky for technical reasons.

(The wet floor effect was just a rough input to test environment reflections and looks pretty terrible here because of its awfully hard edges.)

Above you can see the two ingredients needed to create the depth of field effect.

Here're some tests I made earlier. By doing the depth of field in post, I can easily shift the focus between two different points, as well as its intensity and even the aperture shape. However, you can see that they're extremely rough and have a lot of flaws (it looks like vaseline was smeared on the lens).

In the depth map above, you may start to see the problems.

Resulting errors in DoF

The depth map in C4D comes out uneven due to some aliasing issues. I've set this issue aside in the meantime for more pressing matters, such as content creation.
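The underlying idea of depth-map DoF can be sketched in a few lines of Python. This is a toy 1-D version with hypothetical pixel and depth values, not my actual compositing setup: each pixel blends between the sharp render and a blurred copy, weighted by its distance from the focus plane.

```python
# Toy 1-D sketch of depth-of-field in post: a beauty pass plus a depth
# pass. Each pixel blends between sharp and blurred according to how
# far its depth sits from the focus plane. All values are hypothetical.

def box_blur(row, radius=1):
    """Naive box blur over a 1-D row of pixel values."""
    out = []
    for i in range(len(row)):
        window = row[max(0, i - radius): i + radius + 1]
        out.append(sum(window) / len(window))
    return out

def apply_dof(row, depth, focus_depth, focus_range=1.0):
    """Blend sharp vs blurred per pixel by distance from the focus plane."""
    blurred = box_blur(row)
    result = []
    for sharp, blur, d in zip(row, blurred, depth):
        w = min(1.0, abs(d - focus_depth) / focus_range)  # 0 = in focus
        result.append(sharp * (1.0 - w) + blur * w)
    return result

row   = [0.0, 1.0, 0.0, 1.0, 0.0]  # beauty-pass scanline
depth = [1.0, 1.0, 5.0, 5.0, 5.0]  # depth-pass values

# Refocusing is just a parameter change -- no re-render needed:
near_focus = apply_dof(row, depth, focus_depth=1.0)
far_focus  = apply_dof(row, depth, focus_depth=5.0)
```

This is also where the aliasing problem bites: if the depth pass has jagged edges, the blend weight `w` jumps abruptly at object borders, producing exactly the kind of artefacts shown in the error image.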

Another fun thing with DoF is that I can get an anamorphic look by changing the iris shape of the "lens".

More Material Tests

While browsing around, I came across this image

I really liked the reflection on the floor so I decided to try and replicate it. I think it’s like a polished concrete look or something like that.

There was a lot of tweaking involved to get the reflections I wanted, and while it isn't exactly like the image above, I think I came decently close for my first try at recreating the material. I think the image is a further demonstration of how materials affect realism; the skull doesn't look particularly realistic, while the floor reflection looks noticeably more so.

In the process of doing this exploration I came up with an idea for a shot and below is a low res gif of half of the idea.


Trying to fit within the 3 MB file size is an art form.



The weeks following the previous post were spent on further experimentation and learning the technical aspects of what I want to achieve. I'll be breaking these down into a few posts. In this one I'll be covering the earliest weeks.


One of the problems I found was that my camera pans in C4D came out looking very flat, like a Ken Burns effect, but terrible. I've chalked this up to a few things: the red/blue tint processing removing some depth from the image, the way I light the 3D models, the combination of both layers being too similarly flat, and lastly, the lack of motion blur.

Here are some GIFs of the variations in motion blur settings: default settings, 1 sample and 2 samples. Sample rates drastically increase the render time, and this is one reason to do motion blur in post instead. The settings in post are something I'm still trying to dial in, especially for more subtle movements.

Default C4D Settings

In post

2 Samples

1 Sample

Another reason to do motion blur in post is that, as much as possible, I'd like to render the image with the flexibility to make adjustments; this applies to things such as colour, motion blur and focus. "Baking" these into the frame (especially the latter two) makes adjustments troublesome. For example, if I render the focus into the image and parts that I want in focus slip out of focus, I'd have to re-render the entire sequence.
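The crudest form of post motion blur is just averaging neighbouring frames; real tools use motion-vector passes and are much smarter, but this little Python sketch (with hypothetical pixel values) shows the basic idea of deferring the blur until after the render:

```python
# Crude post motion blur by frame averaging. Real plugins use
# motion-vector passes; this only illustrates deferring the blur.

def average_frames(frames):
    """Average a list of frames (each a flat list of pixel values)."""
    n = len(frames)
    return [sum(px) / n for px in zip(*frames)]

# three hypothetical frames of a bright pixel travelling right
f0 = [1.0, 0.0, 0.0]
f1 = [0.0, 1.0, 0.0]
f2 = [0.0, 0.0, 1.0]

streak = average_frames([f0, f1, f2])  # the pixel smears into a trail
```

Because the source frames are untouched, the amount of blur stays adjustable right up until the final export, which is the whole point of not baking it in.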

Materials + Lighting

Another thing that I've been spending a lot of time on is trying to make good materials. In 3D software you have to adjust the parameters of a material to get it to look the way you want: how light interacts with the surface, whether it's rough, glossy, etc.


Frosted Glass Tests

Same Lighting Different Materials

Materials can also drastically alter how a model looks. Though the texturing on the left isn't placed correctly, the way light reflects off it is much more pleasing, in my opinion.


You can even see in the example above that the rim light catches the figure differently; in fact, I had to boost the brightness by about 5 times when switching between the two materials. The black patches on the left figure were supposed to be gold flakes. I was trying to replicate a marble-with-gold combination; they show up as black because reflective materials need an environment to reflect.

Some inspiration was @Billelis on Instagram.

"Memento Mori" by @billelis, a post shared by BILLELIS on Instagram.

He has such masterful blends of materials and textures that it's something I aspire towards and hope to achieve beyond this FYP. At the very least, I hope to capture something of my own, both visually and on a technical basis.

These tests eventually led to my Midterm assignment for my Lighting and Rendering Pipeline Module, with the final version on the far right.

A lot of time was also spent trying to find textures and materials. Textures were especially tricky, as high-res versions for close-up shots usually cost money, so I've been dabbling in procedural textures that scale better. Below is a collection of tests of 1k–4k textures that haven't made it anywhere, but I'm trying to figure out a good texture for the walls and various objects, and what kinda limits they have in terms of standing up to close-ups and on various forms.

On the left you can see a displacement test. That was essentially a flat plane in Cinema 4D, but using the map on the right causes it to displace according to the colour (tweakable with parameters). This test was just an exploration into displacement maps and textural visuals.
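At its core, a displacement map is a very simple mapping, something like this Python sketch (the greyscale values and amplitude are made up, and real software works on subdivided geometry rather than plain lists):

```python
# Sketch of what a displacement map does: each vertex of a flat plane
# is pushed up by the brightness of the map at that point, scaled by
# an amplitude parameter. The map values here are hypothetical.

def displace(grey_map, amplitude):
    """Map 0-255 greyscale values to vertex heights in scene units."""
    return [[(value / 255.0) * amplitude for value in row]
            for row in grey_map]

grey = [[0, 128, 255],   # black stays flat, white rises fully
        [255, 128, 0]]

heights = displace(grey, amplitude=10.0)
```

Sub-poly displacement does essentially the same thing at render time on subdivided micro-geometry, which is why it looks so detailed and costs so much render time.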

TO BE CONTINUED in the next post – [ HDRIs, DOF, R/B color Tests + more ]


Metachaos by Alessandro Bavari from 2010 has been loitering in my mind since I first saw it many years ago. Visually it was very peculiar and odd, with its writhing bodies, shaky camera and mix of dark and light environments. While I had seen a number of odd videos before, such as Aphex Twin music videos and works by Chris Cunningham, Michel Gondry and other directors, something about Metachaos felt even more surreal. I think the fact that it was fully CG (or largely CG) helped push what was possible to "film" into this expanse of possible visuals, without concern for real-world viability or a large film budget. Additionally, considering the year it was made, I think the textures, lighting and camerawork (though low-res) were and still are exceptional. Which brings me to what I intend to do: 8 years on, with technology advancing, I strongly feel it's possible for me to recreate a fraction of what I saw in Metachaos; not a recreation of its mood and visual style, but on a technical level.

Throwing back to the works of the aforementioned artists/directors, thinking about it now, I'd say it wasn't the oddness that really kept me around, but the world-building that these fantastical scenarios bring about; likewise with Metachaos. Up front these might just seem like kooky videos that last a couple of minutes, but they appeal to me because they feel like they're creating alternate worlds, akin to Tolkien's Lord of the Rings universe, though not as fleshed out.

Ultimately, what I'm personally trying to do is bring that same kind of intensity and atmosphere of a world through to the viewer. I want the viewer to get a sense of the world, even from erratic, fleeting snippets; which is why I gravitate strongly to the idea of having strong aesthetics and visuals for my shots. Though this might be subjective, I personally feel that strong visuals give the viewer a mental branching-off point for a variety of thoughts.


I started my foray into the project by jumping back into 3D software. Having done CG from 2010 to 2014 in my 3-year Digital Visual Effects course in Polytechnic, I figured it was time to jump back in. This time, though, I started with Cinema 4D rather than Maya, both of which I've used before in varying degrees. These next few shots are the visual tests I've attempted as a refresher into the software.

As a start, I tried to recreate a decent rocky textured terrain as per a tutorial, the aim being to get realistic surfaces and decent lighting.

Surface Deformers and Lighting Tests

I also wanted to try lighting with objects and getting reflections of said objects on surfaces such as water, but I haven't been able to figure out that last portion with the reflections yet.

Testing Lighting with Objects

Testing Lighting with Objects

Lastly, I tried smaller-scale surface tests. I could get decent render times utilizing sub-poly displacement, but not at the larger scale of landscapes without dramatic increases in render time.

Sub-poly Displacement (1 second per frame)

Render Time – 7 mins 28 Seconds (Subpoly Displacement)

Render Time – 15 seconds

These were tests into lighting, displacement and bump maps, and additionally into optimizing renders. The downtime induced by rendering is my main concern, which is why I've also started contemplating software such as Unity for this project.

Unity allows for real-time interaction as well as lighting and rendering, which is very appealing. The downside for Unity at the moment is unfamiliarity; I'm definitely not as proficient in Unity as I am in 3D software. However, I've seen very stunning examples of visuals come out of Unity outside of game development, such as the examples below.

"Excuse my breathing lol", a post shared by Stephen S Gibson (@ssgibsonart) on Instagram.

"Testing particle effects for my secret project!", a post shared by Stephen S Gibson (@ssgibsonart) on Instagram.

In doing these tests, I've stumbled upon a realization: I'm not limited to one piece of software for my final outcome. That is to say, I can utilize and draw on the even larger field of experience I have with other software, namely After Effects.


The GIF above is the result of taking a render of the sphere rotation and applying a slit-scan technique in After Effects. The technique is essentially visual time displacement. Having tried this with live footage before, one of the shortcomings was that I didn't have a smooth enough frame rate, which caused banding (you'd see thick slices moving instead of a smooth displacement).

The benefit is that in 3D software I can increase the number of frames per second and render at a high frame rate (120 in this case). The downside is that rendering more frames = more render time, plus the additional rendering in After Effects.
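The slit-scan mapping itself is conceptually simple. Here's a minimal Python sketch of the time-displacement idea (with tiny hypothetical "frames" of labelled rows standing in for real rendered frames):

```python
# Minimal slit-scan / time displacement: row y of the output frame is
# sampled from frame (start + y) of the stack. The "frames" here are
# hypothetical labels standing in for rows of pixels.

def slit_scan(frames, start=0):
    """Build one output frame whose row y comes from frames[start + y]."""
    height = len(frames[0])
    return [frames[(start + y) % len(frames)][y] for y in range(height)]

# a stack of 4 frames, each with 3 rows
frames = [[f"frame{f}-row{r}" for r in range(3)] for f in range(4)]

output = slit_scan(frames)  # each row comes from a different moment in time
```

With only a handful of frames, the temporal step between adjacent rows is huge, which is the banding I saw with live footage; at 120 fps each row's offset is tiny and the displacement reads as smooth.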

Still finding my way through the software; optimizing the renders really feels like the make-or-break at the moment, with fast feedback being the key at this iterative stage.

I’ll spend some time after this week looking into Unity as an option for both render and interactive reasons.

Additionally, I need to work on crafting good-looking compositions, or essentially the cinematography of the shots. On top of that, I should also build a short storyboard for a sequence to test out camera movements and cuts.

In short + subsequent steps

  • Crafting strong shots
  • Creating a good sequence
  • Exploring further technical aspects (motion blur, depth of field) both of which I’m currently doing tests for
  • Sound Design
  • Trying out Unity
  • Final Execution look (multi-screen or other output)