In week 7, we discussed in class the possibility of playing with lighting. Light beams could stream in from several sources in varied directions. We could also play with depth, using light to show and hide the gears.
Above, I toggle the volumetrics on, followed by post processing, followed by lights, so you can see the impact they make on the scene visuals.
We used a total of 30 lights for the scene shown above, all rendering in real time. The set-up uses 10 walls, spaced out evenly; each wall has a rim light, an angled front light, and a top spot light.
To have our spot lights cast visible light beams, we increase the density of particles in the air using a component known as a Density Volume.
This mimics fog or haze in a chosen region, and the thickness of the air particles can be controlled to our heart's desire.
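For reference, here is a minimal sketch of setting one up from script, assuming HDRP's DensityVolume component (in practice we configure it in the Inspector, and the parameter names vary between HDRP versions):

```csharp
using UnityEngine;
using UnityEngine.Rendering.HighDefinition; // HDRP; the namespace differs in older versions

// A minimal sketch of setting up the haze from script. Field names are taken
// from one HDRP version and may differ in yours (newer HDRP renames this
// component to Local Volumetric Fog), so treat them as approximate.
public class HazeSetup : MonoBehaviour
{
    void Start()
    {
        var haze = gameObject.AddComponent<DensityVolume>();
        haze.parameters.size = new Vector3(20f, 10f, 20f);  // region the haze occupies
        haze.parameters.meanFreePath = 5f;                  // lower = denser air particles
        haze.parameters.albedo = Color.white;               // tint of the scattered light
    }
}
```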
The volumetric lighting interacts dynamically with our objects. We are able to move objects in our 3D scene to arrange the composition however we like.
This is crucial to our highly iterative process.
To get clean lighting and tight control over our lights, we use shadow-casting meshes set to 'Shadows Only'. These meshes are not rendered to the camera, but act as light 'blockers' that stop light from reaching certain areas of the scene. This was essential for perfecting the desired composition during the fine-tuning phase.
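The blocker itself is a one-line switch on the renderer. A minimal sketch (in practice this is simply the Cast Shadows dropdown in the Inspector set to 'Shadows Only'):

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Minimal sketch: the mesh never draws to the camera, but it still casts
// shadows, so it can block light from reaching chosen areas of the scene.
public class ShadowBlocker : MonoBehaviour
{
    void Start()
    {
        var meshRenderer = GetComponent<MeshRenderer>();
        meshRenderer.shadowCastingMode = ShadowCastingMode.ShadowsOnly;
    }
}
```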
This method was more effective than light layers, a technique that categorizes lights and objects into 'layers' (like layers in Photoshop) so that a light only affects objects in the same layer. While light layers provide the same functionality, we had too many lights to set that up manually on a per-light basis. Shadow casters are slightly more performance-heavy, but not significantly, so the faster workflow was preferred.
After watching a visual effects shot from Prince of Persia: The Sands of Time, shared by my groupmate Ayesha, I was heavily inspired to make a cool visual effect of my own. I decided to start by learning a tool called VFX Graph (within Unity). This is different from the Particle System previously used for the sand simulations. I wanted to see if I could make cool things with it.
After watching a tutorial to get started with this feature, I managed to create a simple effect while coming to understand how the VFX Graph works.
To create the effect of my name appearing in the title GIF, I first made a model of my name as a 3D text mesh in Maya.
I imported it into Unity as an .fbx file.
Then, I created a basic particle emitter to emit in a rectangle.
Then, using a tool, I created a Signed Distance Field (SDF) of my name. I think of an SDF as a texture, but instead of storing a color in each pixel, it stores the distance to the nearest surface. The directions derived from it act like a wind current: any air particle that gets caught in the wind will flow along it.
Imagine the above, but as a 3D volumetric representation, so there are 3D arrows in space telling particles how to move. This is how I steer seemingly random particles to flow in the direction, or into the space, I want them to go.
By assigning the SDF file as input into the VFX emitter, and adjusting how much I want the particles to ‘stick’ to the SDF, I create the effect below.
After writing some code, I exposed the parameters of the VFX graph to be editable in real-time, without having to simulate the entire VFX every time an attribute is changed.
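A minimal sketch of that wiring, assuming a float property named "AttractionForce" has been exposed in the graph's Blackboard (the property name is a placeholder for illustration):

```csharp
using UnityEngine;
using UnityEngine.VFX;

// Minimal sketch: pushes an Inspector value into a running VFX Graph each
// frame. The effect updates live instead of being re-simulated from scratch.
public class VFXParameterDriver : MonoBehaviour
{
    [Range(0f, 10f)] public float attractionForce = 5f; // tweakable in real time

    VisualEffect vfx;

    void Start()
    {
        vfx = GetComponent<VisualEffect>();
    }

    void Update()
    {
        if (vfx.HasFloat("AttractionForce")) // exposed name must match the Blackboard
            vfx.SetFloat("AttractionForce", attractionForce);
    }
}
```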
For those interested, here is what the Signed Distance Field part of my VFX graph looks like:
The VFX Graph is a node-based system, meaning no programming is required unless you wish to add interactivity. Its big advantage is that it is heavily optimized – it runs on the GPU – and can simulate millions of particles in real time.
Having learnt this early on, there are options we could explore for this project – for example, using VFX to materialize our compass or the Chinese symbols from our earlier video draft.
For reference, I used this SDF baker tool to help generate the Signed Distance Field asset within Unity, without going through an external DCC like Houdini, reducing the need to jump between multiple programs.
This post details the breakdown of how we achieve this result and the tricks discovered along the way to create particle physics simulations:
Above is a showcase of our custom-made interactive application, where I drag our sand emitter around the screen with my mouse cursor. Sand is emitted and continues to fall, bouncing off objects. We manage to achieve interactive frame rates at near real-time, simulating and rendering 100,000 individual sand grains colliding dynamically with moving and rotating objects, with physics. Our scene has been highly optimized to render within seconds instead of hours, allowing us to freely edit parameters throughout our iterations.
Warning: This post is quite technical until the end where we share how this was achieved:
2) Sand colliding dynamically with other 3D objects that are in motion
3) The sheer quantity of particles on screen simultaneously, and keeping realistic render times
First thoughts & Research
Early talks within the group about whether we should attempt a sand simulation weren't very positive.
None of us had done it before, nor were we even aware of what techniques could be used to achieve believable sand.
Sand is a complex substance – it is not a viscous liquid, for which we could use fluid simulation, yet at the same time it does not act like a single mouldable solid such as clay.
Research on how to achieve this led to pretty complicated techniques, sometimes involving software beyond our understanding. Even in Maya, we had not gone deep enough to understand nParticles, which were our best guess at how we would execute it there.
I narrowed my search to see whether it was possible to create this in Unity. As Unity is not primarily a simulation package, there were many challenges and question marks.
Thousands of individual sand grains would have to be simulated. And what about collisions? Would our sand interact with the gear geometry in the scene? If it had to be done in two separate programs, compositing them together would be another challenge – we could overlay sand on the gears non-interactively, but we had no idea how that would look. If it were too obvious that it was an overlay of two distinct videos, it would not look good.
I knew that it was possible to use CPU-based particles in Unity that use an underlying physics engine (PhysX 3.4).
But as much as I wanted to use Unity (our gears are rendered there), the sheer quantity of particles seemed to destroy any hope of this being possible. Or did it?
Many YouTube tutorials on real-time sand physics within Unity failed to achieve the results we were hoping for. The sand particles were always too big, and too few.
The truth is, physics calculations consist of plenty of mathematical operations that are expensive to compute in large quantities.
The best simulations went up to 20,000 particles, maybe more, but this was still a far cry from believable sand. In other words, we had little luck with tutorials. Bumping the count up would take an exponential toll on simulation times and make it difficult to iterate creatively.
Achieving interactive frame rates was a challenge. But I wanted to see how far we could push it.
I gave all the important gears a MeshCollider and enabled collision on the particles so they would be added to the physics engine and collide.
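A minimal sketch of that set-up (the gear and particle system references are placeholders assigned in the Inspector):

```csharp
using UnityEngine;

// Minimal sketch: give every important gear a MeshCollider, then switch the
// particle system's collision module to world collisions so each sand grain
// is handed to the physics engine.
public class SandCollisionSetup : MonoBehaviour
{
    public GameObject[] gears;    // assigned in the Inspector
    public ParticleSystem sand;

    void Start()
    {
        foreach (var gear in gears)
            gear.AddComponent<MeshCollider>(); // precise, per-triangle collision

        var collision = sand.collision;
        collision.enabled = true;
        collision.type = ParticleSystemCollisionType.World; // collide with scene colliders
    }
}
```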
I started with a basic particle system that simulates 1,000 particles.
Seeing sand interact with the gears for the first time was cool. But aside from the cool physics, the visuals looked nasty. How about 10,000?
Better, but not good. I pushed for 100,000 particles. This was a huge quantity, and the simulation slowed down tremendously.
I decided to go to the extreme and bumped it up by a factor of 10 again!
The grain sizes were finally fine enough, but at this point each frame was taking one full second to render. Not unworkable, but still extremely sluggish and difficult to experiment with. It was time to see if we could do some optimization.
We were being frame-rate-throttled on the CPU, not the GPU, as the mathematical and physics calculations are done on the CPU. Knowing this, all our optimizations target the CPU side.
1) Randomizing per-particle properties creates the illusion of more sand
The first secret to fast simulation is to NOT simulate sand particles colliding with each other at all. Instead, we fake it. To help sell the illusion that the particles collide with each other and pile up, we randomize the per-particle properties indicated by the red arrows below.
Dampen affects how much velocity a particle loses on contact with a surface (i.e. friction).
Bounce affects how much energy a sand grain retains when it bounces off a surface. High values make each grain act like a bouncy ball.
We randomize these for every sand grain, so a grain can be heavy, light, bouncy, or slidy. This creates variation in our sand behaviour.
This randomization creates higher coverage of sand across the screen, and the resulting "spreading" effect lets us use fewer particles to give the illusion of more sand.
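A sketch of how the randomization is applied from script, using the particle system's "two constants" mode so each grain samples its own value from a range (the ranges are illustrative, not our final numbers):

```csharp
using UnityEngine;

// Minimal sketch: randomize Dampen, Bounce and size per grain so every
// particle behaves a little differently, selling the illusion of real sand.
public class SandVariation : MonoBehaviour
{
    void Start()
    {
        var sand = GetComponent<ParticleSystem>();

        var collision = sand.collision;
        collision.dampen = new ParticleSystem.MinMaxCurve(0.1f, 0.9f); // friction varies per grain
        collision.bounce = new ParticleSystem.MinMaxCurve(0.0f, 0.6f); // some grains bouncy, some dead

        var main = sand.main;
        main.startSize = new ParticleSystem.MinMaxCurve(0.01f, 0.04f); // size variation helps coverage
    }
}
```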
2) Using primitive colliders is simpler for the physics engine to calculate
Secondly, we use basic collider shapes.
Gears have a very complex shape, but broken down, a gear is basically a sphere (a circle in 2D). In some cases, using multiple sphere colliders is less taxing than a single, complicated mesh collider.
Sphere colliders are the easiest to detect collisions with, as the physics engine only needs to check the distance between two points in 3D space. This makes spheres the cheapest colliders. Since our gears read as circles from the camera's perspective, we use spheres for the gears' main bodies instead of cylinders.
We can use both – objects that require precise collision use mesh colliders, while small objects not prioritized in our composition fall back to primitive colliders. We can even toggle between the collider types through code in the midst of rendering if we want full control!
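A minimal sketch of that toggle, assuming each gear has both collider types attached:

```csharp
using UnityEngine;

// Minimal sketch: flip between the precise (mesh) and cheap (sphere)
// collider mid-render, e.g. when an object leaves the composition's focus.
public class ColliderSwitcher : MonoBehaviour
{
    MeshCollider meshCollider;
    SphereCollider sphereCollider;

    void Awake()
    {
        meshCollider = GetComponent<MeshCollider>();
        sphereCollider = GetComponent<SphereCollider>();
    }

    public void UsePreciseCollision(bool precise)
    {
        meshCollider.enabled = precise;    // exact silhouette, expensive
        sphereCollider.enabled = !precise; // one distance check, cheap
    }
}
```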
Basic colliders give us up to 5 times faster render times. Now we’re talking speed!
3) Restricting physics to a single plane
We reduce the particle count needed by limiting the simulation to a single plane in our 3D space. This way, it is effectively computed in just 2D while still interacting with our 3D objects. Without particles overlapping or hiding one another along the z-axis, every particle we emit counts.
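One way to set this up, sketched below with illustrative values: emit from a box with zero depth so every particle is born on a single plane facing the camera, and straight-down gravity keeps it there:

```csharp
using UnityEngine;

// Minimal sketch: a zero-thickness emission box keeps the simulation
// effectively 2D while the particles still collide with our 3D gears.
public class PlanarEmitter : MonoBehaviour
{
    void Start()
    {
        var sand = GetComponent<ParticleSystem>();
        var shape = sand.shape;
        shape.shapeType = ParticleSystemShapeType.Box;
        shape.scale = new Vector3(10f, 0.5f, 0f); // zero depth on the z-axis
    }
}
```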
4) Decrease simulation speed
This last trick can double render speeds, but it is case-specific. By slowing our simulation speed down to 0.5, we effectively calculate physics every 2nd frame instead of every frame. Speed-wise, this means the CPU performs half as many calculations over the same period of time. However, it can produce undesired side effects, such as slow motion.
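The trick itself is a single property on the particle system's main module:

```csharp
using UnityEngine;

// Minimal sketch: at 0.5, the simulation advances half as far per frame,
// so the CPU does roughly half the physics work over the same wall-clock time.
public class HalfSpeedSim : MonoBehaviour
{
    void Start()
    {
        var main = GetComponent<ParticleSystem>().main;
        main.simulationSpeed = 0.5f; // caveat: can read as slow motion
    }
}
```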
There was a lot of trial and error involved, but bit by bit the simulation got faster, and it was extremely rewarding to watch.
We went from rendering 1 frame per second with 1 million particles, to 10 frames a second with only 100,000 particles. That is a 10x speed increase in our simulation and render times!
We went from talking about how many seconds we need to render a frame, to how many frames we can render every second. I was personally amazed that we could simulate 100,000 colliding particles at such speeds.
Once it was fast enough, we were no longer bogged down by render times, so it was time for the fun stuff!
Just for fun, I made the sand glow by giving it emission, turned off all the scene lights, and made the background black.
This created a very interesting composition:
I then wrote the code for a button that toggles between the two modes in real time with a single click, as seen with the white "FX Mode" button at the top right of the GIF above. This makes it convenient to test compositions while we experiment.
I doubt anyone is interested in the code I wrote (C#), but for the curious:
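The original was shared as a screenshot; a minimal sketch of the idea looks like this, with the light, camera and color fields as placeholders assigned in the Inspector:

```csharp
using UnityEngine;

// Minimal sketch of "FX Mode": kill the scene lights and black out the
// camera background so only the emissive sand is visible.
public class FXModeToggle : MonoBehaviour
{
    public Light[] sceneLights;    // assigned in the Inspector
    public Camera renderCamera;
    public Color normalBackground = Color.grey;

    bool fxMode;

    public void Toggle() // wired to the white "FX Mode" UI button
    {
        fxMode = !fxMode;
        foreach (var sceneLight in sceneLights)
            sceneLight.enabled = !fxMode;

        renderCamera.clearFlags = CameraClearFlags.SolidColor;
        renderCamera.backgroundColor = fxMode ? Color.black : normalBackground;
    }
}
```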
Additionally, I added a slider that progressively trades performance for quality in our final render, so we can choose how much sand we want and the quality of the simulation based on how long we are willing to wait for it to render. The longer the render time we are willing to accept, the higher the sand particle count. Collision parameters are left unchanged, which keeps the render consistent regardless of the quality setting.
This is done with a script that sets our particle sizes and emission count. Again, for anyone interested in the script that achieves this:
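The actual script was likewise shared as a screenshot; below is a minimal sketch of the approach, with placeholder numbers:

```csharp
using UnityEngine;

// Minimal sketch: one quality slider scales particle count up while scaling
// grain size down, so total sand coverage stays consistent. Collision
// settings are deliberately left untouched for consistent behaviour.
public class SandQuality : MonoBehaviour
{
    [Range(0f, 1f)] public float quality = 0.5f;
    public ParticleSystem sand;

    public void Apply()
    {
        var main = sand.main;
        main.maxParticles = (int)Mathf.Lerp(10000f, 100000f, quality); // more sand = longer waits
        main.startSize = Mathf.Lerp(0.05f, 0.01f, quality); // finer grains at high quality

        var emission = sand.emission;
        emission.rateOverTime = Mathf.Lerp(500f, 5000f, quality);
    }
}
```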
I also wrote another script that reduces the particle count during edit mode and pumps it up only during rendering. This way we can iterate even faster, with less lag in the scene view, and we don't have to remember to manually increase the quality before the final render.
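A minimal sketch of that idea, using play mode as a stand-in for "we are rendering now":

```csharp
using UnityEngine;

// Minimal sketch: a light particle budget while editing, the full budget
// only when actually rendering, applied automatically so nobody forgets.
[ExecuteInEditMode]
public class ParticleBudget : MonoBehaviour
{
    public int editModeParticles = 5000;  // keeps the scene view responsive
    public int renderParticles = 100000;  // full quality for the final render

    void OnEnable()
    {
        var main = GetComponent<ParticleSystem>().main;
        main.maxParticles = Application.isPlaying ? renderParticles : editModeParticles;
    }
}
```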
This post details the breakdown of our asset creation pipeline and how the rendering of our gears is achieved. This breakdown showcases the entire pipeline:
The composition is not confirmed. Should the video not work, this is a high resolution screenshot from the video render.
Modelling the Geometry
Modelling is our first phase, to obtain the 3D geometry we need. Modelling each 3D gear asset individually can be a laborious process. But what if it did not have to be that way?
Using Maya 2018, we discovered that it is possible to procedurally generate gears instead of hand-modelling individual ones, leading to huge time savings. Procedural generation also means our assets are non-destructive – we can go back and edit their parameters easily.
Here is a screenshot of the attributes we can edit when creating gears through this method. It allows for easy experimentation.
With varying parameters, the aim was to produce interesting and contrasting shapes and proportions.
To give a realistic look, bevels were added to prevent the 'CGI look' that results from perfectly sharp 90-degree edges. This allows the gears to catch lighting and reflections more realistically once lighting is added later on.
To make sure our gears look old enough to be from the 19th century, much detail has to be considered in their texturing. It is easy to make metal look shiny and glossy, but this clean, shiny look is not what we want. Thus, we move to the next phase.
Adding detail through textures
With our models done, we need to make them look convincing by applying 2D textures and setting up our materials to emulate real-life material properties (glossiness, smoothness).
For textures, I looked at various metals:
To give the old, worn-down look, I use:
To achieve a realistic finish, a technique called physically-based rendering (PBR) is used; the Standard surface shader in Unity supports this. The idea is that in real life no surface is perfectly flat, and our materials have to respond believably to the dynamic lighting in our CGI pipeline.
Normal maps let us add micro-surface detail by simulating scratches, a common surface imperfection in metals. Notice that we also vary the reflection in certain areas – a metal surface is not equally reflective everywhere.
Rust was applied using a Detail Mask, specifying three channels (Albedo, Smoothness and Normal).
In Photoshop, I took a rust texture and, using a soft brush, desaturated the areas where I did not want rust, as well as the rust layer itself, until I was happy with the percentage of rust coverage on the gears.
Below is a summary of the gear at different stages of the material set-up process for comparison of start to end.
Notice how in step #2 the reflections and captured lighting falling on the gear are a lot more believable.
The detail masks added in steps #3 and #4 also affect the surface reflection, in addition to the albedo (color), as seen in the specular areas.
These are our two material set ups.
With our meshes and textures, we proceeded to try rendering them to see how they look.
In addition to quality results, our workflow is just as much about being able to iterate quickly throughout our conceptualization process.
Using the power of Real-time Rendering
I did not use Maya or Cinema4D to texture or render the gears, as I wanted a faster rendering solution: the long render times in those programs would limit what we could change in the final weeks and be creatively limiting. Thus, I tapped into the power of a game engine for real-time rendering – and I am also accustomed to using it.
We could get a 4K video rendered in 20 seconds in the Unity game engine, complete with lighting, reflections, shadows, particle effects, post-processing and color grading. What is amazing is that it outputs a video directly, with the option of individual image frames for lossless quality. We can get a final-render look and iterate extremely quickly without going through external video editing software, minimizing additional rendering steps. This lets us focus on the creative side instead of worrying about render times, as high-resolution 3D content takes seconds to render instead of hours.
This extra creative time let us experiment a lot with different lighting, camera perspectives and screen compositions, as seen in the pictures below. Plenty of happy accidents came out of this, which will be exciting to share in future posts. These would not have been possible with a pre-rendered solution. We are still open to using Cinema4D later in the project if needed.
Exploring compositions and perspectives on the first night of testing:
Lighting & Post Processing
The scene itself uses fully real-time, dynamic lights. These can be changed on the fly, interactively, to achieve the results we want.
The scene is made up of a directional light as the key light, and a few spot and point lights for fill and rim lighting, as well as strategically placed lights to show off the surface detail of our gears.
Using lighting, it is easy to change the mood and tone of the scene globally as well. Above is one of the first videos of me playing with the directional lighting.
Post-processing involves several effects, one of which is tonemapping to get a filmic look. We also bump the contrast slightly and apply subtle color grading where needed.
Other subtle effects include ambient occlusion, which darkens the creases where objects appear to touch, making them look more grounded in reality, and bloom, which simulates how intense light appears as bright spots through a camera lens.
Depth of field was also used to blur out gears in the background, just to test the composition and bring focus to the main gears.
These effects are added in the 3D engine so we can preview the gears' final look without going through AE or other compositing software, for faster iteration.
Ambient lighting is also used to set up optimized reflections using Reflection Probes, as seen above. The reflections light up the darker areas of the gear parts, revealing details that were previously in shadow. A Reflection Probe is an optimized way to capture a 360° cubemap and cast it as reflections onto nearby objects. The effect can be toggled off if we prefer the shadowed areas to remain dark.
This technique is cheap and fast to compute, and is highly optimized compared to Maya and C4D, which calculate reflections by simulating light rays (ray tracing) – a comparatively expensive computation. Reflection probes work well for us, as we do not need perfectly accurate, mirror-sharp reflections.
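For the curious, a minimal sketch of driving a probe from script; the ViaScripting refresh mode lets us re-capture the cubemap only when the scene actually changes:

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Minimal sketch: a real-time reflection probe captured on demand.
// Disabling the probe keeps the shadowed areas dark if we prefer that look.
public class ProbeControl : MonoBehaviour
{
    ReflectionProbe probe;

    void Start()
    {
        probe = GetComponent<ReflectionProbe>();
        probe.mode = ReflectionProbeMode.Realtime;
        probe.refreshMode = ReflectionProbeRefreshMode.ViaScripting;
        probe.RenderProbe(); // capture the 360 cubemap once
    }

    public void SetReflectionsEnabled(bool on) => probe.enabled = on;
}
```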
This project does not feature animation heavily, but it is still essential to the process. To create intricacy and mesmerizing effects, we will play with:
Speed of rotation of gears
Direction of gear rotations
Animation of lights (future post)
Layering of multiple rotating gears in z-axis space
The gears are not keyframe-animated in Maya, because we want to be able to change each gear's rotation speed easily and flexibly. Instead, the gears are rotated using a script.
I wrote a script that lets me specify values (in degrees per second) to rotate a gear around a chosen axis by modifying its transform values.
The script can easily be copied and applied to every gear, with each one rotating at a user-specified speed that can be changed on the go. Below is a screenshot of the script written in Visual Studio (C#), with comments, for those interested.
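In essence, the script boils down to this minimal sketch:

```csharp
using UnityEngine;

// Minimal sketch: rotate a gear at a user-specified speed per axis.
// Copy one onto every gear and tweak the values on the go.
public class GearRotator : MonoBehaviour
{
    [Tooltip("Rotation speed in degrees per second, per axis")]
    public Vector3 degreesPerSecond = new Vector3(0f, 0f, 30f);

    void Update()
    {
        // Time.deltaTime keeps the speed independent of frame rate
        transform.Rotate(degreesPerSecond * Time.deltaTime);
    }
}
```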
To give the illusion that the gears are interlocked and rotating in sync with one another, we discovered this engineering trick:
A gear's speed of rotation is inversely proportional to its radius and its number of teeth (sides).
That is to say, if we halve the size (radius) of a gear, we must halve its number of teeth and double its rotation speed.
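As a worked example (numbers illustrative):

```csharp
// Worked example of the gear ratio rule: speed is inversely proportional to
// tooth count, so a 32-tooth driver at 30 deg/s needs a 16-tooth follower
// to spin at 60 deg/s to stay in sync.
public static class GearRatio
{
    public static float MatchSpeed(float driverSpeed, int driverTeeth, int followerTeeth)
    {
        return driverSpeed * driverTeeth / followerTeeth;
    }
}
// GearRatio.MatchSpeed(30f, 32, 16) == 60f: half the teeth, double the speed.
```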
Being a programmer helped me discover this, as I often use power-of-two values, and it made me realise that as long as our gears are modelled with exactly a power-of-two number of sides – 8, 16, 32, 64 – they are easily compatible with one another.
Lastly, a test 3D particle system emits Chinese symbols at the bottom right – a test for linking to the symbols on a compass we have yet to model.
The particles’ direction, positions and speeds are also yet to be confirmed. The initial plan is to have them float upward from the surface of the compass, which is different from what our current footage shows.
The particle system uses an emissive shader with a Chinese symbol as its texture, with velocity driven by Perlin noise to create randomized movement.
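A minimal sketch of that set-up using the particle system's built-in Noise module (coherent noise in the same spirit as Perlin; the values are illustrative):

```csharp
using UnityEngine;

// Minimal sketch: low-frequency noise gives each floating symbol a slow,
// randomized drift rather than a straight-line path.
public class SymbolDrift : MonoBehaviour
{
    void Start()
    {
        var symbols = GetComponent<ParticleSystem>();
        var noise = symbols.noise;
        noise.enabled = true;
        noise.strength = 0.5f;  // how far the noise pushes each symbol
        noise.frequency = 0.3f; // low frequency = slow, wandering motion
    }
}
```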
Tutorials & References
Lastly, here are some additional materials referenced that aided in the process.