
My Time At Applied Intuition.....

Customizable Humans
Another really fun but hard problem I was tasked with at Applied was reworking our pedestrian system. By this point I had already worked on procedural traffic signs and lights, as well as a couple of VFX systems that were all highly configurable, so I was very comfortable setting up these APIs, building tooling for in-editor work, and actually designing and implementing the runtime systems.
This task was much larger than previous ones, though, so my team decided it was best to partner with an expert in digital humans who we could lean on for assets, animation, and some of the procedural work. I cannot name the team, but they had a lot of experience working with fashion companies, and because we could have a heavy hand in defining their API, we were a great fit for each other. Over the next 8 months or so I worked with their team to implement a plugin that I would eventually bring into our project, giving us access to key functions like compositing the configured human, which I would then spawn in and drive with our own animation systems.
By the end of this project the customer was able to configure a ridiculous number of parameters to define a pedestrian. This system allowed for people of different ethnicities, ages, and a spectrum of genders, as well as a layered clothing system with fully configurable materials. The sheer amount of variation this could produce was something we always strived for at Applied, as it allowed users to generate vast amounts of synthetic data!
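To give a rough sense of what a configuration like that could look like, here is a minimal sketch of a pedestrian config as a plain C++ struct. The field names are purely illustrative, not the actual API we shipped.

```cpp
// Hypothetical sketch of a pedestrian configuration -- field names are
// illustrative, not the real Applied Intuition API.
#include <string>
#include <vector>

enum class AgeGroup { Child, Teen, Adult, Elderly };

struct ClothingLayer {
    std::string mesh_id;        // e.g. "jacket_01"
    std::string material_id;    // fully configurable material per layer
    float       tint[3];        // RGB tint applied to the material
};

struct PedestrianConfig {
    std::string ethnicity_preset;        // drives skin/face blending
    AgeGroup    age_group = AgeGroup::Adult;
    float       gender_blend = 0.5f;     // 0..1 spectrum rather than a binary
    float       height_cm = 170.0f;
    std::vector<ClothingLayer> clothing; // layered, innermost first
};

// Randomizing configs like this is what makes it possible to generate
// large amounts of varied synthetic data.
```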
The animation system I made used a full-body IK rig to allow for things like updating foot placement on uneven surfaces and blending between idle/walk cycles while holding props. I also worked on a separate animation system just for pedestrians interacting with vehicles, accounting for events like entering/exiting vehicles and loading luggage. This system was actually used to train ML models and produced a significant improvement in their detection performance. Like with other systems, I was aided by an amazing ML engineer who had a strong grasp of how to use synthetic data to improve computer vision models; another shoutout to Vickram, who taught me a tremendous amount about ML models!
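The foot placement part boils down to probing the ground under each foot and feeding the offset to an IK effector. Here is a hedged sketch of that idea in Unreal-style C++; the function and its defaults are mine for illustration, not the production system.

```cpp
// Sketch of uneven-surface foot placement: trace down from each foot bone
// and return the vertical offset to apply to a full-body IK effector.
#include "CoreMinimal.h"
#include "Engine/EngineTypes.h"
#include "Engine/World.h"

// Returns the offset (in cm) that would put the foot on the ground, or 0
// if nothing was hit within the probe range.
float ComputeFootIKOffset(UWorld* World, const FVector& FootBoneLocation,
                          float ProbeUp = 50.f, float ProbeDown = 75.f)
{
    FHitResult Hit;
    const FVector Start = FootBoneLocation + FVector(0.f, 0.f, ProbeUp);
    const FVector End   = FootBoneLocation - FVector(0.f, 0.f, ProbeDown);

    if (World->LineTraceSingleByChannel(Hit, Start, End, ECC_Visibility))
    {
        // Positive when the ground is above the animated foot position
        // (stepping onto a curb), negative when it is below.
        return Hit.ImpactPoint.Z - FootBoneLocation.Z;
    }
    return 0.f;
}
```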
Tire Spray
One of my favorite problems I got to solve for a customer was the simulation of tire spray from vehicles. For a while our simulation actually had no tire spray when vehicles drove on a wet surface, and I was tasked with the body of work that would solve this. To start, I defined an API (in proto) that would allow users to override features of this particle system; the idea was to provide a default system that was realistic for both our camera and LiDAR sensors, while still letting users tune how the system reacts to their sensors. After much prototyping I ended up with two main emitters: one handled a realistic spray, and the other was more of an abstracted series of systems attached to the wheels for power users who wanted full control. The realistic spray is where I spent most of my time, since the other was mostly plumbing data into a particle system.
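Since I can't share the real proto definition, here is a rough C++ stand-in for how an override message along those lines could be shaped: a lightly tunable realistic emitter by default, plus an optional fully exposed per-wheel emitter for power users. Every field name here is an assumption.

```cpp
// Illustrative C++ mirror of a proto-style tire spray override -- the real
// message definition isn't shown here, so treat these fields as assumptions.
#include <optional>

// Realistic emitter: tuned by default for camera + LiDAR, a few knobs exposed.
struct RealisticSprayOverride {
    std::optional<float> intensity_scale;      // multiplies default emission rate
    std::optional<float> droplet_size_scale;
    std::optional<float> max_spray_distance_m;
};

// Power-user emitter: a bare particle system per wheel where almost
// everything is exposed for direct control.
struct PerWheelEmitterOverride {
    float spawn_rate = 0.f;
    float initial_velocity_mps = 0.f;
    float lifetime_s = 1.f;
};

struct TireSprayConfig {
    RealisticSprayOverride realistic;                 // default path
    std::optional<PerWheelEmitterOverride> per_wheel; // set only by power users
};
```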
 
With that time I made the system update based on the vehicle speed, ground wetness, and amount of precipitation, and spray a little extra if a wheel hit a pothole decal. This involved a mix of C++ and Blueprints to get the data from the vehicle to my Niagara system, as well as a great deal of material and particle work to make the effect look realistic. I also received a lot of help from one of my coworkers who actually wrote the physics models for our simulated sensors; he understood what a system like this would really look like to a LiDAR sensor, and because of him I learned a great deal about how light works and how we represent it through materials. A huge shoutout and thank you to Alex P!
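The C++ side of that data flow is essentially pushing a handful of values into Niagara user parameters every frame. A minimal sketch, assuming the Niagara plugin module and made-up parameter names like "User.VehicleSpeed":

```cpp
// Push vehicle and weather state into a Niagara system each tick.
// Parameter names are assumptions about how the user parameters might be laid out.
#include "CoreMinimal.h"
#include "NiagaraComponent.h"

void UpdateTireSpray(UNiagaraComponent* SprayComp, float VehicleSpeedKph,
                     float GroundWetness, float PrecipitationRate,
                     bool bHitPotholeThisFrame)
{
    if (!SprayComp) { return; }

    SprayComp->SetVariableFloat(TEXT("User.VehicleSpeed"), VehicleSpeedKph);
    SprayComp->SetVariableFloat(TEXT("User.GroundWetness"), GroundWetness);
    SprayComp->SetVariableFloat(TEXT("User.Precipitation"), PrecipitationRate);

    // A short burst on top of the steady spray when a wheel crosses a pothole decal.
    SprayComp->SetVariableFloat(TEXT("User.PotholeBoost"),
                                bHitPotholeThisFrame ? 1.f : 0.f);
}
```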
Sensor Simulation Work:
Most of my time at Applied was spent working on our sensor simulation product. It's quite a complicated product that you can read more about here.
To give you a quick overview, the content work at Applied was focused on a couple of key factors. The content must always be: accurate to our sensor models, annotated for ground truth representation in our simulator, and, just as important, it has to look amazing and perform in real time..... Sounds super easy, right?
Tilling and Harvesting
One thing I never expected to work on during my time at Applied was agriculture. But I was fortunate to have the guidance of some great technical artists to help me along the way. After some research our team identified the key operations we would need to simulate for our customers to be able to test and train their autonomous tractors. Those became the tilling and harvesting features I spent a few months building. The system was built on a render target that tracked the harvesting and tilling implements on a vehicle, which we used to draw into a texture. This meant we had a texture map corresponding to a given crop field; one of the fun parts of this problem was that we needed to support different crops, which would all be deformed in different ways. We designed the shader that deformed the crops in a way that let us track the number of "passes" the vehicle had made over a given section. This allowed us to go from fully grown wheat to stover, which is the stubble that's left behind after a harvest; then, if another vehicle went over that section with a tiller, we would register that in a different channel on the render target and use it to go from stover to tilled soil. This was a really fun system to design and work on, and by the end of it I had learned a lot about shaders and how to collaborate with artists to design performant and scalable VFX. A huge thanks to Trevor and Freek, who helped me design and debug the system, as well as Jasen for being an amazing artist to work with!
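The core trick is converting the implement's world position into the field's UV space and stamping a splat into the render target that the crop shader later reads. Here is a hedged sketch of that stamping step using Unreal's canvas drawing utilities; the field-to-UV mapping and the stamp material are assumptions for illustration, not the system as built.

```cpp
// Paint an implement's footprint into a field mask render target. The crop
// shader would later read this texture to blend wheat -> stover -> tilled soil.
#include "CoreMinimal.h"
#include "Engine/Canvas.h"
#include "Engine/TextureRenderTarget2D.h"
#include "Kismet/KismetRenderingLibrary.h"
#include "Materials/MaterialInterface.h"

// FieldOrigin/FieldSize describe the crop field's footprint in world space so
// the implement's world position can be converted into render target pixels.
void StampImplementPass(UObject* WorldContext, UTextureRenderTarget2D* FieldMask,
                        UMaterialInterface* StampMaterial, // writes one channel, e.g. R = harvested, G = tilled
                        const FVector& ImplementWorldPos,
                        const FVector2D& FieldOrigin, const FVector2D& FieldSize,
                        float StampRadiusWorld)
{
    UCanvas* Canvas = nullptr;
    FVector2D RTSize;
    FDrawToRenderTargetContext Ctx;
    UKismetRenderingLibrary::BeginDrawCanvasToRenderTarget(WorldContext, FieldMask,
                                                           Canvas, RTSize, Ctx);

    // Normalize the implement position into 0..1 field space, then scale to pixels.
    const FVector2D UV((ImplementWorldPos.X - FieldOrigin.X) / FieldSize.X,
                       (ImplementWorldPos.Y - FieldOrigin.Y) / FieldSize.Y);
    const FVector2D StampSizePx(RTSize.X * (StampRadiusWorld / FieldSize.X) * 2.f,
                                RTSize.Y * (StampRadiusWorld / FieldSize.Y) * 2.f);
    const FVector2D TopLeft(UV.X * RTSize.X - StampSizePx.X * 0.5f,
                            UV.Y * RTSize.Y - StampSizePx.Y * 0.5f);

    Canvas->K2_DrawMaterial(StampMaterial, TopLeft, StampSizePx,
                            FVector2D::ZeroVector, FVector2D::UnitVector);

    UKismetRenderingLibrary::EndDrawCanvasToRenderTarget(WorldContext, Ctx);
}
```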
Vehicle Infotainment Work:
In my fourth year at Applied I was asked to lead a pretty large project that had to be developed from scratch. This project was meant to allow for complex 3D visualizations for in-vehicle infotainment use cases using Unreal Engine; sadly I cannot share any of the imagery, but I'll happily provide references if you contact me privately. The first and primary experience was an ADAS visualization that would take in data from a self-driving vehicle and allow it to be visualized on automotive-grade hardware in real time. If you have ever driven a Tesla or any modern car with ADAS features, this concept should be familiar. Later the project expanded to also allow for visualization of internal vehicle functions for debugging the hardware or systems onboard the vehicle.
 
In both of these projects I led all the Unreal work, but thankfully I was aided by two stellar embedded engineers (thanks Deepak and Sameer) who had worked on hardware like this before. They also wrote the majority of the data relaying from a web socket server onboard the vehicle, where I would take the data into Unreal, deserialize it, and then use it to update the visualization.
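The receive path on my side looked roughly like this: a message arrives from the onboard socket, gets deserialized, and the result is handed to the visualization systems. A minimal sketch, assuming a JSON payload and made-up field names purely for illustration (the real wire format isn't described here):

```cpp
// Deserialize one perception message into a simple state struct using
// Unreal's JSON utilities. Field names ("id", "x", "y", "heading") are hypothetical.
#include "CoreMinimal.h"
#include "Dom/JsonObject.h"
#include "Serialization/JsonReader.h"
#include "Serialization/JsonSerializer.h"

struct FTrackedObjectState
{
    int32   Id = 0;
    FVector Position = FVector::ZeroVector;
    float   HeadingDeg = 0.f;
};

bool ParseTrackedObject(const FString& Payload, FTrackedObjectState& Out)
{
    TSharedPtr<FJsonObject> Json;
    const TSharedRef<TJsonReader<>> Reader = TJsonReaderFactory<>::Create(Payload);
    if (!FJsonSerializer::Deserialize(Reader, Json) || !Json.IsValid())
    {
        return false; // malformed or unexpected payload gets filtered out
    }

    Out.Id         = Json->GetIntegerField(TEXT("id"));
    Out.Position.X = Json->GetNumberField(TEXT("x"));
    Out.Position.Y = Json->GetNumberField(TEXT("y"));
    Out.HeadingDeg = Json->GetNumberField(TEXT("heading"));
    return true;
}
```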
ADAS Visualization
A majority of my effort went into the ADAS viz part of this project. That time was largely spent writing runtime systems that would be performant at spawning and updating a large number of actors in the world, representing the various vehicles and pedestrians the car could see around it. Because this was live perception data we were dealing with, I wrote a lot of filters for this data as well as simple smoothing systems, so that while driving at high speeds and in dense urban areas the visualization would look smooth and clean. I also applied similar logic to the planner, which would show users where the self-driving system planned on going, to traffic sign/light visualizations, and to an occupancy grid visualizer that would take in a 2D grid and update the world around you for surface representations.
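The smoothing itself was nothing exotic: rather than snapping each tracked actor to the latest perception sample, you interpolate toward it every frame so noisy detections read as continuous motion. A small sketch of that idea (names and interp speeds are illustrative):

```cpp
// Interpolate a tracked actor toward the latest perception sample each frame
// instead of snapping, so jittery detections look like smooth motion.
#include "CoreMinimal.h"
#include "GameFramework/Actor.h"

void SmoothTrackedActor(AActor* TrackedActor, const FVector& TargetLocation,
                        const FRotator& TargetRotation, float DeltaTime,
                        float PositionInterpSpeed = 8.f, float RotationInterpSpeed = 6.f)
{
    if (!TrackedActor) { return; }

    const FVector NewLoc = FMath::VInterpTo(TrackedActor->GetActorLocation(),
                                            TargetLocation, DeltaTime, PositionInterpSpeed);
    const FRotator NewRot = FMath::RInterpTo(TrackedActor->GetActorRotation(),
                                             TargetRotation, DeltaTime, RotationInterpSpeed);
    TrackedActor->SetActorLocationAndRotation(NewLoc, NewRot);
}
```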