
Automated Rendering


While working with Prompto, my biggest task was generating a Blender scene from an Unreal level. Although both software packages share the same abstract concepts, such as lights, materials and objects, their implementations are very different. Because of those differences in the underlying applications, I had to reverse engineer the Unreal level down to its core in order to rebuild it from scratch in Blender.

I designed and implemented this whole data flow pipeline in close collaboration with the creators of the original levels, which made it much easier to identify use cases and edge cases.

Once this data pipeline was set up, a lot of opportunities arose to build on top of what we had created. One of the ideas that came out of a whiteboard session was automated rendering (another was automated Shapespark scene generation).

Examining the problem

A key feature that Prompto offers its clients is 360° images, or equirectangular images to those more familiar with rendering. In short, it's an image that maps a full 3D environment onto a 2D image while still allowing the user to look around. I can't embed any examples in this blog post, but this is what one looks like before processing.
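To make that mapping concrete, here is a minimal sketch (my own illustration, not Prompto's code) of how a 3D view direction maps to a position in an equirectangular image, assuming longitude spans the width and latitude the height:

```python
import math

def direction_to_equirect_uv(x, y, z):
    """Map a unit 3D direction vector to (u, v) in [0, 1] on an
    equirectangular image: longitude across, latitude vertically."""
    lon = math.atan2(y, x)                    # -pi..pi around the up axis
    lat = math.asin(max(-1.0, min(1.0, z)))   # -pi/2..pi/2 up/down
    u = lon / (2.0 * math.pi) + 0.5
    v = lat / math.pi + 0.5
    return u, v
```

Under this convention, looking straight ahead along +X lands in the centre of the image, and looking straight up maps to the top edge.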

Equirectangular image found online

You can see it in action by downloading the above image and importing it into this website.

While using Unreal and the Prompto application, we could easily generate these, but the quality was rather low. Both programs rely on baked light maps rather than path or ray tracing, which is great for real-time feedback but lacks quality. That quality could be increased drastically using Blender.

Solving the problem

Inside Blender, we could take advantage of the built-in render engine, Cycles. It supports equirectangular and panoramic image output out of the box, so setting this up was quite straightforward. All I had to do was export a set of transformation matrices from Unreal to disk and read them back in Blender, either in the same script or chained together, et voilà.
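A minimal sketch of that hand-off, assuming the matrices are serialised as plain JSON (the file format and the `spots_test.json` name are my own illustration, not the actual pipeline). On the Blender side, each loaded 4x4 matrix would then be applied to the render camera:

```python
import json

def export_matrices(path, matrices):
    """Unreal side: dump a list of 4x4 transformation matrices
    (nested lists of floats) to disk."""
    with open(path, "w") as f:
        json.dump({"matrices": matrices}, f)

def load_matrices(path):
    """Blender side: read the matrices back; each one would then be
    assigned to the camera, e.g. camera.matrix_world in bpy."""
    with open(path) as f:
        return json.load(f)["matrices"]
```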

Equirectangular image generated in Blender

You can see it in action by downloading the above image and importing it into this website.

Blender offers some nice CLI tools, which made booting the program and running custom logic very streamlined. I was able to set this up to run in the background without much hassle. I did expose a terminal window to keep track of the status, which added to the usability and overall experience.
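Running Blender headless boils down to launching it with the `--background` and `--python` flags pointing at the scene and the render script. A sketch of building that invocation (the scene and script names are hypothetical):

```python
def blender_headless_cmd(blender="blender",
                         scene="level.blend",
                         script="render_360.py"):
    """Build the argument list for a headless Blender run:
    open the scene, execute the Python script, then exit."""
    return [blender, "--background", scene, "--python", script]

# The command could then be launched with e.g.
# subprocess.run(blender_headless_cmd(), check=True)
```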

In order to distribute the workload, I decided to use a single camera and animate it over time. The number of 360° spots set the frame range, and the transformation matrices were used to generate keyframes. This animated sequence was then rendered to disk and could be split up across different machines. After the render finished, I ran some logic to rename the images based on their frame number and the corresponding 360° spot location in the Unreal level.
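The renaming step can be sketched as a simple frame-to-spot mapping, assuming the renderer writes files like `render_0001.png` and frame numbers start at 1 (the naming scheme here is illustrative, not the production one):

```python
def spot_filename(frame, spot_names, prefix="render_", ext=".png"):
    """Map a rendered frame number (1-based) back to the name of the
    360 spot it was keyframed from; returns (old_name, new_name)."""
    original = f"{prefix}{frame:04d}{ext}"
    renamed = f"{spot_names[frame - 1]}{ext}"
    return original, renamed

# After the farm finishes, one os.rename(old, new) per frame
# gives every image the name of its 360 spot in the level.
```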


This all happened very modularly. A change in Unreal would ripple all the way through to the final output, without any manual labor other than pressing the initiate button.

Afterwards, I added regular 2D renders to the system as well. For these, I was unable to benefit from the method described above: the aspect ratio changes per shot and is not a parameter that can be animated, so I could not use a single camera and render a sequence. I could, however, submit an individual batch job for each frame to the farm, allowing us to still distribute the load and get faster feedback. Another added complexity for 2D render cameras was field of view and rotation. By including these parameters, I could match the framing and perspective exactly, making this a very valuable tool for the team.
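Those per-frame batch jobs can be sketched as small self-contained job descriptions, converting a horizontal field of view into a focal length for a given sensor width using the standard pinhole relation (the job fields and farm submission itself are hypothetical):

```python
import math

def fov_to_lens(fov_deg, sensor_width=36.0):
    """Convert a horizontal field of view in degrees to a focal
    length in millimetres for the given sensor width in mm."""
    return (sensor_width / 2.0) / math.tan(math.radians(fov_deg) / 2.0)

def make_job(frame, width, height, fov_deg):
    """One batch job per frame, carrying its own resolution and lens,
    since aspect ratio cannot be animated on a single camera."""
    return {
        "frame": frame,
        "resolution": (width, height),
        "lens_mm": fov_to_lens(fov_deg),
    }
```

A 90° field of view on a 36 mm sensor, for instance, corresponds to an 18 mm lens.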

If you would like to know more about this project, please contact me and we can talk about it in more detail.
