The ColorPickRenderer: Building a Next-Generation 3D Renderer on Google Cloud Platform, Part 4


This post is the fourth and final in a series on our new rendering engine. The first post introduced the project, the second described our strategy for 3D storage, and the third outlined design for interactive rendering.

Now that we’ve gotten this far, we can start writing code.

Despite the ambitious name, ColorPickRenderer is not a fully featured production renderer with raytracing, subsurface scattering, volumetrics, deep compositing, motion blur, and every other bell and whistle you’d expect (to say nothing of caustics). It is a focused component within the Rendering Toolchain tasked with generating color pick data for all primitives in a scene. This data is used throughout the rest of the Toolchain to identify individual primitives, as well as their specific instances and variations, within a single image.

Since color picking can be done at different times and resolutions during rendering, it is important to design the algorithm in such a way that it can cope with all these different workflows.

This article discusses the design of this algorithm, as well as how we use it for specific workflows in our Next-Generation renderer at Google.

In this part of the series, we will talk about how we designed the frontend of our renderer. In particular, we’ll discuss how we built an architecture that leverages the new WebAssembly and WASI primitives to let us write different parts of the renderer in different languages.

The importance of an architecture that supports multiple languages

As we mentioned in Part 1, one of the biggest challenges of building a new rendering engine is the large surface area of code you have to write. This code can range from a few hundred lines to a few million lines. To put this in perspective, most professional game engines are around 10 million lines of C++ code but are built by teams with hundreds of developers.

For ColorPickRenderer, our team had seven developers with varying levels of experience in graphics programming and coding competitions. Because we had so few developers and such a limited amount of time, it was important for us to choose an architecture that allowed different parts of our renderer to be written in different languages.

Having this flexibility allowed us to assign tasks based on each developer’s strengths. For example, the frontend was written in Rust because one of our team members had extensive experience with Rust programming. The backend was written in C.
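As a minimal sketch of what such a language boundary can look like (the function name and signature here are hypothetical, not the actual ColorPickRenderer API): a backend routine that sticks to a plain C ABI and scalar types can be linked against a Rust frontend through an `extern "C"` declaration, or compiled unchanged to a WebAssembly/WASI target with a toolchain such as wasi-sdk.

```c
#include <stdint.h>

/* Hypothetical backend routine: hand out unique 16-bit pick ids for
 * scene objects. Because the interface uses only scalar types and a
 * plain C ABI, the same code is callable from a Rust frontend and
 * compiles unchanged for a WASI target. */
static uint16_t next_pick_id = 1; /* 0 is reserved for "no object" */

uint16_t cpr_alloc_pick_id(void)
{
    return next_pick_id++;
}
```

Keeping the boundary this narrow is what makes the mixed-language setup cheap: neither side needs to know anything about the other’s runtime.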

The Color Pick Renderer generates color pick buffers for the SceneViewer and Viewer apps, which use them for input handling and event dispatch. It produces an RGBA buffer that maps an object id to each pixel in the buffer.

The following diagram shows the relationship between a color pick buffer and a rendered scene:

Color Pick Buffer Diagram

To implement a color pick buffer, we create a unique 16-bit integer for each object in the scene. We then write each object’s integer into the gl_FragData[0] output as shown below:

Outputting Color Pick Data

We then pack each 16-bit value into 2 bytes by assigning the upper 8 bits to gl_FragData[0].a and the lower 8 bits to gl_FragData[0].b. We can then use these two bytes to reconstruct the original 16-bit value. To achieve this packing, we use a custom shader function similar to the following:

Packing Color Data
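The shader itself is shown in the figure above; the same packing math, expressed on the CPU side in C for clarity (illustrative names, and the real shader performs the equivalent arithmetic on normalized float channels), looks like this:

```c
#include <stdint.h>

/* Split a 16-bit pick id into the two bytes the shader writes:
 * the upper 8 bits go to the alpha channel, the lower 8 bits to
 * the blue channel. */
void pack_pick_id(uint16_t id, uint8_t *a, uint8_t *b)
{
    *a = (uint8_t)(id >> 8);    /* upper 8 bits -> .a */
    *b = (uint8_t)(id & 0xFFu); /* lower 8 bits -> .b */
}

/* Recombine the two channel bytes into the original 16-bit id. */
uint16_t unpack_pick_id(uint8_t a, uint8_t b)
{
    return (uint16_t)(((uint16_t)a << 8) | b);
}
```

The round trip is exact because each byte carries a disjoint half of the id, so no precision is lost in an 8-bit-per-channel buffer.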

Once these fragments have been written out, the resulting buffer is read back and used throughout the rest of the Rendering Toolchain to identify primitives.
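Consuming the read-back buffer can be sketched as follows (a sketch only: it assumes a tightly packed RGBA8 buffer, and the helper name is hypothetical rather than the real API). Per the packing scheme above, the upper 8 bits of the id live in the alpha byte and the lower 8 bits in the blue byte:

```c
#include <stdint.h>
#include <stddef.h>

/* Look up the object id under a pixel of a read-back RGBA8 color
 * pick buffer. Assumes tightly packed rows (4 bytes per pixel,
 * R,G,B,A order). Returns 0 where no object was drawn. */
uint16_t pick_object_at(const uint8_t *rgba, size_t width,
                        size_t x, size_t y)
{
    const uint8_t *px = rgba + (y * width + x) * 4;
    uint8_t lo = px[2]; /* blue:  lower 8 bits of the id */
    uint8_t hi = px[3]; /* alpha: upper 8 bits of the id */
    return (uint16_t)(((uint16_t)hi << 8) | lo);
}
```

A viewer app would call something like this with the cursor position to decide which object a click or hover event belongs to.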




