Maya's render passes are designed to allow you to create the images you need for a composite, without the overhead of using render layers. However, their implementation has raised some concerns among artists who like to use mental ray materials, because Maya's default passes offer very limited support for these materials.
While render layers allow you to see in the viewport which objects will contribute to a given pass, you incur the overhead of having to translate the objects in the render layer before mental ray can render them.
Instead, you can set up a single render layer and extract as many render passes as you like, from a single render. Before I tell you about the writeToColorBuffer node, let's take a quick look at how to set up a render pass.
In my scene, I have a simple hierarchy of a car and some wheels, rearview mirrors, headlights and so on, plus a directional light and an IBL node. The materials are various Maya and mental ray materials that I have assigned per face.
With a little ambient occlusion, the scene renders in 1 minute 21 seconds at 640 x 480.
Download the scene here (you'll need an HDRI).
1) Set up an ambient occlusion render pass:
Let's say we want to extract the ambient occlusion to a separate image.
Open the Render Settings and from the Passes tab, create a new render pass called Ambient Occlusion.
Below the area Scene Passes, click on the button labeled Associate selected render passes with current render layer.
This drops your render pass down into the area called Associated Passes.
BTW, double-click on the AO render pass to bring up the pass's attributes in the AE - note that by default the pass is set to output 3 channels; change this to 4 if you require an alpha channel.
Render the scene again. It takes a little longer to complete (1 minute 46 seconds), but you now have an ambient occlusion pass as well as your beauty pass.
2) Create a custom render pass:
Back in the Render Settings > Passes tab, create a Custom Color pass, and associate it with the current render layer. Remember to set the number of Channels to 4 if you require an alpha.
In the Hypershade, from the menu Create mental ray nodes, go to the section at the bottom, Miscellaneous, and add a writeToColorBuffer node to your scene.
For the writeToColorBuffer node to evaluate, we need a couple of inputs.
Make the following connections to your writeToColorBuffer node:
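If you prefer to script the setup, the connections can be made in MEL. This is only a sketch based on my scene: the node names (ramp1, customColorPass1, mia_material_x1) and the long names of the render pass and trigger plugs are assumptions, so verify the exact plug names in the Connection Editor before relying on them.

```mel
// create the node (requires the mental ray plugin to be loaded)
createNode writeToColorBuffer -name "writeToColorBuffer1";

// 1. the data to write into the buffer, e.g. a ramp texture
connectAttr ramp1.outColor writeToColorBuffer1.color;

// 2. the Custom Color pass created in the Passes tab
// (plug name assumed -- in the AE this is the Render Pass dropdown)
connectAttr customColorPass1.message writeToColorBuffer1.renderPass;

// 3. the material whose shading group triggers the evaluation
// (plug name assumed -- check it in the Connection Editor)
connectAttr mia_material_x1.message writeToColorBuffer1.evaluationTrigger;
```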
Render the scene again. Now we have an extra render pass, and the scene took only marginally longer to render: 1 minute 56 seconds.
3) Output a mia_material_x attribute:
Duplicate the green car body shader, and change the colour to blue.
Connect the result attribute of this material to the writeToColorBuffer1.color attribute (previously connected to the ramp.outColor).
Render, and check the customColor pass.
Notice that the writeToColorBuffer is applying the input to the shading group of the evaluation trigger material.
Anyway, there are loads of attributes you can render this way from a mia_material_x; choose pretty much any attribute listed in the Connection Editor below the result attribute. All you have to do is connect them to the writeToColorBuffer.color attribute. Try refl_result, or spec_raw, or refr_raw.
Let's say we want to extract data from many attributes of the car body material. You would create a Custom Color and an associated writeToColorBuffer node for each attribute. You would then connect the material as the trigger to each writeToColorBuffer node and a different attribute to the node's color.
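As a sketch, the repetitive part of that setup can be looped in MEL. Again, the node and plug names (mia_material_x1, the evaluationTrigger plug) are assumptions from my scene, and each new node still needs its own Custom Color pass connected:

```mel
// one writeToColorBuffer per attribute, all triggered by the same material
string $attrs[] = {"result", "refl_result", "spec_raw"};
for ($attr in $attrs)
{
    string $wtcb = `createNode writeToColorBuffer`;
    // a different mia_material_x output into each node's color input
    connectAttr ("mia_material_x1." + $attr) ($wtcb + ".color");
    // the same material acts as the evaluation trigger for each node
    // (plug name assumed -- verify in the Connection Editor)
    connectAttr "mia_material_x1.message" ($wtcb + ".evaluationTrigger");
    // remember: each node also needs its own Custom Color pass connected
}
```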
note: for the remainder of the tutorial, I've disassociated the AO render pass.
Here are three separate writeToColorBuffer nodes, each taking a separate input from the mia_material_x, which is also the trigger.
Render. The image completes, with three passes, in a time of 1 minute 24 seconds.
note: I've modified the colour values on the refl_result, otherwise it would be hard to see.
You've already begun to see that including a writeToColorBuffer node is having an impact on your render times. It doesn't take a mathematician to work out that if you have many of these nodes in a complex scene, your render times are going to rocket.
Note: in Maya 2011, the writeToColorBuffer node has been rewritten and does not increase the render times as in previous versions of Maya.
There is another way of creating your shader tree to include multiple writeToColorBuffer nodes that will mitigate the expensive render times somewhat.
6) Pass through evaluation:
You might say that 1 minute and 24 seconds is not so bad, but we have seen examples where adding extra writeToColorBuffer nodes can have a dramatic impact on render times.
If you find that your render times are acceptable using the methods I've outlined so far, then all is good !
If, on the other hand, you decide to throw loads of these puppies into your scene and are horrified by the render times, then read on while we take a peek at the other evaluation option : passThrough evaluation.
By default a writeToColorBuffer node's evaluation is set to Always. As a rule of thumb, the more you have of these nodes connected to the same material, the slower your scene will render.
To speed things up a little, let's take the same nodes and put them in a daisy-chain, such that each writeToColorBuffer node is causing the previous node in the chain to evaluate.
Take the three writeToColorBuffer nodes you have currently triggered by the same mia_material_x and make the following connections:
Set the Evaluation Mode attribute on each of them to Pass Through Only.
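If you want to flip all of them at once, something like the following MEL should do it. I'm assuming the attribute's long name is evaluationMode and that Pass Through Only is enum value 1 - neither is guaranteed, so verify both in the Attribute Editor before using this:

```mel
// switch every writeToColorBuffer node in the scene to Pass Through Only
// (attribute name and enum index assumed -- verify in the Attribute Editor)
string $nodes[] = `ls -type writeToColorBuffer`;
for ($n in $nodes)
    setAttr ($n + ".evaluationMode") 1;
```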
Don't render just yet, because you won't see anything. That's because mental ray won't evaluate any of the writeToColorBuffer nodes.
The pass through evaluation method expects a material node at the very end of the daisy-chain - this is by design.
When you set the evaluation to Pass Through Only, you are 'pulling' information through the shading network, such that an evaluation of writeToColorBuffer3 triggers the evaluation of writeToColorBuffer2, which in turn triggers the evaluation of writeToColorBuffer1. Unfortunately, nothing is triggering the last node in our network, so we get nada from our custom buffers.
Create a Maya Surface Shader.
Connect the last writeToColorBuffer to the Surface Shader.
Now, when the surface shader is evaluated, it will cause a ripple effect down the chain and trigger the evaluation of all the writeToColorBuffer nodes.
The trick to get the surface shader to evaluate is to assign it to the objects that you want rendered into your custom passes, i.e. the contents of the mia_material_x shading group.
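Those last few steps can be sketched in MEL like this. The node names (writeToColorBuffer3, mia_material_x1SG) and the assumption that the writeToColorBuffer's output plug is outColor come from my scene, so double-check them in the Connection Editor:

```mel
// terminate the daisy-chain with a surface shader and its shading group
string $ss = `shadingNode -asShader surfaceShader`;
string $sg = `sets -renderable true -noSurfaceShader true -empty -name ($ss + "SG")`;
connectAttr ($ss + ".outColor") ($sg + ".surfaceShader");

// the last node in the chain 'pulls' the others when the shader evaluates
// (output plug name assumed -- verify in the Connection Editor)
connectAttr writeToColorBuffer3.outColor ($ss + ".outColor");

// assign the surface shader to the contents of the original shading group
select `sets -query mia_material_x1SG`;
sets -e -forceElement $sg;
```

Note that doing the assignment directly, as in the last two lines, replaces the original material everywhere; that's why I prefer the render layer override approach.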
I like to do the following:
I know what you're thinking: assigning the surface shader to my car body has made it go all black. I don't want my car to render with the surface shader colour; plus, all I can see in the viewport is black.
In order to trigger the evaluation of the writeToColorBuffer nodes, we must have a material node at the end of the daisy-chain to 'pull' the data through. Because we have to assign this material to the objects/faces we want included in our render passes, I use a render layer override so I can always flip back to the masterLayer to see the car body with the original material.
Render. The three passes come out in 1 minute 20 seconds, and your beauty layer is as expected.
For more information on how to use writeToColorBuffer nodes with textures, watch the video podcasts on Cory Mogk's Mayalicious blog:
You can include writeToColorBuffer nodes to pass data from a texture or a material to a custom colour buffer.
By default these writeToColorBuffer nodes Always evaluate. They apply the incoming colour information to the contents of the shading group which is connected to the trigger material.
However, if you have many of these nodes feeding off a single material, you will get better performance by daisy-chaining them together and 'pulling' the data through, by setting the evaluation to Pass Through Only.
The downside, if you want to sample data from a material (rather than a texture), is that you must also have a material at the end of the chain, and this must be assigned to the objects/faces you want to render.
Remember, render passes are not a substitute for render layers, and you may choose to use a combination of both to get optimal results.
Like I said, Autodesk is aware of the increased render times caused by having many writeToColorBuffer nodes in a scene.
You can download my scene files here (you'll need an HDRI).
Phew, that was a long one... I hope this was helpful.
Thanks to Ash and Annick for their assistance.