This post covers the most basic steps needed for rendering with V-Ray Next for Houdini.
Note on software versions: At the time of writing, V-Ray for Houdini supports Houdini version 18.0.460. I naively thought it would also work with a later build and tried to install it on Houdini 18.0.499, thinking to myself "what can a couple of extra numbers do..", but I was wrong: it crashed. So at the moment it has to be Houdini 18.0.460. When getting started with this, take a moment to check exactly which Houdini build the V-Ray installer is built for, and install that specific version of Houdini. * It's easy, the V-Ray installation package's name states the version: "vray_adv_43003_houdini18.0.460.exe". Full installation instructions are in the V-Ray for Houdini documentation: https://docs.chaosgroup.com/display/VRAYHOUDINI/Setup+and+Installation
Adding the V-Ray tool shelf to the Houdini UI: Click the "+" button at the right of the available shelves, and from the list, select V-Ray. * This only has to be done once.
Scene preparation note: Surface objects have to be of type Polygon, Polygon Mesh or Polygon Soup for V-Ray rendering:
Setting up V-Ray rendering: There are 3 ways to set up V-Ray as a render output option for your scene (a Python alternative is sketched after the list):
In the out network, add a V-Ray > V-Ray Renderer node.
In the main menu, Select Render > Create Render Node > V-Ray.
In the V-Ray shelf, click the Show VFB button. This will open the V-Ray VFB (render window), and create both V-Ray Renderer and V-Ray IPR nodes in the out network.
* A V-Ray IPR node is needed for interactive rendering both in the Houdini view-port Region Render and in the V-Ray VFB.
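If you prefer scripting the setup, the same out-network nodes can also be created with Houdini's Python module (hou). This is only a minimal sketch, and the V-Ray node type names used below are assumptions — verify them against the Tab menu (or hou.ropNodeTypeCategory().nodeTypes()) in your own installation:

# Minimal sketch (run in a Houdini Python shell); the node type names
# "vray_renderer" and "vray_ipr" are assumptions -- check your install.
import hou

out = hou.node("/out")
vray_rop = out.createNode("vray_renderer", "vray_render")  # V-Ray Renderer ROP
vray_ipr = out.createNode("vray_ipr", "vray_ipr1")         # IPR node for interactive rendering
out.layoutChildren()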
Creating a camera: You guessed it.. 3 ways to create a camera:
Open the camera drop-down menu found at the top right of the view-port, and select New Camera. A new Camera node will be created and the view-port will be set to display the new camera view.
In the Lights and Cameras shelf, press the Camera button, and click inside the 3D view-port to create a new Camera node.
Create a Camera node directly in the obj network by right clicking and selecting Render > Camera.
Note that the rendered image resolution is set in the Camera node’s View properties:
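The camera creation and resolution settings can also be scripted with the hou module; here is a minimal sketch (the node and parameter names are standard Houdini ones, the values are just examples):

# Minimal sketch: create a camera in /obj and set its render resolution.
import hou

cam = hou.node("/obj").createNode("cam", "render_cam")
cam.parmTuple("res").set((1920, 1080))   # resolution (resx / resy) on the View tab

# Look through the new camera in the current scene viewer.
viewer = hou.ui.paneTabOfType(hou.paneTabType.SceneViewer)
if viewer:
    viewer.curViewport().setCamera(cam)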
Adding V-Ray Physical Camera properties to the Camera: With the Camera node selected, press the Physical Camera button in the V-Ray shelf. This will add a new V-Ray tab to the Camera node's properties, containing the V-Ray Physical Camera properties. Note that the Physical Camera exposure settings are set up by default for physical sunlight illumination levels (EV 14), so in many cases, after adding the Physical Camera properties, your scene will render darker unless these settings are tuned.
Adding light sources: To add light sources, in the V-Ray shelf, press the wanted light source button, click the 3D view-port to create the light node, transform it to the wanted location/orientation, and set its settings:
* If no light sources are added, the image will be rendered using default lighting.
Setting up V-Ray materials: In the mat network, right click and select V-Ray > Material > V-Ray Material to create a V-Ray Material node:
Select the V-Ray Material node, name it, and set its material settings:
In the obj network, double-click the wanted geometry object to enter its SOP network, and inside it, create a new Material node:
Connect the sphere primitive SOP node's output to the new Material node's input, and make sure it is displayed by clicking the right-most node button (the display flag) so it's highlighted in blue.
In the Material node's properties, open the Floating Operator Chooser next to the Material property to select a material for the surface, and in the hierarchical display, expand the mat network and select the wanted V-Ray Material:
Now that a material has been set and the Material node is displayed, the object is rendered with the selected material:
Rendering an image: There are 3 ways to render an image (a Python alternative is sketched after the list):
In the main menu, select Render > Render > vray
In the out network, click the V-Ray renderer node’s Render button (on its right), to open the Render dialog, and in the dialog press Render.
In the V-Ray shelf, press the Show VFB button to open the VFB (V-Ray's render window), and there, press the Teapot button at the top right to render the image.
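A render can also be launched from Python by calling the ROP node's render() method; a minimal sketch, assuming the V-Ray Renderer node created earlier lives at /out/vray_render (adjust the path to your own node):

# Minimal sketch: render through the V-Ray ROP from Python.
import hou

rop = hou.node("/out/vray_render")     # path to your V-Ray Renderer node
rop.render()                           # render with the node's current settings
# rop.render(frame_range=(1, 24))      # or render an animation range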
This post is a summary of the tips given by Epic Games technical artist Min Oh in his GDC 2017 lecture about improving photo-realism in product visualization, more specifically, how to render high-quality surfaces.
I recommend watching the full lecture:
Render sharper reflections by increasing the Cubemap resolution of reflection captures: Project Settings > Engine > Rendering > Reflection > Reflection Capture Resolution
* Use power-of-2 values, e.g. 256, 512, 1024…
Improve the accuracy of environment lighting by increasing the Cubemap resolution of the Skylight:
* Use power-of-2 values, e.g. 256, 512, 1024…
Improve the accuracy of screen-space effects like screen-space reflections by setting the engine to compute high-precision normals for the GBuffer:
Set Project Settings > Engine > Rendering > Optimizations > GBuffer Format to: High Precision Normals
Use a high degree of tessellation (subdivision) for the models before import.
Simply put: use high-quality models.
Improve the surface's tangent-space accuracy, and as a result also the shading/reflection accuracy, by setting the model's static mesh components to encode a high-precision tangent basis: Static Mesh Editor > Details > LOD 0 > Build Settings > Use High Precision Tangent Basis
Create materials with rich dual specular layers by enabling a separate clear coat normal input: Project Settings > Engine > Rendering > Materials > Clear Coat Enable Second Normal. Set the material's Shading Model to Clear Coat and use a ClearCoatBottomNormal input node to add a normal map for the underlying layer:
The Cycles render engine in Blender has a very convenient OSL Shader development and usage workflow.
Shaders can either be loaded from external files or written and compiled directly inside Blender.
Before you begin:
Make sure your Blender scene is set to use the Cycles render engine, in CPU rendering mode, and also check the option Open Shading Language:
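The same scene preparation can be done from Blender's Python console; a minimal sketch:

# Minimal sketch: set Cycles, CPU rendering, and enable Open Shading Language.
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'       # use the Cycles render engine
scene.cycles.device = 'CPU'          # OSL is only supported in CPU mode
scene.cycles.shading_system = True   # the Open Shading Language checkbox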
To write an OSL shader in Blender:
Write your shader code in Blender‘s Text Editor:
In your object’s material shader graph (Shader Editor view),
Create a Script node:
Set the Script node‘s mode to Internal,
And select your shader’s text from the Script node‘s source drop-down:
If the shader compiles successfully, the Script node will display its input and output parameters, and you can connect its output to an appropriate input in your shading graph.
* If your shader is a material (color closure), connect it directly to the Material Output node's Surface input; if it's a volume, to the Volume input; or if it's a texture, to other material inputs as needed.
If the shader code contains errors, it will fail to compile, and you'll be able to read the error messages in Blender's System Console window:
After fixing errors or updating the shader's code, press the Script Node Update button on the Script node to re-compile the shader:
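For reference, the internal-mode Script node setup described above can also be built with Python; a minimal sketch, where "MyMaterial" and "my_shader.osl" are just example names:

# Minimal sketch: add an OSL Script node in Internal mode to an existing material.
import bpy

mat = bpy.data.materials["MyMaterial"]                  # example material name
mat.use_nodes = True
script_node = mat.node_tree.nodes.new("ShaderNodeScript")
script_node.mode = 'INTERNAL'                           # compile from a text datablock
script_node.script = bpy.data.texts["my_shader.osl"]    # selecting the text compiles it
# The Script Node Update button corresponds to the bpy.ops.node.shader_script_update()
# operator (it needs a node-editor context to run).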
Loading an external OSL shader into Cycles:
Exactly the same workflow described in the previous section, except setting the Script node‘s mode to External and either typing a path to the shader file in the Script node or pressing the little folder button to locate it using the file browser:
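The external variant, scripted the same way (the .osl path is just an example):

# Minimal sketch: an OSL Script node in External mode pointing at a file on disk.
import bpy

mat = bpy.data.materials["MyMaterial"]              # example material name
script_node = mat.node_tree.nodes.new("ShaderNodeScript")
script_node.mode = 'EXTERNAL'
script_node.filepath = "//shaders/my_shader.osl"    # '//' = relative to the .blend file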
When installing a render/processing software in a different path than the default, you may have to configure its Deadline Plugin executable path so that Deadline will be able to find it.
To setup the executable path for a Deadline Plugin:
Make sure you are using the Deadline Monitor in Super User Mode:
Go to Tools > Configure Plugins:
In the plugin list on the left, select the plugin you want to configure,
And in the Plugin Executable text box (in this case Blender), type in a new path, or click the browse button to set the path.
* Multiple alternative paths can be set, separated by lines.
In theory, all clear* refractive surfaces should have their shadow calculated using a refractive-caustics calculation in order to render the refractive lensing** effect correctly, have their transparency color calculated as volumetric absorption of light through the medium in order to render the color correctly for areas of different thickness, and have not only external reflections but also internal reflections calculated, in order to render the interaction between light and the transparent body correctly.
However, for thin surfaces of even thickness, like window glazing and car windshields, these optical effects can be rendered with much cheaper (non-physical) methods, with very little compromise on final image quality or look, and in most cases an even easier setup.
For this reason most popular render engines have object (mesh) and material (shader) parameters that allow configuration of the way these transparency effects will be rendered.
In this short article we’ll cover the different methods for rendering transparency effects, the reasoning behind them and the way to configure these settings in different render-engines.
In the comparison images below (rendered with Cycles), the images on the left were rendered with physically correct glass settings, 8192 samples + denoising,
And the images on the right were rendered with “flat” transparency settings and 1024 samples + denoising.
> See the shader settings below
Note that while for the monkey statue, the fast flat transparency settings produce an unrealistic result, the window glazing model loses very little of its look with the flat fast settings:
Lensing, caustics and transparent shadows:
It's a common intuitive mistake to think that transparent objects don't cast shadows; they actually do. They don't block light, they change its direction. Light is refracted through them and gets focused in some areas of their surroundings (caustics), but it can't pass through them directly, so a shadow is created.
A good example of this is a glass ball acting like a lens, focusing the light into a tiny area, and otherwise having a regular elliptical shadow. If we tell the render-engine to just let direct light pass through the object, we won't get a correct, realistic result, even if the light gets colored by the object's transparency color.
There is however one case where letting the direct light simply pass through the object can both look correct and save a lot of calculations, and that is when the object is a thin surface with consistent thickness like window glazing.
So in many popular render-engines, when rendering an irregular thick solid transparent body like a glass statue or a glass filled with liquid, we have to counter-intuitively set the object or material to be opaque for direct light, and let the indirect refracted light (caustics) create the correct lensing effect (focused light patterns in the shadow area). > Physically, light passing through a material medium is always refracted, i.e. indirect light. But for thin surfaces with even thickness, like glazing, the lensing effect is insignificant, and can be completely disregarded by letting light pass directly through the object and be rendered as a 'transparent shadow'.
So the general rule regarding calculating caustics (lensing) vs. casting transparent shadows (non-physical) is: if the transparent object is a solid, irregular shape with varying thickness, like a statue or a bottle of liquid, it should be rendered as opaque for direct light but with fully calculated caustics, i.e. refracted indirect light.
Transparency color:
Physically, the color of transparency*** is always created by volumetric absorption of light traveling within the material medium. As light travels further through a material, more and more of its energy gets absorbed in the medium**** (converted to heat); therefore the thicker the object, the less light will reach its other side, and the darker it will appear. This volumetric absorption of light isn't consistent for all wavelengths (colors) of light, so the object appears to have a color.
For example, common glass absorbs red and blue light at a higher rate than green light, and therefore objects seen through it will appear greenish. When we look at the thin side of a common glazing surface we see a darker green color, because we see light that has traveled through more glass (through a thicker volume of glass), refraction having bent the light along the length of the surface. Tea in a glass generally looks dark orange-brown, but if spilled on the floor it will 'lose' its color and look clear like water, because it's then too thin to absorb a significant amount of light and appear to have a color.
Most render engines allow setting the transparency (“refraction”/”transmission”) color of the material both as a ‘flat’ non physical filter color, and as a physical RGB light absorption rate (sometimes referred to as ‘fog’ color), that can in some cases be more accurately tuned by additional multiplier or depth parameters.
Setting an object’s transparency color using physical absorption (fog) usually requires more tweaking because in this method, the final rendered color is dependent not only on the color we set at the material/shader, but also on the model’s actual real world thickness.*****
In general, the transparency color of thick, solid, irregularly shaped objects (with varying thickness) must be set as a physical absorption rate color, and not as a simple filter color, otherwise the resulting color will not be affected by the material thickness, and look wrong.
For thin surfaces with consistent thickness, like window glazing, however, it's more efficient to set up the transparency color as a 'flat' filter color, because it's more convenient and predictable to set up, and produces a correct-looking result.
For example, if we need to render an architectural glazing surface that will filter exactly 50 percent of the light passing through it, it's much simpler to set it up using a simple 50% grey transparency filter color, because this method disregards the glass model's thickness. This approach isn't physical, but for an evenly thick glazing surface, the result has no apparent difference from a physical volumetric-absorption approach to the same task.
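To make the thickness dependence concrete, here is a small generic Python sketch of the Beer-Lambert relation that 'fog'/absorption parameters are based on (this is plain math, not any particular renderer's API): an absorption rate is derived from a target filter color at a reference thickness, and the transmitted color then changes with the distance light actually travels through the glass.

import math

def absorption_rate(target_filter, reference_thickness):
    # Per-channel absorption rate that reproduces target_filter after light
    # travels reference_thickness units through the medium (T = exp(-sigma*d)).
    return [-math.log(max(t, 1e-6)) / reference_thickness for t in target_filter]

def transmitted(sigma, thickness):
    # Fraction of light per channel surviving a path of the given thickness.
    return [math.exp(-s * thickness) for s in sigma]

# A 50% grey filter, calibrated for 10 mm of glass:
sigma = absorption_rate([0.5, 0.5, 0.5], reference_thickness=10.0)
print(transmitted(sigma, 10.0))   # ~0.5 per channel: matches the filter color
print(transmitted(sigma, 40.0))   # ~0.0625: much darker where the glass is thicker
print(transmitted(sigma, 1.0))    # ~0.93: nearly clear where the glass is thin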
Internal reflections:
It's not intuitive that the boundary with the air itself reflects light when seen from inside a transparent material volume like water or glass.
Viewed from under water, the air surface above acts like a mirror at certain angles, reflecting objects that are under water. A glass ball lit by a lamp has a very distinct highlight, which is the reflected image of the light source itself (specular reflection), but it also has an internal highlight appearing on the inside, where the glass volume meets the air. We can easily 'miss' this internal highlight because in many cases its appearance converges with the bright focused light behind the ball caused by lensing (refractive caustics). The distinctly shiny appearance of diamonds, for example, is very much dependent on bright internal reflections; diamond cutting patterns are specifically designed to reflect a large percentage of light back to the viewer and look shiny. If we wish to create a realistic rendering of diamonds, we will not only have to set up the correct refractive index for the material, but also model the geometric shape of the diamond correctly, and of course set the material to render both external ("regular"******) reflections and internal reflections.
You're probably already guessing what I'm about to say next..
For thin surfaces with even thickness, the internal reflection is barely noticeable because it converges with the main surface reflection, and for this reason, when rendering window glazing, car windshields, and the like, we can usually turn the internal-reflections calculation off to save render time.
Render Settings:
Simplified settings summary table:
              Flat (Glazing)       Physical (irregular volume)
Shadow        Transparent          Caustics
Color         Filter               Volumetric Absorption
Reflections   External only        External and Internal
Example Cycles (Blender) shaders: > The flat glazing shader is actually more complex to define, since it involves defining different types of calculations for different types of rays being traced (cheating).
In general, for Shadow and Diffuse rays the shader is calculated as a simple Transparent shader and not a Refraction shader, and when back-facing, the shader is calculated as pure-white transparent instead of glossy, to remove the internal reflections. > While the flat glazing shader is only connected to the Surface input of the Material Output node, the physical glass shader also has a Volume Absorption node connected to the Volume input of the Material Output node. > Note that a simple Principled BSDF material will have flat transparency and physical shadows (caustics) by default.
> For caustics to be calculated, the Refractive Caustics option has to be enabled in the Light Path > Caustics settings in the Cycles render settings.
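For reference, the same Cycles caustics options can be set from Python; a minimal sketch:

# Minimal sketch: enable caustics in the Cycles Light Paths settings.
import bpy

cycles = bpy.context.scene.cycles
cycles.caustics_refractive = True   # Light Paths > Caustics > Refractive
cycles.caustics_reflective = True   # optional: reflective caustics as well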
Example V-Ray Next for 3ds max material settings:
> In V-Ray for 3ds max (and Maya) the Affect Shadows parameter in the VRayMtl Refraction settings determines whether the shadows will be fake transparent shadows suitable for glazing (on) or opaque (off), which is the setting suitable for caustics. > The caustics calculation is either GI Caustics, which is activated by default in the main GI settings, or a dedicated Caustics calculation that can also be activated in the GI settings. > For flat glazing the color is defined as the Refraction Color, and for physical glass the Refraction Color is pure white and the glass color is set as the Fog Color.
Example Arnold for Maya settings: > In Arnold 5 for Maya, the Opaque setting in the shape node's Arnold attributes must be unchecked for transparent shadows, and checked for opaque shadows suitable for caustics. > For rendering refractive caustics in Arnold for Maya, more settings are needed. > When the Transmission Depth attribute is set to 0, the Transmission Color will be rendered as a flat filter color, and when the Transmission Depth attribute is higher than 0, the transparency color will be calculated as volumetric absorption, reaching the Transmission Color at the specified depth.
General notes:
> In brute-force path tracers like Cycles and Arnold, the caustics calculation is actually a Diffuse indirect light path. This seems unintuitive, but the light pattern appearing on a table surface in the shadow of a transparent glass is actually part of the table surface's diffuse reflection phenomenon.
> What we refer to as 'Diffuse Color' in dielectrics (non-metals) is actually a simplification of absorption of light scattered inside the object volume (SSS).
* Optically, all dielectric materials (non-metals) are refractive, but not all of them are also clear; that is, most of them actually have micro particles or structures within their volume that scatter and absorb light traveling through them, creating the effects we usually refer to as "subsurface scattering" (SSS) and, at higher densities, "diffuse reflection".
** Lensing is a term used to describe the effect of a material medium bending light, focusing and dispersing it, and so acting as a lens.
*** Actually all color in dielectric (non metallic) materials is created by Volumetric Absorption.
**** Light isn't only absorbed as it travels through a medium, it's also scattered.
***** Volumetric shading effects usually use the model's original scale (the true mesh scale), so to avoid unexpected results it's best that the object's transform scale be 1.0 (or 100%, depending on the program's notation).
The VRayFur is grown on a beveled surface that has no bottom-side surface, both to avoid growing fur at the bottom and because it's unneeded.
The surface is beveled at the edges so that the fur there will grow to the sides,
And a noise modifier is applied to the surface to break its uniformity and give it a more organic shape.
* Alternatively, you could have a bottom surface and set the fur not to grow on the bottom polys.
A combination of 3 procedural Noise maps (one for each of the RGB channels) is used to create a direction map for the fur threads. The maps are added together using a VRayCompTex map.
The reason the pattern is separated into its RGB channels is that it allows more control.
A VRayFur direction map works like a normal map in tangent space, which means we can't have the blue channel go below a value of 0.5, because that would cause the fur to grow down into the surface.
For the fur material, a VRayFastSSS2 is used to achieve a ‘fluffy’ organic look combined with a VRayDirt map to accentuate the shadows between the fur threads.
In order for objects in 3ds max to be rendered as volumes with Arnold, the object mesh has to be converted to a volume, and a Standard Volume material assigned to the object:
Add an Arnold Properties modifier to the object.
Under Volume set the Step Size to a value higher than 0.0.
Assign a Standard Volume material to the object and set its parameters to design the volumetric effect:
Examples:
* Note that both Density and Depth control the transparency or 'thickness' of the volume (a lower Depth setting creates a thicker volume).
* When Scattering is set to 0.0, the volume will have only an absorption effect.
In this example an Arnold Noise map is connected to the Standard Volume‘s Density parameter:
* Note that the Scale values must be set correctly in order to actually get a ‘cloudy’ effect.
* Note that the noise color values are now controlling the Density of the volume.
Adding a ‘Volume Light’ effect in Arnold for 3ds max is fairly simple:
In the Render Setup window > Arnold Renderer tab, under Environment, Background & Atmosphere:
Click the Scene Atmosphere material slot, add an Arnold Atmosphere Volume material to it,
And drag it as an instance to the Material Editor to edit its parameters.
Set the Density to a value higher than 0.0, so the material will have an effect.
You’ll probably need to significantly raise the number of samples in the Atmosphere Volume material, and also the number of Volume samples in the light settings in order to get a clean render.
An example of a basic traditional (not scanned) cloth material setup in Arnold 5 for Maya using an aiStandardSurface shader.
The shading network uses a classic angle-dependent color blend to simulate the color of the cloth being washed out at grazing angles of view.
Explanation of the node graph (a maya.cmds sketch of the core blend follows the list):
A black and white fabric weave texture that will serve as input for multiple shading channels.
* This is actually not the best example of such a pattern, and could be replaced with a much better texture.
A remapValue node is used to set contrast to the fabric pattern (reduce contrast in this case) prior to it being multiplied with the fabric colors.
* Note that only one of the texture's RGB channels is connected to the remapValue node, since it's a float (mono) processor and not RGB.
* Note that depending on the fabric texture, you may have to design different curves to achieve the right effect.
Two colors are defined with colorConstant nodes:
A deep color as the main fabric color, and a washed out color for grazing angle view (“side color”).
An aiFacingRatio node is used as an input for incident angle info.
* Note that in this case I checked the node's Invert option to make it behave more like other systems I'm used to (if you don't use Invert, the angle blend curve in step 5 will be different..)
A remapValue node is used to set the angle blend curve, or in other words, how much the color appears washed out per change of view angle of the cloth surface.
* The longer it takes the curve to become steep from left to right, the more the main color will be dominant before the washed-out color appears.
A colorCorrect node is used in this example just as a way to convert the remapped float value back to RGB, to be multiplied with the cloth colors.
* We could also connect it directly to the individual float components of the RGB colors but this way the node graph is cleaner.
A multiplyDivide node is used to multiply the processed fabric texture with the 2 fabric colors “baking” the pattern into the color.
A blendColors node is used to blend the 2 processed fabric colors together according to the processed facingRatio angle input.
The result is the final cloth color that is connected to the aiStandardSurface shader.
An aiBump2d node is used to convert the fabric pattern to normal data that will be connected to the aiStandardSurface shader to produce bumps.
An aiStandardSurface shader serving as the main shading node for this material.
* Note that under Geometry the Thin Walled option is checked so that the Subsurface layer of the shader will act as a Paper Shader rather than SSS.
* The main cloth color is connected to the SubSurface Color input.
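For reference, here is a rough maya.cmds sketch of the core angle-dependent blend described above. It only builds the facing-ratio > remap > blendColors chain into the shader's Subsurface Color; the texture multiplication, colorCorrect and bump branches are left out, the colors are placeholders, and the remap curve still has to be shaped in the Attribute Editor:

# Rough sketch (Maya + Arnold assumed): facing-ratio driven color blend.
import maya.cmds as cmds

shader = cmds.shadingNode('aiStandardSurface', asShader=True, name='clothShader')
cmds.setAttr(shader + '.thinWalled', 1)          # Geometry > Thin Walled
cmds.setAttr(shader + '.subsurface', 1.0)        # use the Subsurface layer

facing = cmds.shadingNode('aiFacingRatio', asUtility=True, name='clothFacing')
cmds.setAttr(facing + '.invert', 1)              # inverted, as noted above

remap = cmds.shadingNode('remapValue', asUtility=True, name='clothAngleCurve')
cmds.connectAttr(facing + '.outValue', remap + '.inputValue')

blend = cmds.shadingNode('blendColors', asUtility=True, name='clothColorBlend')
cmds.setAttr(blend + '.color1', 0.8, 0.78, 0.75, type='double3')   # placeholder washed-out side color
cmds.setAttr(blend + '.color2', 0.2, 0.05, 0.08, type='double3')   # placeholder deep main color
cmds.connectAttr(remap + '.outValue', blend + '.blender')

cmds.connectAttr(blend + '.output', shader + '.subsurfaceColor')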
A simple way to create a snow material in V-Ray for 3ds max is to combine a VRayFastSSS2 material with a VRayFlakesMtl using a VRayBlendMtl.
The VRayFastSSS2 creates the soft translucent shading for the snow, and the VRayFlakesMtl adds sparkling highlights.
Note that depending on the scene and view scale, the VRayFlakesMtl's 'flake glossiness', 'flake density' and 'flake size' have to be tweaked to achieve the wanted result.