V-Ray for 3ds Max – Simple “spectral” metallic setup

Software:
3ds Max 2020 | V-Ray 5

Note:
This material setup uses V-Ray 5, but will also work in V-Ray Next.
In earlier versions that don’t have the Metalness property, it can be implemented by unchecking Fresnel Reflections and setting the color through the Reflection color instead.
This technique can also be implemented in V-Ray for other 3D software, like V-Ray for Maya etc.

The idea is simple:
Use a VRayBlendMtl to additively combine 3 anisotropic metallic materials,
each of which reflects a pure primary RGB color but has a slightly different anisotropic rotation angle.
The additive combination creates an anisotropic reflection that “spreads” the colors of the spectrum.
Note that in this example we use Anisotropic Rotation values of 0, 8 and 16, but these may change with different roughness and anisotropy values.

For each material’s Diffuse Color we set levels that combine to create the general brightness and tint of the metal, and each material’s Reflection Color is set to 100% so the reflections combine to pure white at grazing view angles.
For example, if you want the general metallic color to be a yellow RGB 255, 186, 57, then the first, red metallic component material would have a Diffuse Color of 255, 0, 0, the second, green component material would have a Diffuse Color of 0, 186, 0, and the third material, which adds the blue component, would have a Diffuse Color of 0, 0, 57, so the combined blend has the desired general color.
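As a quick sanity check, the additive decomposition can be verified with a few lines of plain Python (just arithmetic, not V-Ray API):

target = (255, 186, 57)

# decompose the target metal color into R, G and B component Diffuse Colors:
diffuse_red = (target[0], 0, 0)    # (255, 0, 0)
diffuse_green = (0, target[1], 0)  # (0, 186, 0)
diffuse_blue = (0, 0, target[2])   # (0, 0, 57)

# the additive blend sums the three components back to the target color:
combined = tuple(sum(c) for c in zip(diffuse_red, diffuse_green, diffuse_blue))
print(combined)  # (255, 186, 57)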
More info about the VRayMtl Metalness parameter


Related:

  1. V-Ray Next Metalness
  2. Fresnel Reflections
  3. V-Ray Next’s PBR compatibility

Blender to Unreal Engine tips

Software:
Blender 2.9 | Unreal Engine 4.25

The following is a list of guidelines for preparation and export of 3D content from Blender to Unreal Engine 4 via the FBX file format.

Disclaimer:
This is not a formal specification.
It’s a list of tips I found to work well in my own experience.
* Some of the issues listed here may have already been solved


Blender Scene and model settings:

System units in Blender:
Define the scene units in Blender as:
Metric unit with 0.01 scale (centimeters)
And model your content correctly using centimeter units.
* Modeling in 1-meter units may seem to import correctly into UE4, but it will cause unsolvable problems, like a skeletal mesh physics asset having incorrect auto-generated shapes, a problem that in my experience can’t be fixed manually.
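For convenience, these unit settings can also be applied with a short Python snippet in Blender:

import bpy

# set the scene to metric units with a 0.01 scale (centimeters):
unit_settings = bpy.context.scene.unit_settings
unit_settings.system = 'METRIC'
unit_settings.scale_length = 0.01
unit_settings.length_unit = 'CENTIMETERS'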

Transform:
Model your content in Blender facing the -Y world axis, +Z obviously being up (obviously for Blender).
* This way the model is aligned to Blender’s views, so the front view displays the model’s front etc.
Make sure to apply your model’s transformations before export.
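The transformations can be applied via Object > Apply > All Transforms, or with a short Python snippet acting on the selected objects:

import bpy

# apply location, rotation and scale to the selected objects:
bpy.ops.object.transform_apply(location=True, rotation=True, scale=True)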

Armatures:
Make sure the Armature object isn’t named “Armature”.
Leaving the Blender skeleton object named “Armature” will cause the UE4 importer to fail due to a “multiple roots” error.
* I also remember a related bug with animation scale being imported incorrectly, but I can’t confirm this now..
There’s no need for a dedicated root bone in the hierarchy; the Armature object is the root of the bone hierarchy.
* See export option below

Texture baking:
Set the normal map’s green channel to -Y.
* This isn’t critical, because a map baked as +Y can easily be fixed in UE4.

Metadata:
Blender custom properties import as UE4 asset metadata that can be read by editor scripts for automation purposes.
* See export option below
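As a minimal sketch of this workflow (the property name, value and asset path here are hypothetical), set a custom property on the object in Blender before export:

import bpy

# add a custom property to the active object; it will be exported with the FBX:
bpy.context.active_object["my_tag"] = "hero_prop"

Then, after import, it can be read back with a UE4 editor Python script (depending on the UE4 version, the tag name may be prefixed, e.g. “FBX.my_tag”):

import unreal

# load the imported asset and read the metadata tag created from the custom property:
asset = unreal.EditorAssetLibrary.load_asset('/Game/Meshes/SM_HeroProp')
print(unreal.EditorAssetLibrary.get_metadata_tag(asset, 'my_tag'))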


FBX Blender export and UE4 import settings:

I recommend saving an FBX export preset with these settings.

Optional:
I prefer the export settings to include only selected objects.
* It’s more efficient for me to select the specific objects I want to export into a single FBX file prior to export than to delete all the temp / reference / draft objects from the scene.
If you want to export Blender custom properties to the FBX, check the “Custom Properties” option.

Axes:
Blender’s native model/world orientation is the model’s forward facing the -Y axis, left side facing +X and, of course, up facing +Z.
UE4’s native model/world orientation is the model’s forward facing the +X axis, left side facing -Y and up facing +Z.
There are axis settings in Blender’s FBX export module that, theoretically, should be set like this:

However, in tests I did, the axis settings made no difference when importing to UE4, even when setting intentionally incorrect upside-down axes.
Maybe the FBX exporter writes these settings to metadata that the UE4 importer doesn’t read..
From my experience, what’s important is to orient the model correctly in Blender (see above), apply the transformations,
and in the UE4 import menu, check the “Force Front XAxis” option:

Geometry:
Make sure either “Edge” or “Face” is chosen in the “Smoothing” option to import the mesh’s smooth shading correctly and avoid a smoothing groups warning on import:

Optional:
Depending on how much control you need over the mesh’s tangent space,
you may want to check the “Tangent Space” export option.
This will make Blender export the full tangent space to the FBX, and make UE4 read it from the FBX instead of generating it automatically.
* For this option to be supported, the mesh geometry must have only triangle or quad polys.
In the UE4 import settings, choose the “Import Normals and Tangents” option in “Normal Import Method”:

Armature:
Set “Armature FBXNode Type” to “Root”.
Uncheck the “Add Leaf Bones” option to avoid adding unneeded end bones.
Set bones primary axis as X, and secondary axis as -Z.

Animation:
Uncheck “All Actions” to avoid exporting actions that don’t actually belong to the skeleton.
* Unrelated animations in the FBX can also corrupt the character rest pose in UE4.
The “NLA Strips” option is useful for exporting a library of animations with the skeleton.
* In Blender’s NLA editor, activate the actions you want exported to the FBX.
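For scripted pipelines, the settings recommended above can be expressed as a single Blender Python export call; a sketch, with a hypothetical output path (verify the values against your own preset):

import bpy

# export the selected objects to FBX with the settings recommended above:
bpy.ops.export_scene.fbx(
    filepath='C:/exports/asset.fbx',   # hypothetical output path
    use_selection=True,                # export only the selected objects
    use_custom_props=True,             # include Blender custom properties
    mesh_smooth_type='FACE',           # 'EDGE' or 'FACE' smoothing
    use_tspace=True,                   # optional: export the full tangent space
    armature_nodetype='ROOT',          # Armature FBXNode Type: Root
    add_leaf_bones=False,              # avoid adding unneeded end bones
    primary_bone_axis='X',             # bones primary axis
    secondary_bone_axis='-Z',          # bones secondary axis
    bake_anim_use_all_actions=False,   # don't export unrelated actions
    bake_anim_use_nla_strips=True,     # export the activated NLA strips
)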


Related:
3ds max & V-Ray to UE4 Datasmith workflow

Creating a Visual Studio project from existing code

Software:
Visual Studio 2019

Steps for creating a new Visual Studio project based on existing code files:

  1. Create an empty project folder and name it your intended project name.
  2. Inside the new project folder, create a folder for your source code files.
    * I call it “Source”; not sure if it has to be named that way..
  3. Copy your existing code files to the source folder.
  4. Launch Visual Studio, and open it without code:
  5. Select:
    File > New > Project From Existing Code..
    To open the Create New Project From Existing Code Files wizard:
  6. Note:
    The documentation for this operation states that the wizard will copy the files by itself. In my own experience it doesn’t; it just links them to the project, which is why I copy the source files prior to this step.
    In the Create New Project From Existing Code Files wizard:
    1. Set the path to your project folder.
    2. Specify a project name.
    * I set this to be the same name as the project folder name; not sure if otherwise it will create a sub-folder..
    3. This is set to the same folder as the project folder.
    * It’s possible that I don’t understand this correctly.. but I think, theoretically, the intention is that you would add external folders to this list, from which source files would be copied. But like I said, when I tried that, the files were not copied to the project.
  7. Set your new projects settings and press Finish to create the new project:

A collection of Python snippets for 3D

If you’re interested in taking the first step into Python for 3D software,
or simply would like to browse some script examples, you’re welcome to visit my Gist page.
It contains a useful library of Python code examples for Blender, Maya, 3ds Max and Unreal Engine:
https://gist.github.com/CGLion

UE4 – Triplanar projection mapping setup

Software:
Unreal Engine 4.25

Triplanar Projection Mapping can be an effective texture mapping solution for cases where the model doesn’t have naturally flowing continuous UV coordinates, or there is a need to have the texture projected independently of UV channels, with minimally visible stretching and other mapping artifacts.
Classic use cases for Triplanar Projection Mapping are terrains and organic materials. Provided that the image being used is a seamless texture, no seams will be visible, because this projection type isn’t affected by UV coordinates.
Triplanar Projection Mapping can also be used in world space to create a continuous texture between separate meshes, allowing the meshes to be indestructibly transformed and edited.

How does Triplanar Projection Mapping work?
Triplanar Projection Mapping is a linear blend between 3 orthogonal 2D planar texture projections, typically each aligned to a natural world or object axis.
The more the surface faces an axis, the higher the weight of this axis projection in the final blend.

UE4 local (object) space Triplanar Projection Mapping material setup:
* It’s usually more efficient to create this setup as a Material Function

  1. Local shading coordinates are multiplied by a “density” parameter to allow convenient scaling of the projected texture.
  2. The scaled coordinates vector is separated into its components, which are combined into 3 pairs of planar coordinates, XY, XZ and YZ, and fed as the sample coordinates to the 3 Texture Sample nodes.
  3. The Vertex Normal input vector is transformed to local space, converted to absolute value (absolute orientation in the positive axes octant) and separated into its X, Y and Z components so they can serve as blend weights in the mix.
  4. Each of the 3 planar axis projections is multiplied by its blend factor, and the resulting values are added to form the raw mix.
  5. A value of 1.0 is divided by the sum of the normal vector’s component values to obtain the factor needed to normalize the blend result to a value of 1.0.
  6. The raw blend value is multiplied by the normalizing factor so the resulting blend color will be normalized.
    * The blend weights should add up to a value of 1.0, but a unit vector’s components add up to more than 1 in diagonal directions. For this reason, without this final step, the texture at points on the surface that are diagonal to the projection axes would appear brighter than at points that face a projection axis.
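The same blend logic can be written out as a plain Python sketch, per shading point, with the texture lookups left abstract as sample functions you’d supply:

def triplanar_sample(position, normal, density, sample_xy, sample_xz, sample_yz):
    # scale the local shading coordinates by the density parameter:
    px, py, pz = (c * density for c in position)
    # the absolute normal components serve as the blend weights:
    wx, wy, wz = (abs(c) for c in normal)
    # sample each of the 3 planar projections:
    c_yz = sample_yz(py, pz)  # projection along the X axis
    c_xz = sample_xz(px, pz)  # projection along the Y axis
    c_xy = sample_xy(px, py)  # projection along the Z axis
    # weighted sum of the 3 projections (the raw mix):
    raw = c_yz * wx + c_xz * wy + c_xy * wz
    # normalize so the blend weights effectively add up to 1.0:
    return raw * (1.0 / (wx + wy + wz))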

An example of Triplanar Projection Mapping in world space:

A bunch of Blender monkeys (Suzanne) continuously textured using world space Triplanar Projection Mapping:

Related posts:
UE4 – Material Functions
UE4 – Material Instances
UE4 – Bump mapping
UE4 – Procedural bump mapping

UE4 – Procedural 3D noise bump setups

Software:
Unreal Engine 4.25

Yet another case where I develop my own costly solution only to find out afterwards that there’s actually a much more efficient built-in solution.. 😀

In this case the subject is deriving a bump normal from a procedural or non-uv projected height map/texture (like noise, or tri-planar mapping for example).

The built-in way:
Using the pre-made material functions PreparePerturbNormalHQ and PerturbNormalHQ: the first uses the low-level Direct3D derivative functions DDX and DDY to derive the two extra surface-adjacent values needed to derive a bump normal, and the second uses the 3 values to generate a world-space bump normal:

240 instructions
  1. Noise coordinates are obtained by multiplying the surface shading point local position by a value to set the pattern density.
  2. The Noise output value is multiplied by a factor to set the resulting bump intensity.
  3. The PreparePerturbNormalHQ function is used to derive the 2 extra values needed to derive a bump normal.
  4. The PerturbNormalHQ function is used to derive the World-Space bump normal.
  5. Note:
    Using this method, the material’s normal input must be set to world-space by unchecking Tangent Space Normal in the material properties.

The method I’m using:
This method is significantly more expensive in the number of shader instructions, but in my opinion, generates a better quality bump.
Sampling 3 Noise nodes at 3 adjacent locations in tangent-space to derive the 3 input values necessary for the NormalFromFunction material function:

412 instructions
  1. Noise coordinates are obtained by multiplying the surface shading point local position by a value to set the pattern density.
  2. Crossing the vertex normal with the vertex tangent vectors to derive the bitangent (sometimes called “binormal”).
  3. Multiplying the vertex-tangent and bitangent vectors by a bump-offset* factor to create the increment steps to the additional sampled Noise values.
    * This factor should be a parameter for easy tuning, since it determines the distance between the height samples in tangent space.
  4. The increment vectors are added to the local-position to get the final height samples positions.
  5. The NormalFromFunction material function is used to derive a tangent-space normal from the 3 supplied height samples.
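For reference, the core math NormalFromFunction performs can be sketched in plain Python: given the height at the shading point and the heights at small offsets along the tangent and bitangent, the tangent-space normal tilts against the two height differences (a sketch of the math, not UE4 code):

import math

def normal_from_heights(h0, h_tangent, h_bitangent, offset, intensity=1.0):
    # height differences along the tangent and bitangent directions:
    du = (h_tangent - h0) / offset * intensity
    dv = (h_bitangent - h0) / offset * intensity
    # normalize the resulting tangent-space normal:
    length = math.sqrt(du * du + dv * dv + 1.0)
    return (-du / length, -dv / length, 1.0 / length)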

Note:
From my experience, even though the UV1, UV2 and UV3 inputs of NormalFromFunction are annotated as V3, the function will only work if the inputs are scalar values and not vectors/colors.

Related:
UE4 – Material Functions
UE4 – Bump map
UE4 – fix an inverted normal map
UE4 – Triplanar mapping

Blender Python – Reading mesh UV data

Software:
Blender 2.83

Simple example code for reading mesh UV data.

Link to this code snippet on gist

Note that there are typically many more mesh loops than vertices.
* Unless the mesh is a primitive undivided plane..

import bpy

# access mesh data:
obj = bpy.context.active_object
mesh_data = obj.data
mesh_loops = mesh_data.loops
uv_index = 0

# iterate the mesh loops:
for lp in mesh_loops:
    # access the uv loop:
    uv_loop = mesh_data.uv_layers[uv_index].data[lp.index]
    uv_coords = uv_loop.uv
    print('vert: {}, U: {}, V: {}'.format(lp.vertex_index, uv_coords[0], uv_coords[1]))

Blender – Python – Access animated vertices data

Software:
Blender 2.83

The following is a simple example of reading a mesh’s animated vertices data.

This example code gist

Note that accessing a model’s animated vertex locations requires reading the model’s evaluated (deformed) mesh state per frame.
For that reason, a new bmesh object is initiated for each frame with the model’s updated dependency graph.

import bpy
import bmesh

obj = bpy.context.active_object
frames = range(0, 10)

# get the object's evaluated dependency graph:
depgraph = bpy.context.evaluated_depsgraph_get()

# iterate animation frames:
for f in frames:
    bpy.context.scene.frame_set(f)
    # define a new bmesh object:
    bm = bmesh.new()
    # read the evaluated mesh data into the bmesh object:
    bm.from_object(obj, depgraph)
    bm.verts.ensure_lookup_table()
    # iterate the bmesh verts:
    for i, v in enumerate(bm.verts):
        print("frame: {}, vert: {}, location: {}".format(f, i, v.co))
    # free the bmesh to avoid leaking memory:
    bm.free()

Houdini – Set point color by reading custom point attributes

Software:
Houdini 18.0.499

It took me some time to figure out how to set the point color (“Cd”) attribute with data stored initially in custom point attributes.
I kept trying to use the Color SOP node with a “point()” function in its R, G, B fields, attempting to refer to the wanted attributes, but it didn’t work for me.
I also tried various loop setups iterating the geometry points, but couldn’t get that to work either..
* I’m new to Houdini, so the fact these approaches didn’t work for me doesn’t mean they can’t be used for this..

I finally managed to do this using a Point Wrangle node with the following VEX expression that sets the Cd (color) attribute’s vector components by referring to the 3 custom attributes “att_a”, “att_b” and “att_c” (see image below):

@Cd = set(@att_a,@att_b,@att_c);

What the Point Wrangle node does, which I couldn’t achieve by writing expressions into the RGB fields of the Color node or by using loops, is iterate over all of its input SOP’s points; within its expression, an attribute name like “att_a” automatically refers to that named attribute on the point currently being iterated over.

Note:
The reason I need such a workflow in the first place is to generate geometric property masks for a Houdini asset, which will be available to the target shading system via a vertex color input.
* The Houdini point color attribute propagates to vertex color on output.

A custom “att_a” point attribute is added to a group of points using the Attribute Create SOP node
The Point Wrangle node with its expression

After setting the point color, I added an Attribute Delete SOP node to delete the no-longer-needed custom attributes:

Houdini – Rendering with V-Ray – first steps

Software:
Houdini 18.0.460 | V-Ray Next 4.30.03

This post covers the most basic steps needed for rendering with V-Ray Next for Houdini.

Note on software versions:
At the moment of writing this post V-Ray for Houdini supports Houdini version 18.0.460.
I naively thought it would work with a later version of Houdini; I tried to install it on Houdini 18.0.499, thinking to myself “what can a couple of extra numbers do..”, but I was wrong, it crashed. So at the moment it has to be Houdini 18.0.460. When getting started with this, take a moment to see exactly which Houdini build the V-Ray installation is built for, and install that specific version of Houdini.
* It’s easy: the V-Ray installation package’s name states the version:
“vray_adv_43003_houdini18.0.460.exe”
Full installation instructions on the V-Ray for Houdini documentation:
https://docs.chaosgroup.com/display/VRAYHOUDINI/Setup+and+Installation

Adding the V-Ray tool shelf to the Houdini UI:
Click the “+” button at the right of the available shelves, and from the list, select V-Ray.
* This only has to be done once.

Scene preparation note:
Surface objects have to be of type Polygon, Polygon Mesh or Polygon Soup for V-Ray rendering:

Setting up V-Ray rendering:
There are 3 ways to setup V-Ray as a render output option for your scene:

  1. In the out network, add a V-Ray > V-Ray Renderer node.
  2. In the main menu, Select Render > Create Render Node > V-Ray.
  3. In the V-Ray shelf, click the Show VFB button.
    This will open the V-Ray VFB (render window), and create both V-Ray Renderer and V-Ray IPR nodes in the out network.

* A V-Ray IPR node is needed for interactive rendering both in the Houdini view-port Region Render and in the V-Ray VFB.

Creating a camera:
You guessed it.. 3 ways to create a camera:

  1. Open the camera drop-down menu found at the top right of the view-port, and select New Camera.
    A new Camera node will be created and the view-port will be set to display the new camera view.
  2. In the Lights and Cameras shelf, press the Camera button, and click inside the 3D view-port to create a new Camera node.
  3. Create a Camera node directly in the obj network by right clicking and selecting Render > Camera.

Note that the rendered image resolution is set in the Camera node’s View properties:

Adding V-Ray Physical Camera properties to the Camera:
With the Camera node selected, press the Physical Camera button in the V-Ray shelf.
This will add a new V-Ray tab to the Camera node’s properties, containing V-Ray Physical Camera properties.
Note that the Physical Camera exposure settings are set up by default for physical sunlight illumination levels (EV 14), so in many cases, after adding the Physical Camera properties, your scene will render darker unless these settings are tuned.

Adding light sources:
To add light sources, in the V-Ray shelf, press the wanted light source button, click the 3D view-port to create the light node, transform it to the wanted location/orientation, and set its settings:

* If no light sources are added, the image will be rendered using default lighting.

Setting up V-Ray materials:
In the mat network, right click and select V-Ray > Material > V-Ray Material to create a V-Ray Material node:

Select the V-Ray Material node, name it, and set its material settings:

In the obj network, double-click the wanted geometry object to enter its SOP network, and inside it, create a new Material node:

Connect the sphere primitive SOP node’s output to the new Material node’s input, and make sure the Material node is displayed by clicking its rightmost node button so it’s highlighted in blue.

In the Material node’s properties, open the Floating Operator Chooser next to the Material property to select a material for the surface, and in the hierarchical display, expand the mat network and select the wanted V-Ray Material:

Now that a material has been set and the Material node is displayed, the object is rendered with the selected material:

Rendering an image:
There are 3 ways to render an image:

  1. In the main menu, select Render > Render > vray
  2. In the out network, click the V-Ray renderer node’s Render button (on its right), to open the Render dialog, and in the dialog press Render.
  3. In the V-Ray shelf, press the Show VFB button to open the VFB (V-Ray’s render window), and there, press the Teapot button at the top right to render the image.