UE4 – Triplanar projection mapping setup

Software:
Unreal Engine 4.25

Triplanar Projection Mapping can be an effective texture-mapping solution for cases where the model doesn’t have naturally flowing, continuous UV coordinates, or where the texture needs to be projected independently of UV channels, with minimal visible stretching and other mapping artifacts.
Classic use cases for Triplanar Projection Mapping are terrains and organic materials. Provided that the image being used is a seamless texture, no seams will be visible, because this projection type isn’t affected by UV coordinates.
Triplanar Projection Mapping can also be used in world space to create a continuous texture across separate meshes, allowing the meshes to be transformed and edited freely without breaking the texture’s continuity.

How does Triplanar Projection Mapping work?
Triplanar Projection Mapping is a linear blend between 3 orthogonal 2D planar texture projections, typically each aligned to a natural world or object axis.
The more directly the surface faces an axis, the higher the weight of that axis’s projection in the final blend.

UE4 local (object) space Triplanar Projection Mapping material setup:
* It’s usually more efficient to create this setup as a Material Function

  1. Local shading coordinates are multiplied by a “density” parameter to allow convenient scaling of the projected texture.
  2. The scaled coordinates vector is separated into its components, which are combined into 3 pairs of planar coordinates, XY, XZ and YZ, and fed as the sample coordinates to the 3 Texture Sample nodes.
  3. The Vertex Normal input vector is transformed to local space, converted to its absolute value (absolute orientation in the positive-axes octant) and separated into its X, Y and Z components so they can serve as blend weights in the mix.
  4. Each of the 3 planar axis projections is multiplied by its blend factor, and the resulting values are added together to produce the raw mix.
  5. A value of 1.0 is divided by the sum of the normal vector’s component values to obtain the factor needed to normalize the blend result to a value of 1.0.
  6. The raw blend value is multiplied by the normalizing factor so the resulting blend color is normalized.
    * The blend weights should add up to a value of 1.0, but a unit vector’s components add up to more than 1.0 in diagonal directions. For this reason, without this final step, the texture color at points on the surface that are diagonal to the projection axes would appear brighter than at points that face a projection axis (see the sketch after this list).
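
The steps above can be summarized in a short Python sketch (a minimal sketch of the blend logic only, not the UE4 node graph; sample_texture is a hypothetical helper that samples a seamless 2D texture at the given coordinates):

import numpy as np

def triplanar_sample(position, normal, density, sample_texture):
    # 1. scale the shading coordinates by the density parameter:
    p = np.asarray(position, dtype=float) * density
    # 2. sample the 3 planar projections - XY, XZ and YZ:
    col_xy = sample_texture(p[0], p[1])
    col_xz = sample_texture(p[0], p[2])
    col_yz = sample_texture(p[1], p[2])
    # 3. the absolute normal components serve as blend weights
    #    (the XY projection faces the Z axis, XZ faces Y, YZ faces X):
    w = np.abs(np.asarray(normal, dtype=float))
    # 4. multiply each projection by its blend factor and add, giving the raw mix:
    raw = col_xy * w[2] + col_xz * w[1] + col_yz * w[0]
    # 5-6. multiply by 1.0 / (sum of the weights) to normalize the blend:
    return raw * (1.0 / (w[0] + w[1] + w[2]))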

An example of Triplanar Projection Mapping in world space:

A bunch of Blender monkeys (Suzanne) continuously textured using world space Triplanar Projection Mapping:

Related posts:
UE4 – Material Functions
UE4 – Material Instances
UE4 – Bump mapping
UE4 – Procedural bump mapping

UE4 – Procedural 3D noise bump setups

Software:
Unreal Engine 4.25

Yet another case where I develop my own costly solution only to find out afterwards that there’s actually a much more efficient built-in solution.. 😀

In this case the subject is deriving a bump normal from a procedural or non-UV-projected height map/texture (noise or triplanar mapping, for example).

The built-in way:
Using the pre-made material functions PreparePerturbNormalHQ and PerturbNormalHQ: the first uses the low-level Direct3D derivative functions DDX and DDY to derive the two extra surface-adjacent values needed to derive a bump normal, and the second uses the 3 values to generate a world-space bump normal:

240 instructions
  1. Noise coordinates are obtained by multiplying the surface shading point local position by a value to set the pattern density.
  2. The Noise output value is multiplied by a factor to set the resulting bump intensity.
  3. The PreparePerturbNormalHQ function is used to derive the 2 extra values needed to derive a bump normal.
  4. The PerturbNormalHQ function is used to derive the World-Space bump normal.
  5. Note:
    Using this method, the material’s normal input must be set to world-space by unchecking Tangent Space Normal in the material properties.
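
Outside the material editor, the same derivative-based idea can be sketched in Python, with numpy.gradient standing in for the GPU’s DDX/DDY screen-space derivatives (an analogy only, not the internals of the PerturbNormalHQ functions):

import numpy as np

def height_to_normals(height, intensity=1.0):
    # derive the height field's partial derivatives - the analogue of DDY / DDX:
    dh_dy, dh_dx = np.gradient(height * intensity)
    # perturb a flat (0, 0, 1) normal by the height derivatives:
    n = np.stack([-dh_dx, -dh_dy, np.ones_like(height)], axis=-1)
    # normalize each per-pixel normal to unit length:
    return n / np.linalg.norm(n, axis=-1, keepdims=True)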

The method I’m using:
This method is significantly more expensive in the number of shader instructions, but in my opinion it generates a better-quality bump.
Sampling 3 Noise nodes at 3 adjacent locations in tangent-space to derive the 3 input values necessary for the NormalFromFunction material function:

412 instructions
  1. Noise coordinates are obtained by multiplying the surface shading point local position by a value to set the pattern density.
  2. Crossing the vertex normal with the vertex tangent vectors to derive the bitangent (sometimes called “binormal”).
  3. Multiplying the vertex-tangent and bitangent vectors by a bump-offset* factor to create the increment steps to the additional sampled Noise values.
    * This factor should be a parameter for easy tuning, since it determines the distance between the height samples in tangent space.
  4. The increment vectors are added to the local position to get the final height-sample positions.
  5. The NormalFromFunction material function is used to derive a tangent-space normal from the 3 supplied height samples.

Note:
From my experience, even though the UV1, UV2 and UV3 inputs of the NormalFromFunction are annotated as V3, the function will only work if the inputs are scalar values and not vectors/colors.
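
Conceptually, the 3-sample setup boils down to the following Python sketch, where noise3d is a hypothetical scalar 3D noise function standing in for the Noise nodes:

import numpy as np

def normal_from_3_samples(position, normal, tangent, offset, intensity, noise3d):
    position = np.asarray(position, dtype=float)
    # cross the vertex normal with the vertex tangent to derive the bitangent:
    bitangent = np.cross(normal, tangent)
    # sample the height at the shading point and at 2 offset positions:
    h0 = noise3d(position) * intensity
    h_t = noise3d(position + np.asarray(tangent, dtype=float) * offset) * intensity
    h_b = noise3d(position + bitangent * offset) * intensity
    # derive a tangent-space normal from the 3 height samples
    # (the height differences act as slopes along the tangent and bitangent):
    n = np.array([h0 - h_t, h0 - h_b, offset])
    return n / np.linalg.norm(n)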

Related:
UE4 – Material Functions
UE4 – Bump map
UE4 – fix an inverted normal map
UE4 – Triplanar mapping

Blender Python – Reading mesh UV data

Software:
Blender 2.83

Simple example code for reading mesh UV data.

Link to this code snippet on gist

Note that there are typically many more mesh loops than vertices.
* Unless the mesh is a primitive undivided plane..

import bpy

# access mesh data:
obj = bpy.context.active_object
mesh_data = obj.data
mesh_loops = mesh_data.loops
uv_index = 0

# iterate the mesh loops:
for lp in mesh_loops:
    # access the uv loop data for this loop:
    uv_loop = mesh_data.uv_layers[uv_index].data[lp.index]
    uv_coords = uv_loop.uv
    print('vert: {}, U: {}, V: {}'.format(lp.vertex_index, uv_coords[0], uv_coords[1]))

Blender – Python – Access animated vertices data

Software:
Blender 2.83

The following is a simple example of reading a mesh’s animated vertices data.

This example code gist

Note that accessing a model’s animated vertex locations requires reading the model’s evaluated (deformed) mesh state per frame.
For that reason, a new bmesh object is initialized each frame with the model’s updated dependency graph.

import bpy
import bmesh

obj = bpy.context.active_object
frames = range(0, 10)

# get the object's evaluated dependency graph:
depgraph = bpy.context.evaluated_depsgraph_get()

# iterate animation frames:
for f in frames:
    bpy.context.scene.frame_set(f)
    # define a new bmesh object:
    bm = bmesh.new()
    # read the evaluated mesh data into the bmesh object:
    bm.from_object(obj, depgraph)
    bm.verts.ensure_lookup_table()
    # iterate the bmesh verts:
    for i, v in enumerate(bm.verts):
        print("frame: {}, vert: {}, location: {}".format(f, i, v.co))
    # free the bmesh object before the next frame:
    bm.free()

Houdini – Set point color by reading custom point attributes

Software:
Houdini 18.0.499

Took me some time to figure out how to set the point color (“Cd”) attribute with data initially stored in custom point attributes.
I kept trying to use the Color SOP node with a “point()” function in its R, G, B fields, attempting to refer to the wanted attributes, but it didn’t work for me.
I also tried various loop setups iterating the geometry points, but couldn’t get those to work either..
* I’m new to Houdini, so the fact that these approaches didn’t work for me doesn’t mean they can’t be used for this..

I finally managed to do this using a Point Wrangle node with the following VEX expression that sets the Cd (color) attribute’s vector components by referring to the 3 custom attributes “att_a”, “att_b” and “att_c” (see image below):

@Cd = set(@att_a,@att_b,@att_c);

What the Point Wrangle node does, which I couldn’t achieve by writing expressions into the RGB fields of the Color node or by using loops, is iterate over all of its input SOP’s points; within its expression, an attribute name, i.e. “att_a” etc., automatically refers to that named attribute on the point currently being iterated over.
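
The same per-point logic can also be written in a Python SOP. The following is an untested sketch using standard hou API calls, with the attribute names from the example above:

node = hou.pwd()
geo = node.geometry()

# make sure a Cd point attribute exists:
if geo.findPointAttrib("Cd") is None:
    geo.addAttrib(hou.attribType.Point, "Cd", (1.0, 1.0, 1.0))

# copy the custom attribute values into the color components per point:
for point in geo.points():
    r = point.attribValue("att_a")
    g = point.attribValue("att_b")
    b = point.attribValue("att_c")
    point.setAttribValue("Cd", (r, g, b))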

Note:
The reason I need such a workflow in the first place is to generate geometric property masks for a Houdini asset, that will be available for the target shading system via vertex color input.
* The Houdini point color attribute propagates to vertex color on output.

A custom “att_a” point attribute is added to a group of points using the Attribute Create SOP node
The Point Wrangle node with its expression

After setting the point color, I added an Attribute Delete SOP node to delete the no-longer-needed custom attributes:

Houdini – Rendering with V-Ray – first steps

Software:
Houdini 18.0.460 | V-Ray Next 4.30.03

This post covers the most basic steps needed for rendering with V-Ray Next for Houdini.

Note on software versions:
At the moment of writing this post V-Ray for Houdini supports Houdini version 18.0.460.
I naively thought it would work with a later version of Houdini. I tried to install it on Houdini 18.0.499, thinking to myself “what can a couple of extra numbers do..”, but I was wrong, it crashed. So at the moment it has to be Houdini 18.0.460, and when getting started with this, take a moment to see exactly which Houdini build the V-Ray installation is built for, and install that specific version of Houdini.
* It’s easy, the V-Ray Installation package’s name states the version:
“vray_adv_43003_houdini18.0.460.exe”
Full installation instructions on the V-Ray for Houdini documentation:
https://docs.chaosgroup.com/display/VRAYHOUDINI/Setup+and+Installation

Adding the V-Ray tool shelf to the Houdini UI:
Click the “+” button at the right of the available shelves, and from the list, select V-Ray.
* This only has to be done once.

Scene preparation note:
Surface objects have to be of type Polygon, Polygon Mesh or Polygon Soup for V-Ray rendering:

Setting up V-Ray rendering:
There are 3 ways to setup V-Ray as a render output option for your scene:

  1. In the out network, add a V-Ray > V-Ray Renderer node.
  2. In the main menu, Select Render > Create Render Node > V-Ray.
  3. In the V-Ray shelf, click the Show VFB button.
    This will open the V-Ray VFB (render window), and create both V-Ray Renderer and V-Ray IPR nodes in the out network.

* A V-Ray IPR node is needed for interactive rendering both in the Houdini view-port Region Render and in the V-Ray VFB.

Creating a camera:
You guessed it.. 3 ways to create a camera:

  1. Open the camera drop-down menu found at the top right of the view-port, and select New Camera.
    A new Camera node will be created and the view-port will be set to display the new camera view.
  2. In the Lights and Cameras shelf, press the Camera button, and click inside the 3D view-port to create a new Camera node.
  3. Create a Camera node directly in the obj network by right clicking and selecting Render > Camera.

Note that the rendered image resolution is set in the Camera node’s View properties:
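
For scripted setups, a camera can also be created and its resolution set through Houdini’s Python API. A minimal sketch (“cam” is the standard camera node type and “res” is the camera’s resolution parameter; the node name is just an example):

import hou

# create a camera node in the obj network:
cam = hou.node("/obj").createNode("cam", "render_cam")

# set the rendered image resolution (found in the camera's View properties):
cam.parmTuple("res").set((1920, 1080))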

Adding V-Ray Physical Camera properties to the Camera:
With the Camera node selected, press the Physical Camera button in the V-Ray shelf.
This will add a new V-Ray tab to the Camera node’s properties, containing V-Ray Physical Camera properties.
Note that the Physical Camera exposure settings are set up by default for physical sunlight illumination levels (EV 14), so in many cases, after adding the Physical Camera properties, your scene will render darker unless these settings are tuned.

Adding light sources:
To add light sources, in the V-Ray shelf, press the wanted light source button, click the 3D view-port to create the light node, transform it to the wanted location/orientation, and set its settings:

* If no light sources are added, the image will be rendered using default lighting.

Setting up V-Ray materials:
In the mat network, right click and select V-Ray > Material > V-Ray Material to create a V-Ray Material node:

Select the V-Ray Material node, name it, and set its material settings:

In the obj network, double-click the wanted geometry object to enter its SOP network, and inside it, create a new Material node:

Connect the sphere primitive SOP node’s output to the new Material node’s input, and make sure the Material node is displayed by clicking its rightmost node button so it’s highlighted in blue.

In the Material node’s properties, open the Floating Operator Chooser next to the Material property to select a material for the surface, and in the hierarchical display, expand the mat network and select the wanted V-Ray Material:

Now that a material has been set and the Material node is displayed, the object is rendered with the selected material:
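
The same wiring can be sketched through the Python API as well. The node paths and names below are example assumptions; the node type and parameter names are standard Houdini ones:

import hou

# the geometry object's SOP network and its existing output SOP
# (example paths - adjust to your scene):
geo = hou.node("/obj/sphere_object1")
out_sop = geo.node("sphere1")

# create a Material SOP, wire it after the primitive, and display it:
mat_sop = geo.createNode("material")
mat_sop.setFirstInput(out_sop)
mat_sop.setDisplayFlag(True)
mat_sop.setRenderFlag(True)

# point the Material SOP at the V-Ray material in the mat network:
mat_sop.parm("shop_materialpath1").set("/mat/vray_material1")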

Rendering an image:
There are 3 ways to render an image:

  1. In the main menu, select Render > Render > vray
  2. In the out network, click the V-Ray renderer node’s Render button (on its right), to open the Render dialog, and in the dialog press Render.
  3. In the V-Ray shelf, press the Show VFB button to open the VFB (V-Ray’s render window), and there, press the Teapot button at the top right to render the image.
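
A render can also be triggered from Python. In this sketch the V-Ray ROP’s node type name is an assumption (verify it in your install, e.g. by checking type().name() on a node created from the shelf):

import hou

# create a V-Ray render node in the out network
# ("vray_renderer" is an assumed node type name - verify it in your install):
rop = hou.node("/out").createNode("vray_renderer")

# render the current frame:
rop.render()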

Python – Some useful list-of-tuples operations

Language:
Python 3.7

Sometimes we need to sort a list of tuples (or other list-type containers) according to their internal values.
For instance, you might have a list of tuples representing X,Y coordinates and need to sort that list according to the Y coordinate of the locations.
Using the sorted function with a lambda function as the supplied key argument can do that.
In this case, each tuple in the list is fed to the lambda function as the argument x, and the function simply returns x[1], the second (Y) value within the tuple, as the key for the sorting comparison:

sorted(my_tuple_list, key=lambda x: x[1])

Again, in a list of tuples (or other list-type containers), sometimes we need to find the index of the first occurrence of a tuple that has a specific value as one of its elements.
This example will return the index of the first occurrence in the list of a tuple with the value 8 as its second element:

list(zip(*my_tuple_list))[1].index(8)

The zip function “decouples” the tuples into two sequences, one containing the tuples’ first-element values and the other containing their second-element values.
The zip object is fed into the list function to be converted into a subscriptable list containing the 2 new sequences, so now we can use the [1] index to access the sequence containing only the original tuples’ second elements, and use the index function to get the index of the first occurrence of the value 8.
> Note that the * operator isn’t a C-like “pointer” operator..
It’s the Python unpack operator, needed to unpack the list of tuples into separate tuple arguments for the zip function.
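
A short runnable example combining both operations, with sample data assumed for illustration:

# sample data:
my_tuple_list = [(3, 8), (1, 2), (5, 8), (2, 5)]

# sort by the second (Y) element of each tuple:
print(sorted(my_tuple_list, key=lambda x: x[1]))
# [(1, 2), (2, 5), (3, 8), (5, 8)]

# index of the first tuple whose second element is 8:
print(list(zip(*my_tuple_list))[1].index(8))
# 0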

Adobe Bridge – Full screen preview tips

Software:
Adobe Bridge 2020

  1. To get into full screen preview mode:
    Select an image and press Spacebar
  2. Use the arrow keys to navigate images in the folder
  3. Scroll the mouse wheel to zoom in-out

Note:
In full screen mode, you may see the images displayed soft / at low resolution,
even if the actual image files have enough resolution to fit the display.
This happens because Bridge’s cached previews were not generated at display resolution.
To fix this issue:

  1. In Edit > Preferences > Advanced:
    Check the Generate Monitor-Size Previews option
  2. In Tools > Cache > Manage Cache..:
    Select Clean Up Cache, Purge all local cache files,
    and then click Next.
    * You may have to restart the program for this to take effect

Advanced car-paint OSL shader

cglSurfaceCarPaint is an advanced car-paint OSL shader, free to download from CG-Lion studio’s website.

The cglSurfaceCarPaint car-paint material combines 3 layers:
Base: A blend of diffuse/metallic shading with a view-angle color mix
Metallic flakes: Distance blended procedural metallic flakes
Clear coat: Glossy clear coat layer with built-in procedural bump irregularity
And has been tested with:
Blender & Cycles
Maya & Arnold
3ds max & V-Ray

Download and more info

Getting started with OSL shaders

 

Read-list: Intro to OSL Shaders

An introductory article series for both using and writing OSL shaders:

  1. What are OSL shaders
  2. Using OSL shaders in Cycles for Blender
  3. Using OSL shaders in Arnold for Maya
  4. Using OSL shaders in V-Ray for 3ds max
  5. Writing a basic OSL shader