Note that accessing a model’s animated vertex locations requires reading the model’s evaluated (deformed) mesh state per frame. For that reason, a new bmesh object is created for each frame using the model’s updated dependency graph.
import bpy
import bmesh

obj = bpy.context.active_object
frames = range(0, 10)

# get the object's evaluated dependency graph:
depgraph = bpy.context.evaluated_depsgraph_get()

# iterate animation frames:
for f in frames:
    bpy.context.scene.frame_set(f)
    # create a new bmesh object:
    bm = bmesh.new()
    # read the evaluated mesh data into the bmesh object:
    bm.from_object(obj, depgraph)
    # ensure sequential access to the verts:
    bm.verts.ensure_lookup_table()
    # iterate the bmesh verts:
    for i, v in enumerate(bm.verts):
        print("frame: {}, vert: {}, location: {}".format(f, i, v.co))
    # free the bmesh data to avoid leaking memory:
    bm.free()
It took me some time to figure out how to set the point color (“Cd”) attribute from data stored in custom point attributes. I kept trying to use the Color SOP node with a “point()” function in its R, G, B fields, attempting to refer to the wanted attributes, but it didn’t work for me. I also tried various loop setups iterating over the geometry points, and couldn’t get those to work either. * I’m new to Houdini, so the fact that these approaches didn’t work for me doesn’t mean they can’t be used for this.
I finally managed to do this using a Point Wrangle node with the following VEX expression that sets the Cd (color) attribute’s vector components by referring to the 3 custom attributes “att_a”, “att_b” and “att_c” (see image below):
@Cd = set(@att_a,@att_b,@att_c);
What the Point Wrangle node does, which I couldn’t achieve by writing expressions into the RGB fields of the Color node or by using loops, is iterate over all of its input SOP’s points; within its expression, an attribute name such as “att_a” automatically refers to that named attribute on the point currently being processed.
Note: The reason I need such a workflow in the first place is to generate geometric property masks for a Houdini asset, that will be available for the target shading system via vertex color input. * The Houdini point color attribute propagates to vertex color on output.
A custom “att_a” point attribute is added to a group of points using the Attribute Create SOP node
The Point Wrangle node with its expression
After setting the point color, I added an Attribute Delete SOP node to delete the custom attributes that are no longer necessary:
This post covers the most basic steps needed for rendering with V-Ray Next for Houdini.
Note on software versions: At the time of writing this post, V-Ray for Houdini supports Houdini version 18.0.460. I naively thought it would work with a later version of Houdini, and tried to install it on Houdini 18.0.499, thinking to myself “what can a couple of extra numbers do?” But I was wrong: it crashed. So at the moment it has to be Houdini 18.0.460. When getting started with this, take a moment to see exactly which Houdini build the V-Ray installation is built for, and install that specific version of Houdini. * It’s easy, the V-Ray installation package’s name states the version: “vray_adv_43003_houdini18.0.460.exe”. Full installation instructions are in the V-Ray for Houdini documentation: https://docs.chaosgroup.com/display/VRAYHOUDINI/Setup+and+Installation
Adding the V-Ray tool shelf to the Houdini UI: Click the “+” button at the right of the available shelves, and from the list, select V-Ray. * This only has to be done once.
Scene preparation note: Surface objects have to be of type Polygon, Polygon Mesh or Polygon Soup for V-Ray rendering:
Setting up V-Ray rendering: There are 3 ways to setup V-Ray as a render output option for your scene:
In the out network, add a V-Ray > V-Ray Renderer node.
In the main menu, Select Render > Create Render Node > V-Ray.
In the V-Ray shelf, click the Show VFB button. This will open the V-Ray VFB (render window), and create both V-Ray Renderer and V-Ray IPR nodes in the out network.
* A V-Ray IPR node is needed for interactive rendering both in the Houdini view-port Region Render and in the V-Ray VFB.
Creating a camera: You guessed it.. 3 ways to create a camera:
Open the camera drop-down menu found at the top right of the view-port, and select New Camera. A new Camera node will be created and the view-port will be set to display the new camera view.
In the Lights and Cameras shelf, press the Camera button, and click inside the 3D view-port to create a new Camera node.
Create a Camera node directly in the obj network by right clicking and selecting Render > Camera.
Note that the rendered image resolution is set in the Camera node’s View properties:
Adding V-Ray Physical Camera properties to the Camera: With the Camera node selected, press the Physical Camera button in the V-Ray shelf. This will add a new V-Ray tab to the Camera node’s properties, containing V-Ray Physical Camera properties. Note that the Physical Camera exposure settings are set up by default for physical sunlight illumination levels (EV 14), so in many cases, after adding the Physical Camera properties, your scene will render darker unless these settings are tuned.
Adding light sources: To add light sources, in the V-Ray shelf press the wanted light source button, click the 3D view-port to create the light node, transform it to the wanted location/orientation, and set its settings:
* If no light sources are added, the image will be rendered using default lighting.
Setting up V-Ray materials: In the mat network, right click and select V-Ray > Material > V-Ray Material to create a V-Ray Material node:
Select the V-Ray Material node, name it, and set its material settings:
In the obj network, double-click the wanted geometry object to enter its SOP network, and inside it, create a new Material node:
Connect the sphere primitive SOP node’s output to the new Material node’s input, and make sure it is displayed by clicking the rightmost node button so it’s highlighted in blue.
In the Material node’s properties, open the Floating Operator Chooser next to the Material property to select a material for the surface; in the hierarchical display, expand the mat network and select the wanted V-Ray Material:
Now that a material has been set and the Material node is displayed, the object is rendered with the selected material:
Rendering an image: There are 3 ways to render an image:
In the main menu, select Render > Render > vray
In the out network, click the V-Ray renderer node’s Render button (on its right), to open the Render dialog, and in the dialog press Render.
In the V-Ray shelf, press the Show VFB button to open the VFB (V-Ray’s render window), and there, press the Teapot button at the top right to render the image.
Sometimes we need to sort a list of tuples (or other list-type containers) according to their internal values. For instance, you might have a list of tuples representing X,Y coordinates and need to sort that list according to the Y coordinate of the locations. Using the sorted function with a lambda function as the supplied key argument can do that. In this case, each tuple element is fed to the lambda function as the argument x, and the function simply returns x[1], the second (Y) value within the tuple, as the key for the sorting comparison:
sorted(my_tuple_list, key=lambda x: x[1])
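For example, running the sort on a small hypothetical list of (X, Y) coordinate tuples:

```python
# A hypothetical list of (X, Y) coordinate tuples:
points = [(3, 7), (1, 2), (5, 5)]

# Sort by each tuple's second (Y) element:
points_by_y = sorted(points, key=lambda x: x[1])

print(points_by_y)  # [(1, 2), (5, 5), (3, 7)]
```

Note that sorted returns a new list and leaves the original list unchanged.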
Again, in a list of tuples (or other list-type containers), sometimes we need to find the index of the first occurrence of a tuple that has a specific value as one of its elements. This example will return the index of the first occurrence in the list of a tuple with the value 8 as its second element:
list(zip(*my_tuple_list))[1].index(8)
The zip function “decouples” the tuples into two sequences: one containing the tuples’ first element values and the second containing their second element values. The zip object is fed into the list function to be converted into a subscriptable list containing the two new sequences, so now we can use the [1] index to access the one containing only the original tuples’ second elements, and use the index function to get the index of the first occurrence of the value 8. > Note that the * operator isn’t a C-like “pointer” operator. It’s the Python unpacking operator, needed to unpack the list of tuples into separate tuple arguments for the zip function.
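Broken into steps with a small hypothetical list, the expression works like this:

```python
# A hypothetical list of tuples:
my_tuple_list = [(1, 4), (2, 8), (3, 8)]

# zip(*my_tuple_list) regroups the tuples by position:
columns = list(zip(*my_tuple_list))
print(columns)  # [(1, 2, 3), (4, 8, 8)]

# columns[1] holds all the second elements; find the first occurrence of 8:
print(columns[1].index(8))  # 1

# The one-liner from above gives the same result in a single expression:
print(list(zip(*my_tuple_list))[1].index(8))  # 1
```

Keep in mind that index raises a ValueError if the value isn’t present, so this pattern is best used when the value is known to exist in the list.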
To get into full screen preview mode:
Select an image and press Spacebar
Use the arrow keys to navigate images in the folder
Scroll the mouse wheel to zoom in-out
Note:
In full screen mode, the images may display soft / low resolution even if the actual image files have enough resolution for the display. This happens because Bridge’s cached previews were not generated at display resolution.
To fix this issue:
In Edit > Preferences > Advanced:
Check the Generate Monitor-Size Previews option
In Tools > Cache > Manage Cache..:
Select Clean Up Cache, then Purge all local cache files, and then click Next.
* You may have to restart the program for this to take effect
The cglSurfaceCarPaint car-paint material combines 3 layers:
Base: A blend of diffuse/metallic shading with a view-angle color mix
Metallic flakes: Distance-blended procedural metallic flakes
Clear coat: Glossy clear coat layer with built-in procedural bump irregularity
And has been tested with:
Blender & Cycles
Maya & Arnold
3ds max & V-Ray
This post is a summary of the tips given by Epic Games technical artist Min Oh in his GDC 2017 lecture about improving photo-realism in product visualization, more specifically, how to render high quality surfaces.
I recommend watching the full lecture:
Render sharper reflections by increasing the Cubemap resolution of reflection captures: Project Settings > Engine > Rendering > Reflection > Reflection Capture Resolution
* Use power-of-2 values, e.g. 256, 512, 1024…
Improve the accuracy of environment lighting by increasing the Cubemap resolution of the Skylight:
* Use power-of-2 values, e.g. 256, 512, 1024…
Improve screen space effects accuracy like screen space reflections by setting the engine to compute high precision normals for the GBuffer:
Set Project Settings > Engine > Rendering > Optimizations > GBuffer Format to: High Precision Normals
Use a high degree of tessellation (subdivision) for the models pre-import.
Simply put: Use high quality models.
Improve the surface’s tangent-space accuracy, and as a result the shading/reflection accuracy, by setting the model’s static mesh components to encode a high-precision tangent basis: Static Mesh Editor > Details > LOD 0 > Build Settings > Use High Precision Tangent Basis
Create materials with rich dual specular layers by enabling the clear coat separate normal input: Project Settings > Engine > Rendering > Materials > Clear Coat Enable Second Normal. Then set the material’s Shading Model to Clear Coat and use a ClearCoatBottomNormal input node to add a normal map for the underlying layer:
Steps for activating DXR Ray-tracing in a UE4 project:
Project Settings: Platforms > Windows > Targeted RHIs:
Set Default RHI to DirectX 12 * RHI = Rendering Hardware Interface
Project Settings: Engine > Rendering > Ray Tracing:
Check Ray Tracing
* Requires restarting the editor, and may take a while to load the project afterwards..
* I’m actually not sure whether the delay in re-launching the project is caused by a full rebuild of the lighting or by shader compilation..
Post Process Volume > Rendering Features > Reflections:
Set Type to: Ray Tracing
Post Process Volume > Rendering Features > Ray Tracing Reflections:
Set Max Bounces to more than 1 if needed