The cglSurfaceCarPaint car-paint material combines 3 layers:
* Base: A blend of diffuse/metallic shading with a view-angle color mix
* Metallic flakes: Distance-blended procedural metallic flakes
* Clear coat: Glossy clear-coat layer with built-in procedural bump irregularity
It has been tested with:
* Blender & Cycles
* Maya & Arnold
* 3ds Max & V-Ray
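The base layer’s view-angle color mix can be sketched as a simple facing-ratio blend. The following Python snippet is a minimal illustration of the idea, not actual cglSurfaceCarPaint shader code; the function and parameter names are hypothetical:

```python
def view_angle_color_mix(base_color, edge_color, cos_theta, exponent=3.0):
    """Blend a base color toward an edge color as the surface turns
    away from the viewer. cos_theta is the dot product of the surface
    normal and the view direction (1.0 = facing the camera)."""
    # The facing ratio raised to an exponent controls how quickly the
    # edge color takes over at grazing angles.
    w = (1.0 - max(0.0, min(1.0, cos_theta))) ** exponent
    return tuple(b * (1.0 - w) + e * w for b, e in zip(base_color, edge_color))

# A surface facing the camera head-on shows the pure base color;
# a grazing view shows the pure edge color.
facing = view_angle_color_mix((0.8, 0.0, 0.0), (0.1, 0.1, 0.4), 1.0)
grazing = view_angle_color_mix((0.8, 0.0, 0.0), (0.1, 0.1, 0.4), 0.0)
```

The exponent plays the same role as a Fresnel-falloff control: higher values confine the edge color to a narrower rim.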
This post is a summary of the tips given by Epic Games technical artist Min Oh in his GDC 2017 lecture about improving photo-realism in product visualization, and more specifically, how to render high-quality surfaces.
I recommend watching the full lecture:
Render sharper reflections by increasing the Cubemap resolution of reflection captures: Project Settings > Engine > Rendering > Reflection > Reflection Capture Resolution
* use power-of-2 values, e.g. 256, 512, 1024…
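The same setting can also be applied per project in the engine config. This assumes the UE4-era console variable name `r.ReflectionCaptureResolution`; verify it against your engine version:

```ini
; Config/DefaultEngine.ini
[/Script/Engine.RendererSettings]
r.ReflectionCaptureResolution=512
```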
Improve the accuracy of environment lighting by increasing the Cubemap resolution of the Skylight:
* use power-of-2 values, e.g. 256, 512, 1024…
Improve screen space effects accuracy like screen space reflections by setting the engine to compute high precision normals for the GBuffer:
Set Project Settings > Engine > Rendering > Optimizations > GBuffer Format to: High Precision Normals
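If you prefer editing the project config directly, the same option can be set there. In UE4, the “High Precision Normals” choice corresponds to `r.GBufferFormat=3`; treat the exact variable name and value as an assumption to verify in your engine version:

```ini
; Config/DefaultEngine.ini
[/Script/Engine.RendererSettings]
r.GBufferFormat=3
```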
Use a high degree of tessellation (subdivision) for the models before import. Simply put: use high-quality models.
Improve the surface’s tangent-space accuracy, and as a result the shading/reflection accuracy, by setting the model’s static mesh components to encode a high-precision tangent basis: Static Mesh Editor > Details > LOD 0 > Build Settings > Use High Precision Tangent Basis
Create materials with rich dual specular layers by enabling the clear coat separate normal input: Project Settings > Engine > Rendering > Materials > Clear Coat Enable Second Normal. Then set the material Shading Model to Clear Coat and use a ClearCoatBottomNormal input node to add a normal map for the underlying layer:
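The “Clear Coat Enable Second Normal” checkbox maps to a renderer setting that can also be enabled in the project config. The console variable name below is from UE4; verify it for your engine version:

```ini
; Config/DefaultEngine.ini
[/Script/Engine.RendererSettings]
r.ClearCoatNormal=1
```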
A common misconception in the field of Architectural Visualization is that we must “cheat” about the real-world lighting conditions of an architectural interior in order to render an aesthetically pleasing image of it.
I have been a professional in the field of digital 3D Visualization and Animation for the past 17 years, and the technologies we use to create synthetic imagery have developed dramatically during this period. The profession traditionally named “Computer Graphics” can today rightfully be named “Virtual Photography”.
At the beginning of my career, photo-realistic rendering was impossible to perform on a reasonably priced desktop PC workstation. Today things are very different. In the early years, the process of digital 3D rendering produced images of a completely graphic nature; no one back then would mistake a synthetic 3D rendering for a real-world photograph.
About 12 years ago, the development of desktop CPU performance and the advent of 3D rendering software using Ray-Tracing* processes made possible a revolution in the ability to render photo-realistic images on desktop PCs. The term “photo-realistic” simply means that an uninformed viewer might mistake the synthetically generated image for a real-world photo; it doesn’t mean the image is an accurate representation of the way a photograph of the subject would look if it really existed in the world. For a computer-generated image to faithfully represent how a real-world photo would look, it’s not enough for the rendering to be photo-realistic; it also needs to be physically correct and photo-metric.
“Physically correct” rendering means the rendered image was produced using an accurate virtual simulation of physical light behavior, and “photo-metric” rendering means that the virtual light sources in the 3D model have been defined using real-world physical units, and that the rendered raw output is processed in a way that faithfully predicts the image that would result from a real-world camera exposure.
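As a concrete illustration of the photo-metric idea, here is a sketch of the standard saturation-based exposure used in physically based rendering, where metered scene luminance in real cd/m² is converted to an exposure value and then to a linear scale on the raw render. The constants follow the common reflected-light meter calibration (K = 12.5); this is a generic illustration, not code from any particular package:

```python
import math

def ev100_from_luminance(avg_luminance, iso=100.0, k=12.5):
    """EV at ISO 100 for a metered average scene luminance in cd/m^2,
    using the standard reflected-light meter calibration constant K."""
    return math.log2(avg_luminance * iso / k)

def exposure_scale(ev100):
    """Linear multiplier applied to raw rendered radiance so that the
    brightest non-clipping luminance maps to 1.0 (saturation-based)."""
    max_luminance = 1.2 * 2.0 ** ev100
    return 1.0 / max_luminance

# A dim interior metering ~12.5 cd/m^2 on average sits near EV100 6.6,
# while a sunlit exterior can meter above EV100 14, over two orders of
# magnitude brighter; no 8-bit image can hold both ranges at once.
indoor_scale = exposure_scale(ev100_from_luminance(12.5))
```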
Most contemporary rendering software packages have the features I described above, and are therefore capable of generating photo-realistic images that are also physically correct and photo-metric, and so faithfully predict how a real-world photo of the architectural structure would look.
So what’s the problem?
The problem is that when we virtually simulate the optics of a scene using real-world physical light intensities, we come across the challenges that exist in real-world photography, mainly the challenge of contrast management, or in geekier terms, handling the huge dynamic range of real-world physical lighting. Simply put, we encounter the common photography artifacts: unpleasing “blown-out” or “burnt” highlights, light fixtures and windows.
Trying to solve the problem by lowering the camera exposure simply reveals more detail in bright areas at the expense of darkening the more important areas of the image. Traditional photo-editing manipulations don’t do the trick; they might serve as a blunt instrument to darken areas of the image selectively, but the result looks unnatural and fake. The traditional approach in interior rendering is to simply give up the realism of the visualization by drastically reducing the intensities of visible light sources and adding invisible light sources, a solution that might produce an aesthetic image, but not one that faithfully reflects how a real photograph of the place would look or can be said to be physically correct.
Fortunately, today we have tools and processes that allow for a much more effective development of physically accurate renders, somewhat similar in approach to the technologies incorporated into professional digital photography. These techniques involve processing the rendered images using specialized file formats that offer a very high degree of color accuracy and can store the full dynamic range of the “virtual photograph”; a process called “tone mapping”, designed to display an image in a way that mimics the way our eyes naturally see the world; and optically simulated lens effects that mimic the way a real lens would react to contrast and high intensities of light.
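To make the tone-mapping step concrete, here is a minimal sketch using the classic extended Reinhard operator, one of many possible curves; production render engines typically use more sophisticated filmic curves, but the principle of smoothly rolling off highlights instead of clipping them is the same:

```python
def reinhard_extended(luminance, white_point=16.0):
    """Compress an HDR luminance into [0, 1]: values near zero pass
    through almost linearly, and white_point maps exactly to 1.0."""
    l, w = luminance, white_point
    return l * (1.0 + l / (w * w)) / (1.0 + l)

# Mid-grey (0.18) stays close to itself, while a highlight 16x brighter
# than full white is rolled off smoothly to 1.0 instead of clipping.
mapped = [reinhard_extended(l) for l in (0.18, 1.0, 16.0)]
```

This is exactly the “blunt instrument” problem solved properly: the curve darkens only what is bright, continuously, rather than requiring selective manual retouching.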
Incorporating this workflow requires taking a completely different approach to creating and processing 3D rendered images than the traditional methods used in past decades. We give up some of the direct control we’re used to in computer graphics, but in return we are able to produce physically correct visualizations that are both aesthetically pleasing and have natural-feeling lighting.
In conclusion, with effective usage of today’s imaging technologies, it’s possible to produce 3D visualizations that serve both as a faithful representation of a possible real-world photograph of the architectural design, thus aiding the creative design and planning process, and at the same time as a photo-realistic basis for producing highly aesthetic marketing media.
Thank you for reading! I would love to hear your opinion, discuss the subjects in the article and answer any questions that you may have about it.
* “Ray-Tracing” is a process that simulates the physical behavior of light by tracing the directions it travels as it hits surfaces, reflects off them and refracts through them. Ray-Tracing calculations are a key ingredient in photo-realistic rendering.
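The core of that tracing step is a ray–surface intersection test. A minimal illustration in Python for a ray against a sphere, using the standard quadratic solution (a generic textbook sketch, not code from any particular renderer):

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance along a normalized ray direction to the
    nearest intersection with a sphere, or None if the ray misses."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    b = 2.0 * (ox * direction[0] + oy * direction[1] + oz * direction[2])
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * c  # direction is unit length, so a == 1
    if disc < 0.0:
        return None  # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0.0 else None

# A ray from the origin along +Z hits a unit sphere centered 5 units
# away at distance 4.0 (the sphere's near surface).
hit = ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0)
```

A renderer repeats tests like this for every surface in the scene, then spawns new rays at each hit point for reflection and refraction.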
The author is Oded Erell, a photo-realistic rendering specialist and instructor. The 3D visualizations displayed in this article have all been produced by CG LION Studio.
You’re welcome to visit our portfolio website and see more examples of our work.