Understanding Transparency Render Settings

In theory, every clear* refractive surface should have its shadow computed using a refractive-caustics calculation in order to render the refractive lensing** effect correctly, have its transparency color computed as volumetric absorption of light through the medium in order to render the color correctly for areas of different thickness, and have not only its external reflections but also its internal reflections calculated, in order to render the interaction between light and the transparent body correctly.
However, for thin surfaces of even thickness, like window glazing and car windshields, these optical effects can be rendered with much cheaper (non-physical) methods, with very little compromise on final image quality or look, and in most cases with an even easier setup.
For this reason, most popular render engines expose object (mesh) and material (shader) parameters that control how these transparency effects are rendered.
In this short article we'll cover the different methods for rendering transparency effects, the reasoning behind them, and how to configure these settings in different render engines.

In the comparison images below (rendered with Cycles), the images on the left were rendered with physically correct glass settings at 8192 samples + denoising,
and the images on the right were rendered with "flat" transparency settings at 1024 samples + denoising.
> See the shader settings below
Note that while for the monkey statue the fast, flat transparency settings produce an unrealistic result, the window glazing model loses very little of its look with them:

Transparency_Settings

Lensing, caustics and transparent shadows:

3D-Rendering-of-glassware

It's a common intuitive mistake to think that transparent objects don't cast shadows; they actually do. They don't block light, they change its direction: light is refracted through them and gets focused in some areas of their surroundings (caustics), but it can't pass straight through them, so a shadow is created.
A good example of this is a glass ball acting like a lens, focusing the light into a tiny area while otherwise casting a regular elliptical shadow. If we tell the render engine to simply let direct light pass through the object, we won't get a correct, realistic result, even if the light gets colored by the object's transparency color.
There is, however, one case where letting the direct light simply pass through the object can both look correct and save a lot of calculations, and that is when the object is a thin surface of consistent thickness, like window glazing.
So in many popular render engines, when rendering an irregular, thick, solid transparent body like a glass statue or a glass filled with liquid, we have to counter-intuitively set the object or material to be opaque to direct light and let the indirect refracted light (caustics) create the correct lensing effect (focused light patterns in the shadow area).
> Physically, light passing through a material medium is always refracted, i.e. it is indirect light. But for thin surfaces of even thickness like glazing, the lensing effect is insignificant and can be disregarded completely by letting light pass directly through the object and be rendered as a 'transparent shadow'.
So the general rule regarding calculating caustics (lensing) vs. casting transparent (non-physical) shadows: a solid, irregular shape with varying thickness, like a statue or a bottle of liquid, should be rendered as opaque to direct light but with fully calculated caustics, i.e. refracted indirect light, while a thin surface of even thickness like glazing can simply cast a transparent shadow.

Transparency color:

Cola_Test_ODED_ERELL_3D_Crop_signed

Physically, the color of transparency*** is always created by volumetric absorption of light traveling within the material medium. As light travels farther through a material, more and more of its energy gets absorbed in the medium**** (converted to heat); therefore, the thicker the object, the less light will reach its other side, and the darker it will appear. This volumetric absorption of light isn't uniform across all wavelengths (colors) of light, so the object appears to have a color.
For example, common glass absorbs red and blue light at a higher rate than green light, so objects seen through it appear greenish. When we look at the thin edge of a common glazing pane we see a darker green color, because refraction bends the light along the length of the pane, so we're seeing light that has traveled through a thicker volume of glass. Tea in a glass generally looks dark orange-brown, but if spilled on the floor it will 'lose' its color and look clear like water, because it's then too thin to absorb a significant amount of light.
Most render engines allow setting the transparency ("refraction"/"transmission") color of the material either as a 'flat', non-physical filter color, or as a physical RGB light-absorption rate (sometimes referred to as 'fog' color), which in some cases can be tuned more accurately with additional multiplier or depth parameters.
Setting an object's transparency color using physical absorption (fog) usually requires more tweaking, because in this method the final rendered color depends not only on the color we set in the material/shader, but also on the model's actual real-world thickness.*****
In general, the transparency color of thick, solid, irregularly shaped objects (with varying thickness) must be set as a physical absorption-rate color, and not as a simple filter color, otherwise the resulting color will not be affected by the material thickness and will look wrong.
For thin surfaces of consistent thickness, like window glazing, however, it's more efficient to set up the transparency color as a 'flat' filter color, because it's more convenient and predictable to set up and produces a correct-looking result.
For example, if we need to render an architectural glazing surface that filters exactly 50 percent of the light passing through it, it's much simpler to set it up using a simple 50% gray transparency filter color, because this method disregards the glass model's thickness. This approach isn't physical, but for an evenly thick glazing surface the result has no apparent difference from a physical volumetric-absorption approach to the same task.
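To make the thickness dependence concrete, here's a small Python sketch contrasting the two approaches; the absorption coefficients and thicknesses are arbitrary example values, not taken from any particular render engine:

```python
import math

def transmitted_flat(filter_color):
    """'Flat' filter color: the same fraction of light passes regardless of thickness."""
    return filter_color

def transmitted_volumetric(absorption_per_m, thickness_m):
    """Beer-Lambert style volumetric absorption: transmission falls off
    exponentially with the distance light travels inside the medium."""
    return [math.exp(-a * thickness_m) for a in absorption_per_m]

# Example absorption coefficients per meter (R, G, B) - arbitrary values,
# chosen so that green is absorbed least, like common glass.
absorption = [8.0, 2.0, 9.0]

for thickness in (0.003, 0.05, 0.3):  # 3 mm pane, 5 cm block, 30 cm block
    rgb = transmitted_volumetric(absorption, thickness)
    print("%5.0f mm ->" % (thickness * 1000), [round(c, 3) for c in rgb])

# The 3 mm pane stays nearly clear, while the thicker blocks turn progressively
# darker and greener - exactly the thickness dependence a flat filter color
# cannot reproduce.
```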

Internal reflections:

Diamond-close-up-inspection

It's not intuitive to think that the boundary with the air reflects light when seen from inside a transparent material volume like water or glass, but it does.
Viewed from under water, the water's surface (the boundary with the air above) acts like a mirror at certain angles, reflecting objects that are under water. A glass ball lit by a lamp has a very distinct highlight, which is the reflected image of the light source itself (specular reflection), but it also has an internal highlight appearing on the inside, where the glass volume meets the air. We can easily 'miss' this internal highlight because in many cases its appearance converges with the bright focused light behind the ball caused by lensing (refractive caustics). The distinctly shiny appearance of diamonds, for example, depends very much on bright internal reflections; diamond cutting patterns are specifically designed to reflect a large percentage of light back to the viewer and look shiny. If we wish to create a realistic rendering of diamonds, we will not only have to set up the correct refractive index for the material, but also model the geometric shape of the diamond correctly, and of course set the material to render both external ("regular"******) reflections and internal reflections.
You're probably already guessing what I'm about to say next…
For thin surfaces of even thickness, the internal reflection is barely noticeable because it converges with the main surface reflection, and for this reason, when rendering window glazing, car windshields, and the like, we can usually turn the internal-reflections calculation off to save render time.

Underwater_31.12.18

Render Settings:

Simplified settings summary table:

|             | Flat (Glazing)  | Physical (irregular volume) |
|-------------|-----------------|-----------------------------|
| Shadow      | Transparent     | Caustics                    |
| Color       | Filter          | Volumetric Absorption       |
| Reflections | External only   | External and Internal       |

Example Cycles (Blender) shaders:
> The flat glazing shader is actually more complex to define, since it involves specifying different types of calculations per type of ray being traced (cheating).
In general, for Shadow and Diffuse rays the shader is calculated as a simple Transparent shader rather than a Refraction shader, and when back-facing, the shader is calculated as pure white transparent instead of glossy, to remove the internal reflections.
> While the flat glazing shader is connected only to the Surface input of the Material Output node, the physical glass shader also has a Volume Absorption node connected to the Volume input of the Material Output node.
> Note that a simple Principled BSDF material will have flat transparency and physical shadow (caustics) by default.
> For caustics to be calculated, the Refractive Caustics option has to be enabled in the Light Path > Caustics settings in the Cycles render settings.
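For reference, here is a minimal Python (bpy) sketch of the physical glass setup described above, assuming Blender 2.7x/2.8x node and property names (the flat glazing shader's Light Path tricks are not shown); verify the names against your Blender version:

```python
import bpy

# Enable refractive caustics (Render settings > Light Paths > Caustics).
bpy.context.scene.cycles.caustics_refractive = True

# Build a simple "physical glass" material: Glass BSDF on the Surface input,
# Volume Absorption on the Volume input of the Material Output node.
mat = bpy.data.materials.new("Physical_Glass")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links
nodes.clear()

output = nodes.new("ShaderNodeOutputMaterial")
glass = nodes.new("ShaderNodeBsdfGlass")
glass.inputs["Color"].default_value = (1.0, 1.0, 1.0, 1.0)  # keep the surface color white
glass.inputs["IOR"].default_value = 1.52

absorb = nodes.new("ShaderNodeVolumeAbsorption")
absorb.inputs["Color"].default_value = (0.8, 1.0, 0.85, 1.0)  # greenish glass tint
absorb.inputs["Density"].default_value = 50.0                 # tune to the model's real-world scale

links.new(glass.outputs["BSDF"], output.inputs["Surface"])
links.new(absorb.outputs["Volume"], output.inputs["Volume"])
```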

Cycles

Example V-Ray Next for 3ds max material settings:
> In V-Ray for 3ds max (and Maya), the Affect Shadows parameter in the VRayMtl Refraction settings determines whether the shadows will be fake transparent shadows suitable for glazing (on), or opaque (off), which is the suitable setting for caustics.
> The caustics calculation is either GI Caustics, which is activated by default in the main GI settings, or a dedicated Caustics calculation that can also be activated in the GI settings.
> For flat glazing the color is defined as the Refraction Color, while for physical glass the Refraction Color is pure white and the glass color is set as the Fog Color.

V-Ray_Glass

Example Arnold for Maya settings:
> In Arnold 5 for Maya, the Opaque setting in the shape node's Arnold attributes must be unchecked for transparent shadows, and checked for opaque shadows suitable for caustics.
> For rendering refractive caustics in Arnold for Maya, additional settings are needed.
> When the Transmission Depth attribute is set to 0, the Transmission Color is rendered as a flat filter color; when the Transmission Depth attribute is set to a value higher than 0, the transparency color is calculated as volumetric absorption, reaching the Transmission Color at the specified depth.
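A minimal Maya Python sketch of the settings described above; the node names are hypothetical placeholders, and the attribute names (aiOpaque, transmission, transmissionColor, transmissionDepth) are the ones MtoA usually exposes, so check them against your Arnold version:

```python
import maya.cmds as cmds

shape = "glassShape"         # hypothetical shape node name
shader = "glass_aiStandard"  # hypothetical aiStandardSurface shader name

# Thin glazing: un-check Opaque so the object casts cheap transparent shadows.
cmds.setAttr(shape + ".aiOpaque", 0)
# Solid glass meant to produce caustics: keep the object opaque to direct light.
# cmds.setAttr(shape + ".aiOpaque", 1)

cmds.setAttr(shader + ".transmission", 1.0)
cmds.setAttr(shader + ".transmissionColor", 0.85, 1.0, 0.9, type="double3")

# Transmission Depth 0: the color above acts as a flat filter color.
cmds.setAttr(shader + ".transmissionDepth", 0.0)
# Transmission Depth > 0: volumetric absorption, reaching the color at that
# distance inside the medium (in scene units).
# cmds.setAttr(shader + ".transmissionDepth", 5.0)
```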

ArnoldMaya

General notes:

> In brute-force path tracers like Cycles and Arnold, the caustics calculation is actually a diffuse indirect light path. This seems counter-intuitive, but the light pattern appearing on a table surface in the shadow of a transparent glass is actually part of the table surface's diffuse reflection phenomenon.

> What we refer to as 'Diffuse Color' in dielectrics (non-metals) is actually a simplification of the absorption of light scattered inside the object's volume (SSS).

* Optically, all dielectric materials (non-metals) are refractive, but not all of them are also clear; that is, most of them have micro-particles or structures within their volume that scatter and absorb the light traveling through them, creating the effects we refer to as "Subsurface Scattering" (SSS) and, at higher densities, "Diffuse reflection".

** Lensing is a term used to describe the effect of a material medium bending light, focusing and dispersing it, and so acting as a lens.

*** Actually all color in dielectric (non metallic) materials is created by Volumetric Absorption.

**** Light isn't only absorbed as it travels through a medium, it's also scattered.

***** Volumetric shading effects usually use the model's original scale (the true mesh scale), so to avoid unexpected results it's best that the object's transform scale be 1.0 (or 100%, depending on the software's notation).

Related Posts:
> Cycles Nested Transparencies
> Arnold for Maya Refractive Caustics
> Arnold for Maya Transmission Scattering
> Understanding Fresnel Reflections
> Advanced Architectural Glazing shader for Blender
> V-Ray Underwater Rendering

Understanding the Photometric Light Measurement Units

There are two sets of light intensity measurement units:
Photometric units and Radiometric* units.
The Photometric units measure the intensity of visible light** as it is perceived by the human eye, and the Radiometric units measure the intensity of electromagnetic radiation***, which is the broader physical phenomenon of light, including the whole spectrum of radiation beyond visible light** (like x-rays and infrared radiation for example).

Light intensity is generally measured in three ways:

1. The directional intensity received from a light source, as measured from a point in space, i.e. Luminous Intensity in Photometric units or Radiant Intensity in Radiometric units.

2. The total light output a light source emits in all directions, i.e. Luminous Flux in Photometric units or Radiant Flux in Radiometric units.

3. The amount of light received by a surface from all directions, i.e. Illuminance in Photometric units or Irradiance in Radiometric units.

Similarly to the way the measurement of mechanical power was based on the power of an ideal horse (horsepower), the Photometric units base the scale of light intensity on the light emitted by an ideal candle.

Ster

Luminous Intensity (Candela):
When measured from any point in space, the luminous intensity of an ideal candle is 1 'Candela', i.e. 1 'cd'.
> In 3D rendering, a photometric IES file describes a light source's beam pattern by listing the luminous intensity (cd) of the light source in different directions.

For light-emitting surfaces like LCD screens, the related quantity Luminance is measured in Candelas per square meter of surface, i.e. cd/m2. Typical LCD computer monitors, for instance, have a luminance of about 250 cd/m2. Imagine your computer screen displaying pure white and extended to an area of 1m x 1m; the light intensity perceived from it would be as if about 250 candles were spread over the whole area.

Luminous Flux (Lumen):
The amount of light emitted by an ideal candle through a conic beam spanning 1 'Steradian'**** of solid angle is measured as 1 'Lumen', i.e. 1 'lm'. The total Luminous Flux of the candle in all directions is 4 x PI lumens, i.e. about 12.57 lumens, which is simply the whole surface area of the unit sphere.
> The total amount of light produced by different kinds of light bulbs is usually specified as a Luminous Flux measurement, i.e. how many Lumens the light source outputs.
> If an optical reflector is placed next to a light source, focusing all its light output in a narrow direction, it won't change the light source's Luminous Flux (Lumen) output, but since the same Luminous Flux will be focused into a narrower beam, it will have a higher luminous intensity (cd) measured from that direction, and therefore surfaces in that direction will receive a brighter Illuminance (Lux) (see below).
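As a rough illustration of the flux/intensity relationship described in the note above, here's a Python sketch assuming an idealized light that spreads its flux evenly over a cone (real fixtures have uneven beam patterns, which is exactly what IES files describe):

```python
import math

def cone_solid_angle(beam_angle_deg):
    """Solid angle (in steradians) of a cone with the given full beam angle."""
    half = math.radians(beam_angle_deg) / 2.0
    return 2.0 * math.pi * (1.0 - math.cos(half))

def average_intensity_cd(luminous_flux_lm, beam_angle_deg):
    """Average luminous intensity (candela) when the whole flux is squeezed
    into the cone: narrower beam, same lumens, higher candela."""
    return luminous_flux_lm / cone_solid_angle(beam_angle_deg)

flux = 800.0  # lumens, roughly a household LED bulb
for beam in (360.0, 120.0, 30.0):
    print("%5.0f deg beam -> %8.1f cd" % (beam, average_intensity_cd(flux, beam)))
# The same 800 lm measure about 64 cd when spread over the full sphere, but
# thousands of candela when focused into a 30-degree spot - the reflector
# effect described in the note above.
```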

Illuminance (Lux):
A surface of 1 m2 (square meter) receiving 1 lumen of light has a measured Illuminance of 1 lux, i.e. 1 lx. Illuminance is measured as how many lumens a surface receives per square meter.
> In photography, the amount of Illuminance at which a surface is lit is important for determining the proper photographic exposure for the picture.

The inverse-square law:
As a light beam travels through space, its distribution covers a larger and larger area, and therefore its energy per area is reduced. The light energy a candle emits through 1 steradian of solid angle will, at a distance of 1 meter, cover an area of 1 square meter; that square meter therefore receives 1 lumen of light energy and is illuminated with an illuminance of 1 lux. As that 1 lumen of light energy travels 1 meter further, to a distance of 2 meters from the candle, it spreads to cover an area of 4 square meters; each of those 4 square meters now receives just 1/4 of a lumen, so it's illuminated by only 1/4 lux. At a distance of 3 meters from the candle, the same lumen covers 9 square meters, so each square meter receives 1/9 of a lumen and is illuminated by only 1/9 lux. At a distance of 4 meters, the same 1 lumen of light energy is spread over 16 square meters, so each square meter is illuminated by 1/16 lux. You can already see the emerging pattern: the illumination intensity is inversely proportional to the square of the distance to the light source. This phenomenon is referred to as 'The inverse-square law', and in practical terms it means that a surface's illumination is greatly influenced by its distance from the light source.
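The same pattern as a short Python sketch, using the ideal candle from the text:

```python
def illuminance_lux(intensity_cd, distance_m):
    """Inverse-square law: illuminance falls off with the square of the distance.
    1 cd at 1 m gives 1 lux (1 lumen spread over 1 square meter)."""
    return intensity_cd / distance_m ** 2

candle_cd = 1.0
for d in (1.0, 2.0, 3.0, 4.0):
    print("%.0f m -> %.4f lux" % (d, illuminance_lux(candle_cd, d)))
# Prints 1, 0.25, 0.1111, 0.0625 - the 1, 1/4, 1/9, 1/16 pattern described above.
```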

inv

Notes:

* Radiometric units measure light intensity in Watts of light energy.
Note that this isn't the Watt rating of electrical consumption we're used to for classifying electric light sources, but a measurement of the actual energy in the light itself.

** Electromagnetic radiation of wave lengths that stimulate the human eye.

*** Also referred to as ‘light‘ in physics.

**** A 'Steradian', also referred to as a 'square radian', is a measurement unit of 3D conic span, or 'solid angle'. A solid angle of 1 Steradian originating at the center of a unit sphere covers an area of exactly 1 (radius squared) on the surface of the sphere, the whole surface area of the sphere being 4 x PI. The Steradian can be thought of as the Radian's 3-dimensional 'cousin'.

Related posts:
IES lighting in CG
Fresnel reflections

Realistic Spotlights for Blender & Cycles

Software:
Blender 2.79 | Cycles Renderer

There’s currently no built-in support for IES light sources in Blender & Cycles.
We already know that Blender 2.8 will have the feature built in (which is great news!), and there's an addon that provides the functionality, but I wasn't satisfied with its workflow, as it isn't well integrated into Cycles.
So I decided to develop a custom Cycles shader (node group) that provides realistic IES-like spotlights in a convenient, customizable way.

The Shader I developed is called CG-Lion Spotlight Presets Pack 1.0 and is available for purchase on Blender Market.
It doesn't load external IES files, but instead has a pre-configured library of 20 spotlight shapes, and also provides features that are not available in IES lighting, like tweaking the spotlight beam focus, adding a chromatic color dispersion effect, and producing a correctly bright surface at the light source.

CGL_Spotlight_Presets_Pack_1.0_Previews.jpg


Related:

Customizable Photo-realistic Car-paint shader for Cycles
Complex Fresnel texture for Cycles
Optimized Architectural Glazing Shader for Cycles
Procedural Wood Shader for Cycles

Understanding Fresnel Reflections

What we refer to in CG by the term "Fresnel Reflections" or "Fresnel Effect" is the way specular reflection intensity changes according to the light/surface incident angle, and it is a basic optical property of surfaces.

Specular reflection intensity changes according to light incident angle, and behaves almost like a perfect mirror at grazing view angle.
The reason we call this natural reflection behavior "Fresnel Effect" or "Fresnel Reflection" is that the equations describing how reflection intensity changes with incident angle were formulated by the French physicist Augustin-Jean Fresnel, and in the early days of CG not all systems knew how to calculate natural reflections, or reflections at all for that matter. So in CG we ended up treating this as something special, when in fact it isn't special in nature; it was only special in the early days of ray-tracing.

When rendering Fresnel Reflections, the reflection intensity isn’t determined by a linear blending percent like mixing a layer.
It’s determined by a factor called “Refractive Index” or “Index Of Refraction” i.e. IOR.
The IOR value is derived from the material's optical density, which is the key factor determining both reflection intensity and refraction amount.

Examples of some physical IOR values*:
Air (vacuum): 1.0
Water: 1.33
Glass: 1.52
Diamond: 2.417
* Physical values differ between different measurements and samples of materials so you might see differences between different data sources.
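As a quick sanity check, the reflection intensity at perpendicular (0°) incidence can be derived directly from these IOR values with the standard formula R0 = ((n2 - n1) / (n2 + n1))^2; a small Python sketch:

```python
def normal_incidence_reflectance(ior, surrounding_ior=1.0):
    """Fresnel reflectance at 0 degrees incidence for a dielectric:
    R0 = ((n2 - n1) / (n2 + n1)) ** 2."""
    return ((ior - surrounding_ior) / (ior + surrounding_ior)) ** 2

for name, ior in [("Water", 1.33), ("Glass", 1.52), ("Diamond", 2.417)]:
    print("%-8s IOR %5.3f -> %4.1f%% reflected"
          % (name, ior, normal_incidence_reflectance(ior) * 100))
# Water reflects ~2%, glass ~4% and diamond ~17% at perpendicular incidence;
# at grazing angles all of them approach 100%.
```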

FResnel_Off

This ball is rendered without “Fresnel Reflections”.
Its specular reflection is blended at a constant 50% over the diffuse color (reflection), unaffected by the light/view incident angle.
The result looks wrong for a natural material. It may look like a dielectric (non-metal) material coated with a silvery coating, but it can't correctly look like glossy plastic or glass.

FResnel_On

This ball is rendered with “Fresnel Reflections”.
The reflections look natural for a dielectric material (non metal), because they are dim at perpendicular incident angle and intense at grazing view angle, hence seen mostly at the sides of the ball accentuating its contour.

Theoretically, specular reflection for all types of materials should be calculated using what we refer to in CG as "Complex Fresnel", that is, reflection equations that take into account both the Refractive Index (IOR) and the Extinction Coefficient for 3 primary colors (spectrum wavelengths).
*Complex Fresnel component values for different materials can be found on https://refractiveindex.info/.
In practice, for dielectric materials (non-metals), most common production rendering systems use what we refer to in CG as "Simple Fresnel" or "Simple IOR", that is, calculating the reflection for all 3 primary colors using a single Refractive Index value, which is the surface's Refractive Index for the green primary color.
This method has proven itself to be very efficient for rendering non-metallic surfaces (dielectric materials).
Rendering metallic reflection using Complex IOR produces the most realistic color and reflection* for metals.
*In metallic surfaces, the color is the reflection color itself and not a separate Diffuse component.
Some rendering systems, Arnold 5 for example, have implemented a general form* of Complex IOR in their physical surface shader. Complex IOR reflection can also be rendered via OSL shaders that can be found on the web (or written…).
*I'm using the term 'general form' because these implementations don't include inputs for Complex IOR values, but just a general metallic reflection curve that interpolates a manually selected color.
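For reference, at perpendicular incidence the complex-IOR (conductor) reflectance per color channel reduces to R0 = ((n - 1)^2 + k^2) / ((n + 1)^2 + k^2). A minimal Python sketch follows; the n/k triplets are placeholder values only, to be replaced with measured data from refractiveindex.info:

```python
def conductor_normal_reflectance(n_rgb, k_rgb):
    """Normal-incidence reflectance of a metal from its complex IOR (n + i*k),
    computed per color channel: R0 = ((n - 1)^2 + k^2) / ((n + 1)^2 + k^2)."""
    return [((n - 1.0) ** 2 + k ** 2) / ((n + 1.0) ** 2 + k ** 2)
            for n, k in zip(n_rgb, k_rgb)]

# Placeholder n and k triplets for the R, G, B primaries - substitute measured
# per-wavelength values from refractiveindex.info for a real metal.
n_rgb = [0.2, 0.4, 1.3]
k_rgb = [3.0, 2.4, 1.8]
print([round(r, 3) for r in conductor_normal_reflectance(n_rgb, k_rgb)])
# Unlike dielectrics, the result is strongly colored and already high even at
# 0 degrees - which is why a single simple IOR can't reproduce metallic reflection.
```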

This golden monkey statue ("Suzanne") is rendered in Cycles using a Complex Fresnel procedural node:

Popular, useful cheats for mimicking metallic reflection without Complex IOR are: setting a very high (non-physical) simple IOR value, like 15 to 30, which forces the Fresnel reflection to become more metal-like; turning Fresnel reflection completely off, turning the specular reflection into a perfect mirror reflection; or creating a custom reflection/angle curve/ramp that produces the effect of the metallic reflection color and intensity changing with incident angle (see example here).

In many popular production renderers, the physical surface shader uses a single IOR parameter. Some rendering systems allow using 2 different IOR parameters, one for calculating reflections and the other for calculating Refraction.
* Physically correct dielectric materials should be defined with the same IOR value for both reflections and refraction. Using different IOR values for reflection and refraction allows useful cheats, like creating a transparent material that is modeled without any thickness, or defining a transparent glass that has a silver reflective coating, like sunglasses sometimes have.

Notes:

  1. IOR lists on the web that display only simple IOR values, like this list, are not valid for metals and will produce wrong results.
    *Using simple IOR values for dielectric materials, however, is very efficient.
  2. There are parts of the CG industry where, in daily slang, the term "Fresnel" is used to refer to any shading effect that is view-angle dependent,
    usually referring to the shading properties appearing at the "sides" or contours of the model.
  3. Some CG systems use the term Fresnel to refer to a simple linear or non-linear incident-angle blending effect, which should actually be called "Facing Ratio" or "Perpendicular-Parallel" blending (falloff).
    This is wrong because IOR-based Fresnel reflection produces a specific physical reflection-intensity/view-angle curve, and not just a linear or simple power function.
    See the example in UE4's Fresnel node.
  4. Some PBR rendering systems using implementations of the Disney principled BRDF don't use a Fresnel IOR input value for dielectric materials, but instead use a simpler 0 – 1 (or higher) Specular input parameter that produces dielectric reflections of approximately the 1.0 to ~1.8 IOR value range, which generally covers the range between no reflection at all and common gemstones.
    This approach may be inconvenient for rendering artists who are used to setting IOR values, but it has a significant advantage: it allows using an LDR 0 – 1 range (regular image file) texture map for the Specular input, and that way different material reflections can be defined on the same object, rather than having to create a physical IOR map that has to encode float values higher than 1 or be remapped to that range in the shading graph.
    * See Blender's Principled BSDF and Unreal Engine's PBR Material.
    * Blender's Principled BSDF actually allows setting Specular values higher than 1 to render materials denser than IOR 1.8 (like a diamond, for example).
  5. Many modern production renderers use Schlick's Approximation to render Fresnel reflections, a simplified Fresnel formula that is both faster to compute and better suited for microfacet glossy reflection models (see the sketch below).
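A minimal Python sketch of Schlick's approximation, comparing real glass (IOR 1.52) with the non-physical high-IOR "metal" cheat mentioned earlier:

```python
import math

def f0_from_ior(ior, surrounding_ior=1.0):
    """Normal-incidence reflectance used as the base of Schlick's approximation."""
    return ((ior - surrounding_ior) / (ior + surrounding_ior)) ** 2

def schlick(cos_theta, f0):
    """Schlick's approximation: F = F0 + (1 - F0) * (1 - cos(theta))^5."""
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

for ior in (1.52, 20.0):  # real glass vs. the non-physical "metal" IOR cheat
    f0 = f0_from_ior(ior)
    curve = [round(schlick(math.cos(math.radians(a)), f0), 3) for a in (0, 45, 75, 89)]
    print("IOR %5.2f  F0=%.3f  F at 0/45/75/89 deg: %s" % (ior, f0, curve))
# Glass starts near 4% and only climbs at grazing angles, while the high-IOR
# cheat is already ~82% reflective head-on, giving the metal-like look.
# (In Principled-BSDF style shaders the 'Specular' input is roughly F0 / 0.08,
# so IOR 1.5 maps to the default value of 0.5.)
```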

Related:

  1. V-Ray Next’s new metallic material option.
  2. Creating a rich metallic shader in UE4.
  3. Complex Fresnel Texture for Cycles.
  4. Rendering Transparencies
  5. UE4 – Basic architectural glazing material

IES Lighting in CG

IES stands for Illuminating Engineering Society, the organization responsible for creating and maintaining industrial standards for the design and manufacturing of artificial light sources.

In 3D rendering, an IES file or "photometric file" is a text file containing a photometric description of a light source's beam spread, pattern and intensity, allowing for a faithful depiction of the light source in 3D renders.
Most modern 3D rendering software supports IES lights, that is, it allows loading IES files into the software and lighting the 3D scene using the light source described in the IES file.

Lighting manufacturers measure their light fixture models' physical light output and make IES files available for download on their websites.
This allows architects, lighting designers, and interior designers to download the files and realistically visualize the light sources' effect on their projects.

CG artists use IES lights to add realistic spotlight beam patterns to their renderings and animations, patterns that can't be created using regular simple 3D light sources.

Examples of IES lights rendered with V-Ray for 3ds max:

IES

Related:
Understanding the photometric light measurement units
IES Spotlights for Blender & Cycles

Maya – Setting the V-Ray Sun direction according to location, date and time

Software:
Maya 2018 | V-Ray 3.6

To set the direction of the VRaySun photometric light source according to the location in the world, the date and the time:

  1. Select the VRaySun parent node – ‘VRayGeoSun1Transform‘ and rotate it so its Z axis points to the architectural plan’s south.
  2. Select the VRaySun node – ‘VRayGeoSun1‘ and in its attributes un-check Manual Position.
    This will make the location / date / time parameters accessible.
  3. Set the GMT zone of your architectural project's location in the world, the date and the time.
    * I haven't found how to set daylight saving time…

Untitled-1

Related:
V-Ray for Maya Physical Camera
V-Ray for Maya White Balance
Daylight system addon for Blender

Architectural Visualization can be both physically correct and aesthetically pleasing

B_Sunset_EV-8_Oded-Erell

Thinking we must “cheat” about the real-world lighting conditions of an architectural interior in order to render an aesthetically pleasing image of it is a common misconception in the field of Architectural Visualization.

I have been a professional in the field of digital 3D Visualization and Animation for the past 17 years, and the technologies we use to create synthetic imagery have developed dramatically during this period. The profession that is traditionally named “Computer Graphics”, can today rightfully be named “Virtual Photography”.

At the beginning of my career, photo-realistic rendering was impossible to perform on a reasonably priced desktop PC workstation. Today things are very different. In the early years, the process of digital 3D rendering produced images of a completely graphic nature; no one back then would mistake a synthetic 3D rendering for a real-world photograph.

About 12 years ago, the development of desktop CPU performance and the advent of 3D rendering software that uses Ray-Tracing* processes made possible a revolution in the ability to render photo-realistic images on desktop PCs. The term "photo-realistic" simply means that an uninformed viewer might mistake the synthetically generated image for a real-world photo, but it doesn't mean the image is an accurate representation of the way a photograph of the subject would look if it really existed in the world. For a computer-generated image to faithfully represent how a real-world photo would look, it's not enough for the rendering to be photo-realistic; it also needs to be physically correct and photo-metric.

"Physically correct" rendering means the rendered image was produced using an accurate virtual simulation of physical light behavior, and "photo-metric" rendering means that the virtual light sources in the 3D model have been defined using real-world physical units, and that the rendered raw output is processed in a way that faithfully predicts the image that would result from a real-world camera exposure.

Most contemporary rendering software packages have the features I described above, and are therefore capable of generating photo-realistic images that are also physically correct and photo-metric, and so faithfully predict how a real-world photo of the architectural structure would look.

So what’s the problem?

The problem is that when we virtually simulate the optics of a scene using real-world physical light intensities, we come across the challenges that exist in real-world photography, mainly the challenge of contrast management, or in more geeky terms, handling the huge dynamic range of real-world physical lighting. Simply put, we encounter the common photography artifacts, like unpleasant "blown-out" or "burnt" highlights, light fixtures and windows.

Trying to solve the problem by lowering the camera exposure simply reveals more detail in the bright areas at the expense of darkening the more important areas of the image. Traditional photo-editing manipulations don't do the trick; they might serve as a blunt instrument to darken areas of the image selectively, but the result looks unnatural and fake. The traditional approach in interior rendering is therefore to simply give up the realism of the visualization by drastically reducing the intensities of visible light sources and adding invisible light sources, a solution that might produce an aesthetic image, but not one that faithfully reflects how a real photograph of the place would look, or that can be said to be physically correct.

Fortunately, today we have tools and processes that allow for a much more effective development of physically accurate renders, somewhat similar in approach to the technologies incorporated into professional digital photography. These techniques involve processing the rendered images using specialized file formats that contain a very high degree of color accuracy and can store the full dynamic range of the "virtual photograph", a process called "tone mapping" designed to display the image in a way that mimics the way our eyes naturally see the world, and optically simulated lens effects that mimic the way a real lens would react to contrast and high intensities of light.

Incorporating this workflow requires taking a completely different approach to creating and processing 3D-rendered images than the traditional methods used in past decades. We give up some of the direct control we're used to in computer graphics, but in return we are able to produce physically correct visualizations that are both aesthetically pleasing and have natural-feeling lighting.

Daylight_Oded_Erell

In conclusion, with effective use of today's imaging technologies, it's possible to produce 3D visualizations that serve both as a faithful representation of a possible real-world photograph of the architectural design, thus aiding the creative design and planning process, and at the same time as a photo-realistic basis for producing highly aesthetic marketing media.

Thank you for reading! I would love to hear your opinion, discuss the subjects in the article and answer any questions that you may have about it.

* "Ray-Tracing" is a process that simulates the physical behavior of light by tracing the directions it travels as it hits surfaces, reflects off them and refracts through them. Ray-Tracing calculations are a key ingredient in photo-realistic rendering.

The author is Oded Erell, a photo-realistic rendering specialist and instructor. The 3D visualizations displayed in this article have all been produced by CG LION Studio.
You're welcome to visit our portfolio website and see more examples of our work.

 

Related Posts:

  1. Understanding the Photo-Metric Units
  2. IES Lighting
  3. Understanding Fresnel Reflections
  4. Understanding Transparency Render Settings
  5. Wooden floor material in V-Ray
  6. Advanced Spotlights for Blender & Cycles
  7. Advanced Architectural Glazing for Blender & Cycles

 

Cycles Area Light pleasant surprise

Software:
Blender 2.79

One of the features I would really like added to the Cycles Renderer is a photo-metric workflow.
That is, the ability to set light source intensity using real-world photo-metric units, load IES photo-metric data, have a physical daylight system, and set photographic camera exposure and white balance for the output image.

While Cycles currently doesn't have a fully functional photo-metric workflow, it is equipped with some important basic ingredients needed for the development of such a workflow.

One of these features is the Black-Body color conversion node, which allows specifying a color by Kelvin color temperature;
another is the procedural Sky texture, featuring the Hosek/Wilkie and Preetham physical sky models, which can also be controlled according to global position, date and time with this addon.

Recently I had a pleasant surprise finding out that Cycles actually has another important feature for a photometric workflow: Cycles Area Lights maintain a fixed total light output ('Luminous Flux') while their area is changed, and adjust the specular intensity correctly, so that the smaller the light source area, the greater its brightness (as it should physically be).
* This is in contrast to the way a mesh light with an Emission shader behaves, where the light output is per area and therefore increases or decreases when changing the shape and size of the surface.

This makes it possible to design light sources with a fixed total light output yet different shapes, and therefore different specular reflections, shading and shadow softness,
which is in itself a valuable feature in realistic light source design.
* Especially coupled with setting the light color using Kelvin color temperature (Black-Body node).
The only thing missing is the ability to specify the total output of the light source in Lumen (lm) units.

I have encountered a mention of Cycles having a physical scale conversion ratio here:

http://www.3d-wolf.com/camera.html

Marco Pavanello, the developer of the Blender ‘Real Camera Addon‘ wrote:
“In Blender the Emission Node Strength is measured in W/m^2”
I haven’t had the time yet to seriously find out how that should be translated to intensity in lumens..
* It should be noted that both the Cycles Area light and a mesh light use the Emission shader as their source for intensity/color settings, but they use it differently:
you can see in the demonstrations below that for a light source of the same surface area, a significantly larger Strength value is needed to produce roughly the same light output as the mesh light, and this is probably due to the output being internally divided by the surface area, which is in fact the subject of this post.
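Until such an option exists, a rough conversion can be sketched by hand. The snippet below is only an experiment under stated assumptions: it treats the light's output as Watts of light energy and uses an assumed luminous efficacy, which really depends on the light's spectrum, so it is not a verified Cycles formula:

```python
# Hypothetical helper: estimate an Emission Strength value for a target lumen
# output. LUMENS_PER_WATT is the luminous efficacy of the light's spectrum:
# 683 lm/W is the theoretical maximum (monochromatic 555 nm green), and a
# broadband white spectrum is considerably lower, so this constant is an
# assumption to tune, not a value defined by Cycles.
LUMENS_PER_WATT = 250.0

def mesh_emitter_strength(target_lumens, surface_area_m2,
                          lumens_per_watt=LUMENS_PER_WATT):
    """Strength in W/m^2 for a mesh emitter of the given surface area."""
    watts = target_lumens / lumens_per_watt
    return watts / surface_area_m2

def area_light_strength(target_lumens, lumens_per_watt=LUMENS_PER_WATT):
    """If the Area light's Strength behaves as total Watts (output independent
    of size, as observed above), no division by area is needed."""
    return target_lumens / lumens_per_watt

print(mesh_emitter_strength(800.0, 0.25))  # an 800 lm "bulb" as a 0.5 m x 0.5 m panel
print(area_light_strength(800.0))
```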

Here are some renders to illustrate the point and the different behavior of a mesh light (a mesh with an Emission shader).
I've added a rough glare effect that depends on float color intensity, to illustrate the way the specular highlight intensity increases as the area of the light source gets smaller while the overall light output stays the same:

AREA_Sizes
Cycles Area Light with different sizes but same strength

MESH_Sizes
Cycles Mesh Light using an Emission Shader with different sizes but same strength

MESH_Sizes_Compensation

Cycles Mesh Light using an Emission Shader with different sizes and strength changes to compensate

Daylight system for Blender

Software:
Blender 2.78

Found a simple and effective sun positioning addon for Blender,
It can be downloaded here:
https://developer.blender.org/F20492
The addon is easy to install, like all other addons in Blender.
Once installed, you will find its controls in the World settings.
You specify a sky environment map and a sun node, and it will control them according to the given latitude, longitude, date, timezone etc.
It also lets you set the north direction of the system as needed.

Very simple and very effective.

Related:
Photometric daylight setup in V-Ray for Maya