Question-and-Answer Resource for the Building Energy Modeling Community

Glare and lighting in gaming engines vs Radiance and evalGlare

asked 2016-09-12 16:32:30 -0500


updated 2016-09-12 16:33:22 -0500

I am not sure if I am way out of the ballpark here, but I thought I would at least ask this question to glean the community's thoughts on the topic. I do not have a sufficient background in lighting to know whether I am missing something fundamental here, so please correct me where I am wrong and point out any flaws in my idea.

I work as a programmer/building physics consultant for a large architectural firm. I am finding that a great deal of time is spent building Rhino models to undertake glare analysis in Diva, and then even more time is spent (on the designers' side) understanding the results.

With virtual reality technology advancing steadily, it has become apparent to me that it would be far more intuitive and straightforward to have the building designer experience the space and daylighting effects first-hand.

The computer could identify and rank the potential times of the year when glare could occur (something like the Daysim annual glare display)

[image: Daysim annual glare display]

and display them on an interactive interface within the virtual reality space, so that the user could toggle between the times of the year when glare is an issue and experience them first-hand.

Would something like this be possible in VR? Is there a gaming engine that accurately simulates real lighting effects and glare? I know this is theoretically possible, but is it too computationally intensive in VR? Is that why it hasn't been done?

Your thoughts would be most appreciated!


2 Answers


answered 2016-09-13 11:37:06 -0500

Performing glare analysis within a gaming engine is not an unreasonable idea. Some current gaming engines do make physically-based calculations to produce fairly accurate lighting distributions with validated results (e.g. Call of Duty: Advanced Warfare). To shamelessly plug some of my own work, I've previously shown that daylight glare probability calculations can be performed quite quickly using similar techniques. However, there are a number of clarifications we need to make here, and your question really breaks into three parts: the production of models, the simulation of light levels and glare, and the display of results.

Production of Models

You seem to imply that VR could cut down on the time it takes to make models because it would somehow take Rhino out of the picture. Creating and debugging models is the most time-intensive part of simulation. However, CAD environments are generally separate from VR environments (at least for now). So no matter what, you need to generate your geometry with one tool and then export to another tool for analysis. Perhaps as a programmer you want to link a different CAD tool to Radiance in place of Rhino or you want to make the export process less visible to the user. Either is possible, and because Radiance can interpret OBJ files, it can be made to work with pretty much any modeling software. So if the problem is that rebuilding models in Rhino takes too long, then build them with something else, but I'm not sure how VR fits into this picture.


Simulation of Light Levels and Glare

The term "gaming engine" is too vague, so let's be specific. You need a rendering engine that performs calculations on real-valued inputs, so that if you provide it with light source luminance values scaled in units of radiance, it will output an image in which the pixel values are scaled to the same units. OpenGL renderers won't do this because they quantize values to integers in the range [0,256). Instead, you need a ray tracing tool. Fortunately, many of these exist (e.g. Radiance, Iray, Mitsuba).
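To make the distinction concrete, here is a minimal Python sketch (with illustrative luminance values of my own choosing) of why integer-quantized output discards exactly the information a glare metric needs, while real-valued output preserves it:

```python
import numpy as np

# Hypothetical scene luminances in cd/m^2: a dim wall, a bright window,
# and the solar disc -- spanning roughly seven orders of magnitude.
luminance = np.array([50.0, 8000.0, 1.6e9])

# Typical LDR pipeline: normalize to the brightest pixel, quantize to 8 bits.
ldr = np.clip(np.round(luminance / luminance.max() * 255), 0, 255).astype(np.uint8)
print(ldr)  # wall and window both quantize to 0; only the sun survives

# Physically-based pipeline: keep floating-point values in real units.
hdr = luminance.astype(np.float32)
print(hdr[1] / hdr[0])  # the window/wall contrast ratio of 160 is preserved
```

Once the low end of the range collapses to zero, no contrast-based glare metric can be evaluated from the image, which is the whole problem with repurposing a conventional real-time renderer.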

Next, you need HDR input for light sources. For glare, you're probably interested in daylight, which means you need an accurate model of the sky. The worst glare occurs under clear skies, so it's fairly straightforward to go with the CIE Clear Sky model or the Perez model, both of which are available through Radiance (several newer variants exist which give the sky color and improved accuracy at low light levels, but those aren't important for glare).
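As a rough illustration of what such a sky model provides, here is a small Python sketch of the classic CIE clear sky relative-luminance formula (my own simplified transcription of the gradation and indicatrix functions; a production tool such as Radiance's gensky handles the full model, absolute scaling, and ground contributions):

```python
import math

def cie_clear_sky_rel_lum(Z, chi, Zs):
    """Relative luminance L/Lz of a sky element under the CIE clear sky.

    Z   -- zenith angle of the sky element (radians)
    chi -- angular distance between the sky element and the sun (radians)
    Zs  -- solar zenith angle (radians)
    """
    def f(x):    # scattering indicatrix: brightening near the sun
        return 0.91 + 10.0 * math.exp(-3.0 * x) + 0.45 * math.cos(x) ** 2
    def phi(z):  # luminance gradation from zenith to horizon
        return 1.0 - math.exp(-0.32 / math.cos(z))
    return (f(chi) * phi(Z)) / (f(Zs) * (1.0 - math.exp(-0.32)))

Zs = math.radians(60)                      # sun 30 degrees above the horizon
print(cie_clear_sky_rel_lum(0.0, Zs, Zs))  # zenith element: 1.0 by definition
near_sun = cie_clear_sky_rel_lum(Zs, math.radians(2), Zs)
away = cie_clear_sky_rel_lum(Zs, math.radians(120), Zs)
print(near_sun / away)  # the circumsolar region is several times brighter
```

The strong circumsolar peak is precisely why clear skies produce the worst-case glare conditions mentioned above.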

Most quantitative models of glare examine the contrast between bright and dark areas of the field of view. Bright areas will be direct views to light sources (e.g. the sun) or specular paths that take a small number of bounces. Fortunately, these can be calculated very quickly by most ray tracing tools. Dark areas are illuminated only by diffuse (or ambient) lighting, which is much more complex to calculate. However, these can be computed ...
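For reference, the most widely used metric of this contrast-based kind is daylight glare probability (DGP, Wienold & Christoffersen), which combines vertical eye illuminance with a per-source contrast term. A minimal sketch (the input numbers below are illustrative, not from any real scene):

```python
import math

def dgp(Ev, sources):
    """Daylight glare probability (Wienold & Christoffersen).

    Ev      -- vertical illuminance at the eye (lux)
    sources -- list of (Ls, omega, P) per glare source: luminance (cd/m^2),
               solid angle (sr), and Guth position index.
    """
    contrast = sum(Ls ** 2 * omega / (Ev ** 1.87 * P ** 2)
                   for Ls, omega, P in sources)
    return 5.87e-5 * Ev + 9.18e-2 * math.log10(1.0 + contrast) + 0.16

# One bright window patch seen slightly off-axis (hypothetical values).
print(dgp(2500.0, [(6000.0, 0.05, 1.2)]))
```

Note that the expensive part in practice is not this formula but producing the per-pixel luminances it consumes, which brings us back to the diffuse calculation above.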


answered 2016-09-12 18:54:09 -0500

All the VR/archvis stuff out there is fast, no doubt. Many of those tools use physically-based light transport algorithms as well. The problem is that none of them return real lighting values which could be interpreted in terms of glare or passed to an existing or proposed glare metric -- assuming that "glare" is easily determined from a single output (which it is not). While they are useful and efficient at producing images that look "good", the images are somewhat useless for quantitative analysis.

In your post, you imply this is a non-issue, because designers could "experience" the space, and lighting effects, first-hand. The problem here is that to truly "experience" glare, one would need an HDR VR display, and these are a) only now becoming available, and b) would still need valid HDR input (not to mention some validation). Guess where that valid HDR input might come from? That's right: Radiance (or a derivative work such as Diva). And this says nothing of the fact that glare is highly subjective, so if the verdict is left to (n) member(s) of the design team, who's to say the design "works", or is "glare free"?

Your comment about the "even more time" required (on the part of the "designers") to understand the glare analyses sounds a lot like you want to eliminate the role of the lighting consultant altogether, which is fine and nothing new, but it loses sight of the very real need for the lighting consultant's intuition, industry experience, and passion. Any tool in the wrong hands is ineffective at best, and more likely dangerous. Lighting consultants are often needed to interpret the output from lighting design tools, in much the same way that architects must interpret plans, sections, and elevations for their clients. Unfortunately, architects continue to lose sight of this simple truth.

