
Performing glare analysis within a gaming engine is not an unreasonable idea. Some current gaming engines perform physically based calculations that produce fairly accurate lighting distributions with validated results (e.g. Call of Duty: Advanced Warfare). To shamelessly plug some of my own work, I've previously shown that daylight glare probability calculations can be performed quite quickly using similar techniques. However, a few clarifications are needed, and your question really breaks into three parts: the production of models, the simulation of light levels and glare, and the display of results.
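For concreteness, the daylight glare probability mentioned above reduces to a single published formula (Wienold & Christoffersen, 2006), which is what Radiance's evalglare computes from an HDR image. A minimal sketch in Python, assuming you have already extracted the glare sources and the vertical eye illuminance:

```python
import math

def daylight_glare_probability(ev, sources):
    """Daylight Glare Probability (Wienold & Christoffersen, 2006).

    ev      -- vertical illuminance at the eye [lux]
    sources -- iterable of (luminance [cd/m^2], solid angle [sr],
               Guth position index) tuples, one per glare source
    """
    glare_term = sum(ls ** 2 * omega / (ev ** 1.87 * p ** 2)
                     for ls, omega, p in sources)
    return 5.87e-5 * ev + 9.18e-2 * math.log10(1.0 + glare_term) + 0.16

# A bright, centrally located source under high vertical illuminance
# pushes DGP toward the "disturbing" range (above roughly 0.40).
dgp = daylight_glare_probability(3000.0, [(50000.0, 0.01, 1.0)])
```

The expensive part in practice is not this formula but identifying the sources and computing Ev, which is where the fast rendering techniques come in.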

Production of Models

You seem to imply that VR could cut down on the time it takes to make models because it would somehow take Rhino out of the picture. Creating and debugging models is indeed the most time-intensive part of simulation. However, CAD environments are generally separate from VR environments (at least for now), so no matter what, you need to generate your geometry with one tool and then export it to another for analysis. Perhaps as a programmer you want to link a different CAD tool to Radiance in place of Rhino, or you want to make the export process less visible to the user. Either is possible, and because Radiance can interpret OBJ files, it can be made to work with pretty much any modeling software. So if the problem is that rebuilding models in Rhino takes too long, build them with something else; I'm just not sure how VR fits into this picture.
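Because Radiance's scene format is plain text, the OBJ-to-Radiance step is simple in principle. A toy Python sketch for triangulated geometry follows; the bundled obj2rad and obj2mesh tools do this robustly (quads, normals, material mapping), and the modifier name here is just a placeholder:

```python
def obj_to_rad(obj_text, modifier="white_paint"):
    """Convert triangulated OBJ geometry into Radiance polygon
    primitives. Minimal sketch: handles only 'v' and 'f' records;
    'white_paint' is a placeholder modifier you would define in
    your own material file."""
    verts, out = [], []
    for line in obj_text.splitlines():
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "v":                     # vertex: v x y z
            verts.append(tuple(float(c) for c in parts[1:4]))
        elif parts[0] == "f":                   # face: f i j k (1-based)
            idx = [int(p.split("/")[0]) - 1 for p in parts[1:]]
            coords = " ".join("%g %g %g" % verts[i] for i in idx)
            out.append("%s polygon face_%d\n0\n0\n%d %s\n"
                       % (modifier, len(out), 3 * len(idx), coords))
    return "\n".join(out)

# A single triangle becomes one Radiance polygon primitive.
scene = obj_to_rad("v 0 0 0\nv 1 0 0\nv 0 1 0\nf 1 2 3")
```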

Simulation

The term "gaming engine" is too vague, so let's be specific. You need a rendering engine that performs calculations on real-valued inputs, so that if you provide it with light source luminance values in physical units, it will output an image whose pixel values are scaled in the same units. OpenGL renderers won't do this because they quantize values to 8-bit integers in the range [0,255]. Instead, you need a ray-tracing tool. Fortunately, many of these exist (e.g. Radiance, Iray, Mitsuba).
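To make the distinction concrete: in a physically based renderer, each pixel stores radiance per channel in W/(sr·m²), and photometric luminance is recovered by a fixed luminous-efficacy conversion. Radiance's convention (179 lm/W applied to weighted RGB channels) looks like this:

```python
def pixel_luminance(r, g, b):
    """Photometric luminance [cd/m^2] from an RGB radiance pixel
    [W/(sr*m^2)], using Radiance's convention: a luminous efficacy
    of 179 lm/W applied to a weighted sum of the three channels."""
    return 179.0 * (0.265 * r + 0.670 * g + 0.065 * b)

# A neutral pixel of 1 W/(sr*m^2) per channel maps to 179 cd/m^2,
# a physically meaningful value no 8-bit framebuffer can represent.
lum = pixel_luminance(1.0, 1.0, 1.0)
```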

Next, you need HDR input for light sources. For glare, you're probably interested in daylight, which means you need an accurate model of the sky. The worst glare occurs under clear skies, so it's fairly safe to go with the CIE Clear Sky model or the Perez model, both of which are available through Radiance. (Several newer variants exist that add sky color and improve accuracy at low light levels, but those aren't important for glare.)
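As an illustration of what such a sky model involves, the CIE standard clear sky defines relative luminance with just two factors: a gradation toward the horizon and a scattering indicatrix that brightens the region around the sun. A sketch, assuming the standard clear-sky coefficients (angles in radians):

```python
import math

def cie_clear_sky_ratio(theta, gamma, sun_zenith):
    """Luminance of a sky element relative to zenith luminance Lz
    under the CIE standard clear sky.

    theta      -- zenith angle of the sky element [rad]
    gamma      -- angle between the sky element and the sun [rad]
    sun_zenith -- solar zenith angle [rad]
    """
    def scatter(g):       # indicatrix: circumsolar brightening
        return 0.91 + 10.0 * math.exp(-3.0 * g) + 0.45 * math.cos(g) ** 2

    def gradation(t):     # luminance gradation with zenith angle
        return 1.0 - math.exp(-0.32 / math.cos(t))

    return (scatter(gamma) * gradation(theta)) / \
           (scatter(sun_zenith) * gradation(0.0))

# The sky element at the sun's position is many times brighter
# than the zenith -- exactly the contrast that drives glare.
circumsolar = cie_clear_sky_ratio(math.pi / 3, 0.0, math.pi / 3)
```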

Most quantitative models of glare examine the contrast between bright and dark areas of the field of view. Bright areas will be direct views of light sources (e.g. the sun) or specular paths that take a small number of bounces; fortunately, these can be calculated very quickly by most ray-tracing tools. Dark areas are illuminated only by diffuse (or ambient) light, which is much more costly to calculate. However, these can be computed at much lower resolution, especially if you don't care about making a presentation-ready image. Warning: the pictures won't be "pretty" because there will be a lot of rendering artifacts, but they can still be fast and free of bias (as Brigade demonstrates).
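The split described above can be sketched in a few lines: render the direct/specular layer at full resolution, the ambient layer at reduced resolution, then upsample and sum. This is only an illustration of the idea; real renderers interpolate the ambient values rather than nearest-neighbor upsampling them:

```python
def composite(direct, ambient, scale):
    """Combine a full-resolution direct/specular layer with an
    ambient layer computed at 1/scale resolution.

    direct  -- H x W grid of luminance values
    ambient -- (H // scale) x (W // scale) grid
    """
    return [[d + ambient[y // scale][x // scale]   # nearest-neighbor
             for x, d in enumerate(row)]           # upsample + add
            for y, row in enumerate(direct)]

# A 4x4 direct layer plus a 2x2 ambient layer upsampled by 2:
# only 4 expensive ambient samples cover the 16-pixel image.
img = composite([[0.0] * 4 for _ in range(4)],
                [[1.0, 2.0], [3.0, 4.0]], 2)
```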

Display

Now we come to the difficult part. Take a look at the checkerboard image below. It represents the highest contrast possible on your screen (between pure black and pure white). Do you experience glare (i.e. pain to your vision) when looking at it? Probably not, because your eye can accommodate roughly three orders of magnitude of variation in brightness, whereas your screen can only produce roughly two orders of magnitude. Even a so-called HDR display only produces a contrast of about three orders of magnitude. After all, why would a manufacturer make a display that causes pain to view?

Checkerboard

However, if you want someone to "experience" glare through VR, then you need to produce enough contrast to cause pain. This is probably not your goal. Using appropriate tone mapping, you could still show how glare leads to loss of contrast (try reading the text on the stop sign in the Brigade video, for instance); this is also an important factor in veiling glare. Another option is to present images in false color, which reduces the dynamic range necessary to represent the contrast.
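As an illustration, a simple global tone-mapping curve such as the extended Reinhard operator compresses HDR luminance into the display's [0, 1] range while preserving mid-tone contrast (Radiance's pcond does a more perceptually motivated version of this, including veiling-glare effects):

```python
def reinhard(l, l_white):
    """Extended Reinhard global tone-mapping operator.

    l       -- scene luminance for a pixel (any HDR value)
    l_white -- the luminance that maps to pure display white

    Returns a display value in [0, 1]; values near l_white
    saturate, so detail near bright glare sources washes out --
    which is precisely the effect you can show an architect.
    """
    return l * (1.0 + l / l_white ** 2) / (1.0 + l)

# A pixel at the white point maps exactly to 1.0 (full white);
# dimmer pixels land proportionally lower on the curve.
white = reinhard(100.0, 100.0)
```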

One final point: if you develop your own system, you will need to validate the results to make sure that your architects get correct information. One advantage of Radiance is that it has already been extensively validated (another shameless plug).