It sounds like the answer is to manage your users' expectations. As Greg noted in his answer to your last question, when you use ambient caching, the -aa (ambient accuracy) parameter specifies how much interpolation error you are willing to accept. If you can't accept that error, don't use an ambient cache.
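For concreteness, here is a minimal sketch of how one might quantify that trade-off. It is not your actual setup: scene.oct and sensors.pts are placeholder names and the sampling parameters are arbitrary. It relies on the fact that setting -aa to 0 disables ambient interpolation entirely.

```python
# Sketch: compute sensor illuminances with and without the ambient cache.
# scene.oct and sensors.pts are hypothetical file names.
import subprocess

def illuminances(aa: str) -> list[float]:
    # -I: irradiance at sensor points; -h: no header; -ab 4: four ambient
    # bounces; -aa is the ambient accuracy, i.e. the tolerated interpolation error.
    cmd = ["rtrace", "-h", "-I", "-ab", "4", "-aa", aa,
           "-ad", "2048", "-as", "1024", "scene.oct"]
    with open("sensors.pts") as pts:
        out = subprocess.run(cmd, stdin=pts, capture_output=True,
                             text=True, check=True)
    # Each output line is an RGB irradiance triple; convert to lux using the
    # standard Radiance luminous-efficacy weighting.
    vals = []
    for line in out.stdout.splitlines():
        r, g, b = map(float, line.split()[:3])
        vals.append(179.0 * (0.265 * r + 0.670 * g + 0.065 * b))
    return vals

cached = illuminances("0.15")  # interpolated ambient values, ~15% tolerance
exact = illuminances("0")      # cache disabled: slower, but no interpolation
```

Comparing the two lists shows how much error the cache actually introduces for your particular scene, which is a more persuasive answer for users than any general guideline.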

Beyond that, remember George Box's dictum: "All models are wrong, but some are useful." If your users expect the answers they get to be 100% accurate (and therefore that both simulation orders should give exactly the same results), then you are misleading them. The IESNA Lighting Handbook, 10th edition, offers the guideline that simulated illuminance values (like yours) can only be expected to fall within 20% of the actual lighting levels in the as-built space.

The validation literature tells a similar story. Ng et al. ("Advanced lighting simulation in architectural design in the tropics", 2001) report errors of up to 20% at individual sensors. Reinhart and Herkel ("The simulation of annual daylight illuminance distributions - a state-of-the-art comparison of six RADIANCE-based methods", 2000) compared six simulation engines and found root-mean-square errors (RMSEs) in global illumination ranging from 16% to 63%. Reinhart and Walkenhorst ("Validation of dynamic RADIANCE-based daylight simulations for a test office with external blinds", 2001) found errors under 20% and RMSEs under 32%, which Reinhart and Breton ("Experimental validation of Autodesk® 3ds Max® Design 2009 and Daysim 3.0", 2009) later took as acceptable maximums; even so, the latter study produced higher errors at 15 of 80 data points. Reinhart and Andersen ("Development and validation of a radiance model for a translucent panel", 2006) reduced error to 9% and RMSE to 19% using advanced modelling and measurement techniques, yet they still allow for the possibility of 20% error in daylight simulation results when applying them to energy calculations.
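To make the metrics quoted in those studies concrete, here is a toy calculation (with made-up illuminance values) of the relative mean bias error and relative RMSE between simulated and measured sensors, which is essentially what those validation papers report:

```python
# Toy error metrics between simulated and measured illuminances (lux).
# The numbers are invented purely for illustration.
import math

measured  = [520.0, 610.0, 480.0, 700.0, 560.0]
simulated = [555.0, 580.0, 515.0, 760.0, 540.0]

n = len(measured)
mean_meas = sum(measured) / n
mbe  = sum(s - m for s, m in zip(simulated, measured)) / n
rmse = math.sqrt(sum((s - m) ** 2 for s, m in zip(simulated, measured)) / n)

print(f"relative MBE:  {100 * mbe / mean_meas:.1f}%")   # overall bias of the model
print(f"relative RMSE: {100 * rmse / mean_meas:.1f}%")  # per-sensor scatter
```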

So yes: simulation tools like Radiance that rely on Monte Carlo sampling or other stochastic processes are not guaranteed to return the same results every time.
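If you need to demonstrate that point to your users, a generic (non-Radiance) illustration may help: two Monte Carlo estimates of the same quantity computed from different random streams disagree slightly, while fixing the seed makes a run exactly repeatable. This is just the underlying principle, not anything specific to Radiance's sampling:

```python
# Monte Carlo estimate of pi from random points in the unit square.
import random

def mc_pi(n: int, seed=None) -> float:
    rng = random.Random(seed)  # seed=None draws a fresh, unpredictable stream
    hits = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0 for _ in range(n))
    return 4.0 * hits / n

print(mc_pi(100_000))          # differs slightly on every run
print(mc_pi(100_000))
print(mc_pi(100_000, seed=1))  # identical on every run
print(mc_pi(100_000, seed=1))
```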