
Ambient files in parallel computations

asked 2016-04-05 08:28:40 -0600

Grigori Grozman

updated 2016-04-06 11:14:42 -0600

Dear Radiance community,

Since you were kind enough to give me a quick and very relevant answer to my previous question, I will ask a follow-up one.

If I understand correctly, in recent versions of Radiance it is safe to use ambient files in parallel computations; I assume the parallel processes wait for each other before appending new values to the ambient file. However, the order in which the different runs execute seems to affect the final result, which is problematic, since the order of execution of parallel cases may vary from one run to another.
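For concreteness, the kind of parallel invocation I have in mind looks like this (a sketch only; the file names and ambient parameters are placeholders, not the ones from my archive):

    rem Two rtrace processes launched in parallel, sharing one ambient cache via -af.
    rem scene.oct, zone1.pts, zone2.pts and the parameter values are placeholders.
    start /b cmd /c "rtrace -h -I -ab 3 -aa 0.15 -af scene.amb scene.oct < zone1.pts > zone1.res"
    start /b cmd /c "rtrace -h -I -ab 3 -aa 0.15 -af scene.amb scene.oct < zone2.pts > zone2.res"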

In order to illustrate the problem, I have created a small case with two zones, each containing a single measurement point. The scene files are compressed into the attached file. The extension .bmp needs to be removed from the file name, since the forum would not allow a .zip attachment [Dear moderators, please forgive me if I am thereby breaking your rules. Feel free to delete this post, and I will then try to rewrite it in a correct way]. In the archive, the file run2.bat starts the simulations and writes the results to the file res.txt.

C:\fakepath\ambient files test.zip.bmp

In the first part of the experiment, I start with an empty ambient file and call rtrace for zone 1 first and then for zone 2 (run 1). Then, using the same ambient file, I call the same rtrace again for zone 1 first and then for zone 2 (run 2). I get different results in run 1 and run 2, which is expected, since the ambient file is being populated during run 1. Now, I call rtrace with the same ambient file for zone 2 first and then for zone 1 (run 3), and I get the same results as in run 2. This is also expected, since the ambient file does not change during runs 2 and 3.

Now I reset the ambient file and make runs 4, 5, and 6 in the same way as runs 1, 2, and 3, but with the zone order reversed.

As it turns out, the results are the same in runs 5 and 6 (as expected), but not the same in runs 3 and 6, as I had hoped they would be.
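For reference, run2.bat does essentially the following (a sketch; the file names and ambient parameters are placeholders and may differ from those in the archive):

    rem Sketch of the six runs. RT abbreviates the common rtrace invocation.
    set RT=rtrace -h -I -ab 3 -aa 0.15 -af scene.amb scene.oct
    del res.txt
    rem run 1: empty ambient file, zone 1 then zone 2
    del scene.amb
    %RT% < zone1.pts >> res.txt
    %RT% < zone2.pts >> res.txt
    rem run 2: same ambient file, zone 1 then zone 2 again
    %RT% < zone1.pts >> res.txt
    %RT% < zone2.pts >> res.txt
    rem run 3: same ambient file, zone 2 then zone 1
    %RT% < zone2.pts >> res.txt
    %RT% < zone1.pts >> res.txt
    rem run 4: reset the ambient file, zone 2 then zone 1
    del scene.amb
    %RT% < zone2.pts >> res.txt
    %RT% < zone1.pts >> res.txt
    rem run 5: same ambient file, zone 2 then zone 1 again
    %RT% < zone2.pts >> res.txt
    %RT% < zone1.pts >> res.txt
    rem run 6: same ambient file, zone 1 then zone 2
    %RT% < zone1.pts >> res.txt
    %RT% < zone2.pts >> res.txt

Runs 2 and 3 give identical results, as do runs 5 and 6, because the ambient file stops changing once it is populated; but runs 3 and 6 differ from each other.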

As stated previously, this illustrates that the order of execution is crucial to how the ambient file is populated. This means one may obtain different results when running the same set of Radiance commands in parallel. However, it is absolutely non-negotiable for me that the results be exactly the same from one simulation to another; the users get unhappy otherwise. The only way to achieve that, as I see it now, is to abandon ambient files completely, at the cost of much longer simulation times.

So here, finally, comes the question: do you know of another workaround?


1 Answer


answered 2016-04-05 09:01:49 -0600

It sounds like the answer is to manage your users' expectations. As Greg noted in his answer to your last question, when you use ambient caching with a "-aa" parameter, you are specifying the amount of error you are willing to accept. If you can't accept error, don't use an ambient cache.
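In practice, forgoing the cache means omitting "-af" and setting "-aa" to zero, so that ambient values are never stored or interpolated; each sensor point is then computed independently, and the shared file whose population order was causing the discrepancies is out of the picture. A minimal sketch with placeholder file names and illustrative parameters (expect much longer run times):

    rem No ambient cache: -af is omitted and -aa 0 disables ambient interpolation.
    rem scene.oct, zone1.pts, zone2.pts and the parameter values are placeholders.
    rtrace -h -I -ab 3 -aa 0 -ad 2048 scene.oct < zone1.pts > zone1.res
    rtrace -h -I -ab 3 -aa 0 -ad 2048 scene.oct < zone2.pts > zone2.res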

But furthermore, remember George Box's comment: "All models are wrong, but some are useful." If your users expect the answers they get to be 100% accurate (and therefore that both simulation orders should give exactly the same results), then you are misleading them. If you check out the IESNA Lighting Handbook, 10th edition, you will find the guideline that simulated illuminance values (like yours) can only be expected to fall within 20% of actual lighting levels in the as-built space.

The validation literature bears this out. Ng et al. ("Advanced lighting simulation in architectural design in the tropics", 2001) report errors of up to 20% at individual sensors. Reinhart and Herkel ("The simulation of annual daylight illuminance distributions - a state-of-the-art comparison of six RADIANCE-based methods", 2000) compared six simulation engines and found root-mean-square errors (RMSEs) in global illumination ranging from 16% to 63%. Reinhart and Walkenhorst ("Validation of dynamic RADIANCE-based daylight simulations for a test office with external blinds", 2001) found errors under 20% and RMSE under 32%, figures later taken by Reinhart and Breton ("Experimental validation of Autodesk® 3ds Max® Design 2009 and Daysim 3.0", 2009) as acceptable maximums; even so, the latter study produced higher errors in 15 of 80 data points. Reinhart and Andersen ("Development and validation of a radiance model for a translucent panel", 2006) reduced error to 9% and RMSE to 19% using advanced modelling and measurement techniques, yet they still allow for the possibility of 20% error in daylight simulation results when applying them to energy calculations.

So yes: simulation tools like Radiance, which rely on Monte Carlo sampling and other stochastic processes, are not guaranteed to return the same results every time.


