Question-and-Answer Resource for the Building Energy Modeling Community

The optimum depends on exactly what you're trying to do: specifically, how many jobs you're trying to run at a time, and how much of each job can be run in parallel. Amdahl's Law describes the maximum speedup you can expect given the fraction of your job that can run in parallel and the number of threads available.
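As a quick sketch of what Amdahl's Law predicts (the 90% parallel fraction below is just an illustrative assumption, not a measured property of rtrace):

```python
def amdahl_speedup(parallel_fraction, n_threads):
    """Maximum theoretical speedup for a job where `parallel_fraction`
    of the work can run in parallel across `n_threads` threads."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_threads)

# Even a job that is 90% parallel tops out well below the thread count:
for n in (2, 4, 8, 16):
    print(n, round(amdahl_speedup(0.90, n), 2))  # roughly 1.82, 3.08, 4.71, 6.4
```

This is why adding cores shows diminishing returns: the serial portion of the job quickly dominates the runtime.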

More specific to rtrace, it matters whether you are running multiple independent simulations simultaneously or spawning processes from one master process (which you can't do on Windows). Do you know whether you are RAM-limited (for instance, are you using a large number of ambient bounces or ambient divisions in an unusually large model)? You might want to monitor the RAM usage of one of your simulations to see whether this is actually an issue for you. If you require a very large amount of memory (for instance, when using photon mapping), then you might want a solid-state drive for faster read/write access.
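One way to check whether you're RAM-limited is to poll the resident set size of a running simulation. A minimal Linux-only sketch, reading `/proc/<pid>/status` (on Windows or macOS you'd use Task Manager, Activity Monitor, or a tool like `ps` instead); the PID would be that of one of your rtrace processes:

```python
import os

def rss_mb(pid):
    """Resident set size of a process in MB (Linux-only sketch: parses the
    VmRSS field of /proc/<pid>/status). Pass the PID of a running rtrace job."""
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1]) / 1024.0  # value is in kB
    return 0.0

# Demo on our own process; in practice, log rss_mb(<rtrace pid>) every few
# seconds while the simulation runs and compare the peak to physical RAM.
print(f"current RSS: {rss_mb(os.getpid()):.1f} MB")
```

If the combined peak RSS of your simultaneous simulations approaches physical RAM, adding more parallel jobs will hurt rather than help.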

At the high end of the performance spectrum, you could use GPU computing through Accelerad. This is most useful if you have a large number of sensor points in your model because parallelism occurs at the primary ray level. Then you'll want to consider the CUDA core count and clock speed of your GPU, and you may also need a little more RAM for data transfer.