Question-and-Answer Resource for the Building Energy Modeling Community

How Do Simultaneous EnergyPlus Processes Affect Runtime?

asked 2017-02-24 10:45:24 -0500

Adam Hilton

In some situations, such as a parametric analysis, it is desirable to call multiple EnergyPlus instances at once. Tools such as the OpenStudio Parametric Analysis Tool easily give you this ability. What are the performance consequences/benefits associated with calling multiple instances, and is there an optimal hardware setup?


1 Answer


answered 2017-02-24 10:45:59 -0500

Adam Hilton

updated 2017-02-24 10:47:01 -0500

I performed my testing on an i7-6700K. I used OpenStudio PAT on a pretty simple model with eight design alternatives, each with the same single measure applied. To control how many EP engines were called at a time, I adjusted the Max Local Processes value found under Preferences -> Show Tools. I varied this value from one to seven, letting all eight design alternatives complete each time. I didn't run the baseline, since it was missing the measure that was applied to the design alternatives.
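If you're scripting this instead of using PAT, the throttling PAT does with Max Local Processes can be reproduced with a simple process pool. This is my own sketch, not what PAT does internally, and the `energyplus` command line and file names in the comment are hypothetical placeholders:

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

def run_batch(commands, max_local_processes):
    """Run command lines concurrently, at most max_local_processes at a
    time (like PAT's Max Local Processes setting); returns exit codes
    in submission order."""
    with ThreadPoolExecutor(max_workers=max_local_processes) as pool:
        return list(pool.map(lambda cmd: subprocess.run(cmd).returncode,
                             commands))

# Hypothetical usage: eight design alternatives, four engines at a time.
# commands = [["energyplus", "-w", "weather.epw", f"alt_{i}.idf"]
#             for i in range(8)]
# exit_codes = run_batch(commands, max_local_processes=4)
```

A thread pool is enough here even though the work is CPU-bound, because each thread just blocks waiting on a separate EnergyPlus process; the simulations themselves run in parallel as child processes.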

The following chart shows what I'm calling the 'Effective Iteration Runtime': the total parametric analysis runtime divided by the number of iterations performed (eight in every case). For example, in the case that was only allowed two processes, the effective iteration runtime was twenty-four seconds, and multiplying that by eight shows that the entire analysis took three minutes and twelve seconds.

[Chart: Effective Iteration Runtime vs. number of allowed processes]
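The arithmetic behind that metric is trivial, but for clarity, here it is spelled out using the two-process numbers from above:

```python
def effective_iteration_runtime(total_runtime_s, iterations):
    """Total parametric analysis runtime divided by iterations performed."""
    return total_runtime_s / iterations

total_s = 3 * 60 + 12  # 3 min 12 s for the whole two-process analysis
print(effective_iteration_runtime(total_s, 8))  # -> 24.0 seconds
```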

We see a steep taper from one to four processes allowed, with five, six, and seven taking most nearly the same amount of time. I believe this to be an artifact of hyper-threading, since this is a four-core hyper-threaded machine. What this means is that the 'block' of seven runs took 175% longer than the 'block' of one run. That was slightly counter-intuitive to me: since I have an eight-thread machine, I expected to be able to run seven identical processes in the same amount of time it takes to run one of those processes. I guess I hadn't really put much thought into how hyper-threading scales. I'm guessing it might have something to do with cache/resource sharing, but I ain't no processor designer! I wanted to ensure that hyper-threading wasn't affecting the one-, two-, and three-process results, so I disabled it and ran those three tests again. The results were reasonably consistent.
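For what it's worth, the plateau is roughly what a naive queueing model predicts if you ignore hyper-threading entirely. This is a toy model of my own, not part of the test: runs beyond the physical core count simply wait for a free core.

```python
import math

def block_runtime(n_runs, cores, seconds_per_run):
    # Toy model: runs beyond the physical core count queue for the next
    # "wave"; hyper-threading gains and cache contention are ignored.
    return math.ceil(n_runs / cores) * seconds_per_run

# Four physical cores: seven simultaneous runs need two waves, so under
# this model the block takes 2x one run.
print(block_runtime(7, 4, 24))  # -> 48
```

The measured block of seven was closer to 2.75x one run (175% longer), so the real machine does a bit worse than pure queueing, which is consistent with the cache/resource-sharing guess above.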

[Chart: Effective Iteration Runtime with hyper-threading disabled, one to three processes]

So what does all this mean? It means that at five, six, and seven processes (since those extra processes will in theory be assigned to the 'hyper-threads'), you're getting inconsequentially small reductions in total analysis runtime (remember, this is specific to my hardware setup). While you're able to run MORE models at once, the blocks run SLOWER, and the resulting decrease in analysis runtime isn't even statistically significant.

[Chart: analysis runtime results]

I quickly threw this together more for my own edification, so feel free to call BS on something. Interested to see other people's thoughts and whether they've experienced something similar. Happy to add more detail if someone's curious about a particular aspect of my approach.

Hopefully you found this interesting :)
