How do you account for distribution losses when modelling a central plant?
Some resources (like the LEED DES guidance) say to use 2% or 5% losses, but don't elaborate on how those losses should be applied. I've heard of modelers using three methods, which yield very different results:
- Decrease the efficiency of the plant by the loss: if it's 2% losses, decrease the plant efficiency by 2%. This results in roughly a 2% increase in energy use.
- Decrease the annual chilled water output by 2%, then re-calculate the average efficiency for application to your model (if you have a separate central plant and building model). This also results in close to a 2% increase in energy use.
- Apply the losses as a constant percentage of the total plant capacity: with 1000 kW of plant capacity, that means a constant 20 kW of losses at every hour of the simulation. This causes a much larger increase in energy use, in excess of 20% (see the sketch after this list).
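To illustrate why the third method blows up, here is a rough sketch (not from any standard or tool) comparing the three approaches on a synthetic hourly load profile. The plant capacity, COP, the 2% loss figure, and the load shape are all illustrative assumptions; the size of the method-3 penalty depends entirely on the annual load factor you assume, and it approaches or exceeds 20% once the load factor drops toward 10%.

```python
import numpy as np

HOURS = 8760
CAPACITY_KW = 1000.0   # assumed plant capacity (thermal)
COP = 5.0              # assumed average plant efficiency
LOSS = 0.02            # 2% distribution loss

# Synthetic cooling load: plant spends most hours at low part load.
rng = np.random.default_rng(0)
load_kw = CAPACITY_KW * np.clip(rng.normal(0.20, 0.15, HOURS), 0.02, 1.0)

base = (load_kw / COP).sum()  # kWh electric with no distribution losses

# Method 1: derate plant efficiency by the loss fraction.
m1 = (load_kw / (COP * (1 - LOSS))).sum()

# Method 2: plant produces extra chilled water to cover 2% lost in the loop.
m2 = (load_kw / (1 - LOSS) / COP).sum()

# Method 3: constant loss equal to 2% of plant capacity, added every hour.
m3 = ((load_kw + LOSS * CAPACITY_KW) / COP).sum()

print(f"annual load factor: {load_kw.mean() / CAPACITY_KW:.0%}")
for name, e in (("method 1", m1), ("method 2", m2), ("method 3", m3)):
    print(f"{name}: +{(e / base - 1):.1%} vs. no-loss baseline")
```

With these assumptions, methods 1 and 2 both land around a 2% increase, while method 3 is several times larger because the fixed 20 kW is charged against hours when the plant is running at a small fraction of capacity.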
Does anyone have guidance that shows which of these methods actually most closely represents measured data?