Representing Distribution Losses

How do you account for distribution losses when modelling a central plant?

Some resources (like the LEED DES guidance) say to use 2% or 5% losses, but don't elaborate on how those losses should be applied. I've heard of modelers using a few methods that yield very different results:

  1. Decrease the efficiency of the plant by the loss, so for 2% losses, decrease the plant efficiency by 2%. This results in roughly a 2% increase in energy use.

  2. Decrease the annual chilled water output by 2%, then re-calculate the average efficiency to apply to your model (if you have separate central plant and building models). This also results in close to a 2% increase in energy use.

  3. Apply the losses as a constant percentage of the total plant capacity. If you have 1000 kW of plant capacity, that means a constant 20 kW of losses at every hour of the simulation. This causes a significant increase in energy use, in excess of 10% (see the rough comparison sketched after this list).
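
For illustration only, here is a minimal Python sketch comparing the three approaches on a hypothetical 1000 kW plant with an assumed 15% annual average load factor. The capacity, load factor, and loss fraction are made-up values, not measured data or anything from the LEED guidance; they are chosen just to show why the constant-capacity method diverges so sharply from the other two.

```python
# Back-of-the-envelope comparison of three ways to apply a 2% distribution loss.
# All inputs are hypothetical illustrations, not measured data or LEED guidance.

plant_capacity_kw = 1000.0    # installed chilled-water plant capacity (assumed)
loss_fraction = 0.02          # assumed distribution loss (2%)
annual_hours = 8760
avg_load_fraction = 0.15      # assumed annual average load as a fraction of capacity

annual_load_kwh = plant_capacity_kw * avg_load_fraction * annual_hours

# Method 1: derate plant efficiency by 2% -> plant input energy rises ~2%,
# equivalent to ~2% of the annual chilled-water load.
method_1_extra_kwh = loss_fraction * annual_load_kwh

# Method 2: inflate the delivered chilled water by 2% -> plant output rises
# by ~2% of the annual load as well.
method_2_extra_kwh = loss_fraction * annual_load_kwh

# Method 3: constant loss equal to 2% of capacity (20 kW) in every hour of the
# year, regardless of how lightly the plant is loaded.
method_3_extra_kwh = loss_fraction * plant_capacity_kw * annual_hours

for name, extra_kwh in [("Method 1", method_1_extra_kwh),
                        ("Method 2", method_2_extra_kwh),
                        ("Method 3", method_3_extra_kwh)]:
    print(f"{name}: +{extra_kwh:,.0f} kWh (+{extra_kwh / annual_load_kwh:.1%} of annual load)")
```

With these assumed numbers, methods 1 and 2 both add about 2% to the annual load, while method 3 adds roughly 13%, because the constant 20 kW loss keeps accruing through every part-load and low-load hour; the lower the plant's annual load factor, the larger that penalty becomes.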

Does anyone have guidance that shows which of these methods actually most closely represents measured data?