I am trying to study the effect of mass flow rate and deltaT (inlet-outlet temperature difference) on the performance of IT equipment and data centers using two schemes: a steady-state scheme built into E+ and an experimentally developed transient scheme for servers. The goal is to characterize the difference in CRAC performance and mechanical PUE between the two schemes. The cooling system is DX cooling (Coil:Cooling:DX:SingleSpeed).
However, I find no difference in facility total power, facility building power (the quotient of which gives PUE), CRAC power, DX cooling coil power, or system air flow rate when I implement the two schemes.
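To be explicit about the metric (this is my own bookkeeping, and it assumes essentially all of the building electric load is the ITE load, so building power stands in for IT power):

```latex
\mathrm{PUE}_{\mathrm{mech}} \;=\; \frac{P_{\mathrm{facility,\,total}}}{P_{\mathrm{facility,\,building}}} \;\approx\; \frac{P_{\mathrm{IT}} + P_{\mathrm{HVAC}}}{P_{\mathrm{IT}}}
```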
1) My understanding is that a higher deltaT, and hence a higher exit air temperature, should yield power savings on the HVAC side (see the energy-balance sketch after this list), but E+ reports no difference in any of the parameters above.
2) Hard-sizing the DX coil to produce changes in mass flow rate and inlet air temperature does not seem to work: E+ either reports a sizing mismatch between components or warns that zone temperatures are too high. Autosizing the coil (rated capacity, rated flow rate, and SHR) seems to produce optimal results in terms of cooling system power and PUE; however, that approach does not work with my transient scheme, where E+ reports a supply-side air flow rate of around 2.8 kg/s against an ITE air mass flow rate of around 22 kg/s, which does not make physical sense even taking recirculation into account (see the flow-rate check after this list).
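For 1), the trend I expect comes from a simple energy balance on the supply air (my own reasoning, not an EnergyPlus calculation): for a fixed IT heat load,

```latex
\dot{m}_{\mathrm{supply}} \;=\; \frac{\dot{Q}_{\mathrm{IT}}}{c_p \,\Delta T}
```

so a larger deltaT should appear as a lower supply air flow, lower fan power, and a different coil operating point, not as identical HVAC power and flow.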
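For 2), here is a quick back-of-envelope check of the two reported flows; the 50 kW IT load is purely an illustrative assumption, not my actual model input:

```python
# Back-of-envelope check on the reported flow rates.
# Assumed: 50 kW IT load (illustrative only), cp of air ~ 1006 J/(kg*K).
CP_AIR = 1006.0  # J/(kg*K)

def implied_delta_t(q_it_w: float, m_dot_kg_s: float) -> float:
    """Air temperature rise implied by a mass flow absorbing q_it_w of heat."""
    return q_it_w / (CP_AIR * m_dot_kg_s)

print(implied_delta_t(50_000.0, 22.0))  # ITE-side flow (22 kg/s)    -> ~2.3 K rise
print(implied_delta_t(50_000.0, 2.8))   # supply-side flow (2.8 kg/s) -> ~17.8 K rise
```

The two flows imply very different air temperature rises for the same load, which is why I cannot reconcile them even allowing for recirculation.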
Has anyone seen such a situation, or can anyone shed some light on how to go about characterizing the performance? Doing it experimentally is one thing, but the simulation should support those results. I would greatly appreciate any insight into this situation.