
File structure comparison of OS measures run on Desktop vs OS server

asked 2017-06-07 18:58:38 -0500 by antonszilasi

updated 2017-08-05 07:30:21 -0500

I am writing a reporting measure which will extract a number of outputs from the SQL file, in addition to extracting the inputs which the user specified in the measures applied to the OSM model in each particular EnergyPlus run.

I have therefore tried to write some code which will extract the latter from each EnergyPlus run. When using OpenStudio Server this is very easy, because a file called data_point.json is produced.

It is extremely easy to query the inputs made by the user from this file, as can be seen in the attached screenshot; you simply have to look up set_variable_values_names.

(screenshot: data_point.json showing the set_variable_values_names section)

However, when I run simulations on the desktop using the OpenStudio Application, the file data_point.json is not produced, so it is not possible to use the same code on both the desktop and the server.

Instead, on the desktop I have to query the file data_point_out.json, which is not as clean and easy as using the set_variable_values_names section in data_point.json.
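
For reference, this is roughly the lookup I am doing today (a minimal sketch; the exact JSON layout and the nesting of set_variable_values_names are assumptions based on my own runs):

require 'json'

# Rough sketch only: the key may sit at the top level or under a 'data_point'
# object depending on the server version; adjust the run directory to your setup.
def measure_inputs(run_dir)
  server_file  = File.join(run_dir, 'data_point.json')     # produced on the server
  desktop_file = File.join(run_dir, 'data_point_out.json') # produced on the desktop

  if File.exist?(server_file)
    data = JSON.parse(File.read(server_file))
    data = data['data_point'] if data['data_point'] # unwrap if nested
    data['set_variable_values_names']
  elsif File.exist?(desktop_file)
    # desktop fallback: flatter file that needs more massaging
    JSON.parse(File.read(desktop_file))
  end
end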

Why is there a difference between the desktop and the server? How can I produce the file data_point.json on the desktop?

One other difference I noticed is that the file workflow.osw does appear on the desktop but not on the server.


1 Answer


answered 2017-06-08 13:10:38 -0500 by David Goldwasser

I would use runner.workflow.workflowSteps to get both measure inputs and runner.registerValue information from upstream measures. It should work locally, in PAT, or on the Server.

The code below will add an info statement for every measure argument and every runner.registerValue. It even returns argument values for downstream measures that have not run yet.

If your reporting measures save data from SQL queries with runner.registerValue, then you can get that here as well (only for upstream measures that have already run).

# OpenStudio 2.x methods (currently set up for the measure display name but snake_case argument names)
runner.workflow.workflowSteps.each do |step|
  if step.to_MeasureStep.is_initialized
    measure_step = step.to_MeasureStep.get

    measure_name = measure_step.measureDirName
    if measure_step.name.is_initialized
      measure_name = measure_step.name.get # this is instance name in PAT
    end
    if measure_step.result.is_initialized
      result = measure_step.result.get
      result.stepValues.each do |arg|
        name = arg.name
        value = arg.valueAsVariant.to_s
        runner.registerInfo("#{measure_name}: #{name} = #{value}")
      end
    else
      #puts "No result for #{measure_name}"
    end
  else
    #puts "This step is not a measure"
  end
end

A PAT project script would have access to this same information, but across multiple datapoints.
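
For what it's worth, here is a rough sketch of such a script. The directory layout and key names are assumptions based on a local PAT 2.x project, where each completed datapoint in localResults has an out.osw whose steps carry the same result/step_values data:

require 'json'

# Sketch only: assumes completed local runs at localResults/<datapoint_uuid>/out.osw.
Dir.glob('localResults/*/out.osw').each do |osw_path|
  datapoint = File.basename(File.dirname(osw_path))
  osw = JSON.parse(File.read(osw_path))

  (osw['steps'] || []).each do |step|
    measure_name = step['name'] || step['measure_dir_name']
    result = step['result'] || {}
    (result['step_values'] || []).each do |sv|
      puts "#{datapoint} | #{measure_name}: #{sv['name']} = #{sv['value']}"
    end
  end
end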


Comments

@David Goldwasser thank you for your answer. As I understand it, I need to insert this code into every measure that I run; only then, in my final reporting measure, can I get the "upstream measures", since those measures reported their values when they were run via runner.registerInfo("#{measure_name}: #{name} = #{value}"). Is this correct?

antonszilasi (2017-06-14 10:36:40 -0500)

You only need this code in the last measure. The only special thing you need to do in upstream measures is include runner.registerValue('some_value_name', value, 'units') for any outputs you want to gather.
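
For example (a minimal sketch; the value name, number, and units below are just placeholders), an upstream model measure's run method might report a value like this:

def run(model, runner, user_arguments)
  super(model, runner, user_arguments)

  # placeholder result; in practice this would come from your own logic or SQL query
  window_to_wall_ratio = 0.4
  runner.registerValue('window_to_wall_ratio', window_to_wall_ratio, 'ratio')

  true
end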

David Goldwasser (2017-06-20 09:58:01 -0500)
