Hello,

I'm just wondering why you are trying to save these SQL files from the datapoint.zip. I assume you are doing this from the OpenStudio Server console?

You can access the SQL files directly in the data point run directory by using a data point finalization script. You can set this up with PAT and then SSH into the Docker container running the data points to test the finalization script.

You could write some code that puts these .sql files into an AWS S3 bucket. This worked well for me, but I was doing it with smaller files.

The eplusout.sql can be very large, so it may be cleaner to write a measure that queries the .sql files for the information you need and then puts only that into S3 somewhere.
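As a rough illustration of the "query the .sql instead of shipping it" idea, here is a minimal Ruby sketch that pulls one value out of eplusout.sql with a direct SQLite query. This assumes the `sqlite3` gem is installed, and the report/table/row/column names below are what the standard EnergyPlus AnnualBuildingUtilityPerformanceSummary tabular report uses; adjust them for whatever you actually need.

```ruby
# Sketch only: read one cell from EnergyPlus's TabularDataWithStrings table
# in eplusout.sql, rather than uploading the whole file.
begin
  require 'sqlite3'
rescue LoadError
  warn 'sqlite3 gem not installed; run `gem install sqlite3`'
end

# Build the query for one cell of an EnergyPlus tabular report.
def tabular_query(report, table, row, column)
  <<~SQL
    SELECT Value FROM TabularDataWithStrings
    WHERE ReportName = '#{report}'
    AND TableName = '#{table}'
    AND RowName = '#{row}'
    AND ColumnName = '#{column}'
  SQL
end

# Example: fetch Net Site Energy (report/row names assumed, check your file).
def net_site_energy(sql_path)
  db = SQLite3::Database.new(sql_path, readonly: true)
  value = db.get_first_value(
    tabular_query('AnnualBuildingUtilityPerformanceSummary',
                  'Site and Source Energy',
                  'Net Site Energy',
                  'Total Energy'))
  db.close
  value && value.to_f
end
```

You would then push just the returned number (or a small JSON of such numbers) to S3, instead of the multi-hundred-MB .sql file.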
EDIT: I've attached an example pushtos3.rb script on Google Drive here: https://drive.google.com/file/d/16PObL1qOmTguB0o7c1B4w61rpISil6-T/view?usp=sharing — it puts a file into an S3 bucket from within the script.
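For reference, a minimal sketch of what such a push-to-S3 script can look like (this is not the attached script itself). It assumes the `aws-sdk-s3` gem is installed and that AWS credentials are available in the environment; the bucket name, region, and key scheme are placeholders.

```ruby
# Sketch of a pushtos3.rb-style upload. Bucket/region/key names are
# illustrative placeholders, not values from the attached script.
begin
  require 'aws-sdk-s3'
rescue LoadError
  warn 'aws-sdk-s3 gem not installed; run `gem install aws-sdk-s3`'
end

# Hypothetical key scheme: keep each data point's output separate.
def s3_key_for(datapoint_id, filename)
  "datapoints/#{datapoint_id}/#{filename}"
end

def push_to_s3(local_path, bucket_name, key, region: 'us-east-1')
  s3 = Aws::S3::Resource.new(region: region)
  # upload_file handles multipart uploads automatically for large files.
  s3.bucket(bucket_name).object(key).upload_file(local_path)
end

# Usage (from inside a data point run directory):
#   push_to_s3('eplusout.sql', 'my-results-bucket',
#              s3_key_for('abc123', 'eplusout.sql'))
```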