As @TomB says, the site UX isn't ideal for automated downloading. (Edit: I'm talking specifically about searching for a weather file, not the site in general.) I made a similar tool in Excel a few years ago which may be of use.
Here's a link to the list of paths I ended up building, along with a few characteristics. This seemed an easier approach than building the URLs on the fly. It's possible that the sites available have changed in the intervening time; I haven't checked.
Here's a link to an unlocked version of the Excel tool showing the workings.
And here's a blog post describing the tool.
As for the Python you'll need, this snippet, which I wrote while fetching the Prometheus weather files (link), ought to help.
import os
import zipfile

import requests

WEATHER_DIR = os.path.join(os.pardir, 'weather_files')


def fetch_and_extract(url):
    """Download and extract a zipfile to the weather directory.

    Parameters
    ----------
    url : str
        URL of the zipfile.
    """
    os.makedirs(WEATHER_DIR, exist_ok=True)  # make sure the target directory exists
    filename = os.path.join(WEATHER_DIR, url.split('/')[-1])
    if os.path.isfile(filename):  # we already have this file
        return
    # fetch the zipfile
    response = requests.get(url)
    response.raise_for_status()  # stop here rather than save an error page
    print("Saving {}".format(filename))
    with open(filename, 'wb') as f:
        f.write(response.content)
    # extract the zipfile alongside the download
    with zipfile.ZipFile(filename) as zf:
        zf.extractall(WEATHER_DIR)
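One detail worth spelling out: the local filename is just the last segment of the URL, and that's also what the early-return check keys on, so re-running the script only downloads files you don't already have. A minimal standalone sketch of that behaviour (the URL below is a placeholder, not a real weather-file location, and the temporary directory stands in for WEATHER_DIR):

```python
import os
import tempfile


def local_filename(url, target_dir):
    """Map a zipfile URL to the local path the downloader will use."""
    return os.path.join(target_dir, url.split('/')[-1])


# demonstrate the skip-if-present check with a placeholder URL
with tempfile.TemporaryDirectory() as weather_dir:
    url = 'http://example.com/weather/london_2005.zip'  # hypothetical
    path = local_filename(url, weather_dir)
    print(os.path.isfile(path))   # file absent, so the fetch would run
    open(path, 'wb').close()      # simulate a completed download
    print(os.path.isfile(path))   # file present, so the function returns early
```

One consequence of keying on the filename alone: two URLs that end in the same segment would collide on disk, so keep the zip names in your path list unique.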