UPDATE 2022-01-19: We have received word from FERC that access to the historical data discussed below will be restored this week. As it becomes available, we will also archive it on Zenodo, just in case. Thank you to everyone who reached out and helped bring this issue to FERC’s attention!
This week we discovered that decades worth of energy system data collected by the Federal Energy Regulatory Commission (FERC) had been removed from the agency’s website. They apparently have no plan to archive it or migrate it to another platform. We are attempting to obtain a bulk download of all this data so we can archive it alongside our other raw data sources on Zenodo.
This data records many financial, operational, and economic aspects of the US energy system. It is a unique and valuable resource for anyone trying to understand how public policy and market conditions have shaped our energy system over time. Simply deleting this data with no warning and no plan to archive it or migrate it to another platform is completely unacceptable.
If you know someone within FERC who can help get us a copy of this data to archive publicly, please put us in touch: hello@catalyst.coop
Data cleaning IS analysis, not grunt work. A longish post exploring what we really get out of doing data cleaning, and why it’s more valuable and complex than it often gets credit for.
It’s been almost a month since we pushed out our first actual quarterly software and data release: PUDL v0.5.0! The main impetus for this release was to get the final annual 2020 data integrated for the FERC and EIA datasets we process. We also pulled in the EIA 860 data for 2001-2003, which is only available as DBF files, rather than Excel spreadsheets. This means we’ve got coverage going back to 2001 for all of our data now! Twenty years! We don’t have 100% coverage of all of the data contained in those datasets yet, but we’re getting closer.
Beyond simply updating the data, we’ve also been making some significant changes to how our ETL pipeline works under the hood. This includes how we store metadata, how we generate the database schema, and what outputs we’re generating. The release notes contain more details on the code changes, so here I want to talk a little bit more about why, and where we are hopefully headed.
If you just want to download the new data release and start working with it, it’s up here on Zenodo. The same data for FERC 1 and EIA 860/923 can also be found in our Datasette instance at https://data.catalyst.coop
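For the Python-inclined, here is a minimal sketch of pulling one table out of the release’s SQLite database with SQLAlchemy and pandas. The database filename and table name are assumptions for illustration, so list the tables first to see what’s actually there:

```python
# A minimal sketch, not official PUDL API: read a table from the downloaded
# SQLite database. "pudl.sqlite" and "generation_eia923" are assumed names;
# check the release notes or the Datasette instance for the real ones.
import pandas as pd
import sqlalchemy as sa

engine = sa.create_engine("sqlite:///pudl.sqlite")

# See which tables the database actually contains:
print(sa.inspect(engine).get_table_names())

# Pull one table into a DataFrame for analysis:
gen = pd.read_sql("generation_eia923", engine)
print(gen.head())
```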
Oil and gas companies operating in the arctic and other areas impacted by climate change have been adapting their operations and infrastructure planning to the melting permafrost and other long-term impacts of their pyromania for decades, even while spreading disinformation about the same processes publicly. But are electric utilities doing the same kind of planning?
We’ve been thinking a bit about the ways in which the energy system in the US West is exposed to potential climate risks, in the context of long term utility resource adequacy and operational planning. We posted a short thread on Twitter and got some references from the #EnergyTwitter hive mind.
Are utility planners, regulators & grid operators in the Western US appropriately accounting for the likelihood of extreme weather events in coming decades? What's the existing literature look like on this question? What data & analysis would best help answer this question?
E.g. in the 2020s, 2030s, or 2040s, what are the chances that an extreme heatwave hits Las Vegas or Phoenix spiking demand for air conditioning, while hydroelectric generation or cooling water for thermal plants are also unavailable? And what would the consequences be?
What would be the easiest way to make a compelling case to regulators / operators / legislators that this kind of forward-looking question should be addressed in IRPs & resource adequacy analyses, and identify the most vulnerable metro regions and states?
ETL vs. ELT — a comparison of two data pipeline architectures from the folks at Fivetran.
What is the Modern Data Stack? Another post from Fivetran, attempting to define the different components of data engineering pipelines as they appear to have come together over the last few years.
A good interactive introduction to SQL from Mode. You can even use your own data as you work through it, if it’s in a database online. Broken down into beginner, intermediate, and advanced sections.
Hex is another platform that seems similar to Mode, for collaborating on data analyses using notebooks and a combination of straight SQL and Python. Again, you load your own data directly via an online DB connection. I admit that after seeing it mentioned for months I only clicked through after realizing it was named after the magic/science hybrid technology depicted in Arcane.
Cookiecutter Data Science is a cookie-cutter repo and a set of guidelines for standardizing data science projects to be more easily replicated and parsed by other people.
Thou Shalt Scale Sustainably: some thoughts (commandments…) on how to scale social enterprises (especially when dependent on foundation funding) from the Shuttleworth Foundation. Not related to the so-called Modern Data Stack.
In August we put out a new PUDL software and data release for the first time in 18 months. We had a lot of client work, and kept putting off doing the release, so a whole lot of changes accumulated. Some highlights, mostly based on the continuously updated release notes in our documentation:
New Data Coverage
EIA Form 860 added coverage for 2004-2008, as well as 2019.
EIA Form 860m has been integrated (through Nov 2020). Note that it only adds up-to-date information about generators (especially their operational status).
EIA Form 923 added the 2001-2008 data, as well as 2019.
EPA CEMS Hourly Emissions covering 2019-2020.
FERC Form 714 covering 2006-2019, but only the table of hourly electricity demand by planning area. This data is still in beta and hasn’t been integrated into the core SQLite database, but you can process it on the fly if you want to work with it in Pandas.
EIA Form 861 for 2001-2019. Similar to the FERC Form 714, this ETL runs on the fly and the outputs aren’t integrated into the database yet, but it’s available for experimental use.
US Census Demographic Profile 1 (DP1) for 2010. This is a separate SQLite database, generated from a US Census Geodatabase, which includes census tract, county, and state level demographic information, as well as the spatial boundaries of those jurisdictions (see the sketch below).
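As a quick illustration, here is one way you might start exploring that standalone Census database with sqlite3 and pandas. The filename and table name below are assumptions based on the description above, so the sketch lists the available tables first:

```python
# A rough sketch of exploring the Census DP1 SQLite database. The filename
# "censusdp1tract.sqlite" and the county table name queried below are guesses.
import sqlite3
import pandas as pd

conn = sqlite3.connect("censusdp1tract.sqlite")

# List the tables the database actually contains:
print(pd.read_sql("SELECT name FROM sqlite_master WHERE type='table';", conn))

# Pull county-level demographic attributes into a DataFrame:
counties = pd.read_sql("SELECT * FROM county_2010census_dp1;", conn)
print(counties.head())
```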
We’ve come across a few allied projects looking at environmental justice data specifically, and thought it would be nice to share!
Environmental Enforcement Watch
In May, Christina and I gave a talk at CSV,Conf,v6 about things we’ve learned liberating US energy system data. We focused a lot on the challenge of making data accessible to advocates. The following talk was analogous, but focused on environmental justice data. The speaker was Kelsey Breseman (@ifoundtheme) from the Environmental Data and Governance Initiative (EDGI) and their project Environmental Enforcement Watch (EEW). EEW is trying to hold polluters accountable using federally reported data, by making that data more accessible to and understandable by the people who are affected. They’re scraping the data from the web and creating a database that folks can query using Google Colab notebooks. At the same time, they’re trying to get EPA to make the full underlying database accessible to the public.
You can watch her excellent talk here:
I was struck by how many parallels there were between our work and theirs. We’re both trying to mitigate the poor curation of government data and make it more accessible to the public. EDGI also seems very open and GitHub-centered, and is trying to operate as a horizontal organization. They support themselves through foundation grants and volunteer labor; nobody works on EDGI full time. They have a fiscal sponsorship agreement through Earth Science Information Partners (ESIP).
If you’re interested in public data and environmental justice they seem like a great organization! Maybe we can collaborate at some point.
Before the Tidyverse and Pandas, there was SQL. There’s still SQL, and as Vicki Boykis often points out: every data-centric framework that hangs around long enough tends toward SQL. It’s got almost half a century of careful thinking and optimization behind it. It seems entirely possible that it’ll still be around after another half century.
In this extensive post, Haki Benita explores a wide range of data analysis that can be done directly in PostgreSQL. It can be used either as an efficient preprocessing step before handing off to other tools, or to generate final products. It covers basic data selection, random selection, sampling, splitting data into training & testing sets, descriptive statistics, aggregations, regressions, interpolation, binning, and much more. It’s almost more of a pocket guide to data analysis in SQL than a blog post.
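To give a flavor of what that looks like in practice, here is a small sketch that pushes some descriptive statistics and a random train/test split into PostgreSQL and only pulls the results back into pandas. The connection string, table, and column names are made up for illustration and aren’t taken from the post itself:

```python
# Hypothetical example: let the database do the work, then hand results to pandas.
# The "measurements" table and its "value" column are stand-ins for your own data.
import pandas as pd
import sqlalchemy as sa

engine = sa.create_engine("postgresql://user:password@localhost/mydb")

# Descriptive statistics computed inside PostgreSQL:
stats = pd.read_sql(
    """
    SELECT count(*) AS n,
           avg(value) AS mean,
           stddev(value) AS std,
           percentile_cont(0.5) WITHIN GROUP (ORDER BY value) AS median
    FROM measurements;
    """,
    engine,
)
print(stats)

# A random ~80/20 train/test split done entirely in SQL:
split = pd.read_sql("SELECT *, random() < 0.8 AS is_train FROM measurements;", engine)
train, test = split[split.is_train], split[~split.is_train]
```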
In this post Emily Riederer explores how conceptualizing data (and error!) generation processes can help you do better data validation. What does the data represent in the real world? How is it being collected? How does it move from where it’s collected to where it’s processed? What kinds of transformations operate on it before you look at the outputs? Understanding these steps and their contexts makes it easier to imagine how things can go wrong along the way and what errors to check for. It also makes it easier to debug errors when you find them.
The authors explore several different styles of pair programming and the logistical planning required to make it work. They touch on the extra challenges of remote pairing, which seems especially relevant these days. They cover productive and destructive social dynamics that come up, and a whole lot more. The article is long, but it’s definitely worth a read if you’ve thought about trying pair programming and been reluctant, or have tried it and been dissatisfied.
We work with a lot of messy public data. In theory it’s already “structured” and published in machine-readable forms like Microsoft Excel spreadsheets, poorly designed databases, and CSV files with no associated schema. In practice it ranges from almost unstructured to… almost structured. Someone working on one of our take-home questions for the data wrangler & analyst position recently noted of the FERC Form 1: “This database is not really a database – more like a bespoke digitization of a paper form that happened to be built using a database.” And I mean, yeah. Pretty much. The more messy datasets I look at, the more I’ve started to question Hadley Wickham’s famous Tolstoy quip about the uniqueness of messy data. There’s a taxonomy of different kinds of messes that go well beyond what you can easily fix with a few nifty dataframe manipulations. It seems like we should be able to develop higher-level, more general tools for automated data wrangling. Given how much time highly skilled people pour into this kind of computational toil, those tools seem like they would be very worthwhile.
Like families, tidy datasets are all alike but every messy dataset is messy in its own way.
Hadley Wickham, paraphrasing Leo Tolstoy in Tidy Data
On creating a version-controlled SQL Query library, from Caitlin Hudon. Queries are little hand-crafted jewels that encode knowledge about the contents and structure of data. They should be saved and shared and improved collaboratively over time! As we move our PUDL data access toward using SQL directly, it seems like we’ll want to do this too, with some of the most common queries baked into the database as views directly. I bet it would be easy to set up automated testing of the queries too, whenever a new version of the DB comes online, to see which old queries are broken, and whether they yield the expected results.
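As a toy sketch of what that could look like (the table, column, and view names here are hypothetical, not actual PUDL schema): bake a shared query into the database as a view, then write a test that exercises it whenever a new version of the database is built.

```python
# Hypothetical sketch: a version-controlled query saved as a view, plus a
# pytest-style check that the view still runs against a freshly built database.
import sqlite3

ANNUAL_NET_GEN_VIEW = """
CREATE VIEW IF NOT EXISTS annual_net_generation AS
SELECT plant_id, report_year, SUM(net_generation_mwh) AS net_generation_mwh
FROM generation
GROUP BY plant_id, report_year;
"""

def create_views(conn):
    # The query itself lives in version control and gets applied to each new DB:
    conn.execute(ANNUAL_NET_GEN_VIEW)

def test_annual_net_generation_view():
    conn = sqlite3.connect("pudl.sqlite")  # assumed filename
    create_views(conn)
    rows = conn.execute("SELECT * FROM annual_net_generation LIMIT 5;").fetchall()
    assert rows, "View returned no rows; did the schema change underneath it?"
```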
Building Data Dictionaries, also from Caitlin Hudon, talking about the importance of continuous, incremental documentation of institutional knowledge related to your data — where it came from, what it should look like, how it can (and can’t!) be used. We finally made some progress in this direction by starting to publish our table and column level metadata directly in our documentation.
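On the mechanical side, a bare-bones data dictionary skeleton can be generated straight from the database schema, so the documented columns never drift away from the real ones. Here is a sketch using SQLAlchemy’s inspector; the filename is an assumption, and the human-written descriptions still have to come from a separate, hand-maintained metadata source:

```python
# Sketch: emit a Markdown-ish data dictionary skeleton from a live SQLite schema.
# Column descriptions aren't stored in SQLite itself, so they'd be merged in
# from your own metadata store.
import sqlalchemy as sa

engine = sa.create_engine("sqlite:///pudl.sqlite")  # assumed filename
inspector = sa.inspect(engine)

for table in inspector.get_table_names():
    print(f"## {table}")
    for col in inspector.get_columns(table):
        print(f"- {col['name']} ({col['type']})")
```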
CarbonPlan has published a new review of carbon removal projects, based on responses to Microsoft’s recent RFP. The general takeaway: most projects are low volume, impermanent, and didn’t provide enough detail to be evaluated critically. The one and only 5-star project they reviewed, Climeworks in Iceland, does permanent mineralized geologic sequestration (injecting CO2 into fresh volcanic basalt) powered by geothermal energy. It currently costs $1,100 per ton of CO2, or about $17,000/yr to sequester the emissions of a typical US person (roughly 15 tons of CO2 per year).
The censusviz package provides a straightforward Python interface to US census map and population data, which integrates directly with GeoPandas. Looks like it’s pulling data on the fly from the Census API.
A nice PyData talk from Julia Signell that stitches together data from Intake catalogs and a variety of interactive data visualization & dashboarding tools. Code from the talk can be found on GitHub.
Ghost looks like an interesting open source business. It’s organized as a non-profit foundation that has to reinvest all of its profits in itself and can’t be bought out. They have developers all over the world, and produce a permissively licensed open source content management system.