The U.S. electricity system is fun and fascinating! The beatings will continue until everyone agrees.
In my last two posts, I have argued that electrical utilities in the U.S. are not well-suited to contemporary circumstances. In the first, I explained that the “regulatory compact” governing utilities was designed for an era of rapid electrification; it discourages innovation and encourages perpetual expansion. In the second, I explained that utilities are structured to treat electricity as a commodity, produced in central power plants and delivered to consumers over long distances in a one-way transaction, with price and reliability of supply the sole concerns.
None of that is working anymore. Lots of forces are conspiring to put the current arrangement under stress, but the most important, in my mind, is a wave of innovation on the “distribution edge” of the grid. (I stole the term from an eLab report that I’ll discuss in a later post.) The distribution edge includes the point where customers interface with the grid, typically a meter, and everything on the customer side of it, “behind the meter.”
So what exactly do I mean by innovation on the distribution edge? This post will explore that a bit, to offer a sense of the kind of things coming down the pike, the stuff utilities will have to deal with in five to 10 years.
Let’s begin with a little thought exercise. Imagine, if you will, a house.
This house is built (or retrofitted) efficiently, with thick walls, good insulation, and triple-glazed windows, so it wastes very little energy. It is heated and cooled by a system with sensors and separate vents in each room, controlled by a smart thermostat like the Nest that learns the habits of the house's inhabitants and maximizes efficiency around them.
On our house’s roof is an array of solar panels that, at the mid-afternoon peak, provides more power than the house needs. For supplemental generation, when the panels aren’t producing or grid power is unusually expensive, the house’s basement contains a small microturbine running on natural gas (or biogas, if you prefer).
Excess energy from the solar panels can be stored in a fuel cell like the Bloom box, or in an appliance-sized battery pack, or in the batteries of the electric car parked in the garage.
All the appliances (water heater, washing machine, dishwasher, and so on) are internet-connected ("smart") and able to ramp their consumption up or down in response to price signals.
All of this stuff — panels, batteries, heating and cooling system, appliances — is tied together by software that tracks consumption and monitors price signals from the utility. The software can ramp up generation, reduce or delay non-essential consumption, store more energy, or sell more energy to the grid, depending on which choice is most valuable at the moment. In the event of a blackout or other grid failure, the software can "island" the house from the grid, at least temporarily, by cranking up the microturbine, emptying the batteries or the fuel cell, and dialing down unnecessary consumption.
It does this all more-or-less automatically. The house’s owner can specify all sorts of parameters if she wants to — balancing price, reliability, resilience, and cleanliness based on her values and preferences — but if not, the system will work fine on its own.
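To make the logic concrete, here is a minimal sketch of the kind of decision loop such home energy-management software might run every few minutes. Everything here is hypothetical — the function name, the thresholds, and the numbers are mine, not any real product's — but it captures the basic choice: sell, store, buy, discharge, or self-generate, depending on price.

```python
# Hypothetical decision loop for a home energy-management system.
# All names, thresholds, and units (kW, kWh, $/kWh) are illustrative.

def dispatch(price, solar_kw, load_kw, battery_kwh, battery_cap_kwh,
             sell_threshold=0.25, buy_threshold=0.08):
    """Decide what to do with this interval's surplus or deficit (in kW)."""
    surplus = solar_kw - load_kw
    if surplus > 0:
        if price >= sell_threshold:
            return ("sell", surplus)        # grid price is high: export
        if battery_kwh < battery_cap_kwh:
            return ("store", surplus)       # otherwise bank it locally
        return ("sell", surplus)            # battery full: export anyway
    deficit = -surplus
    if price <= buy_threshold:
        return ("buy", deficit)             # cheap grid power: just buy it
    if battery_kwh > 0:
        return ("discharge", deficit)       # pricey grid power: use storage
    return ("generate", deficit)            # last resort: fire the turbine

# Sunny afternoon, high prices: the house becomes a seller.
print(dispatch(price=0.30, solar_kw=5.0, load_kw=2.0,
               battery_kwh=1.0, battery_cap_kwh=10.0))  # ('sell', 3.0)
```

A real system would also forecast prices and weather, respect battery wear limits, and handle the islanding case, but the core of it is a loop like this, run automatically, with the owner's preferences as tunable parameters.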
Note that this house enjoys the benefits of being connected to the grid — backup power, the ability to sell electricity to neighbors — but also some of the benefits of “energy independence.”
Still with me? Now, imagine lots of such houses, plus commercial and industrial buildings with similar technology, plus a few wind and solar farms of various sizes, scattered over a broad geographic area. Imagine a power company that connects all those individual energy-management systems into one big energy-management system. Instead of one building that's able to sell its extra power to the grid when prices are high, a whole group of buildings can. Instead of one building being able to adjust its power consumption based on price signals, a whole group of buildings can.
The power company can aggregate all these buildings and renewable generators and operate them as a single “virtual power plant,” selling electricity and services (power smoothing, demand response, peak shaving, etc.) to the larger grid.
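The aggregation math behind a virtual power plant is simple in principle: each building contributes some exportable generation and some load it can shed on request, and the aggregator offers the sum to the grid as one resource. A toy sketch (all names and figures invented for illustration):

```python
# Illustrative sketch of virtual power plant aggregation: many small
# flexible resources, offered to the grid operator as a single one.
# Building names and kW figures are hypothetical.

buildings = [
    {"name": "house_a",  "export_kw": 3.0,  "sheddable_kw": 1.5},
    {"name": "house_b",  "export_kw": 0.0,  "sheddable_kw": 2.0},
    {"name": "office_c", "export_kw": 12.0, "sheddable_kw": 8.0},
]

def vpp_capacity(fleet):
    """Total capacity the aggregator can bid: exportable generation
    plus load it can shed on request (demand response)."""
    export = sum(b["export_kw"] for b in fleet)
    shed = sum(b["sheddable_kw"] for b in fleet)
    return export + shed

print(vpp_capacity(buildings))  # 26.5 kW, bid as one resource
```

No single house here matters to the grid operator; 26.5 kW of coordinated flexibility, scaled across thousands of buildings, does — that is the whole premise of selling demand response and peak shaving as services.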
Now imagine a group of such buildings in a geographically contiguous area — say, an Army base — networked together into a “smart microgrid,” essentially a freestanding mini-grid connected to the larger grid at a single point of contact (a super-meter, if you will). The entire microgrid is behind the meter, managed by a third party that coordinates supply and demand within it. The microgrid has the advantages of a virtual power plant, with some additional security benefits. Like the individual buildings within it, the microgrid can island itself off from the larger grid in the case of blackouts or cyber attacks. It can also add some mid-sized distributed generation to the mix — a small wind or solar farm, a waste-to-energy plant, or the like — and, like the virtual power plant, sell energy or energy services to the grid operator.
A microgrid could be run by a business park, a neighborhood, or a whole community. It could be jointly owned by the participants, who agree to contract with a company to run it. The company would aggregate and sell the energy services, take a cut, and return the remaining value to the joint owners. In that way it keeps money circulating locally rather than exporting it for fuel.
Now, take a step further back and imagine an entire region’s electrical distribution system composed of smart buildings, virtual power plants, and microgrids. Energy nerds refer to this as “nodal architecture”: a whole composed of networked, semi-autonomous nodes (see: the internet). Nodal architecture brings with it all sorts of benefits.
First, a network of small-scale generators helps to prevent, or at least reduce and delay, the need for new power plants and power lines. That saves money and prevents pollution. Second, it is far more resilient against failure than the brittle centralized grid of today, which is vulnerable to accidents and attacks and subject to cascading failures. Third, customers in this decentralized system have access to energy services that are superior to what they now enjoy. And fourth, distributing autonomy and control more widely has democratizing effects and creates political constituencies for additional climate policy.
Whew! So. That’s where innovation on the distribution edge is headed: toward an entirely new, more reliable and resilient distribution system.
The technologies I described above are just beginning to emerge, in halting, nascent form. Nest (the smart thermostat company) recently acquired MyEnergy (a home energy management software company). Tesla (the electric car company) has teamed up with SolarCity (a solar panel leasing company) to provide homeowners with a package that would include a car, solar panels, and a big ol' battery system for home energy storage. Japanese IT company NEC just began mass production of home energy storage units. A company called MTT is making a sweet-looking unit that cogenerates heat and electricity, running on natural gas or biogas.
John Simmins at the Electric Power Research Institute says virtual power plants are going to “become ubiquitous in the next 5-10 years.” Navigant Research expects global virtual power plant capacity to grow fivefold by 2020. Meanwhile, the U.S. is leading the global microgrid market, which Navigant expects to reach $17 billion by 2017. The U.S. military, in particular, is aggressively pursuing microgrids. (While I’m at it, I highly recommend this piece on microgrids in Fortnightly.)
I could go on and on about the disruptive technology that’s starting to emerge on the distribution edge. It’s really exciting stuff. It’s expensive, of course, as you’d expect from bleeding-edge solutions in uncertain, poorly structured markets. For now, it will mostly be of interest to people and organizations with very specific needs (the Army and cyber-security, for instance).
But costs fall. And costs especially fall when markets are structured to allow for competition and innovation. And therein lies the rub. As things stand, every bit of this distribution-edge innovation is a threat to utilities, because all of it involves customers buying less power from utilities and using utility power lines less.
You see, utilities have made these long-term — 20-year, 50-year — investments. They did so with the understanding that they would recapture the costs via per-kilowatt-hour rates charged to customers. If customers begin buying fewer kWh, if they begin defecting from that system en masse, utilities face the prospect of being unable to recover their costs or offer a return to their investors. They risk having those investment costs “stranded.”
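A stylized bit of arithmetic shows the squeeze. Suppose a utility's long-lived investments require a fixed amount of revenue each year, recovered through a per-kWh rate set against projected sales. The figures below are invented, but the mechanism is real:

```python
# Stylized example of the stranded-cost squeeze. All figures invented.

fixed_costs = 100_000_000        # $/yr needed to service long-lived assets
kwh_projected = 2_000_000_000    # annual sales the rate was set against
rate = fixed_costs / kwh_projected   # $0.05/kWh recovers costs exactly

# Distribution-edge technology cuts actual sales 15 percent,
# but the rate was locked in before the defection happened.
kwh_actual = kwh_projected * 0.85
recovered = rate * kwh_actual
shortfall = fixed_costs - recovered

print(round(shortfall))          # $15,000,000 left unrecovered this year
```

The utility's costs don't shrink when its sales do; the gap is what gets "stranded." (And raising rates to close the gap only makes defection more attractive.)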
Stranded costs: That’s what keeps utility executives up at night. And that’s what innovation at the distribution edge means to them right now. That’s the deep, dysfunctional dynamic that must be addressed.
It’s not just that utilities shouldn’t be opposing or hampering this stuff. They should be enabling it. They should be structuring markets, providing information and price signals, so that entrepreneurs can enter this space and compete to provide customers with the best energy services. And they should be doing so while also maintaining grid reliability.
That is, I acknowledge, a tall order. (Right now I’m picturing a utility executive reading this, downing a shot of whiskey, and then flinging the empty shot glass against a wall.) But I believe it can be done, if a) utilities are made partners rather than opponents, and b) regulators change the business model under which utilities operate.
That — the utility model of the future — is what I’m going to address in my next post. I realize I said that in the last post, but this time I’m pretty sure I mean it!