What Makes Power Fail?
By Grant Martin
On November 9, 1965, the lights went out across much of the northeastern United States. A massive, cascading power failure put 25 million people in the dark, covered 80,000 square miles, and was so sudden and widespread that many thought the Russians had launched a Cold War attack. Several days later, investigators traced the true source of the problem to a faulty relay in Canada. Shocked energy companies and federal regulators instituted a wave of policies to ensure that such a staggering power outage could never happen again.
“The electrical system is now designed so that no one thing can cause a widespread blackout,” says UAB electrical engineer Gregory Franklin, Ph.D., P.E. The electrical system, Franklin explains, has three main components: generation, transmission, and distribution.
“Those three things have to be balanced so that the amount of power being generated matches the load being used,” he says. To keep this balance intact, utilities carefully forecast how much power they’ll need at any given time, and they maintain constantly updated contingency plans that factor in worst-case scenarios. For a major power outage to occur today, “a series of things have to go wrong,” Franklin says.
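For the technically curious, Franklin’s balancing act can be sketched in a few lines of Python. The toy model below, with made-up plant names, capacities, and load, simply checks whether forecast demand is covered and whether the system would still hold up if any single generator were lost, the kind of “contingency” utilities plan for on a far larger and more sophisticated scale.

```python
# Toy sketch of the balancing act (all numbers invented): does available
# generation cover the forecast load, and would it still cover the load
# if any single generator were lost?

generators_mw = {      # available generating capacity, in megawatts
    "coal_plant": 800,
    "gas_turbine": 600,
    "hydro_dam": 400,
}

forecast_load_mw = 1200   # forecast demand for the hour
reserve_margin = 0.10     # keep at least 10 percent spare capacity

def capacity_is_adequate(capacities, load, margin):
    """True if total capacity covers the load plus the reserve margin."""
    return sum(capacities.values()) >= load * (1 + margin)

print("Normal operation balanced:",
      capacity_is_adequate(generators_mw, forecast_load_mw, reserve_margin))

# Contingency check: lose each generator in turn and see if the balance holds.
for name in generators_mw:
    remaining = {k: v for k, v in generators_mw.items() if k != name}
    ok = capacity_is_adequate(remaining, forecast_load_mw, reserve_margin)
    print(f"Lose {name}: {'still balanced' if ok else 'shortfall -- contingency plan needed'}")
```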
Just such a chain of failures is exactly what happened in 2003, when an even larger blackout hit much of the same area affected by the 1965 shutdown, leaving more than 50 million people in the United States and Canada without power. There were no severe storms that day, and the regional grids had been operating within their capacity. In fact, in the hours before the event, such a catastrophic failure would have seemed impossible.
What went wrong? “The system works great as long as you have the right technology and operators who know how to use that technology,” says Franklin. “But there are two possible flaws: People make mistakes, and technology sometimes fails.”
The 2003 blackout was traced to just those problems: tree branches in Ohio had been allowed to grow too close to power lines, and a software bug kept operators from being aware of the danger. So when an increase in electrical load caused the lines to heat up, expand, and sag into the trees, they were automatically taken out of service, and their load shifted onto neighboring lines. Because the computers never alerted operators to the trouble, no contingency measures were put in place, and the failure raced from one system to the next.
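That runaway pattern can also be sketched as a toy model: a few lines, each with a capacity; when one trips, its load piles onto the survivors, and anything pushed past its limit trips in turn. The numbers below are invented, and real power flows follow the laws of physics rather than an even split, but the snowball effect is the same.

```python
# Toy cascading-failure model (illustrative only). Each line has a flow and a
# capacity; when a line trips, its flow is split evenly among the survivors,
# which may overload them and trip them too. Real grids redistribute power
# according to physics, not an even split -- this only shows the snowball.

lines = {   # line name: [flow in MW, capacity in MW], all values invented
    "A": [900, 1000],
    "B": [850, 1000],
    "C": [800, 1000],
    "D": [700, 1000],
}

def trip(name, lines):
    """Take a line out of service and push its flow onto the survivors."""
    flow, _ = lines.pop(name)
    survivors = list(lines)
    for other in survivors:
        lines[other][0] += flow / len(survivors)
    print(f"{name} trips, shifting {flow:.0f} MW onto {len(survivors)} remaining line(s)")

trip("A", lines)   # the initial failure: line A sags into a tree and trips

# The cascade: keep tripping any line that is now past its capacity.
overloaded = [n for n, (flow, cap) in lines.items() if flow > cap]
while overloaded:
    trip(overloaded[0], lines)
    overloaded = [n for n, (flow, cap) in lines.items() if flow > cap]

print("Lines still in service:", ", ".join(lines) if lines else "none -- blackout")
```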
A vast February 2008 blackout in Florida was also traced back to human error. A field engineer mistakenly disabled two relays protecting a transmission line at a substation, removing two layers of protection at once. When a fault occurred on the unprotected line, the entire substation went down, cutting off the emergency power supply to a nuclear power plant. (Even though nuclear plants generate their own auxiliary power, they are also required to have an outside source of emergency power; if that source is lost, the plant must shut down.) The rippling failures then forced a second nuclear plant to shut down, and suddenly the moon was one of the few things shining over Miami.
Even though the Florida blackout was “an example of the system doing exactly what it is supposed to do after the initial human error,” says Franklin, both recent failures had the same basic cause: “A unique set of circumstances happened simultaneously.” As long as humans are error-prone and machines interact in unpredictable ways, it seems failure is always an option.