Human and computer failures, not the MSBlast worm, were to blame for last year's power cut in North America, according to an official investigation.
A US and Canadian task force investigating the August 2003 blackout that cut power to an estimated 50 million North Americans published its final report on Monday, finding that institutional, human and computer failures -- not the MSBlast worm -- led to the outage.
The 14 August blackout hit three days after the worm started spreading, leading many to speculate that the quickly propagating program caused or contributed to the cascading failures that ultimately darkened New York, Toronto, Detroit and other areas.
Although several computer systems failed -- in particular, a server and backup that ran software for keeping track of the status of a major power network -- the Security Working Group leg of the Federal Energy Regulatory Commission (FERC)'s US-Canada Power System Outage Task Force "found no evidence that malicious actors caused or contributed to the power outage, nor is there evidence that worms or viruses circulating the Internet at the time of the power outage had an effect on power generation and delivery systems of the companies directly involved in the power outage," the report said.
The MSBlast, or Blaster, worm started spreading on 11 August, using a vulnerability in a common Microsoft Windows networking feature. The latest information from Microsoft indicates that as many as 16 million computers were infected.
In addition to ruling out MSBlast as the cause, the Security Working Group's report also stressed that there was no evidence that a cyberattack by al-Qaida, which had reportedly claimed responsibility for the attack after the fact, had caused the outage.
The finding essentially reiterates the conclusions of the task force's interim report, published in November.
System failures and human error at both the Midwest ISO and at FirstEnergy, a group of seven electric utilities that operate in the US Northeast and Midwest, were the primary causes of the blackout, according to the report.
An early-warning system at the Midwest ISO could have alerted engineers, but it had been malfunctioning and had been left switched off by an engineer who then went to lunch. Meanwhile, another such system at FirstEnergy, known as the Alarm and Event Processing Routine, failed along with its backup server, a fact that wasn't discovered until many hours later. Those system failures, combined with three major line outages caused by fallen tree limbs, resulted in the regional blackout, the report concluded.
While the system failures didn't trigger the initial line outages, they prevented FirstEnergy from adequately responding to those outages and allowed the blackout to spread beyond the company's own system.
US Energy Secretary Spencer Abraham described the problem in a statement issued last November, when the interim report was published.
"Because FirstEnergy's monitoring equipment wasn't telling them about the downed lines, the control room operators took no action -- such as shedding load -- which could have kept the problem from growing and becoming too large to control," Abraham said in the statement.
The Security Working Group believes that its investigation, which drew on interviews, telephone transcripts, and law enforcement and intelligence information, gave it a complete picture of what happened. However, the group decided not to analyse the logs of network devices, firewalls and intrusion detection systems, which could have provided further evidence of any network attacks coinciding with the outage.
The report recommends that US energy companies share alert and vulnerability information, create a group to improve the security of control systems, and adopt a set of interim computer-security regulations issued by FERC.
FERC published the final report on its Web site on Monday. ZDNet News