The Ethical Lessons of Deepwater
For engineers, playing it safe is never the easy way out. Early December 2010 saw the release of two reports from groups tasked with deconstructing the deadly and devastating Deepwater Horizon oil spill in the Gulf of Mexico.
The National Commission on the BP Deepwater Horizon Oil Spill and Offshore Drilling, appointed by President Obama, and the Deepwater Horizon Study Group (DHSG), formed by members of the Center for Catastrophic Risk Management (CCRM) at UC Berkeley, pointed to many of the same failures.
The DHSG’s 60 members, among them university professors, accident investigators, petroleum engineers, social scientists, environmental advocates, and directors of research centers, went one step further, however, directly linking mismanagement by the well’s owner, BP, to its drive for profit. "Analysis of the available evidence indicates that when given the opportunity to save time and money—and make money—tradeoffs were made for the certain thing—production—because there were perceived to be no downsides associated with the uncertain thing—failure caused by the lack of sufficient protection."
With oil and gas development set to continue in the deep waters of the Gulf, the Arctic, and other frontier areas, the DHSG also contends that such exploration and production involve "likelihoods and consequences of catastrophic failures, that are several orders of magnitude greater than previously confronted."
Pondering the Worst Case
The prospect of far more severe failures is chilling. Yet Lehigh University Professor John Kenly Smith, a chemical engineer who specializes in the history of technology, believes that forcing stakeholders to ponder the absolute worst is the only way to grapple with what’s really at stake. "If you are going to work in an environment where it’s physically impossible to go down there and get your hands on the technology, you really have to think of the unthinkable, and nobody wants to do that," says Smith. "I’ll bet every day on that platform there were engineers thinking, ‘If we have a blowout on this thing, what will we do?’"
What have we learned in the months since the worst that could happen, in fact, did? Perhaps not much that’s new, says Smith, who believes some of the safety failures that led to the disaster stem from what’s predictably human and imperfect in all of us. What’s also clear is that engineers who design and maintain complex systems are in a tough spot. Here, Smith cites a few lessons of the spill:
1. Numbers can be deceiving. "There’s tremendous pressure in the corporate and scientific worlds to convert uncertainty to risk," says Smith. Take an uncertainty, assign it a probability, then run it through a model to estimate how likely a failure might be (the sketch after this list works through that arithmetic). The problem, though, says Smith, who during his career in industry investigated a number of serious job-related accidents, is that "999 times, people get away with doing unsafe things, and it’s only the 1,000th time that something horrible happens."
2. Safety has to be hardwired into a firm’s SOP. Smith cites the success of companies like DuPont (the subject of a book he coauthored, Science and Corporate Strategy: DuPont R&D, 1902-1980) in rewarding the teams with the best safety records. "You have to really drill it into people and create counterincentives that make them stop and say, ‘Will I cost everyone their prize if I get hurt?’" A hard-core safety-first stand can also relieve the tension between the line functions that bring in the money and the staff people (i.e., the engineers who raise the red flags). This is where ethics come in, says Smith: "The staff functions and engineers need to have the clout to make themselves heard."
3. Simplicity has its virtues (i.e., technical controls can create a false sense of security). The jury is still out on why the Deepwater Horizon blowout preventer failed. Even if the results of the investigations lead to future fixes, blind faith in technology can be dangerous, Smith warns: "When facing a problem there’s a tendency to add equipment like a blowout preventer, and think ‘Problem solved.’ In this case, it didn’t work." Additionally, he cautions, adding complexity to a system introduces more ways for its pieces to interact and produce unpredictable outcomes.
4. Think broadly. As the saying goes: "It’s not enough to guard against the failures that have already occurred. Those that haven’t happened yet are the ones to fear most." Organizations need to prepare for the unthinkable and, when it happens, go beyond devising ways to keep that particular failure from happening again. "Engineers are taught to be problem solvers rather than broad thinkers," says Smith. "When something goes wrong, the focus should be on, ‘What was the thinking that got us into this position?’"
5. Know where you work. Engineers, says Smith, have always faced one central dilemma: "Are we independent professionals who provide objective assessments based on our training and ethics? Or are we employees who do what the boss says?" Clearly society needs the former, and because of that, he contends, it’s important to know an organization’s history before you join it: "As a young engineer buried down at the bottom of an org chart, you might not see much or really know what a place is about." But studying up on who’s running the company and the values it was founded upon can provide important clues about what to expect when it’s time to take a tough stand.
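Smith’s 1-in-1,000 point from the first lesson is easy to make concrete. The short Python sketch below is an illustration added here, not part of Smith’s or the DHSG’s analysis: the 1-in-1,000 failure chance comes from his quote, and the repetition counts are hypothetical. It shows how a shortcut that almost always goes unpunished becomes a near-certain failure once it is repeated often enough.

# A minimal sketch of the arithmetic behind Smith's "999 times out of 1,000" point.
# Assumptions: each unsafe shortcut fails independently with probability 1/1,000
# (Smith's figure); the repetition counts below are hypothetical.

p_failure = 1.0 / 1000.0  # chance that a single unsafe shortcut ends in disaster

for n_repetitions in (10, 100, 1000, 5000):
    # Chance of at least one failure = 1 - chance that every repetition goes fine
    p_at_least_one = 1.0 - (1.0 - p_failure) ** n_repetitions
    print(f"{n_repetitions:>5} repetitions -> {p_at_least_one:.1%} chance of at least one failure")

Run as written, the odds climb from about 1 percent at 10 repetitions to roughly 63 percent at 1,000 and more than 99 percent at 5,000: the quantitative face of getting away with it 999 times.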
Marion Hart is an independent writer.