> See, for instance, the space shuttle O-ring incident

That wasn't really a result of an alignment of small weaknesses though. One of the reasons the whole thing was of particular interest was Feynman's withering appendix to the report, where he pointed out that the management team wasn't listening to the engineers' assessments of the safety of the venture and was making judgement calls like claiming that a component that had failed in testing was safe.

If a situation is being managed by people who can't assess technical risk, the failures aren't the result of many small weaknesses aligning. It wasn't an alignment of small failures so much as a component that was well understood to be a likely point of failure actually failing, driven by poor management.

> Fukushima

This one too. Wasn't the reactor hit by a wave that was outside design tolerance? My memory is that they were hit by an earthquake that was outside design spec, then a tsunami that was outside design spec. That isn't a number of small weaknesses coming together. If you hit something with forces outside design spec then it might break. Not much of a mystery there. From a similar perspective, if you design structures for a 1-in-500-year storm, then roughly 1 in 500 of them can be expected to fail to a storm in any given year. No small alignment of circumstances needed.
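
To put a rough number on that intuition (a back-of-the-envelope sketch in Python, assuming independent events and a hypothetical fleet of 500 such structures):

  # Chance that at least one of n structures, each designed for a
  # 1-in-500-year event, sees such an event in a given year.
  # Assumes independence and a constant 1/500 annual probability.
  p_annual = 1 / 500
  n_structures = 500        # hypothetical fleet size
  p_at_least_one = 1 - (1 - p_annual) ** n_structures
  print(f"{p_at_least_one:.0%}")   # roughly 63%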

In reality the "Swiss cheese" holes for major accidents often turn out to be large holes that were thought to be small at the time.

> [Fukushima] No small alignment of circumstances needed.

The tsunami is what initiated the accident, but the consequences were so severe precisely because of decades of bad decisions, many of which would have been assumed to be minor decisions at the time they were made. E.g.

- Setting the design earthquake and tsunami threat too low

- Not reassessing the design earthquake and tsunami threat in light of experience

- At a national level, not identifying that different plants were being built to different design tsunami threats (an otherwise similar plant avoided damage by virtue of its taller seawall)

- At a national level, having too much trust in nuclear power industry companies, and not reconsidering that confidence after a number of serious incidents

- Design locations of emergency equipment in the plant complex (e.g. putting pumps and generators needed for emergency cooling in areas that would flood)

- Not reassessing the locations and types of emergency equipment in the plant (i.e. identifying that a flood of the complex could disable emergency cooling systems)

- At a company and national level, not having emergency plans to provide backup power and cooling flow to a damaged power plant

- At a company and national level, not having a clear hierarchy of control and objective during serious emergencies (e.g. not making/being able to make the prompt decision to start emergency cooling with sea water)

Many or all of these failures were necessary in combination for the accident to become the disaster it was. Remove just a few of those failures and the accident is prevented entirely (e.g. a taller seawall is built or retrofitted) or greatly reduced (e.g. the plant is still rendered inoperable but without multiple meltdowns and with minimal radioactive release).

I’m not sure why you think those are not a confluence of smaller events, or why something outside the design spec can’t be one of those factors. By “small,” I don’t mean trivial; I mean an event that by itself wouldn’t necessarily result in disaster. Perhaps I should have said “smaller” rather than “small.” With the O-rings, the cold and the pressure to launch on that particular day together created the confluence. With Fukushima, the earthquake knocked out main power for primary cooling. That would have been manageable, except the backup generators were then destroyed by the tsunami. It was not a case of a single big earthquake, whether inside or outside the design spec, knocking down the reactor building and releasing radiation.