The Disasters of the Challenger and Columbia Space Shuttles:-
The Columbia disaster was the second fatal accident in the Space Shuttle program, after the 1986 breakup of Challenger soon after liftoff. During the launch of STS-107, Columbia's 28th mission, a piece of foam insulation broke off from the Space Shuttle's external tank and struck the left wing of the orbiter.
Space Shuttle Columbia Launch:-
The Columbia’s 28th space mission, designated STS-107, was originally scheduled to launch on January 11, 2001, but was delayed numerous times for a variety of reasons over nearly two years. Columbia finally launched on January 16, 2003, with a crew of seven.
Eighty seconds into the launch, a piece of foam insulation broke off from the shuttle’s propellant tank and struck the leading edge of the shuttle’s left wing. The damage went unrepaired, and when Columbia re-entered the atmosphere on February 1, 2003, hot gases penetrated the damaged wing and the orbiter broke apart over Texas, killing all seven crew members. Residents in the area heard a loud boom and saw streaks of smoke in the sky. Debris and the remains of the crew were found in more than 2,000 locations across East Texas, Arkansas, and Louisiana. Making the tragedy even worse, two pilots aboard a search helicopter were killed in a crash while looking for debris.
Columbia Disaster Investigation:-
In August 2003, the Columbia Accident Investigation Board issued a report concluding that it would have been possible either for the Columbia crew to repair the damage to the wing or for the crew to be rescued from the shuttle.
The Columbia could have stayed in orbit until February 15, and the already planned launch of the shuttle Atlantis could have been moved up to as early as February 10, leaving a short window for repairing the wing or getting the crew off the Columbia.
In the aftermath of the Columbia disaster, the space shuttle program was grounded until July 26, 2005, when the space shuttle Discovery was launched on the program’s 114th mission. In July 2011, the space shuttle program, which began with the Columbia’s first mission in 1981, completed its final (and 135th) mission, flown by Atlantis.
Failure due to fatigue:-
When a material undergoes permanent deformation from exposure to elevated temperatures or constant loading, its functionality can become impaired. This time-dependent plastic deformation of a material is known as creep. Stress and temperature are both major factors in the rate of creep. For a design to be considered safe, the deformation due to creep must be much less than the strain at which failure occurs. Once the applied static load causes the specimen to surpass its yield point, the specimen begins permanent, or plastic, deformation.
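To make the roles of stress and temperature concrete, the following is a minimal Python sketch (not from the source) that estimates steady-state creep strain with the standard Norton power-law model, creep rate = A * sigma^n * exp(-Q / (R * T)). The material constants A, n, and Q are hypothetical placeholders rather than data for any real alloy, so the numbers only illustrate the trend.

```python
# Minimal sketch of Norton power-law creep; all material constants are hypothetical.
import math

R = 8.314  # universal gas constant, J/(mol*K)

def steady_state_creep_rate(sigma_mpa, temp_k, A=1e-10, n=5.0, Q=300e3):
    """Steady-state creep strain rate (1/s) for a given stress (MPa) and temperature (K)."""
    return A * sigma_mpa ** n * math.exp(-Q / (R * temp_k))

def creep_strain(sigma_mpa, temp_k, time_s):
    """Creep strain accumulated over a service time, assuming secondary creep dominates."""
    return steady_state_creep_rate(sigma_mpa, temp_k) * time_s

if __name__ == "__main__":
    # Same stress, two operating temperatures: the hotter case creeps far faster.
    for temp in (800.0, 900.0):
        eps = creep_strain(sigma_mpa=100.0, temp_k=temp, time_s=10 * 365 * 24 * 3600)
        print(f"T = {temp:.0f} K: creep strain over 10 years ~ {eps:.3e}")
```

A design check in this spirit compares the predicted creep strain with the strain at which failure occurs and requires a comfortable margin between the two.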
In mechanical design, most failures are due to time-varying, or dynamic, loads applied to a system. This phenomenon is known as fatigue failure. Fatigue is the weakening of a material caused by variations of stress that are repeatedly applied to it. For example, when a rubber band is stretched to a certain length without breaking (i.e. without surpassing its yield stress), it returns to its original form after release; however, repeatedly stretching the rubber band with the same force thousands of times creates micro-cracks in the band, which eventually cause it to snap. The same principle applies to engineering materials such as metals.
Fatigue failure always begins at a crack, which may form over time or may be introduced by the manufacturing process used. The three stages of fatigue failure are:
1. Crack initiation, in which a microscopic crack forms at a point of concentrated stress;
2. Crack propagation, in which the crack grows a small amount with each load cycle;
3. Sudden fracture, in which the remaining cross-section can no longer carry the load and fails rapidly.
Note that fatigue does not imply that the strength of the material is lessened after failure; the term originally referred to the notion that a material becomes "tired" after cyclic loading.
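As a worked illustration of how stresses below the yield stress can still cause failure when repeated many times, the sketch below (an illustrative assumption, not part of the source) estimates fatigue life with Basquin's high-cycle relation, sigma_a = sigma_f' * (2 * N_f)^b. The coefficients sigma_f' and b are hypothetical placeholders; real values come from fatigue testing of the specific material.

```python
# Minimal sketch of a Basquin-type fatigue life estimate; coefficients are hypothetical.

def cycles_to_failure(stress_amplitude_mpa, sigma_f_prime=900.0, b=-0.09):
    """Estimate fatigue life N_f (load cycles) by solving Basquin's relation:
    sigma_a = sigma_f' * (2 * N_f)**b  ->  N_f = 0.5 * (sigma_a / sigma_f')**(1 / b)."""
    return 0.5 * (stress_amplitude_mpa / sigma_f_prime) ** (1.0 / b)

if __name__ == "__main__":
    # Each amplitude is well below a typical yield strength, yet repeated application
    # still leads to failure, and a modest increase in amplitude shortens the
    # predicted life dramatically.
    for amp in (200.0, 250.0, 300.0):
        print(f"sigma_a = {amp:.0f} MPa -> N_f ~ {cycles_to_failure(amp):.2e} cycles")
```

This is the same intuition as the rubber-band example above: what matters is not a single load but the amplitude and the number of repeated load cycles.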
Failure due to software:-
Software has played a role in many high-profile engineering disasters; well-documented examples include the Therac-25 radiation-therapy overdoses, the loss of Ariane 5 Flight 501 to an unhandled arithmetic overflow, and the loss of the Mars Climate Orbiter due to a unit-conversion error.
Failure due to static loading:-
Static loading occurs when a force is applied slowly to an object or structure. Static load tests such as tensile tests, bending tests, and torsion tests help determine the maximum loads that a design can withstand without permanent deformation or failure. Tensile testing is commonly used to obtain a stress-strain curve, from which the yield strength and ultimate strength of a test specimen can be determined.
[Figure: Tensile testing on a composite specimen]
The specimen is stretched slowly in tension until it breaks, while the load and the elongation across the gage length are continuously monitored. A sample subjected to a tensile test can typically withstand stresses higher than its yield stress without breaking. At a certain point, however, the sample breaks into two pieces, because the microscopic cracks produced by yielding spread to a macroscopic scale. The stress at the point of complete breakage is called the material’s ultimate tensile strength. The result of the test is a stress-strain curve describing the material's behavior under static loading. From this curve, the yield strength is identified as the point at which the material begins to deform more readily under the applied stress and its rate of deformation increases.
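As a rough illustration of how these quantities are read off the test record, the following sketch (with made-up data, not results from the source) takes a small engineering stress-strain data set and reports the ultimate tensile strength as the peak stress and an approximate 0.2%-offset yield strength.

```python
# Minimal sketch: extract UTS and an approximate 0.2%-offset yield strength
# from hypothetical tensile-test data.
import numpy as np

# Hypothetical engineering stress-strain data (strain dimensionless, stress in MPa).
strain = np.array([0.000, 0.001, 0.002, 0.003, 0.005, 0.010, 0.020,
                   0.050, 0.100, 0.150, 0.200])
stress = np.array([0.0,   200.0, 350.0, 420.0, 450.0, 470.0, 500.0,
                   540.0, 560.0, 555.0, 520.0])

# Ultimate tensile strength: the highest stress reached during the test.
uts = stress.max()

# 0.2%-offset yield strength: where the curve meets a line with the initial
# elastic slope, shifted by 0.002 strain. Here we take the first measured
# point that falls below that offset line (a coarse approximation).
E = (stress[1] - stress[0]) / (strain[1] - strain[0])  # elastic modulus estimate
offset_line = E * (strain - 0.002)
idx = int(np.argmax(stress <= offset_line))
yield_strength = stress[idx]

print(f"Estimated elastic modulus ~ {E / 1000:.0f} GPa")
print(f"0.2%-offset yield strength ~ {yield_strength:.0f} MPa")
print(f"Ultimate tensile strength ~ {uts:.0f} MPa")
```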
Other Tragedies:-
The following discusses another famous tragedy, the Deepwater Horizon disaster, and the broader problem of ascribing responsibility for such events.
The Problem of Responsibility Ascription in Engineering Contexts:-
On April 20, 2010, the drilling rig Deepwater Horizon exploded as a result of a wellhead blowout, killing 11 platform workers and causing an oil spill in the Gulf of Mexico. Causes of the accident suggested by the media include a failed blowout preventer (which was meant to stop the flow after a blowout), improper cementing of the well, and a lack of regulatory oversight. After the accident, several attempts to stop the leak failed. Almost three months later, on July 15, a cap was put on the well, and in August BP announced that a so-called ‘static kill’ procedure (pumping mud into the well) had been successful and had stopped the oil. The oil spill damaged marine and wildlife habitats as well as fishing and other economic activities. The case invokes images of other offshore accidents such as Piper Alpha in 1988 (the explosion of a platform in the North Sea) and the Exxon Valdez oil spill in 1989 (a tanker that ran aground near Alaska).
In the wake of this disaster, moral responsibility has been mainly ascribed to individuals and collectives treated as individuals: BP (the oil company), Tony Hayward (BP), Barack Obama, the US Government (the regulator), and several (other) companies. This way of thinking about moral responsibility is understandable and at first sight seems entirely justified. However, this approach is also exemplary of some significant limitations of our traditional theories of moral responsibility.
The conditions for attributing moral responsibility prescribed by traditional theories make demands on agency, control, and knowledge that are seldom met in engineering and—more generally speaking—technological action. It is usually assumed that responsibility is individual, and, in line with Aristotle’s (1925) discussion in the Nicomachean Ethics (Book III, 1109b30-1111b5), a distinction is made between two negative conditions for ascribing moral responsibility: one should not be forced to do something (the control condition) and one must not be ignorant of what one is doing (the knowledge condition). These conditions are often problematic. For example, in the literature there have been discussions about how tenable the control condition is, given the influence of character, circumstances, and consequences (Nagel 1979) and given that persons sometimes lack attention to crucial elements of the situation, exercise poor judgment, or lack moral insight and imagination (Sher 2006). However, in the case of technological action it appears even more difficult to meet the conditions. Let me give some reasons why the reality of technological action and experience, for example in engineering, is far removed from what the traditional approach and criteria assume and require.
First, both in philosophical analysis and in practice it is often assumed that responsibility is mainly or exclusively individual. In engineering ethics, philosophers focus on the application of moral principles to individual actions, for example by means of codes of ethics (Davis 1991), or on the virtues of individual engineers (Harris 2008). And in our legal systems individuals (and collectives like companies treated as individuals) are held legally responsible. But technological action is often distributed and collective rather than individual (Lenk and Maring 2001, p. 100). As Ger Wackers and I have shown for the case of the Snorre A gas blowout (a near disaster with an oil and gas production platform in the North Sea), responsibility should be understood as distributed between various actors at various levels and times (Coeckelbergh and Wackers 2007). Recently I suggested in a contribution to the Guardian that this is also true in the Deepwater Horizon case: responsible actors include many companies involved, the financial actors, the regulators and politicians, and—last but not least—citizens and consumers who depend on oil and support the current regulatory frameworks. However, our traditional practices of responsibility ascription are ill-equipped to deal with such a broad distribution of responsibility. It is easier to blame or prosecute only individuals directly involved and clearly defined and visible organisations and institutions like BP and the US Government.
Second, regardless of our (individual) intentions and (individual) capacity for self-control, we usually lack full control of the technology and its consequences. We may enjoy external, negative freedom in the sense that there is no-one who tells us what to do (we sometimes wish there was someone, since the freedom is hard to bear) and we may also have internal freedom (control over our desires). But even if we have no master and even if we master ourselves, the major problem is that we cannot control the consequences of technological action; they escape the boundaries of what we and others intend and can control. In cases where possibilities to control are very limited, we might decide not to develop or use the technology for that reason. However, if and when we use it (for instance because we already depend on it for our way of living), we want to be able to ascribe responsibility for technological action. For example, it is likely that in the Deepwater Horizon case most people who contributed to the disaster were not ‘forced’ to do what they did; yet the cumulative outcome of their actions (or the outcomes of failures to act in the right way), combined with circumstances they did not control, resulted in the disaster. Moreover, it appears that as an oil consumer I have little control over the consequences of my consumption. As with food and (other) mass-produced consumer goods, we often have no idea where the products come from, how they are produced, which risks and costs that way of production incurs, and so on. Furthermore, once the blow-out accident happened, there was a general failure to control its consequences. Now failure to control is an instance of wrongdoing if one has the possibility to control. But what if this condition is only partly fulfilled, and there is very little space for action? Does that imply that no-one is responsible? How meaningful is the control condition anyway with regard to technological action?
Third, as the examples show, the control condition depends on knowledge: we lack knowledge and are uncertain about the consequences of the technology. This uncertainty is not only due to the unpredictability of the future as such, but also to the scale and complexity of the technological-social world in which we act and which we shape by our actions. In previous work I argued that in engineering contexts moral responsibility is ascribed under epistemic conditions of opacity: between the actions of an engineer and the eventual consequences of her actions lies a world of relationships, people, things, time, and space. This lack of epistemic transparency makes it difficult to define the nature and scope of technological action. For instance, we can foresee some potential consequences of technological action but not all potential consequences. In the Deepwater Horizon case, it is unclear whether people could have foreseen (1) that the blowout preventer would not function under these circumstances, (2) that initial attempts to stop the blowout would fail, and (3) all consequences of that failure and those actions. Moreover, in technological action it is hard to sharply distinguish between our own contributions and those of others, and between our actions and (bad) luck (Coeckelbergh 2010a). What happens (e.g. an accident) is the outcome of many actions and events—some of which cannot be controlled. This amounts to saying that in a very real sense ‘we don’t know what we’re doing’ when it comes to technological action and engineering practice. Of course, as individual users, designers, etc. we know what we do in the sense that we know our tasks, roles, and direct actions. But how these contribute to the larger technological action and engineering practice is not entirely clear to any single individual. Again, it seems that if we use the traditional criteria, it is difficult to ascribe moral responsibility. In the Gulf oil spill case, for example, most citizens who voted for politicians who maintained a deficient regulatory framework seemed unaware of the risks they created for the environment.
In the remainder of this paper, I analyse these problems concerning responsibility ascription by using the concept of tragedy. Responding to the philosophical tradition, I will first distinguish between different meanings of tragedy and its relation to technology. Then I will construct a Kierkegaardian notion of moral responsibility that accounts for experiences of the tragic in technological culture and engineering contexts. Thus, I not only introduce the idea of tragedy into thinking about engineering, but also give it a new interpretation. In this way I hope to contribute to exploring new ways of ascribing responsibility in engineering contexts and hence to avoiding a fatalist or defeatist response to the problems with meeting the conditions of responsibility. In the course of my argument I will provide examples relating to offshore engineering, in particular the Deepwater Horizon case.
Tragedy and Technology:-
In daily speech ‘tragedy’ usually means ‘terrible’, ‘awful’, or ‘catastrophic’. For instance, the accident with Deepwater Horizon can be called ‘tragic’ in the sense that people died and were injured and that the environment was damaged. In this paper, however, I use the term ‘tragic’ in a sense that refers back to ancient Greek tragedy and its reception in the history of philosophy.
There is a tradition in philosophy which understands modern culture as essentially untragic. It is claimed that in our obsession with rationality and control we have lost our sense of fate. Steiner thought that in modern times we succeeded in destroying our sense of tragedy (Steiner 1961). Technology, it appears, is the very opposite of a tragic sense: it is a means to tame fate, as de Mul phrased it (de Mul 2006). Steiner stands in a tradition of thought that turned to ancient Greek tragedy as a remedy for modern non-tragic culture and technology. Nietzsche and Heidegger also argued that we need to recover our sense of tragedy as a cure for our obsession with control and mastery of nature—our obsession with technology.
If this were true, then by definition oil production, as a technology, could easily be interpreted as part of a ‘sick’ technological culture that exploits nature and we should ‘return to nature’ and to the tragic understanding of life. In this view, accidents such as Deepwater Horizon could be interpreted as a kind of divine punishment for what the ancient Greeks called hybris: technology displays arrogance and lack of humility. However, I do not adopt this conception of tragedy and its relation to technology for at least the following reasons.
First, these views are too Romantic in assuming that we can make a strict distinction between, on the one hand, ‘nature’ that functions ‘on its own terms’ and, on the other hand, human culture and human experience. Instead, we have always transformed nature, and in this sense Greek culture was as much ‘technological’ as it was tragic. Modern oil production is different from ancient means of energy production, of course, but in a sense nature has always been used as an energy resource. We can (and should) discuss what kind of technology and energy we want, of course, but we cannot ‘switch off’ technology. Technology is an important aspect of what we do and what we are: we have always been technological beings, and to stop being technological would be to stop being human altogether.
Second, it is not clear that today we have lost our sense of tragedy. Perhaps it has not been promoted in modern culture, but as de Mul has argued, technology can give us a sense of the tragic as well (de Mul 2006). In his work, de Mul uses the stories of Prometheus (the tragedy Prometheus Unbound) and of Frankenstein’s monster. Technology appears to us as something we created but which then escapes our (full) control. For example, the Deepwater Horizon case is not only tragic in the common sense noted above, but is also tragic in a deeper way, since the disaster and the failing human efforts to cope with it reveal how little control we have over the consequences of our technological actions. Disasters such as this show how vulnerable we and our technological systems are, and how dependent we are on our technologies and our natural environment.
Third, and most important for my following argument, my conception of tragedy rejects Nietzsche’s, Heidegger’s, and de Mul’s fatalistic interpretation of tragedy. To (re)discover the tragic in technology does not imply that we have to set up technology as an autonomous force, a new god or demon which keeps us in the chains of fate and which we have to accept. Technological practice already includes the struggle and attempts to escape fate at two ‘moments’ or levels. As de Mul has argued, technology is itself a means to tame fate (we try to gain mastery of nature). But in addition, our attempts to perceive, assess, and reduce the risks associated with that technology are secondary attempts at mastery: we try to gain mastery of the technology (not only of nature). Thus, when I call experience in technological culture and engineering ‘tragic’ I do not refer to (the acceptance of) fate but to the dynamics between, on the one hand, the experience of fate, luck, and contingency and, on the other hand, how we respond to these events and experiences as beings that are free and in control to some extent. For example, the responses to the Deepwater Horizon case (efforts to gain control over the well, efforts to contain the oil spill, political actions, etc.) and the struggles and failures related to them are as much part of the ‘tragedy’ as the more ‘fatalistic’ (experiences of the) initial events and their direct consequences. Below I will construct a view of the tragic that does not resolve the tension between (experiences of) freedom and (experiences of) fate but instead identifies this tension as the core of tragedy and tragic experience.
Finally, in contrast to treating technology as one thing (as Heidegger did: technology as an attitude or way of seeing the world), we should break up the terms ‘technology’ and ‘technological culture’. We should not only say something about the tragic character of technological culture as a whole, but also explore tragic experiences in concrete technological and engineering practices such as oil production. In the following pages I sketch a framework that can guide this exploration and draw conclusions for thinking about moral responsibility in engineering and other technological practices.