Management Of Technological Risk

1. Risk and Probability

The link to probability and utility theory means that risk management tends toward the calculative and predictive. Without this abstraction, in which future impacts are calculated and codified, the management of risk remains indistinct from intuition; and without successful prediction and control, it remains indistinct from astrology. Yet not all risks are amenable to measurement. For example, many risks associated with investment in nuclear power, such as the costs of protestor action or environmental despoliation, are not readily captured by quantified analysis. Moreover, even where risks are quantified, their management is not necessarily successful. Crises such as the U.K. foot-and-mouth epidemic, or corporate collapses such as Barings Bank (which had a supposedly robust risk management system in place), demonstrate not only that experts and systematic checks and balances can fail to prevent risks from occurring, but that they can actually escalate them.

Some authors argue that, at its broadest level, the problem of enduring risk is endemic to all modern societies. It seems that the motion, flux, and reach of transactional and organized activity overwhelm the efforts of experts and managers to predict and control events. There are always unintended side effects, both with manufactured, human-made risks, such as IT viruses, and increasingly with natural risks compounded by human influence, as with the damage associated with global warming. Organizations inevitably incur the complexities of their own production, until they are left, like Goethe's sorcerer's apprentice, awash in the alienating product of their own actions.

An example of a societal risk from high-risk technology that affects organizational environments is the nuclear, biochemical, or nanotechnology business. Charles Perrow and Scott Sagan contrast high reliability theory, which predicts that redundancy, controls, safety checks, and learning mechanisms can mitigate risk, with normal accidents theory, which argues that tightly coupled and increasingly complex systems cannot be completely controlled by boundedly rational and politically motivated actors, and that small glitches can even escalate into major failures. The thrust of the reasoning is that failure can arise “normally” in systems by virtue of the interdependent nature of the components of the system itself, the conclusion being that as technologies become more complex, the probability of tragic results increases. This logic has subsequently been extended to other organizational endeavors such as information technology, computer viruses, health care, large-scale construction projects, space exploration, and highway design and regulation.
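The claim that complexity raises the probability of failure can be made concrete with a simple calculation. The sketch below is an illustration added here, not an analysis from the source: it assumes independent components and hypothetical failure rates, and shows only that even highly reliable parts compound into a non-trivial chance of some failure as the number of interacting parts grows. Normal accidents theory adds that tight coupling makes the picture worse, because failures interact rather than remain isolated.

```python
# Illustrative sketch (not from the source). All numbers are hypothetical
# placeholders chosen only to show how per-component reliability compounds
# as system complexity grows.

def system_failure_probability(p_component: float, n_components: int) -> float:
    """Probability that at least one of n independent components fails,
    each with per-period failure probability p_component."""
    return 1.0 - (1.0 - p_component) ** n_components

# Even a 0.1% per-component failure rate yields a substantial chance of
# some failure once thousands of parts interact.
for n in (10, 100, 1000):
    print(n, round(system_failure_probability(0.001, n), 4))
# prints roughly: 10 -> 0.01, 100 -> 0.0952, 1000 -> 0.6323
```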

Risk is often defined as the probability of adverse outcomes multiplied by their consequences. Technically, the word “risk” refers to situations in which a decision is made whose consequences depend on the outcomes of future events having known probabilities. For example, the decision to build or relicense a civilian nuclear power plant involves comparing the risks and benefits associated with this energy source to those of other sources (e.g., coal, hydroelectric, natural gas, wind) by assessing the validity of certain assumptions about future impacts on human health, the environment, energy security, and so forth. As such, risk is not simply equivalent to hazard, but rather the possible damage that may result from one's decisions or ...
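For illustration only, the multiplication of probability by consequence can be read as an expected-loss calculation summed over adverse scenarios. In the sketch below, the option names and all numbers are hypothetical placeholders invented to show the arithmetic of comparing alternatives; they are not estimates drawn from the source.

```python
# Minimal sketch of the "probability x consequence" definition of risk,
# applied to a choice among alternatives. Figures are illustrative only.

def expected_loss(scenarios):
    """Sum of probability * consequence over adverse scenarios."""
    return sum(p * consequence for p, consequence in scenarios)

options = {
    # option: list of (probability, consequence in arbitrary cost units)
    "nuclear":     [(1e-5, 1_000_000), (0.01, 500)],
    "natural_gas": [(0.05, 2_000), (0.2, 50)],
    "wind":        [(0.1, 100)],
}

for name, scenarios in options.items():
    print(name, expected_loss(scenarios))
# nuclear 15.0, natural_gas 110.0, wind 10.0 (illustrative units)
```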