Risk is not limited to the software project itself. Risks can occur after the software has been successfully developed and delivered to the customer. These risks are typically associated with the consequences of software failure in the field.
In the early days of computing, there was reluctance to use computers (and software) to control safety-critical processes such as nuclear reactors, aircraft flight control, weapons systems, and large-scale industrial processes. Although the probability of failure of a well-engineered system was small, an undetected fault in a computer-based control or monitoring system could result in enormous economic damage or, worse, significant human injury or loss of life. But the cost and functional benefits of computer-based control and monitoring far outweigh the risk. Today, computer hardware and software are used regularly to control safety-critical systems.
When software is used as part of a control system, complexity can increase by an order of magnitude or more. Subtle design faults induced by human error, the kind that can be uncovered and eliminated in conventional hardware-based control, become much more difficult to uncover when software is used.
Software safety and hazard analysis are software quality assurance activities that focus on the identification and assessment of potential hazards that may affect software negatively and cause an entire system to fail. If hazards can be identified early in the software engineering process, software design features can be specified that will either eliminate or control them.
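As a toy illustration of a design feature that controls an identified hazard, consider a software interlock in a temperature-control loop. The function and its parameter names below are hypothetical, not drawn from the text; the sketch simply shows how a hazard identified during analysis (overheating) can map to an explicit control in the design:

```python
def safe_heater_command(requested_power, temperature, temp_limit=90.0):
    """Hypothetical hazard control: clamp heater output when a limit is exceeded.

    Hazard (from analysis): runaway temperature causes physical damage.
    Control (in design): force output to zero at or above temp_limit,
    and bound all commands to the actuator's valid range [0, 100].
    """
    if temperature >= temp_limit:
        return 0.0  # interlock trips: shut the heater down
    # Clamp the command so an out-of-range request cannot reach hardware.
    return max(0.0, min(requested_power, 100.0))
```

The point is not the arithmetic but the traceability: each hazard identified early becomes a concrete, testable requirement on the software design.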