Enterprise Risk Management Case Studies: Heroes and Zeros

By Andy Marker | April 7, 2021


We’ve compiled more than 20 case studies of enterprise risk management programs that illustrate how companies can prevent significant losses yet take risks with more confidence.   

Included on this page, you’ll find case studies and examples by industry, case studies of major risk scenarios (and company responses), and examples of ERM successes and failures.

Enterprise Risk Management Examples and Case Studies

With enterprise risk management (ERM), companies assess potential risks that could derail strategic objectives and implement measures to minimize or avoid those risks. You can analyze examples (or case studies) of enterprise risk management to better understand the concept and how to properly execute it.

The collection of examples and case studies on this page illustrates common risk management scenarios by industry, principle, and degree of success. For a basic overview of enterprise risk management, including major types of risks, how to develop policies, and how to identify key risk indicators (KRIs), read “Enterprise Risk Management 101: Programs, Frameworks, and Advice from Experts.”

Enterprise Risk Management Framework Examples

An enterprise risk management framework is a system by which you assess and mitigate potential risks. The framework varies by industry, but most include roles and responsibilities, a methodology for risk identification, a risk appetite statement, risk prioritization, mitigation strategies, and monitoring and reporting.

To learn more about enterprise risk management and find examples of different frameworks, read our “Ultimate Guide to Enterprise Risk Management.”

Enterprise Risk Management Examples and Case Studies by Industry

Though every firm faces unique risks, those in the same industry often share similar risks. By understanding industry-wide common risks, you can create and implement response plans that offer your firm a competitive advantage.

Enterprise Risk Management Example in Banking

Toronto-headquartered TD Bank organizes its risk management around two pillars: a risk management framework and risk appetite statement. The enterprise risk framework defines the risks the bank faces and lays out risk management practices to identify, assess, and control risk. The risk appetite statement outlines the bank’s willingness to take on risk to achieve its growth objectives. Both pillars are overseen by the risk committee of the company’s board of directors.  

Risk management frameworks are an important part of the International Organization for Standardization’s 31000 standard (ISO 31000), first published in 2009 and updated since then. The standard provides universal guidelines for risk management programs.

Risk management frameworks also resulted from the efforts of the Committee of Sponsoring Organizations of the Treadway Commission (COSO). The group was formed to fight corporate fraud and included risk management as a dimension. 

Once TD completes the ERM framework, the bank moves on to the risk appetite statement.

The bank, which built a large U.S. presence through major acquisitions, determined that it will only take on risks that meet the following three criteria:

  • The risk fits the company’s strategy, and TD can understand and manage those risks. 
  • The risk does not render the bank vulnerable to significant loss from a single risk.
  • The risk does not expose the company to potential harm to its brand and reputation. 

Some of the major risks the bank faces include strategic risk, credit risk, market risk, liquidity risk, operational risk, insurance risk, capital adequacy risk, regulator risk, and reputation risk. Managers detail these categories in a risk inventory. 

The risk framework and appetite statement, which are tracked on a dashboard against metrics such as capital adequacy and credit risk, are reviewed annually. 

TD uses a three lines of defense (3LOD) strategy, an approach widely favored by ERM experts, to guard against risk. The three lines are as follows:

  • Business units and corporate policies that create controls, as well as manage and monitor risk
  • Standards and governance that provide oversight and review of risks and compliance with the risk appetite and framework 
  • Internal audits that provide independent checks and verification that risk-management procedures are effective

Enterprise Risk Management Example in Pharmaceuticals

Drug companies’ risks include threats around product quality and safety, regulatory action, and consumer trust. To avoid these risks, ERM experts emphasize the importance of making sure that strategic goals do not conflict. 

For Britain’s GlaxoSmithKline, such a conflict led to a breakdown in risk management, among other issues. In the early 2000s, the company was striving to increase sales and profitability while also ensuring safe and effective medicines. One risk the company faced was a failure to meet current good manufacturing practices (CGMP) at its plant in Cidra, Puerto Rico. 

CGMP includes implementing oversight and controls of manufacturing, as well as managing the risk and confirming the safety of raw materials and finished drug products. Noncompliance with CGMP can result in escalating consequences, ranging from warnings to recalls to criminal prosecution. 

GSK’s unit pleaded guilty and paid $750 million in 2010 to resolve U.S. charges related to drugs made at the Cidra plant, which the company later closed. A fired GSK quality manager alerted regulators and filed a whistleblower lawsuit in 2004. In announcing the consent decree, the U.S. Department of Justice said the plant had a history of bacterial contamination and multiple drugs created there in the early 2000s violated safety standards.

According to the whistleblower, GSK’s ERM process failed in several respects to act on signs of non-compliance with CGMP. The company received warning letters from the U.S. Food and Drug Administration in 2001 about the plant’s practices, but did not resolve the issues. 

Additionally, the company didn’t act on the quality manager’s compliance report, which advised GSK to close the plant for two weeks to fix the problems and notify the FDA. According to court filings, plant staff merely skimmed rejected products and sold them on the black market. They also scraped by hand the inside of an antibiotic tank to get more product and, in so doing, introduced bacteria into the product.

Enterprise Risk Management Example in Consumer Packaged Goods

Mars Inc., an international candy and food company, developed an ERM process. The company piloted and deployed the initiative through workshops with geographic, product, and functional teams from 2003 to 2012. 

Driven by a desire to frame risk as an opportunity and to work within the company’s decentralized structure, Mars created a process that asked participants to identify potential risks and vote on which had the highest probability. The teams listed risk mitigation steps, then ranked and color-coded them according to probability of success. 

Larry Warner, a Mars risk officer at the time, illustrated this process in a case study. An initiative to increase direct-to-consumer shipments by 12 percent was colored green, indicating a 75 percent or greater probability of achievement. The initiative to bring a new plant online by the end of Q3 was coded red, meaning less than a 50 percent probability of success.
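As a rough illustration of this kind of color coding (not Mars’s actual tooling), the thresholds described above can be expressed as a simple lookup. The case study states only the green and red cutoffs; the yellow band in between is an assumption for illustration.

```python
def color_code(probability_of_success: float) -> str:
    """Map a probability of achieving an initiative to a status color.

    Thresholds follow the Mars example above: green means a 75 percent or
    greater probability of achievement, red means less than 50 percent.
    The yellow band in between is an assumption for illustration.
    """
    if probability_of_success >= 0.75:
        return "green"
    if probability_of_success < 0.50:
        return "red"
    return "yellow"

# Example: the direct-to-consumer initiative vs. the new plant
print(color_code(0.80))  # green
print(color_code(0.45))  # red
```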

The company’s results were hurt by a surprise at an operating unit, a risk that had been coded red in a unit workshop. Executives had agreed that some red risks were to be expected, but they decided that any red issue a unit encounters must be communicated upward as soon as it is identified. This became a rule.

This process led to the creation of an ERM dashboard that listed initiatives in priority order, with the profile of each risk faced in the quarter, the risk profile trend, and a comment column for a year-end view. 

According to Warner, the key factors of success for ERM at Mars are as follows:

  • The initiative focused on achieving operational and strategic objectives rather than compliance, which refers to adhering to established rules and regulations.
  • The program evolved, often based on requests from business units, and incorporated continuous improvement. 
  • The ERM team did not overpromise. It set realistic objectives.
  • The ERM team periodically surveyed business units, management teams, and board advisers.

Enterprise Risk Management Example in Retail

Walmart is the world’s biggest retailer. As such, the company understands that its risk makeup is complex, given the geographic spread of its operations and its large number of stores, vast supply chain, and high profile as an employer and buyer of goods. 

In the 1990s, the company sought a simplified strategy for assessing risk and created an enterprise risk management plan with five steps founded on these four questions:

  • What are the risks?
  • What are we going to do about them?
  • How will we know if we are increasing or decreasing risk?
  • How will we show shareholder value?

The process follows these five steps:

  • Risk Identification: Senior Walmart leaders meet in workshops to identify risks, which are then plotted on a graph of probability vs. impact (see the sketch after this list). Doing so helps to prioritize the biggest risks. The executives then look at seven risk categories (both internal and external): legal/regulatory, political, business environment, strategic, operational, financial, and integrity. Many ERM pros use risk registers to evaluate and determine the priority of risks. You can download templates that help correlate risk probability and potential impact in “Free Risk Register Templates.”
  • Risk Mitigation: Teams that include operational staff in the relevant area meet. They use existing inventory procedures to address the risks and determine if the procedures are effective.
  • Action Planning: A project team identifies and implements next steps over the following months.
  • Performance Metrics: The group develops metrics to measure the impact of the changes. They also look at trends of actual performance compared to goal over time.
  • Return on Investment and Shareholder Value: In this step, the group assesses the changes’ impact on sales and expenses to determine if the moves improved shareholder value and ROI.
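A minimal sketch of the probability-vs.-impact prioritization described in the risk identification step; the risks, probabilities, and impact scores below are hypothetical, since the article does not disclose Walmart’s actual scales or data:

```python
# Hypothetical risk register entries: (name, probability 0-1, impact 1-5)
risks = [
    ("Supply chain disruption", 0.6, 5),
    ("Regulatory change",       0.3, 4),
    ("Data breach",             0.4, 5),
    ("Labor shortage",          0.7, 3),
]

# Score each risk as probability x impact and sort, highest first,
# so the most likely, most impactful risks rise to the top.
prioritized = sorted(risks, key=lambda r: r[1] * r[2], reverse=True)

for name, prob, impact in prioritized:
    print(f"{name:25s} probability={prob:.1f} impact={impact} score={prob * impact:.1f}")
```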

To develop your own risk management planning, you can download a customizable template in “Risk Management Plan Templates.”

Enterprise Risk Management Example in Agriculture

United Grain Growers (UGG), a Canadian grain distributor that now is part of Glencore Ltd., was hailed as an ERM innovator and became the subject of business school case studies for its enterprise risk management program. This initiative addressed the risks associated with weather for its business. Crop volume drove UGG’s revenue and profits. 

In the late 1990s, UGG identified its major unaddressed risks. Using almost a century of data, risk analysts found that extreme weather events occurred 10 times as frequently as previously believed. The company worked with its insurance broker and the Swiss Re Group on a solution that added grain-volume risk (resulting from weather fluctuations) to its other insured risks, such as property and liability, in an integrated program. 

The result was insurance that protected grain-handling earnings, which comprised half of UGG’s gross profits. The greater financial stability significantly enhanced the firm’s ability to achieve its strategic objectives. 

Since then, the number and types of instruments to manage weather-related risks have multiplied rapidly. For example, over-the-counter weather derivatives began trading in 1997. The Chicago Mercantile Exchange now offers weather futures contracts on 12 U.S. and international cities.

Weather derivatives are linked to climate factors such as rainfall or temperature, and they hedge different kinds of risks than do insurance. These risks are much more common (e.g., a cooler-than-normal summer) than the earthquakes and floods that insurance typically covers. And the holders of derivatives do not have to incur any damage to collect on them.

These weather-linked instruments have found a wider audience than anticipated, including retailers that worry about freak storms decimating Christmas sales, amusement park operators fearing rainy summers will keep crowds away, and energy companies needing to hedge demand for heating and cooling.
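As an illustration of how a temperature-linked derivative can pay out without any physical damage occurring, the sketch below prices a hypothetical heating degree day (HDD) contract. The strike, tick value, and cap are invented for the example and do not come from the UGG case.

```python
def hdd(daily_avg_temps_f, base_temp_f=65.0):
    """Cumulative heating degree days: each day contributes
    max(0, base temperature - average daily temperature)."""
    return sum(max(0.0, base_temp_f - t) for t in daily_avg_temps_f)

def hdd_call_payout(total_hdd, strike, tick_value, cap):
    """Hypothetical HDD call: pays tick_value per HDD above the strike,
    up to a cap. No loss or damage needs to occur for the payout."""
    return min(cap, tick_value * max(0.0, total_hdd - strike))

# Average daily temperatures (Fahrenheit) for an assumed cold week
temps = [40, 45, 50, 38, 42, 55, 60]
print(hdd_call_payout(hdd(temps), strike=100, tick_value=2_000, cap=500_000))
```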

This area of ERM continues to evolve because weather and crop insurance are not enough to address all the risks that agriculture faces. Arbol, Inc. estimates that more than $1 trillion of agricultural risk is uninsured. As such, it is launching a blockchain-based platform that offers contracts (customized by location and risk parameters) with payouts based on weather data. These contracts can cover risks associated with niche crops and small growing areas.

Enterprise Risk Management Example in Insurance

Switzerland’s Zurich Insurance Group understands that risk is inherent for insurers and seeks to practice disciplined risk-taking, within a predetermined risk tolerance. 

The global insurer’s enterprise risk management framework aims to protect capital, liquidity, earnings, and reputation. Governance serves as the basis for risk management, and the framework lays out responsibilities for taking, managing, monitoring, and reporting risks. 

The company uses a proprietary process called Total Risk Profiling (TRP) to monitor internal and external risks to its strategy and financial plan. TRP assesses risk on the basis of severity and probability, and helps define and implement mitigating moves. 

Zurich’s risk appetite sets parameters for its tolerance within the goal of maintaining enough capital to achieve an AA rating from rating agencies. For this, the company uses its own Zurich economic capital model, referred to as Z-ECM. The model quantifies risk tolerance with a metric that assesses risk profile vs. risk tolerance. 

To maintain the AA rating, the company aims to hold capital between 100 and 120 percent of capital at risk. Above 140 percent is considered overcapitalized (therefore at risk of throttling growth), and under 90 percent is below risk tolerance (meaning the risk is too high). On either side of 100 to 120 percent (90 to 100 percent and 120 to 140 percent), the insurer considers taking mitigating action. 
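A minimal sketch of how the capital bands described above could be encoded; the band boundaries come from the article, but the function itself is illustrative and is not Zurich’s Z-ECM model:

```python
def zecm_band(capital_ratio_pct: float) -> str:
    """Classify a Z-ECM-style ratio (capital held as a percent of
    capital at risk) into the bands described above."""
    if capital_ratio_pct < 90:
        return "below risk tolerance -- risk too high"
    if capital_ratio_pct < 100:
        return "consider mitigating action"
    if capital_ratio_pct <= 120:
        return "target range (AA aim)"
    if capital_ratio_pct <= 140:
        return "consider mitigating action"
    return "overcapitalized -- may throttle growth"

print(zecm_band(115))  # target range (AA aim)
print(zecm_band(135))  # consider mitigating action
```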

Zurich’s assessment of risk and the nature of those risks play a major role in determining how much capital regulators require the business to hold. A popular tool to assess risk is the risk matrix, and you can find a variety of templates in “Free, Customizable Risk Matrix Templates.”

In 2020, Zurich found that its biggest exposures were market risk, such as falling asset valuations and interest-rate risk; insurance risk, such as big payouts for covered customer losses, which it hedges through diversification and reinsurance; credit risk in assets it holds and receivables; and operational risks, such as internal process failures and external fraud.

Enterprise Risk Management Example in Technology

Financial software maker Intuit has strengthened its enterprise risk management through evolution, according to a case study by former Chief Risk Officer Janet Nasburg. 

The program is founded on the following five core principles:

  • Use a common risk framework across the enterprise.
  • Assess risks on an ongoing basis.
  • Focus on the most important risks.
  • Clearly define accountability for risk management.
  • Commit to continuous improvement of performance measurement and monitoring. 

ERM programs grow according to a maturity model, and as capability rises, the shareholder value from risk management becomes more visible and important. 

The maturity phases include the following:

  • Ad hoc risk management addresses a specific problem when it arises.
  • Targeted or initial risk management approaches risks with multiple, inconsistent understandings of what constitutes risk, and management occurs in silos.
  • Integrated or repeatable risk management puts in place an organization-wide framework for risk assessment and response. 
  • Intelligent or managed risk management coordinates risk management across the business, using common tools. 
  • Risk leadership incorporates risk management into strategic decision-making. 

Intuit emphasizes using key risk indicators (KRIs) to understand risks, along with key performance indicators (KPIs) to gauge the effectiveness of risk management. 

Early in its ERM journey, Intuit measured performance on risk management process participation and risk assessment impact. For participation, the targeted rate was 80 percent of executive management and business-line leaders. This helped benchmark risk awareness and current risk management, at a time when ERM at the company was not mature.

Intuit conducts an annual risk assessment at corporate and business-line levels to plot risks so that the most likely and most impactful risks appear in the upper-right quadrant. Doing so focuses attention on these risks and helps business leaders understand each risk’s impact on performance toward strategic objectives.

In the company’s second phase of ERM, Intuit turned its attention to building risk management capacity and sought to ensure that risk management activities addressed the most important risks. The company evaluated performance using color-coded status symbols (red, yellow, green) to indicate risk trend and progress on risk mitigation measures.

In its third phase, Intuit moved to actively monitoring the most important risks and ensuring that leaders modified their strategies to manage risks and take advantage of opportunities. An executive dashboard uses KRIs, KPIs, an overall risk rating, and red-yellow-green coding. The board of directors regularly reviews this dashboard.

Over this evolution, the company has moved from narrow, tactical risk management to holistic, strategic, and long-term ERM.

Enterprise Risk Management Case Studies by Principle

ERM veterans agree that in addition to KPIs and KRIs, other principles are equally important to follow. Below, you’ll find examples of enterprise risk management programs organized by principle.

ERM Principle #1: Make Sure Your Program Aligns with Your Values

Raytheon Case Study

U.S. defense contractor Raytheon states that its highest priority is delivering on its commitment to provide ethical business practices and abide by anti-corruption laws.

Raytheon backs up this statement through its ERM program. Among other measures, the company performs an annual risk assessment for each function, including the anti-corruption group under the Chief Ethics and Compliance Officer. In addition, Raytheon asks 70 of its sites to perform an anti-corruption self-assessment each year to identify gaps and risks. From there, a compliance team tracks improvement actions. 

Every quarter, the company surveys 600 staff members who may face higher anti-corruption risks, such as the potential for bribes. The survey asks them to report any potential issues in the past quarter.

Also on a quarterly basis, the finance and internal controls teams review higher-risk profile payments, such as donations and gratuities to confirm accuracy and compliance. Oversight and compliance teams add other checks, and they update a risk-based audit plan continuously.

ERM Principle #2: Embrace Diversity to Reduce Risk

State Street Global Advisors Case Study

In 2016, the asset management firm State Street Global Advisors introduced measures to increase gender diversity in its leadership as a way of reducing portfolio risk, among other goals.

The company relied on research that showed that companies with more women senior managers had a better return on equity, reduced volatility, and fewer governance problems such as corruption and fraud. 

Among the initiatives was a campaign to influence companies where State Street had invested, in order to increase female membership on their boards. State Street also developed an investment product that tracks the performance of companies with the highest level of senior female leadership relative to peers in their sector. 

In 2020, the company announced some of the results of its effort. Among the 1,384 companies targeted by the firm, 681 added at least one female director.

ERM Principle #3: Do Not Overlook Resource Risks

Infosys Case Study

India-based technology consulting company Infosys, which employs more than 240,000 people, has long recognized the risk of water shortages to its operations.

India’s rapidly growing population and development has increased the risk of water scarcity. A 2020 report by the World Wide Fund for Nature said 30 cities in India faced the risk of severe water scarcity over the next three decades. 

Infosys has dozens of facilities in India and considers water to be a significant short-term risk. At its campuses, the company uses the water for cooking, drinking, cleaning, restrooms, landscaping, and cooling. Water shortages could halt Infosys operations and prevent it from completing customer projects and reaching its performance objectives. 

In an enterprise risk assessment example, Infosys’ ERM team conducts corporate water-risk assessments while sustainability teams produce detailed water-risk assessments for individual locations, according to a report by the World Business Council for Sustainable Development.

The company uses the COSO ERM framework to respond to the risks and decide whether to accept, avoid, reduce, or share these risks. The company uses root-cause analysis (which focuses on identifying underlying causes rather than symptoms) and the site assessments to plan steps to reduce risks. 

Infosys has implemented various water conservation measures, such as water-efficient fixtures and water recycling, rainwater collection and use, recharging aquifers, underground reservoirs to hold five days of water supply at locations, and smart-meter usage monitoring. Infosys’ ERM team tracks metrics for per-capita water consumption, along with rainfall data, availability and cost of water by tanker trucks, and water usage from external suppliers. 

In the 2020 fiscal year, the company reported a nearly 64 percent drop in per-capita water consumption by its workforce from the 2008 fiscal year. 

The business advantages of this risk management include the ability to open locations where water shortages may preclude competitors and the ability to maintain operations during water scarcity, protecting profitability.

ERM Principle #4: Fight Silos for Stronger Enterprise Risk Management

U.S. Government Case Study

The terrorist attacks of September 11, 2001, revealed that the U.S. government’s approach to managing intelligence at the time was not adequate to address the threats, and, by extension, neither was its risk management procedure. Since the Cold War, sensitive information had been managed on a “need to know” basis that resulted in data silos.

In the case of 9/11, this meant that different parts of the government knew some relevant intelligence that could have helped prevent the attacks. But no one had the opportunity to put the information together and see the whole picture. A congressional commission determined there were 10 lost operational opportunities to derail the plot. Silos existed between law enforcement and intelligence, as well as between and within agencies. 

After the attacks, the government moved toward greater information sharing and collaboration. Based on a task force’s recommendations, data moved from a centralized network to a distributed model, and social networking tools now allow colleagues throughout the government to connect. Staff began working across agency lines more often.

Enterprise Risk Management Examples by Scenario

While some scenarios are too unlikely to receive high-priority status, low-probability risks are still worth running through the ERM process. Robust risk management creates a culture and response capacity that better positions a company to deal with a crisis.

In the following enterprise risk examples, you will find scenarios and details of how organizations manage the risks they face.

Scenario: ERM and the Global Pandemic

While most businesses do not have the resources to do in-depth ERM planning for the rare occurrence of a global pandemic, companies with a risk-aware culture will be at an advantage if a pandemic does hit.

These businesses already have processes in place to escalate trouble signs for immediate attention and an ERM team or leader monitoring the threat environment. A strong ERM function gives clear and effective guidance that helps the company respond.

A report by Vodafone found that companies identified as “future ready” fared better in the COVID-19 pandemic. The attributes of future-ready businesses have a lot in common with those of companies that excel at ERM. These include viewing change as an opportunity; having detailed business strategies that are documented, funded, and measured; working to understand the forces that shape their environments; having roadmaps in place for technological transformation; and being able to react more quickly than competitors. 

Only about 20 percent of companies in the Vodafone study met the definition of “future ready.” But 54 percent of these firms had a fully developed and tested business continuity plan, compared to 30 percent of all businesses. And 82 percent felt their continuity plans worked well during the COVID-19 crisis. Nearly 50 percent of all businesses reported decreased profits, while 30 percent of future-ready organizations saw profits rise. 

Scenario: ERM and the Economic Crisis

The 2008 economic crisis in the United States resulted from the domino effect of rising interest rates, a collapse in housing prices, and a dramatic increase in foreclosures among mortgage borrowers with poor creditworthiness. This led to bank failures, a credit crunch, and layoffs, and the U.S. government had to rescue banks and other financial institutions to stabilize the financial system.

Some commentators said these events revealed the shortcomings of ERM because it did not prevent the banks’ mistakes or collapse. But Sim Segal, an ERM consultant and director of Columbia University’s ERM master’s degree program, analyzed how banks performed on 10 key ERM criteria. 

Segal says a risk-management program that incorporates all 10 criteria has these characteristics: 

  • Risk management has an enterprise-wide scope.
  • The program includes all risk categories: financial, operational, and strategic. 
  • The focus is on the most important risks, not all possible risks. 
  • Risk management is integrated across risk types.
  • Aggregated metrics show risk exposure and appetite across the enterprise.
  • Risk management incorporates decision-making, not just reporting.
  • The effort balances risk and return management.
  • There is a process for disclosure of risk.
  • The program measures risk in terms of potential impact on company value.
  • The focus of risk management is on the primary stakeholder, such as shareholders, rather than regulators or rating agencies.

In his book Corporate Value of Enterprise Risk Management , Segal concluded that most banks did not actually use ERM practices, which contributed to the financial crisis. He scored banks as failing on nine of the 10 criteria, only giving them a passing grade for focusing on the most important risks. 

Scenario: ERM and Technology Risk

The story of retailer Target’s failed expansion to Canada, where it shut down 133 loss-making stores in 2015, has been well documented. But one dimension that analysts have sometimes overlooked was Target’s handling of technology risk.

A case study by Canadian Business magazine traced some of the biggest issues to software and data-quality problems that dramatically undermined the Canadian launch. 

As with other forms of ERM, technology risk management requires companies to ask what could go wrong, what the consequences would be, how they might prevent the risks, and how they should deal with the consequences. 

But with its technology plan for Canada, Target did not heed risk warning signs. 

In the United States, Target had custom systems for ordering products from vendors, processing items at warehouses, and distributing merchandise to stores quickly. But that software would need customization to work with the Canadian dollar, metric system, and French-language characters. 

Target decided to go with new ERP software on an aggressive two-year timeline. As Target began ordering products for the Canadian stores in 2012, problems arose. Some items did not fit into shipping containers or on store shelves, and information needed for customs agents to clear imported items was not correct in Target's system. 

Target found that its supply chain software data was full of errors. Product dimensions were in inches, not centimeters; height and width measurements were mixed up. An internal investigation showed that only about 30 percent of the data was accurate. 

In an attempt to fix these errors, Target merchandisers spent a week double-checking with vendors up to 80 data points for each of the retailer’s 75,000 products. They discovered that the dummy data entered into the software during setup had not been altered. To make any corrections, employees had to send the new information to an office in India where staff would enter it into the system. 

As the launch approached, the technology errors left the company vulnerable to stockouts, few people understood how the system worked, and the point-of-sale checkout system did not function correctly. Soon after stores opened in 2013, consumers began complaining about empty shelves. Meanwhile, Target Canada distribution centers overflowed due to excess ordering based on poor data fed into forecasting software. 

The rushed launch compounded problems because it did not allow the company enough time to find solutions or alternative technology. While the retailer fixed some issues by the end of 2014, it was too late. Target Canada filed for bankruptcy protection in early 2015. 

Scenario: ERM and Cybersecurity

System hacks and data theft are major worries for companies. But as a relatively new field, cyber-risk management faces unique hurdles.

For example, risk managers and information security officers have difficulty quantifying the likelihood and business impact of a cybersecurity attack. The rise of cloud-based software exposes companies to third-party risks that make these projections even more difficult to calculate. 

As the field evolves, risk managers say it’s important for IT security officers to look beyond technical issues, such as the need to patch a vulnerability, and instead look more broadly at business impacts to make a cost-benefit analysis of risk mitigation. Frameworks such as the Risk Management Framework for Information Systems and Organizations by the National Institute of Standards and Technology can help.
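One common way to frame such a cost-benefit analysis (a general sketch, not a step prescribed by the NIST framework itself) is to compare the expected annual loss from a risk with and without a control; the probabilities, impact, and control cost below are invented for illustration:

```python
def expected_annual_loss(probability_per_year: float, impact: float) -> float:
    """Expected loss = annual likelihood of the event x business impact."""
    return probability_per_year * impact

# Hypothetical unpatched vulnerability
loss_without_control = expected_annual_loss(0.30, 2_000_000)   # $600,000
loss_with_control    = expected_annual_loss(0.05, 2_000_000)   # $100,000
control_cost = 150_000

net_benefit = (loss_without_control - loss_with_control) - control_cost
print(f"Net annual benefit of the control: ${net_benefit:,.0f}")  # $350,000
```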

Health insurer Aetna considers cybersecurity threats as a part of operational risk within its ERM framework and calculates a daily risk score, adjusted with changes in the cyberthreat landscape. 

Aetna studies threats from external actors by working through information sharing and analysis centers for the financial services and health industries. Aetna staff reverse-engineers malware to determine controls. The company says this type of activity helps ensure the resiliency of its business processes and greatly improves its ability to help protect member information.

For internal threats, Aetna uses models that compare current user behavior to past behavior and identify anomalies. (The company says it was the first organization to do this at scale across the enterprise.) Aetna gives staff permissions to networks and data based on what they need to perform their job. This segmentation restricts access to raw data and strengthens governance. 

Another risk initiative scans outgoing employee emails for sensitive data patterns, such as credit card or Social Security numbers. The system flags the email, and a security officer assesses it before the email is released.
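The kind of pattern scan described above can be approximated with simple regular expressions. The patterns below are simplified illustrations, not Aetna’s actual rules; real systems also validate matches (for example, with the Luhn check for card numbers) to reduce false positives.

```python
import re

# Simplified patterns for illustration: 16-digit card numbers (with
# optional separators) and U.S. Social Security numbers.
PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def flag_outgoing_email(body: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in an
    outgoing message so a security officer can review it before release."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(body)]

print(flag_outgoing_email("Please charge 4111 1111 1111 1111 for the order."))
# ['credit_card']
```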

Examples of Poor Enterprise Risk Management

Case studies of failed enterprise risk management often highlight mistakes that managers could and should have spotted — and corrected — before a full-blown crisis erupted. The focus of these examples is often on determining why that did not happen. 

ERM Case Study: General Motors

In 2014, General Motors recalled the first of what would become 29 million cars due to faulty ignition switches and paid compensation for 124 related deaths. GM knew of the problem for at least 10 years but did not act, the automaker later acknowledged. The company entered a deferred prosecution agreement and paid a $900 million penalty. 

Pointing to the length of time the company failed to disclose the safety problem, ERM specialists say it shows the problem did not reside with a single department. “Rather, it reflects a failure to properly manage risk,” wrote Steve Minsky, a writer on ERM and CEO of an ERM software company, in Risk Management magazine. 

“ERM is designed to keep all parties across the organization, from the front lines to the board to regulators, apprised of these kinds of problems as they become evident. Unfortunately, GM failed to implement such a program, ultimately leading to a tragic and costly scandal,” Minsky said.

Also in the auto sector, an enterprise risk management case study of Toyota looked at its problems with unintended acceleration of vehicles from 2002 to 2009. Several studies, including a case study by Carnegie Mellon University Professor Phil Koopman, blamed poor software design and company culture. A whistleblower later revealed a coverup by Toyota. The company paid more than $2.5 billion in fines and settlements.

ERM Case Study: Lululemon

In 2013, following customer complaints that its black yoga pants were too sheer, the athletic apparel maker recalled 17 percent of its inventory at a cost of $67 million. The company had previously identified risks related to fabric supply and quality. The CEO said the issue was inadequate testing. 

Analysts raised concerns about the company’s controls, including oversight of factories and product quality. A case study by Stanford University professors noted that Lululemon’s episode illustrated a common disconnect between identifying risks and being prepared to manage them when they materialize. Lululemon’s reporting and analysis of risks was also inadequate, especially as related to social media. In addition, the case study highlighted the need for a system to escalate risk-related issues to the board. 

ERM Case Study: Kodak 

Once an iconic brand, the photo film company failed for decades to act on the threat that digital photography posed to its business and eventually filed for bankruptcy in 2012. The company’s own research in 1981 found that digital photos could ultimately replace Kodak’s film technology and estimated it had 10 years to prepare. 

Unfortunately, Kodak did not prepare and stayed locked into the film paradigm. The board reinforced this course when in 1989 it chose as CEO a candidate who came from the film business over an executive interested in digital technology. 

Had the company acknowledged the risks and employed ERM strategies, it might have pursued a variety of strategies to remain successful. The company’s rival, Fuji Film, took the money it made from film and invested in new initiatives, some of which paid off. Kodak, on the other hand, kept investing in the old core business.

Case Studies of Successful Enterprise Risk Management

Successful enterprise risk management usually requires strong performance in multiple dimensions, and is therefore more likely to occur in organizations where ERM has matured. The following examples of enterprise risk management can be considered success stories. 

ERM Case Study: Statoil 

A major global oil producer, Statoil of Norway stands out for the way it practices ERM by looking at both downside risk and upside potential. Taking risks is vital in a business that depends on finding new oil reserves. 

According to a case study, the company developed its own framework founded on two basic goals: creating value and avoiding accidents.

The company aims to understand risks thoroughly, and unlike many ERM programs, Statoil maps risks on both the downside and upside. It graphs risk on probability vs. impact on pre-tax earnings, and it examines each risk from both positive and negative perspectives. 

For example, the case study cites a risk that the company assessed as having a 5 percent probability of a somewhat better-than-expected outcome but a 10 percent probability of a significant loss relative to forecast. In this case, the downside risk was greater than the upside potential.

ERM Case Study: Lego 

The Danish toy maker’s ERM evolved over the following four phases, according to a case study by one of the chief architects of its program:

  • Traditional management of financial, operational, and other risks. Strategic risk management joined the ERM program in 2006. 
  • The company added Monte Carlo simulations in 2008 to model financial performance volatility so that budgeting and financial processes could incorporate risk management. The technique is used in budget simulations, to assess risk in its credit portfolio, and to consolidate risk exposure (see the sketch after this list).
  • Active risk and opportunity planning is part of making a business case for new projects before final decisions.
  • The company prepares for uncertainty so that long-term strategies remain relevant and resilient under different scenarios. 
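A minimal sketch of the kind of Monte Carlo budget simulation mentioned in the second phase; the revenue and cost figures and their distributions are invented for illustration and are not Lego’s actual model:

```python
import random

def simulate_operating_profit(trials: int = 10_000) -> list:
    """Monte Carlo sketch: draw uncertain revenue growth and cost inflation,
    then compute operating profit for each trial."""
    base_revenue, base_costs = 1_000.0, 800.0   # hypothetical figures (millions)
    profits = []
    for _ in range(trials):
        revenue = base_revenue * (1 + random.gauss(0.04, 0.06))  # growth ~ N(4%, 6%)
        costs = base_costs * (1 + random.gauss(0.03, 0.04))      # inflation ~ N(3%, 4%)
        profits.append(revenue - costs)
    return profits

profits = sorted(simulate_operating_profit())
print(f"Median profit:         {profits[len(profits) // 2]:.0f}")
print(f"5th percentile (risk): {profits[int(0.05 * len(profits))]:.0f}")
```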

As part of its scenario modeling, Lego developed its PAPA (park, adapt, prepare, act) model. 

  • Park: The company parks risks that emerge slowly and have a low probability of happening, meaning it does not forget them but does not actively deal with them.
  • Adapt: This response is for risks that evolve slowly and are certain or highly probable to occur. For example, a risk in this category is the changing nature of play and the evolution of buying power in different parts of the world. In this phase, the company adjusts, monitors the trend, and follows developments.
  • Prepare: This category includes risks that have a low probability of occurring — but when they do, they emerge rapidly. These risks go into the ERM risk database with contingency plans, early warning indicators, and mitigation measures in place.
  • Act: These are high-probability, fast-moving risks that must be acted upon to maintain strategy. For example, developments around connectivity, mobile devices, and online activity are in this category because of the rapid pace of change and the influence on the way children play. 

Lego views risk management as a way to better equip itself to take risks than its competitors. In the case study, the writer likens this approach to the need for the fastest race cars to have the best brakes and steering to achieve top speeds.

ERM Case Study: University of California 

The University of California, one of the biggest U.S. public university systems, introduced a new view of risk to its workforce when it implemented enterprise risk management in 2005. Previously, the function was merely seen as a compliance requirement.

ERM became a way to support the university’s mission of education and research, drawing on collaboration of the system’s employees across departments. “Our philosophy is, ‘Everyone is a risk manager,’” Erike Young, deputy director of ERM told Treasury and Risk magazine. “Anyone who’s in a management position technically manages some type of risk.”

The university faces a diverse set of risks, including cybersecurity, hospital liability, reduced government financial support, and earthquakes.  

The ERM department had to overhaul systems to create a unified view of risk because its information and processes were not linked. Software enabled both an organizational picture of risk and highly detailed drilldowns on individual risks. Risk managers also developed tools for risk assessment, risk ranking, and risk modeling. 

Better risk management has provided more than $100 million in annual cost savings and nearly $500 million in cost avoidance, according to UC officials. 

UC drives ERM with risk management departments at each of its 10 locations and leverages university subject matter experts to form multidisciplinary workgroups that develop process improvements.

APQC, a standards quality organization, recognized UC as a top global ERM practice organization, and the university system has won other awards. The university says in 2010 it was the first nonfinancial organization to win credit-rating agency recognition of its ERM program.

Examples of How Technology Is Transforming Enterprise Risk Management

Business intelligence software has propelled major progress in enterprise risk management because the technology enables risk managers to bring their information together, analyze it, and forecast how risk scenarios would impact their business.

ERM organizations are using computing and data-handling advancements such as blockchain for new innovations in strengthening risk management. Following are case studies of a few examples.

ERM Case Study: Bank of New York Mellon 

In 2021, the bank joined with Google Cloud to use machine learning and artificial intelligence to predict and reduce the risk that transactions in the $22 trillion U.S. Treasury market will fail to settle. Settlement failure means a buyer and seller do not exchange cash and securities by the close of business on the scheduled date. 

The party that fails to settle is assessed a daily financial penalty, and a high level of settlement failures can indicate market liquidity problems and rising risk. BNY says that, on average, about 2 percent of transactions fail to settle.

The bank trained models with millions of trades to consider every factor that could result in settlement failure. The service uses market-wide intraday trading metrics, trading velocity, scarcity indicators, volume, the number of trades settled per hour, seasonality, issuance patterns, and other signals. 

The bank said it predicts about 40 percent of settlement failures with 90 percent accuracy. But it also cautioned against overconfidence in the technology as the model continues to improve. 

AI-driven forecasting reduces risk for BNY clients in the Treasury market and saves costs. For example, a predictive view of settlement risks helps bond dealers more accurately manage their liquidity buffers, avoid penalties, optimize their funding sources, and offset the risks of failed settlements. In the long run, such forecasting tools could improve the health of the financial market. 

ERM Case Study: PwC

Consulting company PwC has leveraged a vast information storehouse known as a data lake to help its customers manage risk from suppliers.

A data lake stores both structured and unstructured information, meaning data in highly organized, standardized formats as well as unstandardized data. This means that everything from raw audio to credit card numbers can live in a data lake.

Using techniques pioneered in national security, PwC built a risk data lake that integrates information from client companies, public databases, user devices, and industry sources. Algorithms find patterns that can signify unidentified risks.

One of PwC’s first uses of this data lake was a program to help companies uncover risks from their vendors and suppliers. Companies can violate laws, harm their reputations, suffer fraud, and risk their proprietary information by doing business with the wrong vendor. 

Today’s complex global supply chains mean companies may be several degrees removed from the source of this risk, which makes it hard to spot and mitigate. For example, a product made with outlawed child labor could be traded through several intermediaries before it reaches a retailer. 

PwC’s service helps companies recognize risk beyond their primary vendors and continue to monitor that risk over time as more information enters the data lake.

ERM Case Study: Financial Services

As analytics have become a pillar of forecasting and risk management for banks and other financial institutions, a new risk has emerged: model risk. This refers to the risk that machine-learning models will lead users to an unreliable understanding of risk or have unintended consequences.

For example, a 6 percent drop in the value of the British pound over the course of a few minutes in 2016 stemmed from currency-trading algorithms that spiraled into a negative loop. A Twitter-reading program began an automated selling of the pound after comments by a French official, and other selling algorithms kicked in once the currency dropped below a certain level.

U.S. banking regulators are so concerned about model risk that the Federal Reserve set up a model validation council in 2012 to assess the models that banks use in running risk simulations for capital adequacy requirements. Regulators in Europe and elsewhere also require model validation.

A form of managing risk from a risk-management tool, model validation is an effort to reduce risk from machine learning. The technology-driven rise in modeling capacity has caused such models to proliferate, and banks can use hundreds of models to assess different risks. 

Model risk management can reduce rising costs for modeling by an estimated 20 to 30 percent by building a validation workflow, prioritizing models that are most important to business decisions, and implementing automation for testing and other tasks, according to McKinsey.



Module 1: Case Studies & Examples

In this section, we will review some examples of how to generate an initial estimate using two very basic methods. Then, we are going to walk through some case studies so that you can put what you’ve learned into the context of a cyber risk scenario.

The Value of the Initial Analysis

In any organization, decision-making is a crucial process that can significantly impact the success or failure of the organization. Making informed decisions requires access to accurate and relevant information. It does not, however, require in-depth, time-consuming, and expensive research and analysis. The initial analysis provides a quick, cost-effective analysis of risk. It allows decision-makers to have a timely analysis based on readily available data. If decision-makers determine that a more in-depth analysis is warranted, this gives them the opportunity to clearly scope the effort and provide their authorization for the expenditure of additional funds and resources.

What is an Initial Analysis?

An initial analysis is a preliminary assessment of a situation or problem. It involves gathering and analyzing information to understand the situation comprehensively. An initial analysis is typically conducted before making any significant decisions or taking any action. Its purpose is to provide decision-makers with the information they need to make informed decisions. In the case of quantifying risk, you are making estimates with fairly broad ranges (such as 20% or more). This provides an accurate, if broad, estimate. With more detail, the estimate becomes more precise.

Benefits of an Initial Analysis for Decision Support

An initial analysis is valuable for decision support because it gives decision-makers a comprehensive overview of the situation. It allows decision-makers to make informed decisions based on accurate and relevant information. There are several benefits of conducting an initial analysis.

Benefits of Conducting an Initial Analysis

  • Provides a Comprehensive Overview: An initial analysis gives decision-makers a comprehensive overview of the situation. It helps decision-makers to understand the situation, including the challenges, risks, and opportunities. This comprehensive overview allows decision-makers to make informed decisions based on accurate and relevant information.
  • Identifies Risks and Opportunities: An initial analysis helps to identify risks and opportunities associated with the situation. It allows decision-makers to assess the potential impact of these risks and opportunities on the organization. This information is critical to making informed decisions considering potential risks and opportunities.
  • Helps to Identify and Prioritize Options: An initial analysis helps to identify and prioritize options for addressing the situation. It provides decision-makers with a range of options and the potential benefits and risks associated with each option. This information is critical to making informed decisions that consider all available options.
  • Facilitates Consensus-Building: An initial analysis helps to facilitate consensus-building among decision-makers. It provides decision-makers with a shared understanding of the situation, which can help to build consensus around the best course of action. This consensus-building is critical to ensuring that decisions are made with the support of all decision-makers.
  • Reduces the Risk of Making Poor Decisions: An initial analysis helps to reduce the risk of making poor decisions. It provides decision-makers with accurate and relevant information, which can help to reduce the risk of making decisions based on incomplete or inaccurate information. This can help avoid costly mistakes and ensure that decisions are made in the organization’s best interests.
  • Approval for Additional Time and Resources: An initial analysis is typically conducted before making any significant decisions or taking any action. Its purpose is to provide decision-makers with the information they need to make informed decisions. However, in some cases, decision-makers may require additional information before deciding. In these cases, an initial analysis can serve as a basis for approving additional time and resources to produce a more in-depth analysis. This additional analysis can provide decision-makers with more detailed information, which can help to make more informed decisions. By using the initial analysis as a basis for approving additional time and resources, decision-makers can ensure that the additional analysis is focused on the most critical issues and provides the information they need to make informed decisions.

Figure 6: Always begin with an initial analysis.

General Guidelines for Developing Estimates

  • Internet-facing assets generally represent a very high likelihood of compromise if there is an exploitable vulnerability. Any asset with a directly accessible interface to the internet could be considered to meet this criterion if it has an exploitable vulnerability.
  • Vulnerabilities in perimeter defenses generally represent a very high likelihood of compromise.
  • Vulnerabilities in high-value assets generally represent a very high risk.
  • Vulnerabilities on web-based servers and applications represent a very high likelihood of compromise.
  • Vulnerabilities on workstations generally represent a high likelihood of compromise.
  • Vulnerabilities in databases represent a high likelihood of compromise.
  • Vulnerabilities on unsupported systems or products may be considered to have a higher likelihood of compromise.
  • Vulnerabilities that could cause extreme outages generally represent a very high risk.
  • Vulnerabilities that could lead to initial access or privilege escalation generally represent a very high risk.
  • Vulnerabilities that could lead to system compromise generally represent a higher risk.
  • If you know what percentage of systems have a particular vulnerability, you can use this as the basis for a threat estimate.
  • Zero-day vulnerabilities generally represent a very high risk.
  • Zero-day vulnerabilities in perimeter defenses generally represent a very high risk.
  • Web servers with zero-day vulnerabilities generally represent a very high risk.
  • Web server and application exploits, such as SQL injection and cross-site scripting vulnerabilities, generally represent a very high risk.
  • Unsupported operating systems and applications generally represent a very high risk, as these are frequent targets of attack.
  • Remote code execution vulnerabilities generally represent a higher risk.
  • Named exploits, such as man-in-the-middle attacks, generally represent a higher risk.
  • Vulnerabilities with known or ongoing exploits generally represent a higher risk.
  • Vulnerabilities with a public proof-of-concept generally represent a higher risk.
  • Any vulnerability that can lead to initial access or privilege escalation generally represents a higher risk.
  • Internal exploitable vulnerabilities generally represent an elevated risk.
  • Strong perimeter defense can be a mitigating factor.
  • Security by obscurity is not considered a mitigating factor.
  • Policies or procedures may be considered a mitigating factor.
  • Mitigating factors generally can reduce an estimate by a single 20% range. A very strong mitigation generally can reduce an estimate by two 20% ranges (see the sketch after this list).
  • Financially motivated cyber-criminals are generally very successful. You may want to specify the targeted system or data to refine the scope of your estimate.
  • Insider threats are generally very successful.
  • APTs or nation-states are generally very successful. You may want to specify a particular APT or nation-state to refine your estimate.
  • An accidental misconfiguration is as dangerous as an intentional act.
  • Poor processes and procedures can represent a risk, especially if they are undocumented and not consistently applied.
  • It is useful to stipulate the time period for your estimate and whether it is a factor in the likelihood of compromise. In some cases, this may be the time period until a patch or remediation is in place. In some cases, the longer the time period, the higher the likelihood of compromise. Similarly, in some cases, a shorter period of exposure may indicate a slightly lower likelihood of compromise.
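A minimal sketch of how these guidelines might be applied mechanically to produce an initial estimate, using the five 20% bands introduced in the next section. The starting band and the example scenario are assumptions for illustration.

```python
# Ordered 20% bands, lowest to highest, matching the five-point scale
# described in the next section.
BANDS = ["very low", "low", "moderate", "high", "very high"]

def adjust_for_mitigation(band: str, mitigation_strength: int) -> str:
    """Reduce an initial estimate by one 20% band per the guidelines
    (two bands for a very strong mitigation). mitigation_strength is
    0 (none), 1 (mitigating factor), or 2 (very strong mitigation)."""
    index = max(0, BANDS.index(band) - mitigation_strength)
    return BANDS[index]

# Hypothetical scenario: an exploitable vulnerability on an
# internet-facing web server starts at "very high" likelihood.
initial = "very high"
print(adjust_for_mitigation(initial, 0))  # very high (no mitigation)
print(adjust_for_mitigation(initial, 1))  # high (e.g., strong perimeter defense)
print(adjust_for_mitigation(initial, 2))  # moderate (very strong mitigation)
```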

Using a 1-5 Scale

Risk is an inherent part of any business or organizational activity. It is the possibility of an event occurring that could adversely impact the organization’s objectives. Risk can be expressed in various ways, including verbally, numerically, or graphically. One commonly used method of verbally expressing risk is through a 1-5 scale using the labels very low, low, moderate, high, and very high values.

The Five-Point Scale

The five-point scale is a simple and effective way to express risk verbally. It uses five categories to describe the level of risk associated with an event or activity. The categories are very low, low, moderate, high, and very high. Each category represents a different level of risk, with very low representing the lowest level of risk and very high representing the highest level of risk.

Figure 7: The 5-Point Scale Labels

This scale is beneficial because it allows for quick and easy understanding and consensus-building among different organizational groups. It is a simple and intuitive way to express risk that people with different levels of expertise in risk management can easily understand.

Converting the Scale to 20% Ranges

While the five-point scale is a useful way to express risk qualitatively, it can also be adapted into numerical form, represented by 20% ranges, to quantify the risk. This allows for a more precise and objective assessment of risk that can be used to make informed decisions about risk management.

To convert the five-point scale to 20% ranges, each category is assigned a range of probabilities. The ranges are as follows:

  • Very Low: 0% – 20%
  • Low: 21% – 40%
  • Moderate: 41% – 60%
  • High: 61% – 80%
  • Very High: 81% – 100%


Figure 8 The 5-Point Scale Range Values

By assigning each category a range of probabilities, the level of risk associated with an event or activity can be quantified. When communicating an estimate, you should note that it is based on an initial 20% range for each category.
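For illustration, the label-to-range conversion can be written down directly as a lookup table. This is a minimal sketch of the mapping described above; the function and variable names (SCALE, to_range) are my own and not part of the original text.

```python
# Direct encoding of the five-point labels and their 20% probability ranges.
SCALE = {
    "very low":  (0.00, 0.20),
    "low":       (0.21, 0.40),
    "moderate":  (0.41, 0.60),
    "high":      (0.61, 0.80),
    "very high": (0.81, 1.00),
}

def to_range(label):
    """Convert a five-point label to its 20% probability range."""
    return SCALE[label.lower()]

print(to_range("Moderate"))  # -> (0.41, 0.6)
```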

Benefits of Using the Scale

Using the five-point scale with values of very low, low, moderate, high, and very high is a good way to begin thinking, speaking, and quantifying risk. It provides a simple and intuitive way to express risk that people with different levels of expertise in risk management can easily understand. It also allows for quick and easy consensus-building among different organizational groups.

One benefit of using the 1-5 scale, noted by L. Hoffman and D. Clement (1970), is the value of using “intuitive linguistic variables” for range variables. Another benefit is that a five-point scale avoids the issues found in a three-point scale by allowing wider dispersion among the mid-range values. A simple three-point scale is susceptible to bias: most people are averse to using either the lowest or highest extremes and tend to default to mid-range values.

The conversion of the scale to 20% ranges provides a more precise and objective assessment of risk that can be used to make informed decisions about risk management. This allows for a more systematic and consistent approach to risk management that can help organizations identify, assess, and manage risk.

In addition, using the five-point scale can help promote a risk management culture within an organization. Providing a simple and intuitive way to express risk can encourage employees to think more proactively about risk and take appropriate steps to manage risk in their daily activities.

A five-point scale provides a simple and intuitive way to express risk that people with different levels of expertise in risk management can easily understand. Translating the qualitative descriptors of the five-point scale into corresponding 20% probability ranges enhances the precision of risk evaluations, allowing for a more quantifiable and objective approach to risk assessment. Using this scale can help promote a risk management culture within an organization and aid in consensus-building among different organizational groups.

Back-of-the-Napkin Math

This method is an easy way to quantify risk without advanced tools or models. It approximates a more advanced method, Monte Carlo simulation, using the ranges described in the 5-point scale method. It produces a usable approximation but lacks the detail and the meaningful probability distribution charts that a Monte Carlo simulation provides. You only need a sheet of paper and a pen or pencil to use this method, which is why I call it the “back-of-the-napkin” method.
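For comparison, here is a minimal sketch of the Monte Carlo approach that the back-of-the-napkin method approximates. It is not from the original text: it assumes each (min, most likely, max) range can be treated as a triangular distribution, and the input values are simply the hypothetical ones used in the worked example later in this section.

```python
import random

def simulate_risk(threat, likelihood, impact, trials=100_000):
    """Monte Carlo sketch: sample each (low, high, mode) range as a triangular
    distribution and return the mean simulated loss."""
    total = 0.0
    for _ in range(trials):
        t = random.triangular(*threat)       # random.triangular(low, high, mode)
        l = random.triangular(*likelihood)
        i = random.triangular(*impact)
        total += t * l * i
    return total / trials

# Hypothetical inputs, given as (low, high, mode) per random.triangular's signature.
print(simulate_risk(threat=(0.10, 0.30, 0.20),
                    likelihood=(0.20, 0.80, 0.60),
                    impact=(10_000, 50_000, 20_000)))
```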

The Three-Point Range Values

Using three-point values is a simple and effective way to express a range, such as the level of threat and likelihood associated with an event or activity. The three values are the minimum, the most likely, and the maximum.

When we quantify risk, we use the formula Threat x Likelihood = Risk . Each of these (threat, likelihood, and risk) is expressed as a range.

To this equation, we can add the impact as a way to rate the risk. Risk x Impact = Rating

The impact can be financial or operational, and what counts as a Very High or Very Low impact is always established by the organization. If the impact is financial, it is expressed as a dollar value.

Let’s look at how the three-point values are used to quantify risk.

Assume the threat values are .10, .20, and .30. Then assume the likelihood values are .20, .60, and .80. How do we multiply ranges?

Follow these steps to multiply two 3-value ranges:

  • Multiply the first value of the first range by the first value of the second range.
  • Multiply the second value of the first range by the second value of the second range.
  • Multiply the third value of the first range by the third value of the second range.

[.10 .20 .30] x [.20 .60 .80] = [.10 x .20] [.20 x .60] [.30 x .80]

Now, just give the final three values.

.10 x .20 = .02

.20 x .60 = .12

.30 x .80 = .24

You get the following range [.02 .12 .24].

Now, let’s estimate the range for impact and calculate the rating. Assume $10K, $20K, and $50K as the impact values, then multiply the risk range we just calculated by the impact range.

[.02 .12 .24] x [$10K $20K $50K] = [$200 $2,400 $12,000]

.02 x $10,000 = $200

.12 x $20,000 = $2,400

.24 x $50,000 = $12,000
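This element-wise arithmetic is easy to script. The sketch below is mine (the function name multiply_ranges is not from the text); it simply reproduces the threat, likelihood, and impact ranges used in the example above.

```python
def multiply_ranges(a, b):
    """Element-wise product of two (min, most likely, max) ranges."""
    return tuple(round(x * y, 4) for x, y in zip(a, b))

threat     = (0.10, 0.20, 0.30)
likelihood = (0.20, 0.60, 0.80)
risk       = multiply_ranges(threat, likelihood)    # -> (0.02, 0.12, 0.24)

impact     = (10_000, 20_000, 50_000)               # dollars
rating     = multiply_ranges(risk, impact)          # -> (200.0, 2400.0, 12000.0)
print(risk, rating)
```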

Developing a Range Estimate from a Single Point Value

In many instances, you will only have a single-point value, such as the percentage of assets missing a patch. In this case, you can use the single point value as your most likely value and add +/- 10% to get a 20% range.

Example: If 20% of workstations are missing a patch, you could apply +/- 10% to produce the range .10-.20-.30. When using this method, you should note in your communications that this is a +/- 10% estimate based on the initial value of the weakness finding (20% of workstations missing a patch).

Developing a Range from Multiple Variables

When you have multiple variables, one approach to establishing your range is to take the lowest and highest values in the set as your bounds, then establish your mid-point value by subtracting the lowest value from the highest, dividing the difference by 2, and adding the result to the lowest value. BYJU’S, a global EdTech firm, has a basic explainer on ranges at https://byjus.com/maths/range/.

Example: 20% of servers are missing a patch, and 45% of servers have a weak configuration that leaves them open to compromise. We can use 20% as the low value and 45% as the high value. To calculate the mid-range value, we subtract the lower value from the higher value (45-20=25), divide that by 2 (25/2=12.5), and add the result to the lower value (20+12.5=32.5). That gives us .20-.325-.45.
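Both range-building rules can be captured in a couple of helper functions. This is a minimal sketch under the assumptions just described (a fixed +/- 10% spread and a simple min/midpoint/max construction); the function names are my own.

```python
def range_from_point(value, spread=0.10):
    """Build a (min, most likely, max) range by applying +/- spread to a single value."""
    return (round(max(0.0, value - spread), 4), value, round(min(1.0, value + spread), 4))

def range_from_values(values):
    """Use the lowest and highest observed values as the bounds and their midpoint
    as the most likely value."""
    lo, hi = min(values), max(values)
    return (lo, round((lo + hi) / 2, 4), hi)

print(range_from_point(0.20))           # -> (0.1, 0.2, 0.3)
print(range_from_values([0.20, 0.45]))  # -> (0.2, 0.325, 0.45)
```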


Figure 9 Back-of-the-Napkin Worksheet

Case Studies

For each of the scenarios provided, use the five-point scale to develop estimates of threat (the weakness), likelihood (the likelihood that the weakness will be leveraged against the organization), risk, impact (a range of financial cost), and the resulting rating (score). Reading and understanding the examples will guide your evaluation process and prepare you for the module quiz and final project.

The Branch Manager

As the branch manager sat in her office, she received an urgent message from the corporate security team about a newly released patch that addressed a critical vulnerability in the company’s network. Concerned about the potential risk to her branch, she immediately contacted the network operations group to inquire about the patch.

The network administrator reviewed the vulnerability data and determined that 28% of their web servers required the patch. She knew that this involved a significant number of web servers and that a critical vulnerability on web-facing servers posed a high risk to the organization.

However, the operations group could not apply the patch for a week due to other scheduled maintenance. The network administrator explained to the branch manager that the patch required significant testing and validation before being deployed to the production environment. She assured the branch manager that the operations group was working diligently to ensure the patch would be deployed as soon as possible.

  • Assign a range to weakness. In this example, we have a percentage of the threat landscape that is missing a required patch, which we can use as the basis for our initial threat range. 28% falls within the low range, so we can use this to justify a low rating for weakness. With 28% as the midpoint, we add +/- 10%, giving us a range of .18-.28-.38 for threat.
  • Assign a range to likelihood. In the example, we are told the missing patch has a critical severity and that it is on web servers. Reviewing our guidance for establishing an initial estimate and considering the criticality of the vulnerability and its location (web servers), we can justify a very high likelihood range of .80-.90-1.0.
  • Set the time period for the estimate. We will use the time period of “until patches are applied.” We could note that the longer this takes, the more the likelihood of compromise increases.
  • Calculate the initial estimate (see the worked arithmetic below).

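As a quick check on the initial estimate, here is a short sketch of the arithmetic using the ranges assigned above; the code and rounding choices are my own, not taken from the original worksheet.

```python
weakness   = (0.18, 0.28, 0.38)   # threat range from the missing-patch finding
likelihood = (0.80, 0.90, 1.00)   # very high likelihood for a critical web-facing flaw

# Element-wise product gives the initial risk estimate.
risk = tuple(round(w * l, 3) for w, l in zip(weakness, likelihood))
print(risk)  # -> (0.144, 0.252, 0.38)
```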

University Case Study

The college has always prided itself on its commitment to technology and innovation. With a sprawling campus and a diverse student population, the college relies heavily on its network infrastructure to provide critical services to its students, faculty, and staff.

However, in recent months, the college has experienced several issues with its network infrastructure. Users across the campus had reported slow performance, intermittent outages, and other issues. Concerned about the potential impact of these issues, the college decided to perform an internal audit of its network infrastructure.

The audit revealed a number of significant issues with the college’s network infrastructure. The most pressing issue was that 70% of the college’s workstations required system upgrades due to recent end-of-life notices that hadn’t been tracked. The previous network administrator had recently left, and it had taken some time for the new administrator to come up to speed. As a result, critical updates and patches had been missed, leaving the college’s network vulnerable to potential cyber-attacks.

The new administrator found that there was little network documentation and, in fact, little segmentation across the campus. This meant that if a cyber-attacker were to gain access to one part of the network, they would have access to the entire network.

The new administrator was alarmed by the audit’s findings. She knew that the college’s network was vulnerable to potential cyber-attacks and that urgent action was needed to address the issues.

As she continued to review the network infrastructure, the new administrator read about a recent cyber-attack at another university. In that attack, the threat actor had moved laterally across the network and was able to compromise and exfiltrate sensitive data from the administration office. The attack had caused significant damage to the university’s reputation and resulted in a loss of trust among students, faculty, and staff.

  • Assign a range to weakness. In this example, we are given the statistic that 70% of workstations are on an unsupported operating system version. We can use this percentage of the threat landscape (workstations) as the basis for an initial estimate. Using 70% as our mid-range value, we get .60-.70-.80, which spans moderate to high.
  • Assign a range to likelihood. For likelihood, we consider the network’s lack of segmentation and documentation and the recent attack on another university in which this weakness was leveraged, resulting in the exfiltration of sensitive data. This activity raises the likelihood that the university would be a target. We can use a range of very high, giving us .80-.90-1.0.


  • Assign a range to impact . We can consider the impact experienced by the recent attack at another university as a potential impact on this university, given the lack of segmentation and documentation. We also know that 70% of workstations (including administrative) use an unsupported operating system. Combined, we can justify a very high impact range of .80-.90-1.0.


  • Indicate applicable time period. We considered two key variables: vulnerable workstations and lack of network segmentation. Both of these would need to be addressed to change the risk, impact, or rating. When we indicate our applicable time periods, we need to note this and state that this estimate is applicable until these weaknesses are sufficiently addressed.
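Putting the three ranges above together, a short sketch of the arithmetic (my own, using the element-wise method from earlier in this section) looks like this:

```python
weakness   = (0.60, 0.70, 0.80)   # unsupported workstations
likelihood = (0.80, 0.90, 1.00)   # lack of segmentation plus an active threat
impact     = (0.80, 0.90, 1.00)   # very high potential impact

risk   = tuple(round(w * l, 3) for w, l in zip(weakness, likelihood))   # (0.48, 0.63, 0.8)
rating = tuple(round(r * i, 3) for r, i in zip(risk, impact))           # (0.384, 0.567, 0.8)
print(risk, rating)
```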

Health Care Facility Case Study

As the HIPAA compliance auditor arrived at the healthcare provider, she was ready to conduct a thorough audit of their HIPAA compliance measures. The healthcare provider had hired the auditor to identify any system vulnerabilities and provide recommendations for improvement.

As the auditor began her assessment, she quickly identified several areas of concern. She discovered that over 60% of the staff were not provided with HIPAA compliance training. The auditor found that the healthcare provider had not implemented a comprehensive training program to educate their staff on HIPAA compliance policies and procedures. This presented a significant risk, as the staff may unknowingly violate HIPAA regulations, leading to potential legal and financial liabilities.

In addition, the auditor found that 12% of the staff did not have dedicated laptops. This created a risk of unauthorized access to patient information, as multiple staff members with varying degrees of “need to know” shared laptops, potentially allowing staff who did not have the “need to know” to access patient records.

The auditor also discovered that 48% of the logging system was missing or inoperable because certain network configurations had only been partially implemented. This meant that the healthcare provider could not reliably track and monitor access to patient records. As a result, they could suffer a privacy violation or loss of sensitive information without being aware of it, which could expose them to civil penalties or even criminal charges.

The auditor also found that patient data was not partitioned from other data on the network. This presented a significant risk, as the healthcare provider’s network could be compromised by external threat actors, and the lack of data partitioning could allow lateral movement, resulting in sensitive data being stolen or ransomed.

After compiling her assessment, the auditor concluded that the healthcare provider’s HIPAA compliance posture had significant weaknesses, with a significant risk of unauthorized internal access. She noted that the lack of HIPAA compliance training, the inadequate number of dedicated laptops, the missing logging coverage, and the lack of data partitioning presented a significant risk of HIPAA violations and data breaches. She estimated that the healthcare provider’s legal liability from the identified weaknesses could be significant, as the provider could be held responsible for any financial losses or damages suffered by patients due to a breach.

The auditor’s report included detailed recommendations for the healthcare provider to improve their HIPAA compliance measures. She advised the provider to implement a comprehensive HIPAA compliance training program to educate their staff on HIPAA regulations and procedures. She also recommended that the provider increase the number of laptops from 132 to 150 to ensure that patient records were not left unintentionally exposed to staff who lacked the “need to know.”

To address the missing logging system, the auditor recommended that the healthcare provider implement a comprehensive system that tracks and monitors access to patient records. She advised the provider to implement least privilege role-based access controls and appropriate network segmentation to separate patient data from other network data.

The estimated cost to implement the auditor’s recommendations was significant: the healthcare provider would need to invest between $50,000 and $100,000.

  • Estimate the weakness. We can use the 12% estimate of missing dedicated laptops as the basis for estimating the weakness as a percentage of the threat landscape. Applying +/- 10% gives a very low estimate of .02-.12-.22. The lack of sufficient data separation was linked to the risk of external threat actors moving laterally and potentially stealing or ransoming sensitive data. The lack of logging is a concern, but it is not a weakness that can be leveraged to launch an attack; rather, it results in a lack of visibility and awareness.
  • Estimate the likelihood. We can use the 60% of staff lacking training to estimate the likelihood of inadvertent unauthorized access to patient-sensitive data. We could use a .50-.60-.70 range, or moderate to high. We have insufficient data to estimate the likelihood of an external attack because no relevant weaknesses were identified in the audit.


Accounting Firm Case Study

The cybersecurity auditor arrived at the accounting firm of Smith and Associates, ready to conduct a thorough audit of their cybersecurity measures. The firm had hired the auditor to identify any system vulnerabilities and provide recommendations for improvement.

As the auditor began his assessment, he quickly identified several areas of concern. He discovered that 67% of the firm’s workstations had outdated software, including operating systems and applications. This presented a significant risk, as obsolete software can contain known vulnerabilities that cyber-attackers can exploit.

In addition, the auditor found that 29% of the workstations had outdated anti-virus software. This was a significant concern, as anti-virus software is the first line of defense against malware and other cyber threats. Outdated anti-virus software can be ineffective against new and emerging threats, leaving the firm’s systems vulnerable to attack.

The auditor also discovered that the firm’s public-facing web server had multiple SQL vulnerabilities. SQL vulnerabilities are a common target for cyber-attackers, as they can be exploited to gain unauthorized access to databases and steal sensitive data. The auditor was particularly concerned about this vulnerability, as it posed a significant risk to the firm’s clients and their confidential financial information.

After completing his assessment, the auditor stated that the firm’s cybersecurity posture had several significant weaknesses that could likely be leveraged in an attack. He noted that the outdated software and anti-virus, combined with the SQL vulnerabilities on the public-facing web server, created a significant risk of cyber-attack. He recommended that the firm immediately address these vulnerabilities and improve its cybersecurity posture.

According to a recent report by IBM, the average data breach cost is $3.86 million. This includes costs associated with detecting and containing the breach, notifying affected individuals, and providing identity theft protection services. The report also found that the cost per lost or stolen record containing sensitive information was $180.

If the accounting firm suffered a data breach, the financial impact could be substantial. For example, if the attackers had stolen 10,000 client records, the cost of the breach could have been $1.8 million.

  • Estimate the weakness. We have two weaknesses related to the workstations: 67% are using outdated operating systems and applications, and 29% have outdated anti-virus. We subtract the lowest value from the highest value (67-29=38), divide that by 2 (38/2=19), and add the result to the lowest value (29+19=48). That gives us the range of .29-.48-.67, which spans low to high. We have one web server with an SQL vulnerability, which we consider very high by default. That range is .80-.90-1.0.
  • Estimate the likelihood. For the workstations, we will estimate the likelihood as high, or .60-.70-.80. We will estimate the likelihood of compromise for the web server as very high, or .80-.90-1.0.


  • Estimate the risk rating for the workstations and the web server, each based on a $50,000, $550,000, and $2,000,000 cost range, and compare them to determine which source is more likely to result in a higher financial impact. In this example, we are not splitting the financial cost between two probable risk sources; rather, we are comparing the two potential sources of a data breach against the same potential financial impact and comparing the resulting ratings, which are given in financial terms. (A sketch of this comparison appears below.)

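A short sketch of that comparison, written by me rather than taken from the original worksheet, is shown below. Treat the dollar values in the cost tuple as placeholders and substitute whatever three-point cost range your exercise specifies.

```python
def multiply(a, b):
    """Element-wise product of two (min, most likely, max) ranges."""
    return tuple(round(x * y, 3) for x, y in zip(a, b))

# Three-point cost range (placeholder values; substitute the range your exercise uses).
cost = (50_000, 550_000, 2_000_000)

# Risk = weakness x likelihood for each potential source.
workstation_risk = multiply((0.29, 0.48, 0.67), (0.60, 0.70, 0.80))
web_server_risk  = multiply((0.80, 0.90, 1.00), (0.80, 0.90, 1.00))

# Rating = risk x impact, expressed in dollars.
print("workstations:", multiply(workstation_risk, cost))
print("web server:  ", multiply(web_server_risk, cost))
# The web server yields the higher rating at every point in the range.
```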

Cybersecurity Risk Quantification Copyright © by Charlene Deaver-Vazquez. All Rights Reserved.


Risk Assessment Case Studies | Machine Safety Specialists


What are “Unbiased Risk Assessments”?

Unbiased Risk Assessments are guided by safety experts who have your best interests in mind. Product companies, integrators, and solution providers may steer you toward expensive, overly complex technical solutions. Machine Safety Specialists provides unbiased Risk Assessments. See examples below.

Biased  risk assessments can happen when a safety products company, integrator, or solution provider participates in the risk assessment. The participant has a conflict of interest and may steer you towards overly expensive or complex solutions that they want to sell you.  Some safety product companies will do anything to get involved in the risk assessment, knowing they will “make up for it” by selling you overly expensive solutions.  Safety product companies have sales targets and you could be one of them.


Machine Safety Specialists are experts in OSHA, ANSI, NFPA, RIA and ISO/EN safety standards. We can solve your machine safety compliance issues, provide  unbiased  Risk Assessments, or help you develop your corporate Machine Safety and Risk Assessment program.

Case Study: Machine Safety Verification and Validation

A multi-national food processing company had a problem.  A recent amputation at a U.S. food processing plant generated negative publicity, earned another OSHA citation, and caused significant financial losses due to lost production.  Another amputation, if it occurred, would likely result in more lost production, an OSHA crackdown, and, if posted on social media, irreparable damage to the company’s brand.

After multiple injuries and OSHA citations, the company contacted MSS for help. First, the company needed to know if the existing machine safeguarding systems provided “effective alternative protective measures” as required by OSHA. MSS was contracted through the company’s legal counsel to audit three (3) plants with various types of machines and deliver detailed machine safety compliance reports for each machine to the client under attorney-client privilege. Our safeguarding audit report for one plant summarized the machines by risk level.


For the 19 high-risk and 8 medium-risk (poorly guarded) machines, risk reduction measures applying the hierarchy of controls were required. MSS provided a machine safeguarding specification for the machines and worked with our client to select qualified local fabricators and integrators that performed the work on an aggressive schedule. MSS provided specifications and consulting services, and our client contracted the fabrication and integration contractors directly, under the guidance of MSS.

During the design phase of the Machine Safeguarding implementation, MSS provided safety  verification services  and detailed design reviews.  Due to the stringent legal requirements and the need for global compliance, the safety design verification included  SISTEMA analysis  of the functional safety systems.  MSS provided a detailed compliance report with written compliance statements covering OSHA compliance, hazardous energy controls (Lockout/Tagout, LOTO, OSHA 1910.147) and effective alternative protective methods, per OSHA’s minor servicing exception.  Due to the complexity of the machines and global safety requirements, MSS verified and validated the machine to numerous U.S. and international safety standards, including ANSI Z244.1, ANSI B11.19, ANSI B11.26, ISO 14120, ISO 13854, ISO 13855, ISO 13857 and ISO 13849.

Then, after installation and before placing the machines into production, MSS was contracted to perform safety validation services as required by the standards.  During the validation phase of the project, MSS traveled to site to inspect all machine safeguarding and validate the functional safety systems.  This safety validation included all aspects of the safety system, including barrier guards, interlocked barrier guards, light curtains, area scanners, the safety controllers and safety software, safety servo systems, variable frequency safety drives (safety VFDs), and pneumatic (air) systems.

After the machine safeguarding design verification, installation, and safety system validation, MSS provided updated risk-level data in the executive summary of the report.


By involving Machine Safety Specialists (MSS) early in the project, you can ensure your project complies with OSHA, ANSI/RIA, NFPA, and ISO safety standards. By helping implement the project, MSS kept this safety project on track. Our validation testing and detailed test reports provide peace of mind, and evidence of due diligence if OSHA pays you a visit. Contact MSS for all of your machine safety training, safeguarding verification, and on-site functional safety validation needs.

Case Study:  Collaborative Robot System


  • Is this Collaborative Robot system safe?
  • How can we validate the safety of the Collaborative Robot system before duplicating it?
  • If we ship these globally, will we comply with global safety standards?
  • What if the Collaborative Robot hurts someone?
  • What about OSHA?

The OEM called Machine Safety Specialists (MSS) to solve these concerning problems.   Prepared to help, our TÜV certified Machine Safety engineers discussed the Collaborative Robot system, entered an NDA, and requested system drawings and technical information.  On-site, we inspected the Collaborative Robot, took measurements, gathered observations and findings, validated safety functions, and spoke with various plant personnel (maintenance, production, EHS, engineering, etc.).  As part of our investigation, we prepared a gap analysis of the machine relative to RIA TR R15.606, ISO 10218-2, OSHA, ANSI, and ISO standards.  The final report included Observations, Risk Assessments, and specific corrective actions needed to achieve US and global safety compliance.   Examples of our findings and corrective actions include:

  • Identification of the correct safeguarding modes (according to RIA TR R15.606-2016 and ISO/TS 15066-2016).
  • Observation that Area Scanners (laser scanners) provided by the machine builder were not required, given the Cobot’s modes of operation. Recommended removal of the area scanners, greatly simplifying the system.
  • Observation that the safety settings for maximum force, given the surface area of the tooling, provided pressure that exceeds US and global safety requirements. Recommended a minimum surface area for the tooling and provided calculations to the client’s engineers.
  • Observation that the safety settings for maximum speed were blank (not set) and provided necessary safety formulas and calculations to the client’s engineers.
  • Recommended clear delineation of the collaborative workspace with yellow/black marking tape around the perimeter.

With corrective actions complete, we re-inspected the machine and confirmed all safety settings.  MSS provided a Declaration of Conformance to all applicable US and global safety standards.  The customer then duplicated the machines and successfully installed the systems at 12 plants globally, knowing the machines were safe and that global compliance was achieved.   Another success story by MSS…


Case Study:  Robot Manufacturing


The manufacturer hired a robotics integrator, and a brief engineering study determined that the speed and force requirements called for a high-performance Industrial Robot (not a Cobot). The client issued a PO to the integrator, attached a manufacturing specification, and generically required the system to meet “OSHA Standards”. Within 3 months, the robot integrator had the prototype system working beautifully in their shop and was requesting final acceptance of the system. This is when the second problem hit – the US manufacturer experienced a serious robot-related injury.

In the process of handling the injury and related legal matters, the manufacturer learned that generic “OSHA Standards” were not sufficient for robotic systems.  To prevent fines and damages in excess of $250,000, our client needed to make their existing industrial robots safe, while also correcting any new systems in development.   The manufacturer then turned to Machine Safety Specialists (MSS) for help.

Prepared to help, our TÜV certified and experienced robot safety engineers discussed the Industrial Robot application with the client.  MSS entered an NDA and a formal agreement with the client and the client’s attorney.  On-site, MSS inspected the Industrial Robot system, took measurements, gathered observations and findings, tested (validated) safety functions, and met with the client’s robotics engineer to complete a compliance checklist.   As part of our investigation, we prepared a Risk Assessment in compliance with ANSI/RIA standards, an RIA compliance matrix, and performed a gap analysis of the industrial robot systems relative to ANSI/RIA standards.  The final report included a formal Risk Assessment, a compliance matrix, our observations, and specific corrective actions needed to achieve safety compliance.

Examples of our findings and corrective actions included:

  • A formal Risk Assessment was required in compliance with ANSI/RIA standards (this was completed by MSS and the client as part of the scope of work).
  • Critical interlock circuitry needed upgrading to Category 3, PL d, as defined by ISO 13849. (MSS provided specific mark-ups to the electrical drawings and worked with the integrator to ensure proper implementation.)
  • The light curtain reset button was required to be relocated. (MSS provided specific placement guidance.)
  • The safeguarding reset button was required to be accompanied by specific administrative controls. (MSS worked with the integrator to implement these into the HMI system and documentation.)
  • The robot required safety soft limits to be properly configured and tested (Fanuc: DCS, ABB: SafeMove2).
  • Specific content needed to be added to the “Information for Use” (operation and maintenance manuals).

With corrective actions complete, MSS re-inspected the machine, verified safety wiring, validated the safety functions and provided a Declaration of Conformance for the robot system. The customer then accepted the system, commissioned, and placed it into production.  The project was then deemed a huge success by senior management.   The industrial robot system now produces high-quality assemblies 24/7, the project team feels great about safety compliance, and the attorneys are now seeking other opportunities.   Another success story by MSS…

Case Study: Manufacturing Company


Another question….

Q:  Which safety product company do we trust to perform a risk assessment with your best interest in mind? A:  None of them. Companies selling safety products have a hidden agenda – sell the most products and charge insane dollars for installation! Machine Safety Specialists are safety engineers and consultants who have your best interest in mind. We will conduct an unbiased Risk Assessment and recommend the most sensible, lowest cost, compliant safeguards on the market – with no hidden sales agenda!

Case Study: Machine Safeguarding Example

One photo, two points of view… Safety product company recommendation:

“Wow – this customer needs $50K of functional safety equipment on each machine. Add light curtains, safety system, software, etc.…” Problem solved for $50,000.

MSS Recommendation:

“Bolt down the existing guard, add an end cap, remove sharp edges, and secure the air line. Add a warning sign with documented training.…” Problem solved for $50. Once again, this really happened – don’t let it happen to you!

Case Study: Risk Reduction

“Machine Safety Specialists’ comprehensive approach to risk reduction ensured the most complete, sensible, and least expensive solution for compliance.” – Safety Manager

Green circle: We use all methods of risk reduction (elimination, signs, training) – not just guards and protective devices. This is the least expensive and most comprehensive approach. Red circle: Guarding-company methods of risk reduction (guards and protective devices) are very expensive, time consuming, and do not mitigate all of the risk.

Case Study - Why Perform a Risk Assessment?

Another frequently asked question is: “Why do I need a Risk Assessment?” To answer this, please see the case study “Applicable U.S. Machine Safety Codes and Standards,” then see below.

Why Perform a Risk Assessment? A written workplace hazard assessment is required by law. In section 1910.132(d)(2), OSHA requires a workplace hazard analysis to be performed. The proposed risk assessment fulfills this requirement with respect to the machine(s).

1910.132(d)(2): “The employer shall verify that the required workplace hazard assessment has been performed through a written certification that identifies the workplace evaluated; the person certifying that the evaluation has been performed; the date(s) of the hazard assessment; and, which identifies the document as a certification of hazard assessment.”

A Risk Assessment (RA) is required by the following US standards:

  • ANSI Z244.1
  • ANSI B11.19
  • ANSI B155.1
  • ANSI / RIA R15.06

Please note the following excerpt from an actual OSHA citation:

“The machines which are not covered by specific OSHA standards are required under the Occupational Safety and Health Act (OSHA Act) and Section 29 CFR 1910.303(b)(1) to be free of recognized hazards which may cause death or serious injuries.”

In addition, the risk assessment forms the basis of design for the machine safeguarding system. The risk assessment is a process by which the team assesses risks, risk reduction methods, and team acceptance of the solution. This risk reduction is key in determining the residual risks to which personnel are exposed. Without a risk assessment in place, you are in violation of U.S. safety standards, and you may be liable for injuries from the unassessed machines.


Case Study:  Applicable U.S. Machine Safety Codes and Standards

We are often asked: “What must I do for minimum OSHA compliance at our plant? Do I have to follow ANSI standards? Why?” The following information explains our answer. Please note the following excerpt from an actual OSHA citation:

 “These machines must be designed and maintained to meet or exceed the requirements of the applicable industry consensus standards.  In such situations, OSHA, may apply standards published by the American National Standards Institute (ANSI), such as standards contained in ANSI/NFPA 79, Electrical Standard for Industrial Machinery, to cover hazards that are not covered by specific OSHA standards .”

  U.S. regulations and standards used in our assessments include:

  • OSHA 29 CFR 1910, Subpart O
  • Plus, others as applicable….

Please note the following key concepts in the U.S. Safety Standards:

  • Control Reliability as defined in ANSI B11.19 and RIA 15.06
  • Risk assessment methods in ANSI B11.0, RIA 15.06, and ANSI/ASSE Z244.1
  • E-Stop function and circuits as defined in NFPA 79 and ANSI B11.19
  • OSHA general safety regulations as defined in OSHA 29 CFR 1910 Subpart O – Section 212
  • Power transmission, pinch and nip points as defined in OSHA 29 CFR 1910 Subpart O -Section 219
  • Electrical Safety as defined in NFPA 79 and ANSI B11.19.

Note: OSHA is now citing for failure to meet ANSI B11.19 and NFPA 79.


Osum: A Powerful Risk Assessment Example

6 May, 2024

Understanding Risk Assessment

When it comes to managing risks in any project or workplace, a thorough risk assessment is essential. By understanding the definition, importance, and legal requirements associated with risk assessment, project managers can effectively identify and mitigate potential risks.

Definition and Importance

Risk assessments are a systematic process of identifying, analyzing, and controlling hazards and risks in a situation or place. The primary goal of a risk assessment is to determine measures to eliminate or control those risks, prioritizing them based on their likelihood and impact on the business ( SafetyCulture ).

The importance of risk assessment cannot be overstated. It plays a crucial role in preventing accidents, injuries, and financial losses. By proactively identifying and addressing potential risks, organizations can create a safer work environment, protect their employees and customers, and safeguard their reputation. Additionally, risk assessments help organizations comply with legal requirements and industry regulations.

Legal Requirements

In many countries, risk assessments are required by law to ensure the health and safety of employees and customers. For example, in the United States, the Occupational Safety and Health Administration (OSHA) mandates risk assessments to determine the personal protective gear and equipment needed for workers. Different industries may have specific guidelines for risk assessments due to varying types of risks ( SafetyCulture ).

For instance, the Environmental Protection Agency (EPA) in the US specializes in assessing hazards related to humans, animals, chemicals, and ecological factors. In the UK, conducting risk assessments is a legal requirement under the Health and Safety at Work Act. These regulations emphasize the importance of risk assessments in maintaining a safe workplace and protecting individuals from harm.

To ensure the effectiveness of risk assessments, it is crucial to employ competent individuals with experience in assessing hazard severity, likelihood, and control measures. These competent persons play a vital role in carrying out the risk assessment process, which involves planning, identifying hazards, evaluating risks, deciding on control measures, documenting findings, and reviewing and updating assessments as necessary ( SafetyCulture ).

By understanding the definition and importance of risk assessment, as well as the legal requirements associated with it, project managers can prioritize the safety of their teams, clients, and stakeholders. Implementing a comprehensive risk assessment process helps to identify potential risks, develop appropriate control measures, and ultimately mitigate risks with confidence.

Conducting a Risk Assessment

When it comes to conducting a risk assessment, it is crucial to have competent individuals with experience in assessing hazard severity, likelihood, and control measures. The process involves several steps, ensuring a comprehensive evaluation of potential risks and the development of effective control strategies. Let’s explore the two key aspects of conducting a risk assessment: competency and process, as well as the tools and techniques involved.

Competency and Process

To conduct a risk assessment, it is important to have competent individuals who possess the necessary knowledge and expertise in risk assessment methodologies. These individuals should be able to understand and evaluate the potential hazards, assess their severity and likelihood, and determine appropriate control measures. Competency in risk assessment ensures a thorough and reliable assessment of risks.

The risk assessment process generally consists of five stages:

  • Identify Hazards: This stage involves identifying potential hazards that may pose a risk to the organization or project. Hazards can range from physical risks to operational, financial, or reputational risks.
  • Evaluate Risks: Once hazards are identified, the next step is to evaluate the risks associated with each hazard. This involves assessing the severity of the risk and the likelihood of it occurring. This evaluation helps prioritize risks and allocate resources effectively.
  • Decide on Control Measures: Based on the evaluation of risks, control measures are determined to mitigate or eliminate the identified risks. These control measures can include preventive actions, protective equipment, safety protocols, or process improvements.
  • Document Findings: It is essential to document the findings of the risk assessment process. This documentation serves as a reference for future assessments, helps in monitoring the effectiveness of control measures, and ensures transparency and accountability.
  • Review and Update Assessment: Risk assessments are not a one-time activity. They should be periodically reviewed and updated to reflect changes in the organization, project, or external factors. Regular reviews ensure that the risk assessment remains relevant and effective.

Tools and Techniques

Various tools and techniques are available to facilitate the risk assessment process. These tools help in organizing and analyzing data, making informed decisions, and controlling risks effectively. Some commonly used tools and techniques include:

  • Risk Matrices: A risk assessment matrix is a visual tool that helps in assessing and prioritizing risks based on their severity and likelihood. It provides a structured approach to understanding and communicating risks.
  • Decision Trees: Decision trees are graphical representations that help in evaluating different courses of action based on the potential outcomes and associated risks. They assist in making informed decisions by considering various scenarios.
  • Failure Modes and Effects Analysis (FMEA): FMEA is a systematic approach used to identify potential failure modes, assess their effects, and prioritize risks based on their impact. It is commonly used in industries such as manufacturing and healthcare to proactively manage risks.
  • Bowtie Models: Bowtie models are a visual representation of risks, controls, and consequences. They help in understanding the relationship between hazards, potential consequences, and the effectiveness of control measures.

By employing these tools and techniques, organizations can enhance the efficiency and effectiveness of their risk assessment processes, making well-informed decisions and prioritizing control measures effectively.
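As a simple illustration of the first of these tools, a risk matrix can be reduced to a scoring function. This sketch is mine and is not drawn from any of the tools named above; the 1-5 scores, the multiplication, and the band thresholds are common conventions but should be treated as assumptions.

```python
# Minimal sketch of a risk matrix: likelihood and severity are each rated 1 (very low)
# to 5 (very high), and the product determines the priority band.
def risk_band(likelihood, severity):
    """Classify a risk by multiplying its 1-5 likelihood and severity scores."""
    score = likelihood * severity
    if score >= 15:
        return "high"
    if score >= 6:
        return "medium"
    return "low"

print(risk_band(likelihood=4, severity=5))  # -> "high"
print(risk_band(likelihood=2, severity=2))  # -> "low"
```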

Conducting a risk assessment requires competent individuals who follow a structured process and utilize appropriate tools and techniques. By doing so, organizations can identify and mitigate risks, ensuring the safety, security, and success of their projects and operations.

Types of Risk Analysis

When it comes to risk assessment, there are two main types of analysis that organizations employ: qualitative risk analysis and quantitative risk analysis. Each approach offers unique insights and benefits in evaluating and addressing risks.

Qualitative Risk Analysis

Qualitative risk analysis is a method used to identify risks that require detailed analysis and determine the necessary controls and actions based on the risk’s effect and impact on objectives ( ISACA Journal ). This approach focuses on understanding the characteristics of risks and their potential consequences, without assigning numerical values.

The key advantage of qualitative risk analysis is its simplicity and ease of implementation. It provides a general picture of how risks affect an organization’s operations by categorizing risks on scales such as High, Medium, or Low ( Drata ). Qualitative risk assessments often involve gathering input from employees or stakeholders to assess and prioritize risks based on their expertise and experience.

While qualitative analysis does not provide precise quantitative data, it helps organizations gain a better understanding of their risk landscape and identify potential areas of concern. It serves as the foundation for developing risk mitigation strategies and allocating resources effectively.

Quantitative Risk Analysis

Quantitative risk analysis, on the other hand, provides a more objective and accurate assessment of risks by utilizing numerical data and calculations ( ISACA Journal ). This approach involves assigning numerical values to risks, allowing for a deeper understanding of their potential impact and probability.

Quantitative risk analysis is particularly useful for developing a probabilistic assessment of high-priority and/or high-impact risks. It requires high-quality data, a well-developed project model, and a list of business or project risks ( ISACA Journal ). By assigning numerical ratings to risks, organizations can prioritize their mitigation efforts and allocate resources accordingly.

One commonly used method in quantitative risk analysis is the calculation of the Annual Loss Expectancy (ALE). ALE helps determine the expected monetary loss for an asset or investment over a single year. It is calculated by multiplying the Single Loss Expectancy (SLE) with the Annual Rate of Occurrence (ARO) ( ISACA Journal ). This approach provides organizations with a quantitative estimate of potential financial losses associated with specific risks.
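For instance, the ALE formula can be applied directly. The figures below are hypothetical and only illustrate the arithmetic (SLE x ARO); they are not taken from the cited report.

```python
# Hypothetical figures to illustrate ALE = SLE x ARO.
single_loss_expectancy = 180 * 10_000      # e.g., $180 per record x 10,000 records
annual_rate_of_occurrence = 0.25           # e.g., one such breach expected every four years

annual_loss_expectancy = single_loss_expectancy * annual_rate_of_occurrence
print(annual_loss_expectancy)              # -> 450000.0, i.e., $450,000 per year
```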

It’s worth noting that there are also semi-quantitative risk assessments that combine elements of both qualitative and quantitative methodologies. These assessments use numerical scales to assign risk values, enabling more analytical assessments while avoiding complex calculations ( Drata ).

By utilizing qualitative and quantitative risk analysis, organizations can gain a comprehensive understanding of their risks, prioritize their mitigation efforts, and make informed decisions to safeguard their objectives. The choice between these two methods depends on the organization’s specific needs, available resources, and the level of detail required for risk evaluation and management.

Risk Assessment Examples

When it comes to risk assessment, it can be helpful to examine real-world examples to better understand how the process works and how it can be applied in different industries. In this section, we will explore industry-specific examples and case studies provided by SafetyCulture to illustrate the application of risk assessment.

Industry-specific Examples

Risk assessment is a versatile tool that can be implemented across various industries to identify and manage potential hazards. Here are some industry-specific examples:

  • Construction: In the construction industry, risks such as falls from heights, electrical hazards, and equipment failure can pose significant threats to worker safety. A comprehensive risk assessment would involve identifying these hazards, evaluating the likelihood and severity of potential incidents, and implementing measures to mitigate the risks.
  • Transport and Logistics: In the transport and logistics sector, risks associated with vehicle accidents, cargo handling, and warehouse operations need to be carefully assessed. Risk assessment in this industry would involve evaluating factors such as driver training, vehicle maintenance, load securement, and adherence to safety regulations.
  • Manufacturing: Manufacturing facilities are exposed to a wide range of risks, including machinery accidents, chemical exposures, and ergonomic hazards. A thorough risk assessment would involve analyzing the production processes, identifying potential risks, and implementing control measures such as machine guarding, personal protective equipment (PPE), and employee training.
  • Retail: In the retail industry, risks can include slips and falls, manual handling injuries, and workplace violence. A comprehensive risk assessment in this context would involve identifying hazards specific to the retail environment, such as wet floors, heavy lifting, and customer interactions, and implementing measures to minimize these risks.
  • Energy: The energy sector faces risks related to hazardous materials, fires, explosions, and electrical incidents. Risk assessment in this industry would involve evaluating the potential hazards associated with energy production, transmission, and distribution, and implementing appropriate safety measures to protect workers and the environment.

SafetyCulture Case Studies

SafetyCulture (formerly iAuditor) provides case studies that demonstrate the practical application of risk assessment in various industries. These case studies offer insights into how organizations have utilized risk assessment to identify and mitigate potential hazards effectively. By examining these examples, project managers can gain a better understanding of how to approach risk assessment in their own industries.

To access detailed case studies and learn more about risk assessment, visit SafetyCulture’s risk analysis examples .

By exploring industry-specific examples and case studies, project managers can gain valuable insights into the application of risk assessment. Remember, risk assessment is a dynamic process that requires continuous evaluation and adaptation to changing circumstances. Utilizing risk assessment templates , risk assessment matrices , and risk assessment tools can further streamline the process and help organizations effectively identify, evaluate, and mitigate risks.

Risk Mitigation Strategies

To effectively manage risks identified during the risk assessment process, organizations employ various risk mitigation strategies. These strategies aim to protect business operations, minimize potential negative impacts, and ensure the overall success of projects or initiatives. Two common risk mitigation strategies are acceptance and avoidance, as well as transfer and management.

Acceptance and Avoidance

Acceptance and avoidance are risk mitigation strategies that organizations can employ depending on the nature and severity of the identified risks.

Acceptance: In certain situations, it may be more practical or cost-effective to accept the risk and its potential consequences. This strategy involves acknowledging the risk and its potential impact, while actively deciding not to take further action to prevent or mitigate it. Acceptance is often chosen when the risk is deemed low or the cost of mitigation outweighs the potential loss.

Avoidance: On the other hand, avoidance involves proactively taking measures to prevent or eliminate the identified risks. This strategy aims to completely avoid the occurrence of the risk and its associated negative outcomes. Avoidance may involve altering project plans, changing processes, or even discontinuing certain activities or initiatives that pose a significant risk.

By carefully assessing the risks and considering the potential consequences, organizations can determine whether acceptance or avoidance is the most appropriate strategy for each specific risk.

Transfer and Management

Another set of risk mitigation strategies includes transfer and management. These strategies focus on shifting or controlling risks through various means.

Transfer: Risk transfer involves transferring the financial burden or responsibility of the risk to another party, typically through insurance or contractual agreements. By transferring the risk, organizations can minimize the potential financial impact and ensure that they are adequately protected against unexpected events. Transferring risks is particularly common for risks that can be insured against, such as property damage or liability claims.

Management: Risk management involves implementing measures and controls to reduce the likelihood or impact of identified risks. This strategy includes actively monitoring and addressing risks through preventive actions, contingency plans, and ongoing risk assessments. Risk management enables organizations to proactively identify, analyze, and respond to risks, minimizing their potential negative effects.

By combining risk transfer and management strategies, organizations can effectively mitigate risks while maintaining control and minimizing potential losses.

Implementing a combination of these risk mitigation strategies allows organizations to address risks from various angles and ensure the successful execution of projects and initiatives. It’s essential to evaluate each risk individually and determine the most appropriate strategy based on its potential impact, likelihood, and feasibility of mitigation.

To learn more about risk assessment and mitigation, refer to our article on risk assessment tools and the risk assessment process .



Risk Assessment for Collaborative Operation: A Case Study on Hand-Guided Industrial Robots

Reviewed: 17 August 2017 Published: 20 December 2017

DOI: 10.5772/intechopen.70607


From the edited volume Risk Assessment, edited by Valentina Svalova


Risk assessment is a systematic and iterative process that involves risk analysis, where probable hazards are identified, and risk evaluation, where the corresponding risks are evaluated along with solutions to mitigate their effects. This article details the outcome of a risk assessment process for a workstation where a large industrial robot is used as an intelligent and flexible lifting tool that can aid operators in assembly tasks. The realization of a collaborative assembly station has several benefits, such as increased productivity and an improved ergonomic work environment. The article describes the design of the layout of a collaborative assembly workstation, which takes into account the safety and productivity concerns of automotive assembly plants. The hazards associated with hand-guided collaborative operations are also presented.

  • hand-guided robots
  • industrial system safety
  • collaborative operations
  • human-robot collaboration
  • risk assessment

Author Information

Varun Gopinath*

  • Division of Machine Design, Department of Management and Engineering, Linköping University, Sweden

Kerstin Johansen

Johan Ölvander

*Address all correspondence to: [email protected]

1. Introduction

In a manufacturing context, collaborative operations refer to specific applications where operators and robots share a common workspace [ 1 , 2 ]. This allows operators and industrial robots to share assembly tasks within the pre-defined workspace—referred to as collaborative workspace—and this ability to work collaboratively is expected to improve productivity as well as the working environment of the operator [ 3 ].

As pointed out by Marvel et al. [ 1 ], collaborative operation implies a higher probability of hazardous situations occurring due to the close proximity of humans and industrial robots. Hazardous situations can lead to serious injury, and therefore safety needs to be guaranteed while developing collaborative applications [ 4 ].

ISO 10218-1 [ 5 ] and ISO 10218-2 [ 6 ] are international standards aimed at specifying safety requirements for the design of industrial robots and robotic systems, respectively. They recognize collaborative applications and list four specific types of collaborative operation, namely (1) safety-rated monitored stop, (2) hand-guiding, (3) speed and separation monitoring, and (4) power and force limiting, which can be implemented individually or in combination.

As industrial robots and robotic systems are designed and integrated into specific manufacturing applications, the safety standards state that a risk assessment needs to be conducted to ensure safe and reliable operations. Risk assessment, as standardized in ISO 12100 [ 7 ], is a detailed and iterative process of (1) risk analysis followed by (2) risk evaluation. The safety standards also state that the effect of residual risks needs to be eliminated or mitigated through appropriate risk reduction measures. The goal of a risk assessment program is to ensure that operators, equipment, and the environment are protected.

As pointed out by Ericson [ 8 ], hazard identification is a critical step: the cognitive process of recognizing a hazard is the difficult part, whereas the solutions to mitigate the associated risks are relatively straightforward. Etherton et al. noted that designers lack a database of known hazards during the innovation and design stages [ 9 ]. The robot safety standards (ISO 10218-1 [ 5 ] and ISO 10218-2 [ 6 ]) also tabulate a list of significant hazards, whose purpose is to inform risk assessors of probable inherent dangers associated with robots and robotic systems. Therefore, a case study [ 10 ] is used to investigate the characteristics of the hazards and the associated risks that are relevant for collaborative operation. The study focuses on a collaborative assembly station, where large industrial robots and operators are to share a common workspace, enabled through the application of a systematic and standardized risk assessment process followed by risk reduction measures.

This article is structured as follows: Section 2 presents an overall description of the methodology used to conduct the research, along with its limitations; Section 3 details the theoretical background; and Section 4 presents the results, followed by a discussion and concluding remarks on future work.

1.1. Background

Recently, there have been many technological advances within the area of robot control that aim to solve perceived issues associated with robot safety [ 11 ]. A safe collaborative assembly cell, where operators and industrial robots collaborate to complete assembly tasks, is seen as an important technological solution for several reasons, including (1) the ability to adapt to market fluctuations and trends [ 12 ], (2) the possibility to decrease takt time [ 13 , 14 ], and (3) an improved working environment through a reduced ergonomic load on the operator [ 15 ].

The automotive assembly context considered in this study can be characterized by:

  • having a high production rate, where the capacity of the plant can vary significantly depending on several factors, such as variant, plant location, etc.;

  • being dependent on manual labor, as the nature of assembly tasks requires highly dexterous motion and good hand-eye coordination along with general decision-making skills.

Though operators are often aided by powered tools, such as pneumatic nut-runners and lifting tools, to carry out assembly tasks, there is a need to improve the ergonomics of their work environment. As pointed out by Ore et al. [ 15 ], there is demonstrable potential for collaborative operations to aid operators in various tasks, including assembly and quality control.

Earlier attempts at introducing automation devices, such as cobots [ 13 , 16 ], resulted in custom machinery that functions as ergonomic support. More recently, industrial robots specifically designed for collaboration, such as the UR10 [ 17 ] and the KUKA iiwa [ 18 ], have become available. They can be characterized by (1) the ability to detect collisions with any part of the robot structure and (2) a smaller payload and shorter reach compared to traditional industrial robots. This limited capacity, coupled with the ability to detect collisions, fulfills the conditions for power and force limiting.

Industrial robots that do not have power and force limiting features, such as the KUKA KR210 [ 18 ] or the ABB IRB 6600 [ 19 ], have traditionally been used within fenced workstations. In order to enter a robot workspace, the operator was required to deliberately open a gate, which is monitored by a safety device that stops all robot and manufacturing operations within the workstation. As mentioned before, the purpose of the research project was to explore collaborative operations in which traditional industrial robots are employed for assembly tasks. These robots can carry heavy loads over a long reach, which can be effective for various assembly tasks. However, these advantages also correspond to an inherent source of hazard that needs to be understood and managed with appropriate safety-focused solutions.

2. Working methodology

To take advantage of the physical performance characteristics of large industrial robots, along with advances in sensor and control technologies, a research project, ToMM [ 20 ], comprising members from the automotive industry, research institutes, and academia, was tasked with understanding and specifying industry-relevant safety requirements for collaborative operations.

2.1. Industrial relevance

The requirements for safety that are relevant for the manufacturing industry are detailed in various standards, such as ISO EN 12100 and ISO EN 10218 (parts 1 and 2), which are maintained by organizations such as the International Organization for Standardization (ISO [ 21 ]) and the International Electrotechnical Commission (IEC [ 22 ]). Though these organizations do not have the authority to enforce the standards, a legislative body such as the European Union, through the EU Machinery Directive, mandates compliance with normative standards [ 23 ], which are prefixed with EN before their reference number.

2.2. Problem study and data collection

Several complementary data collection activities were carried out:

  • Regular meetings were held to have detailed discussions with engineers and line managers at the assembly plant [ 24 ].

  • Visits to the plant allowed the researchers to directly observe the functioning of the station. These visits also enabled informal interviews with line workers regarding the assembly tasks as well as the working environment.

  • The researchers participated in the assembly process, guided by the operators, which allowed them to gain an intuitive understanding of the nature of the task.

  • Literature from academia and books, as well as documentation from various industrial equipment manufacturers, was reviewed.

2.3. Integrating safety in early design phase

Introduction of a robot into a manual assembly cell might lead to unforeseen hazards whose potential to cause harm needs to be eliminated or minimized. The machinery safety standard [ 7 ] prescribes conducting a risk assessment followed by risk reduction measures to ensure the safety of the operator as well as of other manufacturing processes. The risk assessment process is iterative and concludes when all probable hazards have been identified and solutions to mitigate their effects have been implemented. This process is usually carried out through a safety program and can be documented according to [ 25 ].

Figure 1 depicts an overview of the safety-focused design strategy employed during the research and development phase. The case study was first analyzed in a conceptual study to understand the benefits of collaborative operations, and the overall robot, operator, and collaborative tasks were specified. Building on the results of the conceptual study, the risk assessment methodology followed by risk reduction was carried out, with each phase supported by the use of demonstrators. Björnsson [ 26 ] and Jonsson [ 27 ] have elaborated the principles of demonstrator-based design along with their perceived benefits, and this methodology has been employed in this research work within the context of safety for collaborative operations.

Figure 1. Overview of the demonstrator-based design methodology employed to ensure a safe collaborative workstation.

3. Theoretical background

This section begins with an overview of industrial robots and then details concepts from hazard theory, industrial system safety and reliability, and the task-based risk assessment methodology.

3.1. Industrial robotic system and collaborative operations

An industrial robot is defined as an automatically controlled, reprogrammable, multipurpose manipulator, programmable in three or more axes, which can be either fixed in place or mobile for use in industrial automation applications [ 28 ]. Figure 2(A) shows an illustration of an articulated six-axis manipulator along with the control cabinet and a teach pendant. The control cabinet houses various control equipment such as motor controller, input/output modules, network interfaces, etc.

Figure 2. (A) An example of a manipulator along with the control box and the teach pendant; examples include the KUKA KR-210 [ 18 ] and the ABB IRB 6620 [ 19 ]. (B) The interaction between the three participants of a collaborative assembly cell within their corresponding workspaces [ 3 ].

The teach pendant is used to program the robot, where each line of code establishes a robot pose (in terms of coordinates x, y, z and angles A, B, C) which, when executed, allows the robot to complete a task. This method of programming is referred to as position control, where individual robot poses are explicitly hard-coded. In contrast to position control, sensor-based control allows motion to be regulated by sensor values. Examples of sensors include vision and force/torque sensors.

On a manufacturing line, robots can be programmed to move at high speed while undertaking repetitive tasks. This mode of operation is referred to as automatic mode and allows the robot controller to execute the program in a loop, provided all safety functions are active. Additionally, ISO 10218-1 [ 5 ] defines a manual reduced-speed mode to allow safe programming and testing of the intended function of the robotic system, where the speed is limited to 250 mm/s at the tool center point. Manual high-speed mode allows the robot to be moved at high speed, provided all safety functions are active, and this mode is used for verification of the intended function.
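The 250 mm/s tool-centre-point limit for manual reduced-speed mode lends itself to a simple illustrative check. The sketch below is a hypothetical example of such a check; the function names and the position-sampling approach are assumptions, not part of any robot vendor's software.

```python
import math

# Hypothetical sketch of a manual reduced-speed check. ISO 10218-1 limits the
# tool centre point (TCP) to 250 mm/s in this mode; names and sampling are invented.

TCP_SPEED_LIMIT_MM_S = 250.0

def tcp_speed(p_prev, p_curr, dt_s):
    """Approximate TCP speed (mm/s) from two sampled positions given in mm."""
    return math.dist(p_prev, p_curr) / dt_s

def reduced_speed_ok(p_prev, p_curr, dt_s):
    """True if the sampled motion stays within the reduced-speed limit."""
    return tcp_speed(p_prev, p_curr, dt_s) <= TCP_SPEED_LIMIT_MM_S

# Example: 2 mm of motion over a 10 ms sample corresponds to 200 mm/s, within the limit.
print(reduced_speed_ok((0, 0, 0), (2, 0, 0), 0.010))  # True
```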

The workspace within the robotic station where robots run in automatic mode is termed Robot Workspace (see Figure 2(B) ). In collaborative operations, where operators and robots can share a workspace, a clearly defined Collaborative Workspace is suggested by [ 29 ]. Though the robot can be moved in automatic mode within the collaborative workspace, the speed of the robot is limited [ 29 ] and is determined during risk assessment.

The four types of collaborative operation can be summarized as follows. Safety-rated monitored stop stipulates that the robot ceases its motion with a category 2 stop when the operator enters the collaborative workspace; in a category 2 stop, the robot decelerates to a stop in a controlled manner.

Hand-guiding allows the operator to send position commands to the robot with the help of a hand-guiding tool attached at or close to the end-effector.

Speed and separation monitoring allows the operator and the robot to move concurrently in the same workspace provided that there is a safe separation distance between them which is greater than the prescribed protective separation distance determined during risk assessment.

Power and force limiting operation refers to robots that are designed to be intrinsically safe and allows contact with the operator provided it does not exert force (either quasi-static or transient contact) larger than a prescribed threshold limit.
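ISO/TS 15066 [ 29 ] gives the detailed formulation of the protective separation distance used in speed and separation monitoring. The sketch below only illustrates the underlying stopping-distance reasoning in a simplified form; the terms and the example values are assumptions and should not be read as the standard's actual formula.

```python
# Simplified illustration of the reasoning behind a protective separation distance
# for speed and separation monitoring. ISO/TS 15066 defines the full formulation;
# the terms and numbers below are simplified assumptions for illustration only.

def protective_separation(v_human_mm_s, v_robot_mm_s, t_react_s, t_stop_s, intrusion_mm):
    """Distance the human covers while the system reacts and the robot stops,
    plus the distance the robot itself travels during the reaction time,
    plus an allowance for intrusion toward the hazard."""
    human_travel = v_human_mm_s * (t_react_s + t_stop_s)
    robot_travel = v_robot_mm_s * t_react_s  # robot keeps moving until it reacts (simplified)
    return human_travel + robot_travel + intrusion_mm

# Example with assumed values: 1600 mm/s walking speed, 0.1 s reaction time,
# 0.3 s stopping time, 500 mm/s robot speed, and a 100 mm intrusion allowance.
print(protective_separation(1600, 500, 0.1, 0.3, 100))  # 790.0 mm
```

The same reasoning explains why lowering the robot speed (as decided in Section 4.3) shrinks the required separation distance.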

3.2. Robotic system safety and reliability

An industrial robot normally functions as part of an integrated manufacturing system (IMS), where multiple subsystems that perform different functions operate cohesively. As noted by Leveson (page 14 [ 30 ]), safety is a system property (not a component property) and needs to be controlled at the system level. This implies that safety as a property needs to be considered in the early design phases, which Ericson (page 34 [ 8 ]) refers to as CD-HAT, or Conceptual Design Hazard Analysis Type. CD-HAT is the first of the seven hazard analysis types, which need to be considered during the various design phases in order to avoid costly design rework.

To realize a functional IMS, a coordinated effort in the form of a system safety program (SSP) [ 8 ], involving participants with various levels of involvement (such as operators, maintenance staff, and line managers), is carried out. Risk assessment and risk reduction processes are conducted in conjunction with the development of an IMS in order to promote safety during development, commissioning, maintenance, upgrades, and finally decommissioning.

3.2.1. Functional safety and sensitive protective equipment (SPE)

Functional safety refers to the use of sensors to monitor for hazardous situations and to take evasive action upon detection of an imminent hazard. These sensors are referred to as sensitive protective equipment (SPE), and their selection, positioning, configuration, and commissioning have been standardized and detailed in IEC 62046 [ 31 ]. IEC 62046 defines the performance requirements for this equipment and, as stated by Marvel and Norcross [ 32 ], when triggered, these sensors use electrical safety signals to trigger the safety functions of the system. The standard includes provisions for two specific types: (1) electro-sensitive protective equipment (ESPE) and (2) pressure-sensitive protective equipment (PSPE). Both are used for the detection of the presence of human beings and can be used as part of the safety-related system [ 31 ].

Electro-sensitive protective equipment (ESPE) uses optical, microwave, and passive infrared techniques to detect operators entering a hazard zone. That is, unlike a physical fence, which physically separates operators from the machinery, ESPE is triggered when an operator enters a specific zone. Examples include laser curtains [ 33 ], laser scanners [ 34 ], and vision-based safety systems such as the SafetyEye [ 35 ].

Pressure-sensitive protective equipment (PSPE) has been standardized in parts 1–3 of ISO 13856 and works on the principle of an operator physically engaging a specific part of the workstation. The parts cover: (1) ISO 13856-1, pressure-sensitive mats and floors [ 36 ]; (2) ISO 13856-2, pressure-sensitive bars and edges [ 37 ]; and (3) ISO 13856-3, bumpers, plates, wires, and similar devices [ 38 ].

3.2.2. System reliability

Successful robotic systems are both safe to use and reliable in operation. In an integrated manufacturing system (IMS), reliability is the probability that a component of the IMS will perform its intended function under pre-specified conditions [ 39 ]. One measure of reliability is the MTTF (mean time to failure), and ranges of this measure have been standardized into five discrete performance levels (PL), ranging from a to e. For example, PL = d refers to 10⁻⁶ > MTTF ≥ 10⁻⁷, which is the required performance level with a category 3 structure in ISO 10218-2 (page 10, Section 5.2.2 [ 6 ]). That is, in order to be viable for industry, the final design of the robotic system should reach or exceed the minimum required performance level.
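As a rough illustration of what an MTTF figure implies, the sketch below computes a survival probability under an exponential failure model. This is an illustration of the reliability concept only; performance levels in the functional safety standards are formally defined in terms of the probability of a dangerous failure per hour rather than raw MTTF, and the numbers here are assumptions.

```python
import math

# Illustrative only: reliability under an exponential failure model.
# Performance levels (PL a-e) are formally defined via the probability of a
# dangerous failure per hour, not raw MTTF; values below are assumptions.

def reliability(mttf_hours: float, mission_hours: float) -> float:
    """Probability that a component survives the mission: R(t) = exp(-t / MTTF)."""
    return math.exp(-mission_hours / mttf_hours)

# Example with assumed values: an MTTF of 100,000 h over 2,000 operating hours per year.
print(f"{reliability(100_000, 2_000):.4f}")  # ~0.9802
```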

3.3. Hazard theory: hazards, risks, and accidents

Ericson [ 8 ] states that a mishap or an accident is an event that occurs when a hazard, or more specifically a hazardous element, is acted upon by an initiating mechanism. That is, a hazard is a prerequisite for an accident to occur. A hazard is defined as a potential source of harm [ 7 ] and is composed of three basic components: (1) the hazardous element (HE), (2) the initiating mechanism (IM), and (3) the target/threat (T/T).

A hazardous element is a resource that has the potential to create a hazard. A target/threat is the person or the equipment directly affected when the hazardous element is activated by an initiating mechanism. These three components, combined, constitute a hazard (see Figure 3(A) ) and are all essential for it to exist. Based on these definitions, if any of the three components is removed or eliminated by any means (see Section 3.4.2), it is possible to eliminate or reduce the effect of the hazard.

Figure 3. (A) The hazard triangle, where the three components of a hazard (hazardous element, initiating mechanism, and target/threat) are essential and required for the hazard to exist (adapted from page 17 [ 8 ]). (B) The layout of the robotic workstation where a fatal accident took place on July 21, 1984 [ 40 ].

To better illustrate these concepts, consider the fatal accident that took place on July 21, 1984, when an experienced operator entered a robotic workstation while the robot was in automatic mode (see Figure 3(B) ). The robot was programmed to grasp a die-cast part, dip the part in a quenching tank, and place it on an automatic trimming machine. According to Sanderson et al. [ 40 ], the operator was found pinned between the robot and a safety pole by an operator of an adjacent die-cast station, who became curious after hearing the hissing noise of the air hose for 10–15 minutes. The function of the safety pole was to limit robot motion, and together with the robot arm it can be considered a hazardous element. The hazard was initiated by the operator, who intentionally entered the workstation either by jumping over the rails or through a 19-inch unguarded gap, and this caused the accident. The operator was the target of this unfortunate accident and was pronounced dead five days after the accident.
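The three-component view of a hazard maps naturally onto a simple record structure, which can help keep hazard logs consistent. The sketch below is a hypothetical illustration; the field names follow the HE/IM/T-T decomposition above, and the example encodes the 1984 accident just described.

```python
from dataclasses import dataclass

# Hypothetical sketch: recording a hazard by its three components, following the
# hazard-triangle decomposition (HE, IM, T/T). Field names are assumptions.

@dataclass
class Hazard:
    hazardous_element: str     # HE: the resource with the potential to create harm
    initiating_mechanism: str  # IM: what activates the hazardous element
    target_threat: str         # T/T: who or what is harmed when the hazard is actuated

# The 1984 die-cast accident expressed in terms of the three components.
die_cast_accident = Hazard(
    hazardous_element="Robot arm moving in automatic mode near the safety pole",
    initiating_mechanism="Operator deliberately enters the fenced workstation",
    target_threat="Operator",
)
print(die_cast_accident)
```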

A hazard is designed into a system [ 8 , 30 ], and whether an accident occurs depends on two factors: (1) the unique set of hazard components and (2) the accident risk presented by those components, where risk is commonly defined as the combination of the probability of occurrence of harm and the severity of that harm [ 7 ].
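A common shorthand for that definition, shown below, treats risk as approximately the product of probability and severity; the multiplicative form is a simplification often used for ranking, not the standard's exact wording.

```latex
% Shorthand for the risk definition above: risk combines the probability of
% occurrence of harm with the severity of that harm. The product form is a
% common simplification used for ranking risks, not the standard's exact wording.
\[
  \text{Risk} = f\big(P_{\text{harm}},\, S_{\text{harm}}\big) \approx P_{\text{harm}} \times S_{\text{harm}}
\]
```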

Ericson notes that a good hazard description helps the risk assessment team better understand the problem and therefore make better judgments (e.g., about the severity of the hazard); he consequently suggests that a good hazard description needs to contain the three hazard components.

3.4. Task-based risk assessment and risk reduction

Risk assessment is a general methodology whose scope is to analyze and evaluate the risks associated with a complex system. Various industries have specific methodologies with the same objective; Etherton has summarized a critical review of various risk assessment methodologies for machine safety in [ 41 ]. According to ISO 12100, risk assessment (referred to as MSRA, machine safety risk assessment [ 41 ]) is an iterative process that involves two sequential steps: (1) risk analysis and (2) risk evaluation. ISO 12100 suggests that if risks are deemed serious, measures should be taken to either eliminate or mitigate their effects through risk reduction, as depicted in Figure 4 .

Figure 4. An overview of the task-based risk assessment methodology.

3.4.1. Risk analysis and risk evaluation

Within the context of machine safety, risk analysis begins with identifying the limits of the machinery, where the limits in terms of space, use, and time are identified and specified. Within this boundary, activities focused on identifying hazards are undertaken. The preferred context for identifying hazards in robotic systems is task-based: the tasks that need to be undertaken during the various phases of operation are first specified, and the risk assessors then specify the hazards associated with each task. Hazard identification is a critical step, and ISO 10218-1 [ 5 ] and ISO 10218-2 [ 6 ] tabulate significant hazards associated with robotic systems. However, they do not explicitly state the hazards associated with collaborative operations.

Risk evaluation is based on systematic metrics, where the severity of injury, exposure to the hazard, and the possibility of avoiding the hazard are used to evaluate each hazard (see page 9, RIA TR R15.306-2014 [ 25 ]). The evaluation results in a risk level of negligible, low, medium-high, or very high, which determines the risk reduction measures to be employed. To support the activities associated with risk assessment, ISO/TS 15066 [ 29 ] details the information required to conduct a risk assessment specifically for collaborative applications.
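Such an evaluation is often tabulated as a scoring scheme. The sketch below is a hypothetical simplification, not the actual decision tables of RIA TR R15.306-2014: the rating scales, additive scoring, and thresholds are invented, although the output levels match the four named above.

```python
# Hypothetical sketch of a task-based risk scoring scheme. The actual decision
# tables in RIA TR R15.306-2014 differ; ratings and thresholds here are invented.

SEVERITY = {"minor": 1, "moderate": 2, "serious": 3}
EXPOSURE = {"rare": 1, "occasional": 2, "frequent": 3}
AVOIDANCE = {"likely": 1, "possible": 2, "unlikely": 3}

def risk_level(severity: str, exposure: str, avoidance: str) -> str:
    """Combine the three ratings into one of the four risk levels named above."""
    score = SEVERITY[severity] + EXPOSURE[exposure] + AVOIDANCE[avoidance]
    if score <= 4:
        return "negligible"
    if score <= 6:
        return "low"
    if score <= 8:
        return "medium-high"
    return "very high"

# Example: a serious injury, frequent exposure, and unlikely avoidance.
print(risk_level("serious", "frequent", "unlikely"))  # very high
```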

3.4.2. Risk reduction

When risks are deemed serious, the methodology demands measures to eliminate and/or mitigate the risks. Designers have a hierarchical methodology that can be applied to varying degrees depending on the risks to be managed. The three hierarchical methods allow designers to optimize the design, choosing one or a combination of methods to sufficiently eliminate or mitigate the risks: (1) inherently safe design measures; (2) safeguarding and/or complementary protective measures; and (3) information for use.

4. Result: demonstrator for a safe hand-guided collaborative operation

In this section, the development and functioning of a safe assembly station will be detailed, where a large industrial robot is used in a hand-guided collaborative operation. In order to understand the potential benefits of hand-guided industrial robots, an automotive assembly station is presented as a case study in Section 4.1. With the aim of improving the ergonomics of the assembly station and increasing productivity, the assembly tasks are conceptualized as robot, operator, and collaborative tasks, where the collaborative task is the hand-guided operation described in Section 4.2. The results of the iterative risk assessment and risk reduction process (see Section 3.4) are detailed in Section 4.3. The final layout and the task sequence are detailed in Section 4.4, and Table 1 documents the hazards identified during risk assessment, which were used to improve the safety features of the assembly cell.

4.1. Case study: manual assembly of a flywheel housing cover

The manual assembly sequence can be summarized in the following steps:

  • An operator picks up the flywheel housing cover (FWC) with the aid of a lifting device from position P1. The covers are placed on a material rack, which can contain up to three part variants.

  • The operator moves from position P1 to P2 by pushing the FWC and installs it on the machine (integrated machinery) where secondary operations will be performed.

  • After the secondary operation, the operator pushes the FWC to the engine housing (position P3). Here, the operator needs to align the flywheel housing cover with the engine block with the aid of guiding pins. After the two parts are aligned, the operator pushes the flywheel housing cover forward until the two parts are in contact. The operator must exert force to mate the two surfaces.

  • The operators then begin to fasten the parts with several bolts with the help of two pneumatically powered devices. In order to keep the takt time low, these tasks are done in parallel and require the participation of more than one operator.

Figure 5. (A) The manual workstation where several operators work together to assemble flywheel housing covers (FWC) on the engine block. (B) The robot placing the FWC on the integrated machinery. (C) The robot being hand-guided by an operator, thereby reducing the ergonomic effort to position the flywheel housing cover on the engine block.

4.2. Task allocation and conceptual design of the hand-guiding tool

Figure 5(B) and (C) show ergonomic simulations reported by Ore et al. [ 15 ], in which the operator is aided by an industrial robot to complete the task. The first two tasks can be automated by the robot, i.e., picking the FWC from position P1 and moving it to the integrated machine (position P2, Figure 5(B) ). The robot then moves the FWC to the hand-over position, where it comes to a stop and signals to the operator that the collaborative mode is activated. This allows the operator to hand-guide the robot by grasping the FWC and directing the motion toward the engine block.

Once the motion of the robot is under human control, the operator can assemble the FWC onto the engine block and proceed to secure it with bolts. After the bolts have been fastened, the operator moves the robot back to the hand-over position and reactivates the automatic mode, which starts the next cycle.

4.3. Safe hand-guiding in the collaborative workspace

The risk assessment identified several hazardous situations that can affect safe functioning during the collaborative mode, that is, when the operator enters the workstation and hand-guides the robot to assemble the FWC; these are tabulated in Table 1 . Two considerations concerning robot motion emerged:

  • The robot needs to be programmed to move at a slow speed so that it can stop in time, in line with the speed and separation monitoring mode of collaborative operation.

  • To implement speed and separation monitoring, a safety-rated vision system might be a probable solution; however, this may not be viable on the current factory floor.

Figure 6. (A) and (B) Two versions of the end-effector that were prototyped to verify and validate the design.

The evaluation of the two end-effector designs (summarized in Table 2 ) motivated the following changes:

  • A change in design that allows the operator to visually align the pins on the engine block with the mating holes on the FWC.

  • A change in design to improve reliability and to avoid tampering through the use of standardized components, and to ensure that the operator feels safer during hand-guiding by keeping the robot arms away from the operator.

Figure 7. The layout of the physical demonstrator installed in a laboratory environment.

  • Hazard 1: The operator can accidentally enter the robot workspace and collide with the robot moving at high speed. HE: fast-moving robot. IM: operator is unaware of the system state. T/T: operators. Risk reduction: (1) a light curtain to monitor the robot workspace; (2) a lamp to signal the system state.

  • Hazard 2: In collaborative mode, sensor-guided motion is active; robot motion can be triggered unintentionally, resulting in unpredictable motion. HE: crushing. IM: operator accidentally activates the sensor. T/T: operator(s) and/or equipment. Risk reduction: an enabling device that, when actuated, starts sensor-guided motion; an ergonomically designed enabling device can act as a hand-guiding tool.

  • Hazard 3: The operator places their hands between the FWC and the engine, thereby crushing their hands. HE: crushing. IM: operator distracted by the assembly task. T/T: operator. Risk reduction: an enabling device can ensure that the operator's hands are at a predefined location.

  • Hazard 4: While aligning the pins with the holes, the operator can break the pins by moving vertically or horizontally. HE: imprecise hand-guided motion. IM: operator fails to keep a steady motion. T/T: operators. Risk reduction: (1) vertical hand-guided motion needs to be eliminated; (2) operator training.

  • Hazard 5: The robot collides with an operator while being hand-guided by another operator. HE: collision. IM: the designated operator is not aware of others in the vicinity. T/T: operators. Risk reduction: the designated operator has a clear view of the station.

  • Hazard 6: An operator accidentally engages the mode-change button although the collaborative task is incomplete. HE: error in judgment of the operators. IM: engaging the mode-change button. T/T: operator/equipment. Risk reduction: a button on the hand-guiding tool that the operator engages before exiting the workspace.

Table 1. The hazards (hazard description, hazardous element (HE), initiating mechanism (IM), target/threat (T/T), and risk reduction measure) that were identified during the risk assessment process.

  • 1. Orientation of the end-effector. Design A: the end-effector is parallel to the robot wrist. Design B: the end-effector is perpendicular to the robot wrist. Evaluation: in Design A, the last two links of the robot are close to the operator, which might make operators feel unsafe; Design B might allow for an overall safer design due to the use of standardized components.

  • 2. Position of the flywheel housing cover (FWC). Design A: the FWC is positioned to the left of the operator. Design B: the FWC is positioned in front of the operator. Evaluation: Design A requires more effort from the operator to align the locating pins (on the engine block) and the mating holes (on the FWC), and the operator loses sight of the pins when the two parts are close to each other; in Design B, it is possible to align the two parts by visually aligning the outer edges.

  • 3. Location of the emergency stop. Design A: good location and easy to actuate. Design B: good location and easy to actuate. Evaluation: in Design A, it was judged that the E-stop could be accidentally actuated, which might lead to unproductive stops.

  • 4. Location of visual interfaces. Design A: good location and visibility. Design B: no visual interfaces. Evaluation: the evaluation of Design A resulted in the decision that interfaces need to be visible to everyone working within the vicinity.

  • 5. Location of physical interfaces. Design A: good location with easy reach. Design B: minimal physical interfaces. Evaluation: the evaluation of Design A resulted in the decision that interfaces are optimally placed outside the fenced area.

  • 6. Overall ergonomic design. Design A: the handles are angled and more comfortable. Design B: the distance between the handles is short. Evaluation: both designs have good overall ergonomics; Design B uses standardized components, while Design A employs softer materials and interfaces that are easily visible.

Table 2. Feature comparison of the two versions of the end-effector shown in Figure 6(A) and (B).

4.4. Demonstrator for a safe hand-guided collaborative assembly workstation

Figure 7 shows a picture of the demonstrator developed in a laboratory environment. Here, a KUKA KR-210 industrial robot is part of the robotic system where the safeguarding solutions include the use of physical fences as well as sensor-based solutions.

The tasks at the workstation are decomposed into three groups:

  • The robot tasks, which are preprogrammed tasks undertaken in automatic mode. When the robot tasks are completed, the robot is programmed to stop at the hand-over position.

  • The collaborative task, which begins when the operator enters the monitored space and takes control of the robot using the hand-guiding device. The collaborative mode is complete when the operator returns the robot to the hand-over position and restarts the automatic mode.

  • The operator task, which is the fastening of the bolts required to secure the FWC to the engine block. The operators need to fasten several bolts and therefore use a pneumatically powered tool (not shown here) to help them with this task.

Figure 8. The task sequence of the collaborative assembly station, where an industrial robot is used as an intelligent and flexible lifting tool. The tasks are decomposed into three groups, operator task (OT), collaborative task (CT), and robot task (RT), which are detailed in Table 3 .

  • 1. Robot task: The robot tasks are to pick up the flywheel housing cover, place the part on the fixture and, when the secondary operations are completed, pick up the part and wait at the hand-over position. During this mode, the warning lamp is red, signaling automatic mode. The hand-over position is located inside the enclosed area and is monitored by laser curtains. The robot will stop if an operator accidentally enters this workspace and can be restarted with the auto-continue button.

  • 2. Operator task: Enter the collaborative space. When the warning lamp turns green, the laser curtains are deactivated and the operator enters the collaborative workspace.

  • 3. Collaborative task: Engage the enabling switch. The operator begins hand-guiding by engaging both enabling switches simultaneously. This activates the sensor-guided motion, and the operator can move the robot by applying force on the enabling device. If the operator releases the enabling switch, the motion is deactivated; to reactivate motion, the operator engages both enabling switches again.

  • 4. Collaborative task: Hand-guide the robot. The operator moves the FWC from the hand-over position to the assembly point, then removes the clamp and returns the robot to the hand-over position.

  • 5. Collaborative task: Engage automatic mode. Before going out of the assembly station, the operator needs to engage the three-button switch. This deliberate action signals to the robot that the collaborative task is complete.

  • 6. Robot task: The operator goes out and engages the mode-change button. Then, the following sequence of events is carried out: (1) the laser curtains are activated, (2) the warning lamp turns from green to red, and (3) the robot starts the next cycle.

Table 3. The sequence of tasks that were formulated during the risk assessment process.
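The task sequence in Table 3 can also be read as a small state machine that alternates between automatic and collaborative modes. The sketch below is an illustrative abstraction of that sequence; the state and event names are assumptions derived from the table, not controller code from the demonstrator.

```python
# Illustrative state machine for the task sequence in Table 3. State and event
# names are assumptions derived from the table, not actual controller logic.

TRANSITIONS = {
    ("automatic", "robot_reaches_handover"): "waiting_at_handover",        # lamp turns green
    ("waiting_at_handover", "operator_enters_collab_space"): "collaborative",
    ("collaborative", "enabling_switches_engaged"): "hand_guiding",
    ("hand_guiding", "enabling_switches_released"): "collaborative",       # motion deactivated
    ("collaborative", "three_button_switch_engaged"): "handover_complete",
    ("handover_complete", "mode_change_button_outside"): "automatic",      # next cycle starts
}

def step(state: str, event: str) -> str:
    """Advance the state; events that do not apply in the current state are ignored."""
    return TRANSITIONS.get((state, event), state)

state = "automatic"
for event in ["robot_reaches_handover", "operator_enters_collab_space",
              "enabling_switches_engaged", "enabling_switches_released",
              "three_button_switch_engaged", "mode_change_button_outside"]:
    state = step(state, event)
    print(f"{event} -> {state}")
```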

4.4.1. Safeguarding

With the understanding that operators are any personnel within the vicinity of hazardous machinery [ 7 ], physical fences can be used to ensure that they do not accidentally enter a hazardous zone. The design requirement that the engine block must be outside the enclosed zone meant that the robot needs to move out of the fenced area during collaborative mode (see Figure 8 ). Therefore, the hand-over position is located inside the enclosure and the assembly point is located outside of it; both points are part of the collaborative workspace. The opening in the fences is monitored during automatic mode using laser curtains.

4.4.2. Interfaces

During risk evaluation, the decision to have several interfaces was motivated. A single warning LED lamp (see Figure 8 ) conveys when the robot has finished the preprogrammed task and is waiting to be hand-guided. Additionally, the two physical buttons outside the enclosure have separate functions. The auto-continue button allows the operator to let the robot continue in automatic mode if the laser curtains were accidentally triggered by an operator; this button is located where it is not easily reached. The second button is meant to start the next assembly cycle (see Table 1 ). Table 1 (Nos. 2 and 3) motivates the use of enabling devices to trigger the sensor-guided motion (see Figure 6(B) ). The two enabling devices provide the following functions: (1) they act as a hand-guiding tool that the operator can use to precisely maneuver the robot; (2) by requiring that the switches on the enabling device be engaged for hand-guided motion, the operator's hands are kept at a prespecified and safe location; and (3) by engaging the switch, the operator deliberately changes the mode of the robot to collaborative mode, which ensures that unintended motion of the robot is avoided.
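The role of the enabling device can be summarized as a simple interlock: hand-guided, sensor-guided motion is permitted only while both switches are held and the station is in collaborative mode. The sketch below is a hypothetical illustration of that logic only; the real safety function runs on certified safety hardware, not application code like this.

```python
# Hypothetical sketch of the enabling-device interlock described above.
# Real safety functions run on certified safety hardware; this only illustrates the logic.

def hand_guided_motion_allowed(collaborative_mode: bool,
                               left_switch_engaged: bool,
                               right_switch_engaged: bool) -> bool:
    """Sensor-guided (hand-guided) motion is permitted only while both enabling
    switches are held and the station is in collaborative mode; releasing either
    switch deactivates the motion."""
    return collaborative_mode and left_switch_engaged and right_switch_engaged

print(hand_guided_motion_allowed(True, True, True))   # True  -> motion permitted
print(hand_guided_motion_allowed(True, True, False))  # False -> motion deactivated
```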

5. Discussion

In this section, the discussion will be focused on the application of the risk assessment methodology and the hazards that were identified during this process.

5.1. Task-based risk assessment methodology

A risk assessment (RA) is done on a system that exists in a form that can function as a context within which hazards can be documented. In the case study, a force/torque sensor was used to hand-guide the robot, and this technique was chosen at the conceptual stage. RA based on this technique led to the decision to introduce enabling devices (No. 2 in Table 1 ) to ensure that, while the operator is hand-guiding the robot, their hands are within a predetermined safe location and are engaged. Another industrially viable solution is the use of joysticks to hand-guide the robot, but this option was not explored further, as it might be less intuitive than force/torque-based control. Regardless, it is implicit that the choice of technique poses its own hazardous situations, and the risk assessors need a good understanding of the system boundary.

Additionally, during the risk assessment, the failure of the various components was not considered explicitly. For example, what if the laser curtains failed to function as intended? The explanation lies in the choice of components. As stated in Section 3.2.2, for a robotic system to be considered reliable, the components must have a performance level of PL = d, which implies a very low probability of failure. Most safety-equipment manufacturers publish their MTTF values along with their performance levels and the intended use.

5.2. Hazards

The critical step in conducting a risk assessment (RA) is hazard identification. In Section 3.3, a hazard was decomposed into three components: (1) the hazardous element (HE), (2) the initiating mechanism (IM), and (3) the target/threat (T/T). The three sides of the hazard triangle (Section 3.3) can be thought of as having lengths proportional to the degree to which each component contributes to triggering the hazard and causing an accident; that is, if the IM side is much longer than the other two, the most influential factor in causing an accident is the IM. The discussion on risk assessment (Section 3.4) stresses eliminating or mitigating hazards, which implies that the goal of risk assessment can be understood as reducing or removing one or more sides of the hazard triangle. Therefore, documenting hazards in terms of their components might allow for simplified and more straightforward downstream RA activities.

The hazards presented in Table 1 can be summarized as follows: (1) the main hazardous element (HE) is the slow or fast motion of the robot; (2) the initiating mechanisms (IM) can be attributed to unintended actions by an operator; and (3) the safety of the operator can be compromised, with the possibility of damaging machinery and disrupting production. It can also be argued, based on the presented case study, that through the use of a systematic risk assessment process, the hazards associated with collaborative motion can be identified and managed to an acceptable level of risk.

As noted by Eberts and Salvendy [ 44 ] and Parsons [ 45 ], human factors play a major role in robotic system safety. Various parameters can be used to better understand the effect of human behavior in a system, such as an overloaded and/or underloaded working environment, perception of safety, etc. Risk assessors need to be aware of human tendencies and take them into consideration when proposing safety solutions. Incidentally, in the fatal accident discussed in Section 3.3, the operator perhaps did not perceive the robot as a serious threat and referred to it as Robby [ 40 ].

In an automotive assembly plant, as the production volume is relatively high and the work requires collaborating with other operators, there is a higher probability that an operator will make errors. In Table 1 (No. 6), a three-button switch was specified to prevent unintentional mode changes of the robot. It is probable that an operator could accidentally engage the mode-change button (see Figure 7 ) while the robot is in collaborative mode, or before the hand-guiding operator intended the collaborative task to be completed. In such a scenario, a robot operating in automatic mode was evaluated to pose a high risk level, and therefore the decision was made to introduce a design change with an additional safety interface, the three-button switch, that is accessible only to the hand-guiding operator.

Informal interviews suggested that the system should be inherently safe for the operators and that the task sequence—robot, operator, and collaborative tasks—should not demand constant monitoring by the operators as it might lead to increased stress. That is, operators should feel safe and in control and that the tasks should demand minimum attention and time.

6. Conclusion and future work

The article presents the results of a risk assessment program whose objective was the development of an assembly workstation that involves the use of a large industrial robot in a hand-guided collaborative operation. The collaborative workstation has been realized as a laboratory demonstrator, where the robot functions as an intelligent lifting device. That is, the tasks that can be automated have been assigned to the robot, and these sequences of tasks are preprogrammed and run in automatic mode. During collaborative mode, operators are responsible for the cognitively demanding tasks that require the skills and flexibility inherent to a human being. During this mode, the hand-guided robot carries the weight of the flywheel housing cover, thereby improving the ergonomics of the workstation.

In addition to the laboratory demonstrator, an analysis of the hazards pertinent to hand-guided collaborative operations has been presented. These hazards were identified during the risk assessment phase, where the initiating mechanisms mainly stem from human error. The decisions taken during the risk reduction phase to eliminate or mitigate the risks associated with these hazards have also been presented.

The risk assessment was carried out in different phases, with physical demonstrators supporting each phase of the process. The demonstrator-based approach allowed the researchers to have a common understanding of the nature of the system and the associated hazards; that is, it acted as a platform for discussion. The laboratory workstation can act as a demonstration platform where operators and engineers can judge for themselves the advantages and disadvantages of collaborative operations. The demonstration activities can also benefit the researchers, as they can function as a feedback mechanism with respect to the decisions that were made during the risk assessment process.

Therefore, the next step is to invite operators and engineers to try out the hand-guided assembly workstation. The working hypothesis is that personnel whose main responsibility in an assembly plant is to find the optimal balance between various production-related parameters (such as maintenance time, productivity, safety, and working environment) might have deeper insight into the challenges of introducing large industrial robots on the assembly line.

Acknowledgments

The authors would like to thank Björn Backman of Swerea IVF, and Fredrik Ore and Lars Oxelmark of Scania CV, for their valuable contributions during the research and development phase of this work. This work has been primarily funded within the FFI program, and the authors gratefully acknowledge this support. In addition, we would like to thank the ToMM 2 project members for their valuable input and suggestions.

  • 1. Marvel JA, Falco J, Marstio I. Characterizing task-based human-robot collaboration safety in manufacturing. IEEE Transactions on Systems, Man, and Cybernetics: Systems. 2015; 45 (2):260-275
  • 2. Tsarouchi P, Matthaiakis A-S, Makris S. On a human-robot collaboration in an assembly. International Journal of Computer Integrated Manufacturing. 2016; 30 (6):580-589
  • 3. Gopinath V, Johansen K. Risk assessment process for collaborative assembly—A job safety analysis approach. Procedia CIRP. 2016; 44 :199-203
  • 4. Caputo AC, Pelagagge PM, Salini P. AHP-based methodology for selecting safety devices of industrial machinery. Safety Science. 2013; 53 :202-218
  • 5. Swedish Standards Institute. SS-ISO 10218-1:2011—Robots and Robotic Devices—Safety Requirements for Industrial Robots. Part 1: Robot. Stockholm, Sweden: Swedish Standards Institute; 2011
  • 6. Swedish Standards Institute. SS-ISO 10218-2:2011—Robots and Robotic Devices—Safety Requirements for Industrial Robots. Part 2: Robot Systems and Integration. Stockholm, Sweden: Swedish Standards Institute; 2011
  • 7. Swedish Standards Institute (SIS). SS-ISO 12100:2010: Safety of Machinery - General principles of Design - Risk assessment and risk reduction. Stockholm, Sweden: Swedish Standards Institute (SIS); 2010. 96 p.
  • 8. Ericson CA II. Hazard Analysis Techniques for System Safety. Hoboken, New Jersey, USA: John Wiley & Sons; 2015
  • 9. Etherton J, Taubitz M, Raafat H, Russell J, Roudebush C. Machinery risk assessment for risk reduction. Human and Ecological Risk Assessment: An International Journal. 2001; 7 (7):1787-1799
  • 10. Yin RK. Case Study Research: Design and Methods. 5th ed. California, USA: Sage Publications Inc; 2014. 282 p
  • 11. Brogårdh T. Present and future robot control development – An industrial perspective. Annual Reviews in Control. 2007; 31 (1):69-79
  • 12. Krüger J, Lien TK, Verl A. Cooperation of human and machines in assembly lines. CIRP Annals - Manufacturing Technology. 2009; 58 (2):628-646
  • 13. Siciliano B, Khatib O, editors. Springer Handbook of Robotics. Secaucus, NJ, USA: Springer-Verlag New; 2007
  • 14. Krüger J, Bernhardt R, Surdilovic D. Intelligent assist systems for flexible. CIRP Annals - Manufacturing Technology. 2006; 55 (1):29-32
  • 15. Ore F, Hanson L, Delfs N, Wiktorsson M. Human industrial robot collaboration—Development and application of simulation software. International Journal of Human Factors Modelling and Simulation. 2015; 5 :164-185
  • 16. Colgate JE, Peshkin M, Wannasuphoprasit W. Cobots: Robots for collaboration with human operators. Proceedings of the ASME Dynamic Systems and Control Division. Atlanta, GA; 1996; 58 :433-440
  • 17. Universal Robots. Universal Robots [Internet]. Available from: https://www.universal-robots.com/ [Accessed: March 2017]
  • 18. KUKA AG. Available from: http://www.kuka.com/ [Accessed: March 2017]
  • 19. ABB AB. Available from: http://www.abb.com/ [Accessed: January 2017]
  • 20. ToMM2—Framtida-samarbete-mellan-manniska-och-robot/. Available from: https://www.vinnova.se/ [Accessed: June 2017]
  • 21. The International Organization for Standardization (ISO). Available from: https://www.iso.org/home.html [Accessed: June 2017]
  • 22. International Electrotechnical Commission (IEC). Available from: http://www.iec.ch/ [Accessed: June 2017]
  • 23. Macdonald D. Practical Machinery Safety. 1st ed. Jordan Hill, Oxford: Newnes; 2004. 304 p
  • 24. Leedy PD, Ormrod JE. Practical Research: Planning and Design. Upper Saddle River, New Jersey: Pearson; 2013
  • 25. Robotic Industrial Association. RIA TR R15.406-2014: Safeguarding. 1st ed. Ann Arbour, Michigan, USA: Robotic Industrial Association; 2014. 60 p
  • 26. Björnsson A. Automated Layup and Forming of Prepreg Laminates [dissertation]. Linköping, Sweden: Linköping University; 2017
  • 27. Jonsson M. On Manufacturing Technology as an Enabler of Flexibility: Affordable Reconfigurable Tooling and Force-Controlled Robotics [dissertation]. Linköping, Sweden: Linköping Studies in Science and Technology, Dissertations: 1501; 2013
  • 28. Swedish Standards Institute. SS-ISO 8373:2012—Industrial Robot Terminology. Stockholm, Sweden: Swedish Standards Institute; 2012
  • 29. The International Organization for Standardization. ISO/TS 15066: Robots and robotic devices—Collaborative robots. Switzerland: The International Organization for Standardization; 2016
  • 30. Leveson NG. Engineering a Safer World: Systems Thinking Applied to Safety. Engineering Systems ed. USA: MIT Press; 2011
  • 31. The International Electrotechnical Commission. IEC TS 62046:2008 – Safety of machinery – Application of protective equipment to detect the presence of persons. Switzerland: The International Electrotechnical Commission; 2008
  • 32. Marvel JA, Norcross R. Implementing speed and separation monitoring in collaborative robot workcells. Robotics and Computer-Integrated Manufacturing. 2017; 44 :144-155
  • 33. SICK AG. Available from: http://www.sick.com [Accessed: December 2016]
  • 34. REER Automation. Available from: http://www.reer.it/ [Accessed: December 2016]
  • 35. Pilz International. Safety EYE. Available from: http://www.pilz.com/ [Accessed: May 2014]
  • 36. The International Organization for Standardization. ISO 13856-1:2013 – Safety of machinery – Pressure-sensitive protective devices – Part 1: General principles for design and testing of pressure-sensitive mats and pressure-sensitive floors. Switzerland: The International Organization for Standardization; 2013
  • 37. The International Organization for Standardization. ISO 13856-2:2013 – Safety of machinery– Pressure-sensitive protective devices – Part 2: General principles for design and testing of pressure-sensitive edges and pressure-sensitive bars. Switzerland: The International Organization for Standardization; 2013
  • 38. The International Organization for Standardization. ISO 13856-3:2013 – Safety of machinery – Pressure-sensitive protective devices – Part 3: General principles for design and testing of pressure-sensitive bumpers, plates, wires and similar devices. Switzerland: The International Organization for Standardization; 2013
  • 39. Dhillon BS. Robot reliability and Safety. New York: Springer-Verlag; 1991
  • 40. Sanderson LM, Collins JW, McGlothlin JD. Robot-related fatality involving a U.S. manufacturing plant employee: Case report and recommendations. Journal of Occupational Accidents. 1986; 8 :13-23
  • 41. Etherton JR. Industrial machine systems risk assessment: A critical review of concepts and methods. Risk Analysis. 2007; 27 (1):17-82
  • 42. Gopinath V, Johansen K, Gustafsson Å. Design criteria for conceptual end effector for physical human robot production cell. In: Swedish Production Symposium; Göteborg, Sweden; 2014
  • 43. Gopinath V, Ore F, Johansen K. Safe assembly cell layout through risk assessment—An application with hand guided industrial robot. Procedia CIRP. 2017; 63 :430-435
  • 44. Eberts R, Salvendy G. The contribution of cognitive engineering to the safe. Journal of Occupational Accidents. 1986; 8 :49-67
  • 45. McIlvaine Parsons H. Human factors in industrial robot safety. Journal of Occupational Accidents. 1986; 8 (1-2):25-47

© 2017 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution 3.0 License , which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.



13 case studies on how risk managers are assessing their risk culture

William Sanders

Continuing from last week's post, There’s no such thing as risk culture, or is there?, this is the third in a series of blogs in which we are summarising key insights gained from about 50 risk managers and CROs interviewed between December 2019 and May 2020.

There are various techniques and different mindsets on how to assess and measure risk culture. We round-up the very best case studies, tools and templates used by risk managers around the world.

To survey or not to survey?

If you start from a base of assuming you need a survey (or perhaps you have an executive or board who want one), then you are faced with two main choices:

  • Include a number of questions in a larger employee engagement/culture survey, probably being run by HR (as one of our Member organisations did, only to discover the results didn’t align with their anecdotal feedback and experiences)
  • Conduct a dedicated risk culture survey, which might later be re-run as a benchmark (as one former CRO at an international airline did upon joining the organisation).

However, not everyone believes a survey is the way to go. Or at least, not a survey in isolation.

It’s a self-assessment tool, for one thing, as former Bank of Queensland CRO Peter Deans pointed out in a recent Intelligence contribution (Members: access this here ). You may not get the true risk picture you need, if you are only asking people if they believe they are making risk-aware decisions and are satisfied with the culture.

UK risk consultant Roger Noon shared with us a variety of tools risk managers can use in-house to help understand behaviours and diagnose culture (Members: access these tools here) . Of quantitative risk culture surveys, he says: “Survey instruments can also be used so long as you and your sponsors recognise that they are typically very blunt tools, often with poor validity. They're very ‘point in time and context’ driven, and they don't really provide you with objective observable output. 

“However, they can be used to generate interesting data that creates helpful dialogue at the senior management table. They’re also useful to build engagement with the people that are part of the culture, and as part of a wider, triangulated set of data.”

In other instances, risk managers found it was not employees they initially needed to survey, but their board. Across different industries, different understandings of risk culture exist. If your board is asking about risk culture, it can be a good idea to check in that you (and they, among themselves) are all on the same page before beginning any broader projects. (Members: take a look at some sample questions about risk culture for the board here .)

So overt it’s covert

When it comes to an organisation’s overall approach to assessing and changing risk culture, there are also a few fundamentally different mindsets.

For some companies, the ‘culture overhaul’ needs to be a large project with lots of publicity and a big push from the top. In such cases, when it comes to driving change, extensive engagement and communications programs are planned, potentially including video.

We collected one case study, however, that stood out for its far more subtle and positive approach. In it, the head of risk at a large organisation with a few thousand staff spread across nine departments said there were a lot of preconceptions and quite a bit of nervousness around the idea of ‘working on risk culture’. This risk manager had therefore developed a different kind of self-assessment tool, which helped participants map their own risk culture using evidence-based attributes. 

At the end of the initial meeting (which took no more than an hour and a half), participants had identified their own areas for improvement and incorporated culture elements into their future risk planning. (Members: access this case study here .)


In another example from the Middle East, an expat risk manager found it was a case of trying to move his company’s risk culture at different ‘clock speeds’ across the organisation’s verticals, catering to different levels of appetite, awareness and need for change between delivery teams and the C-Suite. (Members: access this case study here .)

And, finally, sometimes risk managers reach a point where they simply have to be realistic about their resources and prospects for implementing large scale change. If there’s no appetite from the top for a risk culture shift, the risk manager will have an uphill battle. We’ve collected ideas from the former risk leader at a government utility, who devised tactics for embedding changes into existing systems and processes to deliver better risk outcomes for the business. (Members: access these ideas here .)

Measuring, reporting and dashboards

We found that measurement was the facet of culture where everybody most wanted to know what everybody else was doing: what they were measuring, how they were reporting it, and what their dashboards looked like.

Again, there were a number of different methods shared by our Members and contributors, as well as contrasting views on what actually should be measured.

For example, is it redundant to actually measure ‘risk culture’? After all, isn’t the entire point of improving risk culture to improve risk outcomes? Why not just focus on measuring the risk outcomes, with culture change happening in the background to facilitate? 

Certainly, this was the view of the former risk manager at a prominent United States government organisation, who spoke to us about building up their organisation’s risk capability over several years. (Members: read more on this here .)


However, others saw value in tracking specific culture metrics, even if these goals were a means to an end. A scorecard or dashboard became a talking point to launch difficult conversations with different managers or executives, and the ability to show progress over time helped maintain momentum and commitment.

Over time, Peter Deans at BOQ developed and refined a ‘basket of risk culture measures’ along the same lines as the consumer price index, which he regularly updated and used to give leadership a ‘big picture view’ of how risk culture was doing.

Other contributing risk managers shared their scorecards and dashboards with us as templates, such as a scorecard example using a traffic light system across nine key risk indicators. We also collected ideas for dashboard metrics and a spreadsheet-based sunburst tool, alongside risk culture pillars.
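As a rough sketch of the traffic-light scorecard idea mentioned above, the snippet below maps a few key risk indicators to red, amber or green using simple thresholds. The indicator names and threshold values are invented for illustration and are not taken from any contributor's template.

```python
# Minimal sketch of a traffic-light (RAG) scorecard for key risk indicators.
# KRI names and thresholds are hypothetical, purely for illustration.

KRI_THRESHOLDS = {
    # indicator: (green_max, amber_max) -- values above amber_max are red
    "overdue_risk_actions": (5, 15),
    "incidents_per_quarter": (2, 6),
    "staff_risk_training_gap_pct": (10, 25),
}

def rag_status(indicator: str, value: float) -> str:
    """Map a KRI value to Red/Amber/Green using the thresholds above."""
    green_max, amber_max = KRI_THRESHOLDS[indicator]
    if value <= green_max:
        return "Green"
    if value <= amber_max:
        return "Amber"
    return "Red"

if __name__ == "__main__":
    observed = {"overdue_risk_actions": 8,
                "incidents_per_quarter": 1,
                "staff_risk_training_gap_pct": 30}
    for kri, value in observed.items():
        print(f"{kri}: {value} -> {rag_status(kri, value)}")
```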

On a final note, UK risk advisor Danny Wong shared a detailed case study on how to use data to drive an impactful risk narrative. For any risk managers who are striving to bring risk into line with many other functions in contemporary business – such as product development, sales, operations, and others that regularly use data strategically to inform decision making and best practice – this piece is essential reading. (Members: access this piece here .)

Risk Leadership Network’s Intelligence platform – our searchable database of peer-contributed case studies, tools and templates – delves deeper into risk culture with more on diagnosing culture, addressing culture and ethics, and building a risk culture survey of boards. (Members only)


How to Do a Risk Assessment: A Case Study

John Pellowe


Christian Leadership Reflections

An exploration of Christian ministry leadership led by CCCC's CEO John Pellowe

There’s no shortage of consultants and authors to tell boards and senior leaders that risk assessment is something that should be done. Everyone knows that. But in the chronically short-staffed world of the charitable sector, who has time to do it well? It’s too easy to cross your fingers and hope disaster won’t happen to you!

If that’s you crossing your fingers, the good news is that risk assessment isn’t as complicated as it sounds, so don’t be intimidated by it. It doesn’t have to take a lot of time, and you can easily prioritize the risks and attack them a few at a time. I recently did a risk assessment for CCCC and the process of creating it was quite manageable while also being very thorough.

I’ll share my experience of creating a risk assessment so you can see how easy it is to do.

Step 1: Identify Risks

The first step is obvious – identify the risks you face. The trick is how you identify those risks. On your own, you might get locked into one way of thinking about risk, such as people suing you, so you become fixated on legal risk. But what about technological risks or funding risks or any other kind of risk?

I found that a helpful way to identify the full range of risks is to address risk from three perspectives: our mission, our organizational health, and our operating environment. For example:

  • Two of the mission-related risks we identified at CCCC were 1) if we gave wrong information that a member relied upon to their detriment; and 2) if a Certified member had a public scandal.
  • We listed several risks to organization health for CCCC. Among them were 1) a disaster that would shut down our operations at least temporarily, and 2) a major loss from an innovation that did not work.
  • We identified a risk related to the sociopolitical environment.

I began the risk assessment by reviewing CCCC from these three perspectives on my own. I scanned our theory of change, our strategy map, and our programs to identify potential risks. I then reviewed everything we had that related to organizational health, which included our Vision 2020 document (written to proactively address organizational health over the next five years),  financial trends, a consultant’s report on a member survey, and a review of our operations by an expert in Canadian associations. I also thought about our experience over the past few years and conversations I’ve had with people. Finally, I went over everything we know about our environments and did some Internet research to see what else was being said that might affect us.

With all of this information, I then answered questions such as the following:

  • What assumptions have I made about current or future conditions? How valid are the assumptions?
  • What are my nightmare scenarios?
  • What do I avoid thinking about or just hope never happens?
  • What have I heard that went wrong with other organizations like ours?
  • What am I confident will never happen to us? Hubris is the downfall of many!
  • What is becoming more scarce or difficult for us?

At this point, I created a draft list of about ten major risks and distributed it to my leadership team for discussion. At that meeting we added three additional risks. Since the board had asked for a report from staff for them to review and discuss at the next board meeting, we did not involve them at this point.


Step 2: Probability/Impact Assessment

Once you have the risks identified, you need to assess how significant they are in order to prioritize how you deal with them. Risks are rated on two factors:

  • How likely they are to happen (That is, their Probability )
  • How much of an effect could they have on your ministry (Their anticipated Impact )

Each of these two factors can be rated High, Medium, or Low. Here's how I define those categories:

Probability:

  • High: The risk either occurs regularly (such as hurricanes in Florida) or something specific is brewing and becoming more significant over time, such that it could affect your ministry in the next few years.
  • Medium: The risk happens from time to time each year, and someone will suffer from it (such as a fire or a burglary). You may have an elevated risk of suffering the problem or you might have just a general risk, such as everyone else has. There may also be a general trend that is not a particular problem at present but it could affect you over the longer term.
  • Low: It's possible that it could happen, but it rarely does. The risk is largely hypothetical.

Impact:

  • High: If the risk happened, it would be a critical life or death situation for the ministry. At the least, if you survive it would change the future of the ministry and at its worst, the ministry may not be able to recover from the damage and closure would be the only option.
  • Medium: The risk would create a desperate situation requiring possibly radical solutions, but there would be a reasonable chance of recovering from the effects of the risk without long term damage.
  • Low: The risk would cause an unwelcome interruption of normal activity, but the damage could be overcome with fairly routine responses. There would be no question of what to do, it would just be a matter of doing it.

I discussed my assessments of the risks with staff and then listed them in the agreed-upon priority order in six Probability/Impact combinations:

  • High/High – 2 risks
  • High/Medium – 1 risk
  • Medium/High – 2 risks
  • Medium/Medium – 3 risks
  • Low/High – 3 risks
  • Low/Medium – 2 risks

I felt that the combinations High/Low, Medium/Low, and Low/Low weren’t significant enough to include in the assessment. The point of prioritizing is to help you be a good steward as you allocate time and money to address the significant risks. With only thirteen risks, CCCC can address them all, but we know which ones need attention most urgently.
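For anyone who wants to automate the prioritization step, here is a minimal sketch (in Python) that sorts a small risk register into the same Probability/Impact order used above. The risk names and ratings are hypothetical, not CCCC's actual register.

```python
# Minimal sketch of sorting a risk register by Probability/Impact combination,
# following the High/Medium/Low ratings described above. Risk names are invented.

RATING_ORDER = {"High": 3, "Medium": 2, "Low": 1}

risks = [
    {"name": "Loss of a major funding source", "probability": "Medium", "impact": "High"},
    {"name": "Extended facility outage",        "probability": "Low",    "impact": "High"},
    {"name": "Key staff departure",             "probability": "High",   "impact": "Medium"},
    {"name": "Member data breach",              "probability": "High",   "impact": "High"},
]

def priority(risk: dict) -> tuple:
    # Sort by probability first, then impact (both descending), mirroring the
    # High/High ... Low/Medium ordering used in the assessment above.
    return (RATING_ORDER[risk["probability"]], RATING_ORDER[risk["impact"]])

for r in sorted(risks, key=priority, reverse=True):
    print(f'{r["probability"]}/{r["impact"]}: {r["name"]}')
```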

Step 3: Manage Risk

After you have assessed the risks your ministry faces (steps 1 and 2), you arrive at the point where you can start managing  the risks. The options for managing boil down to three strategies:

  • Prevent : The risk might be avoided by changing how you do things. It may mean purchasing additional equipment or redesigning a program. In most cases, though, you probably won’t actually be able to prevent the risk from ever happening. More likely you will only be able to mitigate the risk.
  • Mitigate : Mitigate means to make less severe, serious, or painful. There are two ways to mitigate risk: 1) find ways to make it less likely to happen; and 2) lessen the impact of the risk if it happens. Finding ways to mitigate risk and then implementing the plan will take up most of the time you spend on risk assessment and management. This is where you need to think creatively about possible strategies and action steps. You will also document the mitigating steps you have already taken.
  • Transfer  or Eliminate : If you can’t prevent the risk from happening or mitigate the likelihood or impact of the risk, you are left with either transferring the risk to someone else (such as by purchasing insurance) or getting rid of whatever is causing the risk so that the risk is no longer applicable. For example, a church with a rock climbing wall might purchase insurance to cover the risk or it might simply take the wall down so that the risk no longer exists.

Step 4: Final Assessment

Armed with all this information, it’s time to prepare a risk report for final review by management and then the board. I’ve included a download in this post to help you write the report. It is a template document with an executive summary and then a detailed report. They are partially filled out so you can see how it is used.


After preparing your report, review it and consider whether or not the mitigating steps and recommendations are sufficient. Do you really want to eliminate some aspect of your ministry to avoid risk? Do you believe that whatever action has been recommended is satisfactory and in keeping with the ministry’s mission and values? Are there any other ways to get the same goal achieved or purpose fulfilled without attracting risk?

Finally, after all the risk assessment and risk management work has been done, the ministry is left with two choices:

  • Accept whatever risk is left and get on with the ministry’s work
  • Reject the remaining risk and eliminate it by getting rid of the source of the risk

Step 5: Ongoing Risk Management

On a regular basis, in keeping with the type of risk and its threat, the risk assessment and risk management plan should be reviewed to see if it is still valid. Have circumstances changed? Are the plans working? Review the plan and adjust as necessary.

Key Thought: You have to deal with risk to be a good steward, and it is not hard to do.


Case Study: How FAIR Risk Quantification Enables Information Security Decisions at Swisscom

ISACA Journal, Volume 5, 2020

Swisscom is Switzerland’s leading telecom provider. Due to strategic, operational and regulatory requirements, Swisscom Security Function (known internally as Group Security) has implemented quantitative risk analysis using Factor Analysis of Information Risk (FAIR). Over time, Swisscom’s FAIR implementation has enabled Group Security to objectively assess, measure and aggregate security risk. Along the way, Swisscom’s Laura Voicu, a senior security architect, has led the Swisscom security risk initiative.

Introduction

Information risk is the reason businesses have security programs, and a risk management process can be a core security program enabler. With an effective risk program, business risk owners are well-informed about risk areas and take accountability for them. They are able to integrate risk considerations into managing value-producing business processes and strategies. They can express their risk tolerance (i.e., appetite) to technical and operational teams and, at a high level, direct the risk treatment strategies those teams take.

Most organizations now operate as digital businesses with a high reliance on IT. They can benefit by shifting the corporate culture from one that focuses on meeting IT compliance obligations to one that targets overall risk reduction. Visibility into the overall security of the organization plays an important role in establishing this new dialog. Security leaders can prioritize their security initiatives based on the top risk areas that an organization faces.

Swisscom uses quantifiable risk management enabled through Open FAIR to:

  • Communicate security risk to the business
  • Ascertain business risk appetites and improve business owner accountability for risk
  • Prioritize risk mitigation resources based on business impact
  • Calculate the return on investment (ROI) of security initiatives
  • Meet new and more stringent regulatory requirements

Company Background

Swisscom is the leading telecom provider in Switzerland and one of its foremost IT companies, headquartered in Ittigen, near the capital city of Bern. In 2019, its 19,300 employees generated sales of CHF 11,453 million (roughly USD 12,490 million). It is 51 percent confederation-owned and is considered one of Switzerland’s most sustainable and innovative companies. Swisscom offers mobile telecommunications, fixed network, Internet, digital TV solutions and IT services for business and residential customers. Swisscom’s Group Security, which is a centrally managed function at Swisscom, provides policies and standards for all lines of business, while allowing each business to operate independently.

Whatever its many benefits, digitization in the virtual world also has a darker side, and organizations are facing new kinds of risk.

Figure 1: Sample 5x5 heat map plotting nine risk areas (R1 to R9)

Qualitative Risk Analysis Pain Points

Prior to 2019, Swisscom managed and assessed information risk using qualitative analysis methods. The process was well-suited to quick decisions and easy to communicate with a visually appealing heat map. However, the Swisscom security team identified several fundamental flaws, including bias, ambiguity in meaning (e.g., what does “red” or “high” really mean?) and the likelihood that the person doing the rating had not taken the time to clearly define what was actually being measured.

For reference, figure 1 illustrates a sample 5x5 heat map plotting nine risk areas (R1 to R9) on a graph where the vertical axis plots the probability of a risk materializing and the horizontal axis plots the hypothetical impact.

Risk Terminology

  • Risk (per FAIR) —The probable frequency and probable magnitude of future loss
  • Open FAIR —Factor Analysis of Information Risk (as standardized by The Open Group)
  • Information risk —Risk of business losses due to IT operational or cybersecurity events
  • Qualitative risk analysis —The practice of rating risk on ordinal scales, such as 1 equals low risk, 2 equals medium risk or 3 equals high risk
  • Quantitative risk analysis —The practice of assigning quantitative values, such as number of times per year for likelihood or frequency, and mapping impact to monetary values
  • Enterprise risk management —The methods and processes used by organizations to manage the business risk universe (e.g., financial, operational, market) as well as to seize opportunities related to the achievement of their objectives

Inconsistent Risk Estimates

Qualitative risk estimates tended to be calculated in an inconsistent manner and were often found to be unhelpful. Because analysts did not use a rigorous risk quantification model such as FAIR to rate risk, they relied on mental models or years of habit.

Early staff experiments with quantifying security risk also failed; per a senior security officer at Swisscom, the reasons for this were, “Too little transparency and too many assumptions. In short: a constant discussion about the evaluation method and not about the risk itself.”

Too Many “Mediums”

Odd things happened: Virtually all risk areas were rated “medium.” A high rating is a strong statement and draws unwanted attention to the risk from business management, who might then demand some strong justification for the rating. A low rating would look foolish if something bad actually happened. Rating risk “medium” equals the safe way out.

Inability to Prioritize Risk Issues

Although utilizing qualitative methods may provide some prioritization capability (a risk rated red is some degree worse than one rated yellow), Swisscom had no way of economically evaluating the difference between a red and a yellow, between one red and two yellows, or even between two yellows such as R1 and R9 as shown in figure 1. In short, Swisscom had poor visibility into the security risk landscape, thus potentially misprioritizing critical issues. Over time, Swisscom staff came to share the FAIR practitioner community objections articulated in the article “Thirteen Reasons Why Heat Maps Must Die.” 1

Demand for More Accurate Risk Assessments After a Breach

In 2018, Swisscom went public to announce a large data breach. Swisscom took immediate action to tighten the internal security measures to prevent such an incident from happening again. Further precautions were introduced in the course of the year.

Following the data breach, Swisscom IT and security executives sought to improve the risk assessment process. Staff had made early attempts to quantify security risk using single numerical values, or single-point estimates of risk by assigning values for discrete scenarios to see what the outcome might be in each. This technique provided little visibility into the uncertainty and variability surrounding the risk estimate.

Establishing a Quantitative Risk Analysis Program

Swisscom’s Group Security team learned about FAIR in 2018 and became convinced that its model was superior to in-house risk quantification approaches that the team had attempted to use in the past. FAIR allows security professionals to present estimates of risk (or loss exposure) that show decision-makers a range of probable outcomes. Using ranges brings a higher degree of accuracy to estimates with enough precision to be useful.


The decision was made to use FAIR in 2018, and Senior Security Architect Laura Voicu was assigned to lead a core team of a few part-time FAIR practitioners. The risk project’s initial phase was to define risk scenarios in a consistent manner throughout Swisscom. As a result of this work effort, the team produced a formal definition and consistent structure for normalizing risk register entries into FAIR-compliant nomenclature, shown in figure 2.

Figure 2: FAIR risk ontology and structure used to normalize risk scenarios

The FAIR team performed multiple analyses and continued to deepen its experience with the quantitative approach. As a best practice, the team interviewed or held workshops with subject matter experts (SMEs) on controls, incidents, impacts and other areas representing variables in the FAIR analysis.

Starting in early 2019, a small group of stakeholders within the security organization conducted a proof of concept (POC) to perform assessments of the customer portal data breach risk, risk associated with different cloud workload migration strategies, outage of systems or networks due to ransomware and, recently, remote working use cases to continue operating amid the COVID-19 disruption.

In parallel, Group Security defined roles, analysis processes and risk management processes. The team defined the following roles:

  • Risk reporters —Security professionals who help identify and report security risk. Risk reporters work interdepartmentally to identify, assess and reduce security risk factors by recommending specific measures that can improve the overall security posture. They also have the overall responsibility to oversee the coordinated activities to direct and control risk.
  • Risk owners —Business owners and operations managers who manage the security risk scenarios that exist within their business areas. They are responsible for implementing corrective actions to address process and control deficiencies, and for maintaining effective controls on a day-to-day basis. They assume ownership, responsibility and accountability for directly controlling and mitigating risk.

The team also established the following processes:

  • Identification —Uncover the risk factors (or potential loss events) and define them in a detailed, structured format. Assign ownership to the areas of risk.
  • Assessment —Assess the probable frequency of risk occurrence, and the probable impacts. This helps prioritize risk. It also enables comparison of risk relative to each other and against the organization’s risk appetite.
  • Response —Define an approach for treating each assessed risk factor. Some may require no actions and only need to be monitored. Other risk factors considered unacceptable require an action plan to avoid, reduce or transfer them.
  • Monitoring and reporting —Reporting is a core part of driving decision-making in effective risk management. It enables transparent communication to the appropriate levels (according to Swisscom’s internal rules of procedure and accountability) of the net or residual risk.

Thus, the risk analysis processes normalize risk scenarios into the FAIR model, prioritize them and assess the actual financial loss exposure associated with each risk scenario. In parallel to the strategic risk analysis of the top risk areas, the FAIR team can also provide objective analysis to support tactical day-to-day risk or spending decisions. These analyses can help assess the significance of individual audit findings and efficacy of given controls, and can also justify investments and resource allocations based on cost-benefit.

The FAIR team is constantly improving and simplifying the process of conducting quantitative risk assessments using the FAIR methodology. In a workshop-based approach, the team tries to understand the people, processes and technologies that pose a risk to the business.

Ongoing Work Items

As of early 2020, Swisscom’s core FAIR team consists of three part-time staff members. This team is part of a virtual community of practitioners concerned with security risk management in the company.

The team continues to drive the following work items:

  • Risk scenario analysis
  • Risk scenario reporting
  • Risk portfolio analysis and reporting
  • Internal training
  • Improving the tool chain
  • Improving risk assessment processes

Risk Scenario Analysis

The FAIR team performs the deep analysis of risk scenarios using an open-source tool adapted for Swisscom’s use. Based on the analysis, it provides quantitative estimates for discussion with risk, IT and business analysts (figure 3).

Figure 3: Loss exceedance curve from the FAIR risk analysis output

Figure 3 ’s loss exceedance curve depicts a common visualization of FAIR risk analysis output. The Y axis, Probability of Loss or Greater, shows the percentage of Monte Carlo simulations that resulted in a loss exposure greater than the financial loss amount on the X axis. Each Monte Carlo simulation is like a combination of random coin tosses of all the risk components of the FAIR risk ontology shown in figure 2 . During the analysis, the FAIR team generates calibrated estimates for the range of values for each risk component. A calibrated estimate is an SME’s best estimate of the minimum, maximum and most likely probability of the risk factor. Each estimated risk factor in the ontology is fed into the Monte Carlo simulation by the FAIR tool.
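As a rough illustration of how a curve like the one in figure 3 is produced, the sketch below runs a simplified FAIR-style Monte Carlo simulation: loss event frequency and per-event loss magnitude are drawn from calibrated ranges, and the simulated annual losses are summarized as exceedance probabilities. The distributions and parameter values are invented and do not represent Swisscom's analyses or tooling.

```python
import numpy as np

# Simplified FAIR-style Monte Carlo: simulate many years of loss events and
# report how often annual losses exceed various thresholds. All calibrated
# ranges below are invented for illustration.

rng = np.random.default_rng(42)
N_YEARS = 10_000  # simulated years

# Loss event frequency (events/year): calibrated min / most likely / max.
lef = rng.triangular(left=0.1, mode=0.5, right=2.0, size=N_YEARS)
n_events = rng.poisson(lef)

# Loss magnitude per event: lognormal centered near 400k with a wide spread.
mu, sigma = np.log(400_000), 1.0
annual_loss = np.array(
    [rng.lognormal(mu, sigma, size=n).sum() for n in n_events]
)

# Loss exceedance: probability that a year's total loss meets or exceeds X.
for threshold in (100_000, 500_000, 1_000_000, 5_000_000):
    p = (annual_loss >= threshold).mean()
    print(f"P(annual loss >= {threshold:>9,.0f}): {p:5.1%}")
```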


Although the SMEs tend to provide fact-based, objective information for use in estimates to the best of their abilities, challenges can arise when presenting initial completed analyses to stakeholders.

“Risk owners tend to want to push the numbers down, but security leaders try to keep them up,” Voicu explained.

Often, however, the stakeholders can meet in the middle for a consensus and come together on risk treatment proposals with a strong return on security investment (ROSI) measured by the difference between the inherent risk analysis and the residual risk analysis.

In the case of the customer portal data breach scenario, the FAIR team and the business stakeholders agreed on adding two-factor authentication (2FA) for portal users. This solution had a low cost because Swisscom already possessed the 2FA capability and needed only to change the default policy configuration to require 2FA. Figure 5 shows a diagram of the current (or inherent) vs. residual risk analysis amounts using fictional numbers aligned with the assessment shown in figure 4 . The current risk depicts the amount of risk estimated to exist without adding new controls to the current state. The residual risk shows the amount of risk estimated to exist after the hypothetical addition of the new 2FA control.
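The sketch below shows one common way of expressing the comparison described above: the difference between inherent and residual annualized loss exposure, set against the annual cost of the control. All figures are invented, in the same spirit as the fictional numbers behind figure 5.

```python
# Minimal sketch of comparing inherent vs. residual annualized loss exposure
# and expressing a return on security investment (ROSI). All figures are
# invented for illustration and do not reflect Swisscom's actual analysis.

inherent_ale = 2_400_000   # mean annualized loss exposure, no new controls
residual_ale = 600_000     # mean annualized loss exposure with 2FA enforced
control_cost = 150_000     # annualized cost of rolling out the control

risk_reduction = inherent_ale - residual_ale
rosi = (risk_reduction - control_cost) / control_cost

print(f"Risk reduction: {risk_reduction:,.0f} per year")
print(f"ROSI: {rosi:.1f}x the control's annual cost")
```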

Figure 4: One-page risk summary with quantitatively scaled red-yellow-green “speedometer” diagram

Risk Scenario Reporting

Once the analysts reach a consensus on estimates during working meetings, the FAIR team provides management reports using one-page summaries with quantitatively scaled, red-yellow-green diagrams based on the risk thresholds (i.e., risk appetite) of the risk owner (figure 4). The Swisscom FAIR team has found that management often trusts the team’s analysis and does not want to see the FAIR details. However, the numerical analysis drill-down is available if management wishes to understand or question the risk ratings and recommendations.

Risk Portfolio Analysis and Reporting

Strategic risk analyses are typically driven by boards and C-level executives with the intent of understanding, communicating and managing security risk holistically and from a business perspective. This enables executives to define their risk appetite and boards to approve it. The organization can also right-size security budgets, prioritize risk mitigation initiatives and accept certain levels of risk. Strategic risk analyses conducted by the FAIR team can be used to measure risk trending over time. The FAIR team began providing a strategic risk analysis report on a quarterly basis to the board of directors in early 2020. Figure 6 provides an example.

Figure 6: Example quarterly strategic risk analysis report

Internal Training

The team began by socializing FAIR concepts among the cybersecurity functions and other internal groups to establish a broader FAIR adoption. The team provided workshops and training for additional security staff as well as stakeholders and aims to further extend training offerings.

Improving the Tool Chain

Swisscom has assessed several FAIR risk quantification tools:

  • Basic risk analysis —Pen and paper, qualitative method using Measuring and Managing Information Risk: A FAIR Approach 2
  • FAIR-U —Free, basic version of RiskLens. For noncommercial use only. Registration required.
  • RiskLens —Commercial, fee-based FAIR application
  • Evaluator —Free open-source application, OpenFAIR implementation built and run on R + Shiny
  • PyFair —FAIR implementation built on Python
  • FAIR Tool —Free open-source application built on R + Shiny
  • OpenFAIR Risk Analysis Tool —OpenGroup’s Excel-based application. Registration required.
  • RiskQuant —Open-source application built in Python

In the end, Swisscom opted to develop the tool in-house by adapting the RiskQuant analysis module. Swisscom is improving the tool chain by enhancing the analysis module with reporting capabilities and multiscenario aggregated analysis capabilities. The in-house tool is designed to support the entire security risk management life cycle: from risk identification and scoping to risk analysis and prioritization to the evaluation of risk mitigation options to risk reporting. The team is progressively adding additional modules to the in-house tool, such as:

  • Decision support —Enabling decisions on the best risk mitigation options based on their effectiveness in reducing financial loss exposure. The tool already provides the capability for conducting comparative and cost-benefit analyses to assess what changes in security strategy or what risk mitigation options provide the best ROI.
  • Security data warehouse —Swisscom’s existing security data warehouse defines, stores and manages critical assets in a central location. Risk tools can leverage this information in risk scenarios related to assets. Stakeholders can also view the risk areas and issues associated with their assets and understand the risk posture on a continuous basis.
  • Risk portfolio —The module aims to provide a deeper understanding of enterprise risk as well as aggregate or portfolio views of risk across business units. This module will also allow Swisscom to set key metrics to measure and manage cyberrisk, such as risk appetite, and conduct enterprise-level what-if analyses.


Improving Risk Assessment Processes

To enhance Swisscom’s ability to identify risk scenarios deserving full FAIR analyses, the FAIR team is creating a triage questionnaire that will enable IT and security staff to perform a quick assessment of issues before submitting them as risk areas for analysis. The triage consists of 10 yes-or-no questions and requires less than 15 minutes to complete.

Lessons Learned

It is instructive to review lessons learned after establishing a risk program:

  • Bring the discussion to the business owners of the risk and the budget. Prior to the FAIR program, the risk acceptance process was not formally aligned to Swisscom’s rules of procedures and accountability. These rules provide a process whereby executives are authorized to accept risk up to certain levels, and how to decide whether higher risk can be accepted. When the FAIR program was introduced, Swisscom began identifying the executives who will end up covering the losses if risk scenarios actually materialize. With very rare exceptions, those identified business executives should also be responsible for owning or accepting risk.
  • Focus on the assumptions, not the numbers. As noted earlier, risk ratings or quantities can become politicized. Some parties may desire lower or higher results depending on their own agendas. The FAIR model can act as a neutral arbiter if stakeholders understand the assumptions. Although participants in the risk process will always have agendas, focusing on assumptions puts the discussion on a more logical footing.
  • Be flexible about reporting formats. Once risk analysts learn FAIR, there can be a temptation to take a “purist” position and evangelize the methodology too ardently. However, not all stakeholders were interested in the complexity of simulations and ontology. The Swisscom FAIR team found that the one-page risk summary using a familiar “speedometer” diagram (figure 4) facilitated easier acceptance of quantitative analysis results from the business risk owners. It should be noted that quantitative risk values still underlie the one-page summary. Behind the scenes, quantitative risk appetites and risk estimates determine a risk’s status as red, yellow or green.
  • Maintain momentum. When the FAIR journey started, the project scope was fluid. The FAIR team has found that the more the scope expanded, the more resources were required to provide increasing value. What started as a short-term opportunity to normalize and prioritize risk turned into a long-term journey to manage a portfolio of security investments.

Swisscom is currently preparing to begin tracking formal risk metrics. Figure 7 displays planned metrics and observations on the data collected or expected at this time.

Figure 7: Planned risk metrics and observations on data collected or expected

Swisscom considers the benefits of the FAIR process to be that the company can:

  • Objectively assess information risk, which enhances the ability to approve large security initiatives
  • Measure aggregated information risk exposure
  • Break out risk exposure for business units, risk categories and top assets or crown jewels

The team is optimistic as of 2020 about the ability of the FAIR program to enable data-driven decision-making. The team is improving its risk reporting portfolio to produce reports such as the ones shown in figure 6 both at an enterprise level and at the business unit level. The team plans to conduct ROI analyses to assess the effectiveness of security spending. It is also currently in discussions with operational risk management and enterprise risk management (ERM) functions on the possibility of expanding the use of FAIR, especially in the domain of operational availability risk.

1 Salah, O.; “Thirteen Reasons Why Heat Maps Must Die,” FAIR Institute Blog, 28 November 2018, https://www.fairinstitute.org/blog/13-reasons-why-heat-maps-must-die 2 Freund, J.; J. Jones; Measuring and Managing Information Risk: A FAIR Approach , Butterworth-Heinemann, United Kingdom, 2014, p. 205–214

Dan Blum, CISSP, Open FAIR

Is an internationally recognized strategist in cybersecurity and risk management. His forthcoming book is Rational Cybersecurity for the Business . He was a Golden Quill Award-winning vice president and distinguished analyst at Gartner, Inc., has served as the security leader at several startups and consulting companies, and has advised hundreds of large corporations, universities and government organizations. Blum is a frequent speaker at industry events and participates in industry groups such as ISACA ® , FAIR Institute, IDPro, ISSA, the Cloud Security Alliance and the Kantara Initiative.

Laura Voicu, Ph.D.

Is an experienced and passionate enterprise architect with more than 10 years of experience in telecommunication and other industries. She is a leader in enterprise and data architecture, cybersecurity and quantitative risk analysis. Her latest passion is data science and driving innovation with a focus on big data and machine learning. Voicu frequently presents at conferences and volunteers as an ISACA SheLeadsTech Ambassador.


National Research Council (US) Committee on Risk Assessment Methodology. Issues in Risk Assessment. Washington (DC): National Academies Press (US); 1993.


Appendix E Case Studies and Commentaries

CASE STUDY 1: Tributyltin Risk Management in the United States

R. J. Huggett and M. A. Unger, Virginia Institute of Marine Science

Tributyltin (TBT) is a chemical with a variety of biocidal applications, including use as an antifouling agent in boat paints (Blunden and Chapman, 1982). Biological effects of TBT on marine and estuarine organisms and the concentrations of TBT that induce them vary widely among species (Huggett et al., 1992). A water concentration of 1,000 ng/L (1 part per billion) is lethal to larvae of some species, and nonlethal effects have been observed at concentrations as low as 2 ng/L (2 parts per trillion, ppt). Both laboratory and field studies of toxicity were initially hampered by difficulties in measuring the low concentrations that were toxic to some organisms.

Adverse effects on nontarget organisms, including commercially valuable species of shellfish, were observed in Europe in the early 1980s (Alzieu, 1986; Abel et al., 1986). Abnormal shell growth was documented in Crassostrea gigas (Pacific oyster) and linked through laboratory experiments to TBT leached from antifouling paints. That connection led to restrictive regulations in France (in 1982) and Great Britain (in 1985 and 1987). In the United States, concentrations exceeding those determined experimentally to be effective have been found in many areas, particularly in harbors with large marinas. Snails in the vicinity of a marina on the York River, Virginia, were shown to have an abnormally high incidence of imposex (expression of male characteristics by female organisms), an effect previously observed under laboratory conditions in female European oysters, Ostrea edulis (Huggett et al., 1992). EPA began to assess effects of TBT in 1986, but has not yet issued any regulations. Meanwhile, restrictive actions have been taken by states and by the Congress.

A proposal by the U.S. Navy to use TBT paints on its entire fleet was prohibited by Congress in 1986, despite a Navy study that predicted no adverse environmental impact. Virginia enacted legislation and an emergency regulation in 1987, and Maryland, Michigan, and other states have since taken similar actions. Congress enacted national legislation restricting use of TBT paints in 1988. Those actions generally banned or restricted the use of TBT paints on small boats (less than 25 m long) and placed limits on leaching rates from paints used on larger vessels. Studies in Virginia had shown that most TBT releases were from small boats. Small-scale monitoring studies (e.g., in France and Virginia) have shown that the restrictions have been effective in reducing environmental concentrations and adverse impacts of TBT.

Risk management of TBT has been unusual in several ways. The initial basis for concern was field observation of adverse effects, not extrapolation from laboratory bioassays and field chemistry data. Risk assessment and risk management were conducted by state agencies and legislatures, rather than by EPA. Although the risk assessments were made without formalized methods, the results of the independent assessments were the same. Finally, TBT is the first compound banned by the Congress and the first regulated for environmental reasons alone.

(Led by L. Barnthouse, Oak Ridge National Laboratory, and P. F. Seligman, Naval Ocean Systems Center)

The case study addressed, with differing completeness, each of the five recommended steps in risk assessment and management. Hazard identification included the observation of abnormalities in the field and the same effects in experimentally exposed animals. Dose-response identification included data both from the field (correlative) and from the laboratory (experimental). Exposure assessment was based on estimated use and release rates rather than on monitoring or modeling studies. Risk characterization was only qualitative; it did not address such issues as the number and distribution of species that were vulnerable, or the degree of damage to the shellfish industry. Risk management actions were based on the demonstrable existence of hazard, on societal concern for the vulnerable species, and on the ready availability of alternative antifouling agents.

Some workshop participants were critical of the risk assessment approach adopted by Congress and state regulatory agencies. No attempt was made to plan and execute a formal risk assessment. Risk identification was based primarily on data on nonnative species. The Eastern oyster and blue crab, the species putatively at greatest risk, have been found to be less sensitive. Regulatory responses were based on findings of high environmental concentrations of TBT in yacht harbors and marinas, rather than in ecologically important regions such as breeding grounds. The central issue is whether a safe loading capacity (environmental concentration) of TBT for nontarget organisms can be defined, given substantially reduced rates of input. Recent information on fate and persistence, chronic toxicity, and dose-response relationships could support a more quantitative risk assessment with the possibility of more or less stringent restrictions.

CASE STUDY 2: Ecological Risk Assessment for Terrestrial Wildlife Exposed to Agricultural Chemicals

R. J. Kendall, Clemson University

The science of ecological risk assessment for exposure of terrestrial wildlife to agricultural chemicals has advanced rapidly during the 1980s. EPA requires detailed assessments of the toxicity and environmental fate of chemicals proposed for agricultural use (EPA, 1982; Fite et al., 1988). Performance of an ecological risk assessment requires data from several disciplines: analytical toxicology, environmental chemistry, biochemical toxicology, ecotoxicology, and wildlife ecology.

Addressing the ecological risks associated with the use of an agricultural chemical involves a complex array of laboratory and field studies; in essence, a research program. This paper provides examples of integrated field and laboratory research programs, such as The Institute for Wildlife and Environmental Toxicology (TIWET) at Clemson University. Preliminary toxicological and biochemical evaluations include measurements of acute toxicity (LC50 and LD50), toxicokinetics, and observations of wildlife in areas of field trials. Assessment of reproductive toxicity includes studies with various birds and other wildlife, particularly European starlings that nest at high densities in established nest boxes; these studies include measurements of embryo and nestling survival, postfledgling survival, behavior, diet, and residue chemistry (Kendall et al., 1989). Nonlethal assessment methods include measurement of plasma cholinesterase activity associated with organophosphate pesticide exposures (Hooper et al., 1989). A wide variety of birds, mammals, and invertebrates have been used in these studies.

End points evaluated in wildlife toxicological studies include mortality, reproductive success, physiological and biochemical changes, enzyme impacts, immunological impairment, hormonal changes, mutagenesis and carcinogenesis, behavioral changes, and residues of parent compounds and metabolites (Kendall, 1992).

The paper includes a case history of a comparative evaluation of Carbofuran and Terbufos as granular insecticides for control of corn rootworms. Carbofuran has been responsible for many incidents of wildlife poisoning and is recognized as being very hazardous to wildlife. In contrast, although Terbufos is highly toxic to wildlife in laboratory studies, exposure of wildlife under field conditions appears generally to be relatively low, and widespread mortality is not evident. Field studies of Terbufos conducted by TIWET might be the only ones conducted to date that satisfy EPA's requirements for a Level 2 field study, a more quantitative assessment of the magnitude of the effects of a pesticide than the qualitative Level 1 studies. (Level 2 studies are performed when toxicity tests and use patterns suggest a detailed study is warranted.) Data generated in those studies support an ecological risk assessment for Terbufos that is reported in the paper. However, the research program on Terbufos represents many years of effort with integration of laboratory and field research to achieve a full-scale level 2 study in just one geographic area on one crop. Ecological modeling techniques will be needed to generalize the results to other chemicals or to other situations.

(Led by B. Williams, Ecological Planning and Toxicology, Inc., and J. Gagne, American Cyanamid Company)

Dr. Williams noted that each step in ecological risk assessment is more complex and less understood than the corresponding step in human health risk assessment. Although hazard can be assumed when a toxic chemical is released, the species and populations at risk must first be defined. The appropriate selection of surrogate species for testing in the laboratory is usually unclear. Measurement of environmental concentrations is only the first step in exposure characterization. Exposure assessment also requires consideration of foraging behavior, avoidance, and food-web considerations, as well as spatial and temporal variability. Risk characterization involves comparison of exposure estimates with measures of hazard; this process might result in compounding of errors. Ecological risk assessments do not track individuals over time and so do not accurately reflect population changes.

The activities presented in the case study have a large research component, which is focused on dose-response assessment and exposure assessment. One discussant characterized risk assessment, as presented in the case study, as a retrospective exercise based on focused characterization of hazard and exposure in wildlife. Given the difficulties in conducting environmental risk assessments, the four-part paradigm might not be applicable at levels of organization above that of the population.

CASE STUDY 3A: Models of Toxic Chemicals in the Great Lakes: Structure, Applications, and Uncertainty Analysis

D. M. Di Toro, Hydroqual, Inc.

This paper reviewed and summarized efforts to model the distribution and dynamics of toxic chemicals in the Great Lakes, with applications to PCBs, TCDD, and other persistent, bioaccumulated compounds. The models were based on the principle of conservation of mass (Thomann and Di Toro, 1983). Analysis proceeded through five steps: water transport, dynamics of solids, dynamics of a tracer, dynamics of the toxicant, and bioaccumulation in aquatic organisms. Mechanisms considered include settling, resuspension, sedimentation, partitioning, photolysis, volatilization, biodegradation, growth, respiration, predation, assimilation, excretion, and metabolism. The model of toxicant dynamics considered three phases (sorbed, bound, and dissolved) in each of two media (water column and sediments) and 21 pathways into, out of, or between these phases. The model of bioaccumulation included 25 compartments (four trophic levels with one to 13 age classes at each level) with five pathways into or out of each compartment. Because of the large number of coefficients (rate constants), sparseness of knowledge of inputs, and little opportunity for field calibration, uncertainty analysis was important in all the modeling exercises.

The first example modeled the dynamics of total PCBs in Lake Michigan (Thomann and Connolly, 1983). Plutonium-239 was used as a tracer to analyze sediment dynamics, and the model suggested that resuspension is an important mechanism. Calculation of PCB concentrations was limited by an order-of-magnitude uncertainty in the mass loading. Predictions of PCB concentrations and their rate of decline were sensitive to the value assumed for the mass-transfer coefficient for volatilization.

The second example modeled TCDD in Lake Ontario and attempted to predict the relationship between one source of input and the resulting incremental concentrations of TCDD (Endicott et al., 1989). In the absence of knowledge of other inputs, field data could not be used to calibrate the model. Hence, a formal uncertainty analysis was performed with Monte Carlo methods and assumed probability distributions of the rate coefficients. The 95% confidence limits of predicted TCDD concentrations in water and sediment differed by a factor of 10-100. Uncertainties in rate constants for photolysis and volatilization were the most important sources of uncertainty in predicted TCDD concentrations.
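As a much-simplified illustration of the uncertainty analysis described above, the sketch below propagates uncertain first-order loss-rate coefficients through a single well-mixed-box mass balance and reports a confidence interval on the predicted water concentration. It is not the multi-compartment Great Lakes model; the loading, volume and rate-constant values are invented.

```python
import numpy as np

# Minimal single-box sketch of the kind of mass-balance and Monte Carlo
# uncertainty analysis described above -- not the multi-compartment Great
# Lakes models themselves. All loadings, volumes and rate constants invented.

rng = np.random.default_rng(1)
N = 50_000

W = 100.0        # toxicant loading, kg/yr
V = 1.6e12       # lake volume, m^3

# Uncertain first-order loss rate constants (1/yr), drawn from lognormals.
k_volatilization = rng.lognormal(np.log(0.5), 0.5, N)
k_sedimentation  = rng.lognormal(np.log(0.2), 0.5, N)
k_degradation    = rng.lognormal(np.log(0.1), 0.7, N)

k_total = k_volatilization + k_sedimentation + k_degradation

# Steady-state concentration for a well-mixed box: C = W / (V * k_total).
conc_kg_per_m3 = W / (V * k_total)
conc_ng_per_L = conc_kg_per_m3 * 1e12 / 1e3  # kg/m^3 -> ng/L

lo, med, hi = np.percentile(conc_ng_per_L, [2.5, 50, 97.5])
print(f"Predicted concentration (ng/L): median {med:.3g}, 95% CI [{lo:.3g}, {hi:.3g}]")
```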

The third example extended the Lake Ontario TCDD model to eight other hydrophobic chemicals and incorporated a food-chain model to predict concentrations in lake trout (Endicott et al., 1990). The model predicted wide differences in toxicant concentrations, depending primarily on the degree of hydrophobicity as indexed by the octanol-water partition coefficient, Kow. The range of uncertainty in the predicted concentrations also varied among the chemicals. In-lake removal processes (sedimentation, volatilization, and degradation) were important for all chemicals.

CASE STUDY 3B: Ecological Risk Assessment of TCDD and TCDF

M. Zeeman, U.S. Environmental Protection Agency

This paper is based on a full-scale ecological risk assessment of chlorinated dioxin and furan emissions from paper and pulp mills that use the chlorine bleaching processes (Schweer and Jennings, 1990). Although the risk assessment addressed potential risks to terrestrial and aquatic wildlife exposed to TCDD and 2,3,7,8-tetrachlorodibenzofuran (TCDF) via a number of environmental pathways, the case study was limited to exposure of terrestrial wildlife to TCDD resulting from land disposal of paper and pulp sludges. This route of exposure was identified as one of the most hazardous in the multiroute risk assessment.

The specific exposure pathway considered was uptake of TCDD by soil organisms (earthworms and insects) from soil to which pulp sludge has been applied, and the consumption of soil organisms by birds and other small animals. Transfer factors were estimated both by modeling and from data collected in a field study in Wisconsin, in which an average soil TCDD concentration of 11 ppt led to concentrations of up to 140 ppt in a composite of six robin eggs. The models used three alternative sets of assumptions: low estimate, best estimate, and high estimate. The best estimates of tissue concentrations derived from the model were often similar to those observed in the field study: the low and high estimates were lower and higher, respectively, by a factor of roughly 10.

Risk estimates for terrestrial wildlife were derived by comparing exposure estimates (usually converted to daily intake rates) with benchmark toxicity values. The values used as benchmarks were either lowest-observed-adverse-effect levels (LOAELs) or no-observed-adverse-effect levels (NOAELs) for reproductive toxicity in birds and mammals —specifically, the lowest reported LOAELs and NOAELs. The risk quotient (RQ) for each species considered was defined as the ratio of the estimate of exposure to the corresponding benchmark value. On the basis of transfer estimates for land disposal of paper sludges, RQs could exceed 60:1 for the most exposed species (robins, woodcocks, and shrews). To estimate soil concentrations of TCDD ''safe" for these species, two uncertainty factors of 10 could be applied: one to allow for interspecies variability in sensitivity and one for an extrapolation from laboratory to field and/or the use of a LOAEL as the benchmark value. The corresponding estimates of safe concentrations were estimates that would lead to RQs less than 0.01:1 for the most heavily exposed species considered. Under those assumptions, soil concentrations of TCDD safe for highly exposed species would be about 0.03 ppt.
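The risk-quotient arithmetic described above is simple enough to show directly; the sketch below uses hypothetical exposure and benchmark values (chosen only to mirror the magnitudes discussed, not taken from the study) and applies the two uncertainty factors of 10.

```python
# Sketch of the risk-quotient calculation. Exposure and benchmark values are
# hypothetical placeholders in consistent (arbitrary) units, not study data.

def risk_quotient(exposure: float, benchmark: float) -> float:
    """RQ = estimated exposure (e.g., daily intake) / toxicity benchmark."""
    return exposure / benchmark

estimated_intake = 6.0   # hypothetical exposure estimate
benchmark_loael = 0.1    # hypothetical lowest-observed-adverse-effect level

rq = risk_quotient(estimated_intake, benchmark_loael)
print(f"RQ = {rq:.0f}:1")

# Applying two uncertainty factors of 10 (interspecies variability and
# LOAEL/lab-to-field extrapolation) means a "safe" exposure is one that keeps
# the quotient below 1/(10*10) = 0.01:1.
target_rq = 1 / (10 * 10)
safe_exposure = target_rq * benchmark_loael
print(f"Target RQ: {target_rq}:1 -> safe exposure <= {safe_exposure}")
```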

(Led by L. A. Burns, U.S. Environmental Protection Agency, and D. J. Paustenbach, McLaren/Hart)

These case studies present only estimates of environmental concentrations (i.e., exposure assessment) and do not address other elements of risk assessment. Compared with traditional human health assessments, they show a greater concern for accuracy (as opposed to "policy-driven conservatism"), a greater use of formal uncertainty analysis, and better opportunities for verifying the accuracy of exposure and uptake models.

Criticism of the models focused on the omission of processes and on the assumed linear relationship between loading and environmental concentrations. Omitted processes include in-lake generation of solids (phytoplankton), transport in the benthic boundary layer, effects of water clarity on photolysis rates, and daily cycles in pH. A nonlinear relationship between loading and toxicant concentrations might occur if the toxicant reaches high enough concentrations to change the processes that control its own fate. For example, reduction in fish populations might allow for higher populations of zooplankton, which clarify the water column by decreasing populations of phytoplankton, thereby increasing photolysis rates and stabilizing pH.

CASE STUDY 4: Risk Assessment Methods in Animal Populations: The Northern Spotted Owl as an Example

D. R. Anderson, U.S. Fish and Wildlife Service

This paper described an analysis of northern spotted owl population dynamics performed to support ongoing studies of the impacts of clear-cutting of old-growth forest on the prospects for future survival of this endangered species (Salwasser, 1986). The paper summarized a method for estimating rates of population increase or decrease based on capture-recapture techniques and illustrated the method with data on the northern spotted owl. The method proceeds in three steps: use of capture-recapture data to estimate age-specific survival and fecundity rates, estimation of the finite rate of population change (Leslie's parameter λ), and experiments on samples of marked animals in natural environments. Mathematical models for estimating population parameters, including λ, have been developed extensively, and computer programs are available (Burnham et al., 1987). Experimental studies are desirable to test hypotheses about relationships between population parameters and risk factors.
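Once age-specific survival and fecundity have been estimated, λ can be computed as the dominant eigenvalue of a Leslie (projection) matrix. The sketch below uses entirely hypothetical rates for a three-stage female population, not the Franklin et al. estimates.

```python
import numpy as np

# Hypothetical female-only stage structure: juvenile, subadult, adult.
# Top row: fecundity (female young per female); subdiagonal: stage survival.
L = np.array([
    [0.00, 0.00, 0.30],   # only adults reproduce in this sketch
    [0.35, 0.00, 0.00],   # juvenile -> subadult survival
    [0.00, 0.80, 0.84],   # subadult -> adult survival; adult survival
])

lam = max(np.linalg.eigvals(L).real)   # dominant eigenvalue = finite rate of change
trend = "declining" if lam < 1 else "increasing"
print(f"lambda = {lam:.3f} ({trend} population)")
```

With these placeholder rates λ is roughly 0.94, i.e., about a 6 percent annual decline; capture-recapture analyses of the kind described above supply the survival and fecundity estimates (and their standard errors) that feed such a calculation.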

The case study was of a population of northern spotted owls in California studied for 6 years (Franklin et al., 1990). Capture-recapture data yielded estimates of age-specific survival and fecundity for females, as well as estimates of mean population size (37 females) and annual recruitment (0 to 19 females; mean, 8). On the average, the eight females entering the population each year would have included six immigrants from outside the study area and only two locally raised recruits. The calculated value of λ was 0.952 ± 0.028, which indicated a decreasing population.

In this case, the risk factor was clearance of the old-growth forest on which the species is believed to depend. Although the study area contained much suitable habitat, the population appeared not to be self-sustaining, but to be maintained by immigration from remaining areas of old-growth. It was suggested that the study population is temporarily above the long-term carrying capacity because of the drastic loss of habitat in surrounding areas; these circumstances lead to a large "floating" component of the population.

The paper concluded that risk assessment in higher vertebrate populations must often rely on analysis of samples of marked individuals. A robust theory exists for study design and the analysis of such data. Selection of appropriate models is critical for rigorous assessment of impacts. Analysis of capture-recapture data allows inferences about the separate processes of birth, death, emigration, and immigration. A risk factor does not affect population size directly; rather, it acts on the fundamental processes of birth and death.

(Led by M. E. Kentula, U.S. Environmental Protection Agency, and O. L. Loucks, Miami University)

Dr. Kentula commented that the case study (like others in the workshop) focused on individuals and populations and thus took a bottom-up approach. An alternative, top-down approach is to conduct an ecosystem risk assessment from a landscape perspective. For example, Kentula stated that EPA's Wetlands Research Program is developing methods to assess impacts on landscape function due to cumulative wetlands loss (Abbruzzese et al., 1990). The method proceeds in two stages: first, a landscape characterization map is used to classify and rank units of the landscape according to relative risk and to set priorities for effort and allocation of resources; second, a response curve expresses the hypothesized relationship between stressors (such as loss or modification of wetlands) and reduction in landscape functions (e.g., maintenance of water quality or life support). The system can be used both to identify areas at risk and to guide management decisions for landscapes that are already affected.
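A schematic of the two-stage logic, with entirely hypothetical landscape units, risk scores, and response-curve shape.

```python
# Stage 1: rank landscape units by relative risk from the characterization map.
risk_scores = {"watershed A": 0.82, "watershed B": 0.41, "watershed C": 0.67}
priority = sorted(risk_scores, key=risk_scores.get, reverse=True)
print("priority for effort and resources:", priority)

# Stage 2: hypothesized response curve -- fraction of a landscape function
# (e.g., water-quality maintenance) retained as wetlands are lost.
def function_retained(pct_wetland_loss):
    return max(0.0, 1.0 - (pct_wetland_loss / 100.0) ** 0.5)

for loss in (10, 30, 60):
    print(f"{loss}% wetland loss -> {function_retained(loss):.2f} of function retained")
```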

Dr. Loucks commented that the case study presents the consequences of the stress to one local owl population at one time. For assessment of risk to the regional or total population, one would need to construct a "dose-response" relationship, in which "dose" would be a measure of the degree of stress (e.g., the percentage of the old-growth forest that has been destroyed) and "response" would be the probability of extinction of the population within an appropriate period (e.g., 250 years). Calculation of the probability from the birth, death, and dispersal rates estimated in the case study would require stochastic population modeling that takes account of uncertainty and variability in the population parameters.
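A minimal stochastic-simulation sketch of the calculation Dr. Loucks describes: draw uncertain demographic rates, project many population trajectories, and report the fraction that fall below a quasi-extinction threshold within 250 years. The threshold, the year-to-year variability, and the treatment of immigration (used here as a stand-in for the "dose" of surrounding habitat loss) are assumptions of the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

def quasi_extinction_prob(mean_immigrants, n_trials=1000, years=250, n0=37, threshold=5):
    """Fraction of simulated trajectories dropping below `threshold` females."""
    extinct = 0
    for _ in range(n_trials):
        lam_mean = rng.normal(0.952, 0.028)   # parameter uncertainty in lambda
        n = n0
        for _ in range(years):
            lam_t = max(0.0, rng.normal(lam_mean, 0.10))   # assumed annual variability
            n = rng.poisson(n * lam_t) + rng.poisson(mean_immigrants)
            if n < threshold:
                extinct += 1
                break
    return extinct / n_trials

# Immigration stands in for the supply of recruits from surrounding old growth
# (about six per year in the study area); reducing it mimics an increasing "dose".
for m in (6.0, 2.0, 0.0):
    p = quasi_extinction_prob(m)
    print(f"mean immigrants/yr = {m}: P(quasi-extinction within 250 yr) ~ {p:.2f}")
```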

The Endangered Species Act is an example of preemptive risk management, in that a high probability of extinction of a single species is designated as unacceptable. A species-by-species approach, however, does not lead to quantitative assessment of the risk of impoverishment of an ecosystem. Where possible, ecological risk assessment should work across levels of organization and should assess risks of reduction in system utility.

CASE STUDY 5: Ecological Benefits and Risks Associated with the Introduction of Exotic Species for Biological Control of Agricultural Pests

R. I. Carruthers, USDA Agricultural Research Service

The accidental or deliberate introduction of exotic species into regions where they are not native can cause positive, negative, or no observable effects, depending on a wide variety of biological, sociological, economic, and other factors. About 40% of the major arthropod pests (Sailer, 1983) and 50-75% of weed species (Foy et al., 1983) in the United States are introduced species, and introduced pests also include vertebrates, mollusks, and disease organisms that affect animals and plants. Many countries have developed formal programs to limit the introduction and establishment of unwanted exotic organisms, and many have developed methods to assess benefits and risks associated with planned introductions. The United States has no federal statute or set of statutes that governs introductions; instead, it has cumbersome and sometimes conflicting regulations, protocols, and guidelines.

This paper addressed assessment of risks and benefits of "classical biological control" (CBC): the planned introduction of exotic enemies of an introduced pest collected from the pest's home range (DeBach, 1974). Classical biological control (either alone or integrated with other pest management methods) has frequently been successful in controlling introduced pests and often provides large economic or environmental advantages over alternative methods. An example given in the paper is control of the alfalfa weevil: introduction and widespread releases of 11 species of parasitic hymenoptera have yielded substantial control of this major pest with no known negative side effects and with an estimated benefit-to-cost ratio of 87:1.

Risks of CBC programs have three different sources: the introduced organism itself (e.g., parasitism or predation on nontarget species), associated organisms (e.g., pests of the introduced beneficial organism), and unrelated passenger organisms arriving with shipments of the introduced organism. Some adverse effects of all three types have been documented (Pimentel et al., 1984; Howarth, 1991), including local extinctions of nontarget species, especially in island situations. Although there is little documentation of notable adverse impacts of CBC programs in the United States, more precise prediction of benefits and risks would be desirable. Unfortunately, accurate prediction of both positive and negative impacts (target and nontarget effects) of CBC programs has not been achieved. The lack of predictive ability leaves CBC risk assessments in the realm of informed scientific judgment based on limited published data.

In addition to requirements of various federal laws, guidelines have been developed to improve safety in CBC. Agricultural Research Service protocols (now under revision) require federal permits for importation and movement of organisms, quarantine, authoritative identifications, environmental and safety evaluations, documentation of movements and releases, and retention of voucher specimens. Current policy requires an environmental assessment (EA) to accompany applications for permits for field release of exotic organisms. Although the components of an EA depend on the specific situation, the documentation required is fairly extensive. At any step in the process, a proposed introduction can be deemed inappropriate and the project terminated.

(Led by J. T. Carlton, Williams College, and D. Policansky, National Research Council)

Classical biological control is only one kind of introduction of nonnative species. Others include range expansions (either natural or mediated by human modification of habitats), deliberate introductions to "improve nature" or for aquaculture or horticulture, and a wide variety of accidental introductions. CBC seems to have a better safety record than other types of introduction. It is not clear whether this is because the activity is basically benign, because the safety precautions work well, or because CBC involves small organisms that pose smaller risks than larger organisms. The worst failures in all categories have occurred in insular environments such as islands and lakes.

The assessment of risks posed by introductions has been addressed separately by scientists in different disciplines (e.g., agriculture, freshwater and marine ecology, and nature conservation). Communication between the disciplines is poor, and several sets of criteria, procedures, and protocols have been developed independently. Whereas the U.S. Department of Agriculture has adopted flow charts as a way to systematize decision-making, other bodies (e.g., the International Council for the Exploration of the Sea) have concluded that too little is known about ecosystem functioning for flow charts to be useful.

Dr. Policansky commented that risk assessment for species introductions is difficult to fit into the four-step Red Book paradigm. Hazard is taken for granted (because it is the introduction of the species itself); dose-response and exposure are yes-no categories, not continuous variables, because the more important point is whether the species is present or not, not how much of the species is present. A more suitable paradigm might be that presented in the 1986 NRC report Ecological Knowledge and Environmental Problem-Solving: Concepts and Case Studies , which placed more emphasis on problem-scoping and problem-solving than on categorical activities.

CASE STUDY 6: Uncertainty and Risk in an Exploited Ecosystem: A Case Study of Georges Bank

M. J. Fogarty, A. A. Rosenberg, and M. P. Sissenwine, National Marine Fisheries Service

This paper addressed the risks of overexploitation of harvested marine ecosystems, with specific application to Georges Bank, a highly productive area off the northeastern United States. In this context, risk assessment involves determining the probability that a population will be depleted to an arbitrarily predetermined "small" (e.g., 1% or 5%) size. The "quasi-extinction" level may be defined (Ginzburg et al., 1982) as (1) the population level below which the probability of poor recruitment increases appreciably or (2) the smallest population capable of supporting a viable fishery.

The primary determinant of the long-term dynamics of any population is the relationship between the adult population (stock) and recruitment. The null hypothesis is that the relationship is linear, i.e., that recruitment is independent of density (Sissenwine and Shepherd, 1987). Compensatory changes in survival or in reproductive output result in nonlinear stock-recruitment curves. Nonlinearity permits stable equilibrium under harvesting pressure (i.e., under increased mortality rates), up to a critical exploitation level, beyond which the population will decline to quasi-extinction. Stochastic variation in the stock-recruitment relationship or in multispecies interactions can increase risks of adverse effects at moderate exploitation levels. In practice, because of uncertainties resulting from stochastic variations and measurement errors, it is often impossible to reject the null hypothesis of no compensation. Assuming there is no compensation will, in general, result in a conservative assessment of production capacity and its ability to withstand exploitation.
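The contrast between the density-independent null model and a compensatory curve can be sketched with arbitrary parameter values; the point is the qualitative behavior, not the numbers.

```python
def linear(stock):            # null model: recruitment proportional to stock
    return 0.5 * stock

def beverton_holt(stock):     # compensatory (nonlinear) stock-recruitment curve
    return 0.9 * stock / (1.0 + 2.0 * stock)

def project(recruit_fn, harvest_rate, adult_survival=0.7, years=200, b0=1.0):
    """Project relative adult biomass under a constant exploitation rate."""
    b = b0
    for _ in range(years):
        b = (b * adult_survival + recruit_fn(b)) * (1.0 - harvest_rate)
    return b

for name, fn in [("density-independent", linear), ("compensatory", beverton_holt)]:
    for h in (0.1, 0.3, 0.5, 0.7):
        b = project(fn, h)
        status = "quasi-extinct" if b < 1e-3 else f"B ~ {b:.3g}"
        print(f"{name:20s} harvest rate {h:.1f}: {status}")
```

In this sketch the no-compensation model can sustain exploitation only up to about 0.17, while the compensatory curve remains stable up to about 0.38, which is why assuming no compensation yields the more conservative assessment.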

Haddock populations on Georges Bank fluctuated about relatively stable levels between 1930 and 1960, when the fraction of the total haddock population killed per year by fishermen (the annual fishing mortality rate) varied between 0.3 and 0.6, but collapsed after the fishing mortality rate increased to 0.8 during the 1960s (Grosslein et al., 1980). The empirical relationship between stock and recruitment was extremely variable, with little indication of the form of the underlying curve. Analysis of the population dynamics showed that a density-independent null model could not be rejected and gave a neutral equivalent harvest rate of 0.5, which agrees well with the stable period of the fishery. In contrast, the compensatory model was overly optimistic with respect to the sustainable long-term harvest rate.

The decrease in populations of haddock and other groundfish was accompanied by increases in other species, notably elasmobranchs (rays and sharks). The biomass of predatory species increased dramatically with attendant consequences for the overall system structure (Fogarty et al., 1989). Population modeling suggests that the stock-recruitment relationship for haddock might have been changed and that the population cannot now withstand as heavy fishing mortality as it could before the increase in predation pressure.

Risk assessment for exploited systems must take into account uncertainties in population abundance, harvest rates, and system structure. Adoption of risk-averse management strategies would minimize the possibility of stock depletion or undesirable alterations in the structure of the system.

(Led by R. M. Peterman, Simon Fraser University, and J. L. Ludke, National Fisheries Research Center-Leetown)

Discussion focused on the idea of statistical power: the probability that an experiment (or set of observations) will correctly reject a false null hypothesis, i.e., the probability that a study will detect effects that actually exist. In fisheries cases, the high degree of variability in population parameters means that most studies have very low power to detect changes unless they are continued for many years or involve frequent measurements (Peterman and Bradford, 1987). Published papers in fisheries biology (and in other disciplines related to risk assessment) rarely report statistical power and hence can report negative findings misleadingly. The case study recommended adopting a conservative null hypothesis to allow for the low power of the observational studies. Other approaches are to improve the design of studies (e.g., by more frequent sampling), to incorporate uncertainties into formal decision analysis, and to reverse the burden of proof, that is, to put the burden of documenting whether detrimental effects are occurring on the exploiters of the resource rather than on the management agency. If "proof" of safety is required, a formal statement of the power of the studies should be provided for a size of effect deemed relevant.
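The power argument can be made concrete with a small simulation: with lognormal year-to-year variability typical of recruitment series, a two-sample t-test on log abundance has little chance of detecting even a 30 percent decline unless the monitoring periods are long. The decline, coefficient of variation, and test are placeholders chosen for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def power_to_detect_decline(n_years, decline=0.30, cv=0.6, n_sim=2000, alpha=0.05):
    """Probability of detecting a `decline` between two monitoring periods of
    `n_years` each, given lognormal interannual variability with the given CV."""
    sigma = np.sqrt(np.log(1.0 + cv**2))
    detections = 0
    for _ in range(n_sim):
        before = rng.lognormal(0.0, sigma, n_years)
        after = rng.lognormal(np.log(1.0 - decline), sigma, n_years)
        _, p = stats.ttest_ind(np.log(before), np.log(after))
        if p < alpha:
            detections += 1
    return detections / n_sim

for n in (3, 5, 10, 20):
    print(f"{n} years per period: power ~ {power_to_detect_decline(n):.2f}")
```

Reporting such a power figure alongside a finding of "no significant effect" is exactly the formal statement of power called for above.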

The Georges Bank fishery is only one of a long series of cases in which overexploitation has occurred despite a nominal system of scientific stock assessment and fishery management. Discussants generally felt that overexploitation was due to failures of management, rather than to deficiencies in assessment or failure to communicate results to managers.

The assessment of the risk to fish populations associated with exploitation in the Georges Bank case study is implicitly consistent with the 1983 health risk assessment framework, although the explicit steps differ. The case study illustrates the 1983 risk assessment paradigm within the larger context of problem-solving. However, the dose-response and exposure steps might be only loosely analogous. Differing circumstances of function, scale, and certitude could require variation in the method of risk assessment.

The numerous sources of uncertainty in assessing risk associated with exploitation of fish populations vary and increase in magnitude with increase in scale. Regulation of harvest of geographically confined populations can be achieved with greater confidence than can regulation of wide-ranging populations such as Chesapeake Bay striped bass and Lake Michigan lake trout. Sources of uncertainty include variation in recruitment, measurement (which requires many assumptions), and management and institutional characteristics. Management techniques for reducing risks associated with overexploitation of populations are fairly blunt instruments, and strong actions are usually taken only after the fact. Rarely, if ever, are risk reduction measures considered until an actual impact is noticed or a potential threat emerges.

Subtle and cumulative factors that are unknown or measured imprecisely (e.g., chronic or episodic changes in predation, migration, and disease) are among the information gaps that contribute to uncertainties in ecological risk assessment. The Georges Bank case study describes multispecies interactions and the consequences of selective harvesting practices within the fish community, but it falls short of a systematic understanding of cause and effect with regard to changes in multispecies abundance.


In this Page

  • Tributyltin Risk Management In the United States
  • Ecological Risk Assessment for Terrestrial Wildlife Exposed to Agricultural Chemicals
  • Models of Toxic Chemicals in the Great Lakes: Structure, Applications, and Uncertainty Analysis
  • Ecological Risk Assessment of TCDD and TCDF
  • Risk Assessment Methods in Animal Populations: The Northern Spotted Owl as an Example
  • Ecological Benefits and Risks Associated with the Introduction of Exotic Species for Biological Control of Agricultural Pests
  • Uncertainty and Risk in an Exploited Ecosystem: A Case Study of Georges Bank

Recent Activity

  • Case Studies and Commentaries - Issues in Risk Assessment Case Studies and Commentaries - Issues in Risk Assessment

Your browsing activity is empty.

Activity recording is turned off.

Turn recording back on

Connect with NLM

National Library of Medicine 8600 Rockville Pike Bethesda, MD 20894

Web Policies FOIA HHS Vulnerability Disclosure

Help Accessibility Careers

statistics

IMAGES

  1. case study on risk assessment

    risk assessment case study examples

  2. case study on risk assessment

    risk assessment case study examples

  3. ⇉Risk Assessment Case Study Essay Example

    risk assessment case study examples

  4. risk assessment approach case study

    risk assessment case study examples

  5. (PDF) Risk Management in IT Projects

    risk assessment case study examples

  6. (PDF) Aseptic Transfer Risk Assessment: A Case Study

    risk assessment case study examples

VIDEO

  1. BL5 Audit Risk Assessment Case Study / 18 Dec 2020 Session 3

  2. Video Teaser: Site assessment case study

  3. Historic Research as a Tool in Unexploded Bomb Risk Assessment: Case Study Sarajevo

  4. Final Assessment Case Study (EIB20603 Supply Chain Management)

  5. Geospatial Multicriteria Analysis for Earthquake Risk Assessment: Case Study over Fujairah, UAE

  6. HAVS Risk Assessment Case Study Video

COMMENTS

  1. Enterprise Risk Management Examples l Smartsheet

    In an enterprise risk assessment example, ... For example, the case study cites a risk that the company assessed as having a 5 percent probability of a somewhat better-than-expected outcome but a 10 percent probability of a significant loss relative to forecast. In this case, the downside risk was greater than the upside potential.

  2. A case study exploring field-level risk assessments as a leading safety

    The results provide insight into promising ways to measure and document as well as support and manage a risk-based program over several years. After common barriers to risk assessment implementation are discussed, mini case examples to illustrate how the organization improved and used their FLRA process to identify leading indicators follow.

  3. Module 1: Case Studies & Examples

    The Three-Point Range Values. Using three-point values is a simple and effective way to express a range, such as the level of threat and likelihood associated with an event or activity. The three values are minimum, most likelihood, and maximum. When we quantify risk, we use the formula Threat x Likelihood = Risk.

  4. Risk Assessment Case Studies

    Case Study: Manufacturing Company. Background: A safety products company was contracted to perform a risk assessment. Result: The most expensive products and solutions were recommended by the product company. The client purchased and installed the materials, resulting in an improper application of a safety device.

  5. A Powerful Risk Assessment Example

    Risk Assessment Examples. When it comes to risk assessment, it can be helpful to examine real-world examples to better understand how the process works and how it can be applied in different industries. In this section, we will explore industry-specific examples and case studies provided by SafetyCulture to illustrate the application of risk ...

  6. PDF Quality Risk Management Principles and Industry Case Studies

    Case study utilizes recognized quality risk management tools. Case study is appropriately simple and succinct to assure clear understanding. Case study provides areas for decreased and increased response actions. 7. Case study avoids excessive redundancy in subject and tools as compared to other planned models. 8.

  7. Risk Assessment for Collaborative Operation: A Case Study on Hand

    Risk assessment is a systematic and iterative process, which involves risk analysis, where probable hazards are identified, and then corresponding risks are evaluated along with solutions to mitigate the effect of these risks. ... The case study was analyzed to understand the benefits of collaborative operations done through a conceptual study ...

  8. (PDF) A case study exploring field-level risk ...

    A case study exploring field-level risk assessments as a leading safety indicator. January 2017; Transactions 342(1):22-28; ... and scanned the various risk assessment example documents .

  9. PDF 22 A case study exploring field-level risk assessments as a leading

    A case study exploring field-level risk ... a variety of mini case examples that showcase how the organization worked through these barriers to ... Risk assessment is a process used to gather ...

  10. PDF Risk assessment case study

    The aim of the case studies was to apply and evaluate the applicability of different methods for risk analysis (i.e. hazard identification and risk estimation) and to some extent risk evaluation of drinking water supplies. The case studies will also provide a number of different examples on how risks in drinking water systems can

  11. PDF CASE STUDY AUDIT PLANNING & RISK ASSESSMENT 1. INTRODUCTION

    1. INTRODUCTION. The objective of this case study is to reinforce the messages contained in the Audit Planning & Risk Assessment Guide through the completion of a practitioner based case study that will cover the following key stages in the audit planning and risk assessment cycle: Identification of the Audit Universe and related objectives;

  12. Risk Management in IT Projects

    ges. It is an integral element of management. based on a holistic approach to risk, i.e. risk. is a collection of many di erent factors .". Szczepaniak (2013) distinguishes four . steps in the ...

  13. Risk Management Articles, Research, & Case Studies

    by Samuel G. Hanson, David S. Scharfstein, and Adi Sunderam. In modern economies, a large fraction of economy-wide risk is borne indirectly by taxpayers via the government. Governments have liabilities associated with retirement benefits, social insurance programs, and financial system backstops. Given the magnitude of these exposures, the set ...

  14. PDF fall risk case studies

    Timed Up and Go: 15 seconds with a cane on left, minimal arm swing noted. 30-Second Chair Stand Test: Able to rise from the chair 7 times without using her arms. 4-Stage Balance Test: Able to stand for 10 seconds in Position 1(feet side by side) and Position 2 (semi-tandem). However, she loses her balance after 3 seconds in Position 3 (tandem).

  15. PDF Case Study 1: Risk Assessment and Lifecycle Management Learning

    Risk assessment should be carried out initially and be repeated throughout development in order to assess in how far the identified risks have become controllable. The time point of the risk assessment should be clearly stated. A summary of all material quality attributes and process parameters.

  16. Risk Management Case Studies

    How do different organisations use Predict! to manage their risks and opportunities? Read our risk management case studies to learn from their experiences and insights. Find out how Predict! helps them to achieve their strategic objectives, deliver projects on time and budget, and improve their risk culture.

  17. A Case Study in Assessing a Potential Severity Framework for Incidents

    The primary objective of this case study is to determine the applicability and feasibility of a framework that leverages occupational incident details to prospectively identify "potential Serious Injury or Fatality" (pSIF) cases. This study comprehensively reviewed a random sample of 1,081 injury and illness cases across 21 generalized incident types spanning over a decade at Lawrence ...

  18. 13 case studies on how risk managers are assessing their risk culture

    UK risk consultant Roger Noon shared with us a variety of tools risk managers can use in-house to help understand behaviours and diagnose culture (Members: access these tools here). Of quantitative risk culture surveys, he says: "Survey instruments can also be used so long as you and your sponsors recognise that they are typically very blunt ...

  19. Cloud Computing Risk Assessment: A Case Study

    Conclusion. Businesses are realizing the power of cloud computing, and its use is increasing. This case study represents a one-time attempt at risk assessment of the cloud computing arrangement. The risk assessment helped uncover some of the key risks, prioritize those risks and formulate a plan of action. Given the evolving nature of risks in ...

  20. How to Do a Risk Assessment: A Case Study

    Accept whatever risk is left and get on with the ministry's work; Reject the remaining risk and eliminate it by getting rid of the source of the risk; Step 5: Ongoing Risk Management. On a regular basis, in keeping with the type of risk and its threat, the risk assessment and risk management plan should be reviewed to see if it is still valid.

  21. Case Study: How FAIR Risk Quantification Enables Information ...

    Security leaders can prioritize their security initiatives based on the top risk areas that an organization faces. Swisscom uses quantifiable risk management enabled through Open FAIR to: Communicate security risk to the business. Ascertain business risk appetites and improve business owner accountability for risk.

  22. Case Studies and Commentaries

    The case study addressed, with differing completeness, each of the five recommended steps in risk assessment and management. Hazard identification included the observation of abnormalities in the field and the same effects in experimentally exposed animals. Dose-response identification included data both from the field (correlative) and from ...

  23. Information Security Risks Assessment: A Case Study

    This project carries out a detailed risk assessment for a case study organisation. It includes a comprehensive literature review analysing several professional views on pressing issues in ...