10/19/2016
On an optimal threshold for terrorism industry loss
aggregation
artemis.bm/blog/2016/10/19/on-an-optimal-threshold-for-terrorism-industry-loss-aggregation/
Industry insured loss aggregation consists of only a few basic components, but each is crucial to the process. The three basic ingredients are: reliable data sources that capture a sufficient share of an event to allow extrapolation of the full market loss; a set of rules (a methodology) and a way of defining events; and a threshold below which events aren't investigated.
Not Too Hot, Not Too Cold—Just Right
While loss aggregation requires all three, the last of these ingredients can be the most difficult to determine. Set the threshold too low, and industry players may not participate. Set it too high, and events that could enrich the program are sacrificed. For global terrorism, given the relative infrequency of past large terror events and the volatility of their severity, setting the right industry loss level for event definition is particularly difficult.
Amongst headline terror attacks over the past 30 years (excluding aviation), few have been large enough to
warrant insurance industrywide focus. With terror industry loss warranties (ILWs) currently trading with triggers
starting at $1 billion, only around five events since 1990 have either exceeded that threshold or come close, using
inflation-adjusted loss estimates from Swiss Re Sigma, Property Claim Services® (PCS®, a Verisk Analytics
business), Pool Re, and the Australian Reinsurance Pool Corporation (ARPC). The only one far exceeding the
$1 billion level is the collection of coordinated attacks on September 11, 2001. Unlike other anthropogenic perils
such as fire or motor, the sort of event the industry seeks to hedge just isn’t sufficiently frequent to support
traditional historical loss-based analysis.
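To make the trigger comparison concrete, here is a minimal sketch of the inflation-adjustment arithmetic behind it. The index levels and event losses are placeholders, not Sigma, PCS, Pool Re, or ARPC figures.

```python
# Hypothetical sketch: restate nominal event losses in base-year terms with a
# price index, then test them against an ILW-style $1 billion trigger.
# All numbers below are placeholders, not actual published estimates.

def indexed_loss(nominal_loss, event_year_index, base_year_index):
    """Restate a nominal loss in base-year money using a simple price index."""
    return nominal_loss * base_year_index / event_year_index

TRIGGER = 1e9
BASE_INDEX = 240.0  # hypothetical index level for the base year

events = [
    # (label, nominal insured loss in USD, hypothetical index at event year)
    ("Event A, early 1990s", 8.0e8, 140.0),
    ("Event B, early 2000s", 2.0e10, 177.0),
]

for label, loss, idx in events:
    adj = indexed_loss(loss, idx, BASE_INDEX)
    print(f"{label}: ${adj / 1e9:.2f}bn indexed -> trigger breached: {adj >= TRIGGER}")
```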
However, overall terrorism event frequency is quite high. According to Verisk Maplecroft, 8,114 terror events occurred worldwide in 2015, and 3,541 came in the first nine months of 2016. There have been nearly 130,000 events going back to 2004. Taking such a broad view, as opposed to concentrating only on large events, offers much more scope for analysis. It's also been shown that there are invariant severity trends that emerge after enough event data is gathered. See Clauset & Woodard, 2012 [featured plot], Cirillo & Taleb, 2016, and Johnson et al., 2006, all building on pioneering early work done on war violence by Richardson, 1960. These metrics are not suitable for bottom-up, risk-by-risk underwriting but are potentially ideal for industrywide global triggers.
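By way of illustration only, the following sketch shows the kind of tail analysis those papers perform: a simple maximum-likelihood (Hill-type) estimate of a power-law exponent for event severities above a chosen cut-off, run here on synthetic numbers rather than any real event catalogue.

```python
import math

def powerlaw_alpha(severities, x_min):
    """Continuous power-law MLE (Hill-type estimator) for the tail exponent
    of severities at or above the cut-off x_min."""
    tail = [s for s in severities if s >= x_min]
    if len(tail) < 2:
        raise ValueError("not enough observations above x_min")
    return 1.0 + len(tail) / sum(math.log(s / x_min) for s in tail)

# Synthetic per-event severities (e.g. casualties or losses); illustrative only.
severities = [1, 1, 2, 2, 3, 5, 8, 13, 40, 90, 250]
print(round(powerlaw_alpha(severities, x_min=2), 2))
```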
The problem is that such data refer to the pure hazard, not the loss, which emerges only once exposure, vulnerability, and financial conditions are applied. Without corresponding industry loss data, the industry loses a crucial part of the event's story. For example, the global terrorism pricing and accumulation model built for Ariel Re by Dr Raveem Ismail, coauthor of this article, has stochastic event sets for every country in the world. These are appropriately informed by leading experts from IMSL (Intelligence Management Services Limited) on forward-looking frequencies. But calibration against loss data is a key validation step for any model, and the paucity of industry loss data means this can be done fully only for the handful of countries for which such data exists. Modelling for most of the remaining countries, therefore, remains only partially calibrated.
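The sketch below gestures at what that validation step looks like in practice: compare modelled average annual losses against observed industry losses wherever the latter exist, and flag the rest as uncalibrated. Country codes, figures, and the `modelled_aal`/`observed_aal` structures are purely illustrative and are not taken from the Ariel Re model.

```python
# Illustrative calibration check: where industry loss data exists, compare it
# with the model's average annual loss (AAL); elsewhere, flag the gap.
# All figures and country codes are placeholders.

modelled_aal = {"GB": 120e6, "FR": 60e6, "XX": 40e6}   # model output, USD
observed_aal = {"GB": 95e6, "FR": 70e6}                # observed industry losses, USD

for country, modelled in sorted(modelled_aal.items()):
    observed = observed_aal.get(country)
    if observed is None:
        print(f"{country}: no industry loss data -> only partially calibrated")
    else:
        print(f"{country}: modelled/observed ratio = {modelled / observed:.2f}")
```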
The need for industrywide loss data is therefore acute, especially since many carriers are looking to manage risk
and capital (at least in part) by using ILWs.
Now, ideally, every one of those events since 2004 would have an industry loss estimate attached to it. Even if
the significant fraction of events that caused zero insured loss is excluded, the data collection and analysis effort would still be profound. By pairing exposure data with modelling capabilities (such as those from AIR Worldwide or the Ariel Re model), a robust view of a portfolio's terror risk can be assembled.
Collecting that loss information for every such event would require just as many data requests to insurers and reinsurers, all of whom have plenty of other work to do. If ILWs trigger at $1 billion in property and business interruption losses, calling around after every car bombing, active-shooter incident, and improvised explosive device attack is an inappropriate level of granularity. And if the number of data requests is too high, the value of ongoing collaboration on the loss aggregation program is eroded by the amount of work required.
What is the ideal threshold?
Pragmatically, the threshold for a global terror industry loss index needs to be balanced. It should enable
insurers and reinsurers to contribute enough to create a significant historical data set. And that exercise should
prove the ability to collect loss data going forward—while minimising up-front and subsequent operational strain.
A review of terror activity and estimated industrywide insured losses for the past 30 years suggests $50 million as an appropriate threshold. According to an interim analysis drawing on data from a wide range of sources, 16 events warrant investigation at that level, although not all of them may ultimately meet the threshold and be added to the final event set. That is enough to demonstrate loss aggregation capabilities going forward without putting unnecessary pressure on participating insurers and reinsurers. For trading purposes, of course, proving the concept is crucial. And while such a data set is too thin for deep analysis, it would provide reference points for significant attacks that could help guide the assumptions involved in subsequent modelling activities.
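A toy version of that balancing exercise is sketched below: for each candidate threshold, count how many historical events would have generated a data request. The event losses are synthetic stand-ins, not the 16 events referenced above.

```python
# Toy threshold selection: count the events (and hence data requests) that each
# candidate threshold would generate. Losses are synthetic, in USD.

event_losses = [30e6, 45e6, 63e6, 70e6, 150e6, 400e6, 900e6, 2.3e10]
candidate_thresholds = [25e6, 50e6, 100e6, 1e9]

for threshold in candidate_thresholds:
    qualifying = [loss for loss in event_losses if loss >= threshold]
    print(f"threshold ${threshold / 1e6:,.0f}m -> {len(qualifying)} events to investigate")
```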
Consider, for example, the 7 July 2005 attacks in London, for which the insured loss was $63 million (Swiss Re Sigma, indexed for inflation). It was a small event for the insurance industry but a major attack for the city as a whole. This illustrates how an event set built on a $50 million threshold differs from what's seen in property catastrophe, and not just because of the difference in frequency. There are clusters of events, arising from political and ideological motivation and counterbalanced by mitigation efforts, the dynamic that underlies all terrorism activity. The relative predictability of, say, El Niño in property catastrophe is replaced by an attack frequency driven by a group with a clear mission, knowledge of its targets, the means and experience to operate effectively, and, critically, a failure of security services.
Within the context of industry loss aggregation, the concentration of U.K. loss events in 1992, 1993, and 1996 is illustrative, particularly since the next U.K. event did not occur until 2005 and involved a different perpetrator group. Unlike what we see off the coast of Florida (particularly recently), a terror threat can disappear quickly, only to be replaced by a new one in a different location. Hurricanes are hurricanes, and they tend to affect the same areas for the same reasons. Likewise earthquakes. But the local specifics of terror change constantly.
Frequency and severity are lumpy for global terror. But here the reference points matter more than having a data set sufficient for rigorous modelling (unlike in the property catastrophe space). If it's known what a certain type of event looks like, then analytical decision making can become much more informed.
Ultimately, the right threshold for industry loss aggregation—as mentioned—is that which gets the solution up
and running. With terror accumulations increasing across the reinsurance industry, the need for stand-alone
terror trading is poised to grow, creating a salient need for a wider tool set to facilitate risk and capital
management. Getting loss aggregation right should help the entire industry in a time of coming need and,
ultimately, increase society’s resilience against terrorism.
This article was contributed by Tom Johansmeyer and Dr Raveem Ismail.
Tom Johansmeyer
www.linkedin.com/in/tjohansmeyer
tjohansmeyer@verisk.com
Tom Johansmeyer is assistant vice president, PCS Strategy and Development, at ISO
Claims Analytics, a division of Verisk Insurance Solutions. He leads all client- and market-
facing activities at PCS, including new market entry, new solution development, and
reinsurance/ILS activity. Currently, Tom is spearheading initiatives in global terror, global
energy and marine, and regional property-catastrophe loss aggregation. Previously, Tom
held insurance industry roles at Guy Carpenter (where he launched the first corporate blog
in the reinsurance sector) and Deloitte. He’s a veteran of the U.S. Army, where he proudly pushed paper in a
personnel position in the late 1990s.
Dr Raveem Ismail, DPhil, MSc, MPhys (Oxon), MInstP
www.linkedin.com/in/raveem
raveem.ismail@oxon.org
Raveem Ismail is assistant vice president, Specialty Treaty Underwriter, at Ariel Re,
Bermuda; chair of the Reinsurance Special Interest Group of COST action IS1304 on
Structured Expert Judgement; and a cofounder of The Journal Of Terrorism & Cyber
Insurance. He was previously at Validus and Aon Benfield and has consulted on quantitative political violence at Exclusive Analysis (now IHS Markit). Raveem is a triple graduate of Oxford
University, where his research was in atmospheric physics modelling. He constantly strives to bring analytical
strength to bear on challenging underwriting problems, especially for prickly perils such as terrorism and cyber.