Tackling the cyber data dilemma

There is much debate about how the insurance industry can benefit from the explosion of data - a topic raised once again in a recent Raconteur supplement in The Times on the future of insurance. An article by Rossalyn Warren examined how big data has transformed the personal lines insurance sector, with vast amounts of information now available for collection, storage and analysis.

The article referenced the role of data in consumer products such as motor and life insurance. It led me to ponder the data dilemma facing the cyber risk industry: how is it possible to model cyber, given that there is little historical data and that this dynamic risk is constantly evolving? Traditional modeling of future risk, such as natural catastrophes, relies heavily on historical data, sometimes going back hundreds of years. Cyber - and, in particular, hyper-connected systems and the mainstream use of the Internet - is only roughly 20 years old. Against this backdrop, how accurate can a model be?

Just as in natural catastrophe modeling, cyber risk models draw on multiple sources of data to estimate the frequency and severity of different events for a given insurance portfolio. The secret to good modeling is undoubtedly high-quality, diverse data: ideally, data that is deep, broad and relevant to the problem we're trying to solve. Given the limited historical information for this line of business, it is important to collect data from a wide range of sources, then normalise and correlate it into a usable resource for modeling cyber risk effectively and accurately.
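
To make the frequency-and-severity idea concrete, here is a minimal sketch in Python of how an aggregate annual loss might be simulated for a portfolio. The choice of a Poisson distribution for event frequency, a lognormal distribution for event severity, and all parameter values are illustrative assumptions for this sketch - they are not a description of how any particular cyber model actually works.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_annual_loss(freq_lambda, sev_mu, sev_sigma, n_years=100_000):
    """Simulate aggregate annual cyber losses for a hypothetical portfolio.

    Event frequency is drawn from a Poisson distribution and each event's
    severity from a lognormal distribution - standard illustrative choices.
    """
    # Number of cyber events in each simulated year
    event_counts = rng.poisson(freq_lambda, size=n_years)
    # Aggregate loss per year: sum of the simulated event severities
    losses = np.array([
        rng.lognormal(sev_mu, sev_sigma, size=n).sum() for n in event_counts
    ])
    return losses

# Hypothetical parameters, chosen purely for illustration
annual_losses = simulate_annual_loss(freq_lambda=1.2, sev_mu=13.0, sev_sigma=1.5)
print(f"Mean annual loss:   {annual_losses.mean():,.0f}")
print(f"1-in-200 year loss: {np.quantile(annual_losses, 0.995):,.0f}")
```

The sketch also makes the data point obvious: the credibility of the output rests entirely on how well the frequency and severity assumptions are calibrated, which is exactly where diverse, high-quality input data matters.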

The more relevant information available, the more credible our assumptions become, and the greater the certainty in modeled outcomes. The models we build therefore lean towards good-quality data rather than sheer volume.

In her article, Rossalyn cites how personal information gathered from Fitbits and smart cars has transformed the personal lines market. Building cyber insurance models likewise requires vast amounts of data from the Internet of Things and from the enormous array of information flowing from internet-connected sources.

However, volume alone is not a panacea. To prevent data overload, the cyber insurance industry needs to build an analytics platform based on quality - not just quantity. Creating a targeted, directional flow of useful insights is fundamental to the future of cyber modeling.
