2018 Corporate Responsibility Report

Focus: Case study on algorithmic decision-making

A woman programming an algorithm (photo)

Increasingly important in decision-making, algorithms can also replicate and reinforce flawed human judgement, and they are difficult to audit. To address the potential ethical concerns while adapting to rapid digitalisation, Swiss Re is treating algorithmic decision-making as an emerging risk.

Algorithms play an increasingly powerful role in ever more areas. They perform certain tasks, such as the analysis of big data, faster than human beings, and their decisions are often held to be more accurate and less biased. But how objective can a computer programme that has been written, calibrated and tested by humans really be?

With artificial intelligence (AI) and machine learning, where the machine learns from training data with little or no human supervision, this question is becoming more pressing. The underlying algorithms may feed directly on human behaviour, thus in fact replicating both human virtues and flaws. Adopting, confirming and reinforcing imperfect human judgement may lead to discriminatory effects and raise ethical concerns. In a re/insurance context, such discriminatory bias can translate into defective modelling and prediction, in addition to causing reputational issues.
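To illustrate the point in simple terms (this sketch is not from the report, and all group names and figures in it are invented), the following Python snippet shows how a toy model trained only on historically skewed decisions ends up reproducing that skew, without anyone explicitly telling it to treat the groups differently:

import random

random.seed(0)

# Hypothetical historical decisions: applicants from group "A" were approved
# far more often than comparable applicants from group "B".
history = [("A", random.random() < 0.8) for _ in range(1000)] + \
          [("B", random.random() < 0.4) for _ in range(1000)]

def train(records):
    # "Training" here simply estimates the approval frequency per group.
    rates = {}
    for group in {g for g, _ in records}:
        decisions = [approved for g, approved in records if g == group]
        rates[group] = sum(decisions) / len(decisions)
    return rates

model = train(history)

# The learned rates mirror the historical imbalance almost exactly, even
# though nothing in the code singles out either group on purpose.
print(model)  # roughly {'A': 0.8, 'B': 0.4}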

A particular challenge with algorithms lies in their explainability and, by extension, their auditability. While the accuracy of outputs can be tested empirically, insight into an algorithm’s inner workings often remains limited. This “black box” problem has spurred a whole new field of AI research, which seeks to enhance users’ understanding of algorithmic processes and thus help build trust in them.
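As a rough illustration of one idea behind such explainability research (again a hypothetical sketch, not a Swiss Re method, with an invented scoring function and arbitrary weights): a black-box model can be probed from the outside by shuffling one input at a time and observing how much its output shifts, hinting at which inputs drive its decisions:

import random

random.seed(1)

def black_box(age, income, postcode_risk):
    # Stand-in for an opaque model whose internals cannot be inspected;
    # the weights are arbitrary and purely illustrative.
    return 0.3 * age + 0.5 * income + 0.2 * postcode_risk

samples = [(random.random(), random.random(), random.random())
           for _ in range(500)]
baseline = [black_box(*s) for s in samples]

def permutation_effect(feature_index):
    # Shuffle one input column across all samples and measure how much the
    # scores move; a larger shift suggests the model leans on that input.
    column = [s[feature_index] for s in samples]
    random.shuffle(column)
    perturbed = [list(s) for s in samples]
    for row, value in zip(perturbed, column):
        row[feature_index] = value
    new_scores = [black_box(*row) for row in perturbed]
    return sum(abs(a - b) for a, b in zip(baseline, new_scores)) / len(samples)

for name, index in [("age", 0), ("income", 1), ("postcode_risk", 2)]:
    print(name, round(permutation_effect(index), 3))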

With the advent of the “algorithmic economy”, ethical debate and regulatory attention have picked up. One particular focus is the lack of – and potential need for – clear governance around the development and application of algorithms.

The re/insurance industry needs to cope with two types of pressure. One is to adapt in real time to the rapid digitalisation of the world, for example to the expected spread of autonomous vehicles. The other is to be attentive to the risks that may be created if the speed of the digital transformation enlarges potential blind spots.

Swiss Re has covered important aspects of algorithmic decision-making in various publications. “Blame your robot” was the title of an emerging risk theme in the SONAR 2017 report. It was dedicated to the potential shifts in liability regimes caused by the advent of AI. The SONAR 2018 report highlights some of the challenges of algorithmic decision-making described above, under the heading “Algorithms are only human too – opaque, biased, misled”.

Swiss Re Institute also supported the International Risk Governance Center (IRGC) at EPFL in organising a multi-disciplinary workshop on the governance of decision-making algorithms:

irgc.epfl.ch/issues/projects-cybersecurity/the-governance-of-decision-making-algorithms/