Posts in AI and Insurance
New Draft: Seth Lazar and Jake Stone, 'On the Site of Predictive Justice'

We argue that, beyond the more obvious concerns about the downstream effects of ML-based decision-making, there can be moral grounds for criticising the predictions themselves. We introduce and defend a theory of predictive justice, according to which differential model performance for systematically disadvantaged groups can be grounds for moral criticism of the model, independently of its downstream effects. As well as helping to resolve some urgent disputes around algorithmic fairness, this theory points the way to a novel dimension of epistemic ethics, related to the recently discussed category of doxastic wrongs.

Read More
MINT Leads Successful AUD945k Linkage Project

MINT, in collaboration with Insurance Australia Group (IAG), the Gradient Institute, the University of Sydney, and HMI, has been awarded an AUD 495,000 Australian Research Council Linkage grant to study Socially Responsible Insurance in the Age of AI. The project will receive a further AUD 350,000 in funding from IAG and AUD 100,000 from ANU.

Read More
Should I Use That Rating Factor? A Philosophical Approach to an Old Problem

This paper is a collaboration between HMI, IAG, and Gradient. It reflects our broader concern that new machine learning methods for predicting risk and setting insurance premiums may be unable to distinguish between risks whose costs people should bear themselves and risks whose costs should be redistributed across the broader population, and that they may also draw on data points that are intrinsically wrong to use for this purpose.

Read More