MINT Lab’s Seth Lazar and PhD student Jake Stone have published a new paper in Noûs, ‘On the Site of Predictive Justice’.
Well done to Jake Stone for passing his Thesis Proposal Review! He’s working on ‘A Theory of Justice for Algorithmic Systems’, and it’s going to smash. He did a great job crossing this important hurdle. Well done Jake!
We argue that, beyond the more obvious concerns about the downstream effects of ML-based decision-making, there can be moral grounds for criticising the predictions themselves. We introduce and defend a theory of predictive justice, according to which differential model performance for systematically disadvantaged groups can be grounds for moral criticism of the model, independently of its downstream effects. As well as helping resolve some urgent disputes around algorithmic fairness, this theory points the way to a novel dimension of epistemic ethics, related to the recently discussed category of doxastic wrongs.
MINT, in collaboration with Insurance Australia Group, the Gradient Institute, the University of Sydney and HMI, has been awarded an AUD 495,000 Australian Research Council Linkage grant to study Socially Responsible Insurance in the Age of AI. The project will receive a further AUD 350,000 in funding from IAG and AUD 100,000 in funding from ANU.
This paper is a collaboration between HMI, IAG and Gradient. It reflects our broader concern that new methods using machine learning to inform the risk predictions that determine insurance premiums may be unable to distinguish between risks whose costs people should bear themselves and risks whose costs should be redistributed across the broader population, and that they may also rely on data points that it is intrinsically wrong to use for this purpose.