Claire Benn and Seth Lazar ask what is wrong with online behavioural advertising and recommender systems, in this paper published in the Canadian Journal of Philosophy.
Seth co-chaired the 4th AAAI/ACM Conference on AI, Ethics, and Society, a hybrid conference that took place on 19-21 May 2021.
Should we use large-scale facial recognition systems? This article in The Conversation distinguishes between facial recognition and face surveillance and argues that we should demand a moratorium on face surveillance.
Claire Benn and Seth Lazar recorded an interview with Rashna Farrukh for the Philosopher’s Zone podcast on Radio National. The theme: moral skill and artificial intelligence. Does the automation of moral labour threaten to diminish our capacity for moral judgment, much as automation in other areas has negatively impacted human skill?
In a joint submission, HMI identified seven areas for further development in the Human Rights and Technology discussion paper released by the Australian Human Rights Commission. The three main areas concerned: defining ‘AI-informed decision-making’; the demand for explanations; and the absence of a formal link between design and assessment.
As humans, our skills define us. No skill is more human than the exercise of moral judgment. We are already using Artificial Intelligence (AI) to automate morally-loaded decisions. In other domains of human activity, automating a task diminishes our skill at that task. Will 'moral automation' diminish our moral skill? If so, how can we mitigate that risk, and adapt AI to enable moral 'upskilling'? Our project, funded by the Templeton World Charity Foundation, will use philosophy, social psychology, and computer science to answer these questions.