Based at Harvard University’s Allen Lab for Democracy Renovation, this network of senior advisors, fellows, and researchers, led by Danielle Allen, assembles expertise from disciplines across Harvard and other higher education institutions. Network members collaborate with public and private sector impact and support partners including Plurality Institute, RadicalXChange, new_publics, UMass Initiative for Digital Public Infrastructure, Microsoft Research, Council for Responsible Social Media, Collective Intelligence Project, Open Society University Network, and the Digital Humanism Initiative.
The Institute for Ethics in AI brings together world-leading philosophers and other experts in the humanities with the technical developers and users of AI in academia, business and government. The ethics and governance of AI is an exceptionally vibrant area of research at the University of Oxford and the Institute is an opportunity to take a bold leap forward from this platform.
The ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S Centre) is a new, cross-disciplinary, national research centre that aims to create the knowledge and strategies necessary for responsible, ethical, and inclusive automated decision-making. The Centre combines social and technological disciplines in an international network of industry, research and civil society partners that brings together experts from Australia, Europe, Asia and America. It will formulate world-leading policy and practice and inform public debate, with the aim of reducing risks and improving outcomes in the priority domains of news and media, transport, social services and health.
MINT is collaborating with Insurance Australia Group, Australia’s largest insurer, on our ‘Socially Responsible Insurance in the Age of AI’ ARC Linkage grant.
This project aims to discover the social costs and benefits of using Artificial Intelligence in insurance, and to design practical interventions (responsible design workshops, practical guidance, regulatory proposals, new algorithmic tools) that realise the benefits while mitigating the costs. It expects to generate new knowledge drawing on philosophy, law and sociology, working closely with practitioners at the forefront of deploying AI in insurance. Expected outcomes include novel ethical AI-based approaches to product design, pricing and claims administration. This should benefit insurers and consumers alike, realising the efficiency gains made possible by AI without unacceptable costs to privacy or fairness, and without the unaccountable exercise of power.
Our goal is to learn from practitioners the real problems they face when deploying data and AI, so that our research stays laser-focused on the problems that matter. We then want to maximise the impact of that research by working with partners in government, industry and civil society. We aim to influence policy, to guide the ethical implementation of data and AI, and to help shape the next generation of democratically legitimate AI systems. If you can help us do that, and if we can help you, then get in touch.
As humans, our skills define us. No skill is more human than the exercise of moral judgment. We are already using AI to automate morally-loaded decisions. In other domains of human activity, automating a task diminishes our skill at that task. Will ‘moral automation’ diminish our moral skill? If so, how can we mitigate that risk, and adapt AI to enable moral ‘upskilling’? Our project will use philosophy and social psychology to answer these questions.
A not-for-profit research institute created in 2018, Gradient is made up of world-class machine learning researchers. Their vision, like ours, is to progress the research, design, development and adoption of ethical AI systems.