Response to Statement on Existential Risk

There is a growing fear of existential risk arising from AI: that AI itself could threaten the existence of humanity as a whole. A statement put out by many leading figures in AI research calls for priority to be placed on “mitigating the risk of extinction from AI”.

MINT Lab’s Seth Lazar has teamed up with AI researchers Jeremy Howard and Arvind Narayanan to argue that the risk comes not from AI directly, but from the people in a position to control it. They argue that the best way to avoid existential risk from AI is to build robust research communities that work to mitigate the better-understood risks posed by concrete AI systems.

You can read the full essay here.