

Links and further reading

Prominent research organisations (in and out of academia)

Future of Humanity Institute

The Center for Human-Compatible AI

The Machine Intelligence Research Institute

The Centre for the Study of Existential Risk

Google DeepMind

The Berkeley Existential Risk Initiative

The Global Priorities Institute

The Center on Long-Term Risk

The Center for Security and Emerging Technology

AI Impacts

Leverhulme Centre for the Future of Intelligence

AI Safety Camp

The Future of Life Institute

Convergent AI

Median Group

AI Pulse

Global Catastrophic Risk Institute

Other resources, blogs, etc.

Alignment Forum

Open Philanthropy

LessWrong

AI Safety Papers

AI Alignment

Books and publications

Bostrom, Nick, 'Superintelligence: Paths, Dangers, Strategies', Oxford: Oxford University Press, 2014.

Ord, Toby, 'The Precipice: Existential Risk and the Future of Humanity', New York: Hachette, 2020.

Christian, Brian, 'The Alignment Problem: Machine Learning and Human Values', New York: W. W. Norton & Company, 2020.

Russell, Stuart, 'Human Compatible: Artificial Intelligence and the Problem of Control', New York: Viking, 2019.

Parfit, Derek, 'Reasons and Persons', Oxford: Oxford University Press, 1984.