airo*ne

about

story

airo*ne is an observatory that documents the civilisational risks, as well as opportunities for human flourishing, that arise from advanced AI.

But why? There are a number of resources at the intersection of AI and existential or catastrophic risk (see links). However, most are academic, technical, or policy-oriented, and tend to focus on identifying and discussing the most important issues in the field. Some organisations and blogs communicate their research to a broader public, but relatively few connect AI research to neighbouring conversations, particularly in the cultural sphere.

This particular thicket is heavily bordered by philosophy, ethics, cognitive science, sociology, and the dynamics of political-economic systems. It is therefore relatively amenable to artistic treatment: art is a way of presenting apparently disparate or contradictory perspectives, suspending judgement or decision for a moment, and thereby perhaps allowing illuminating questions to surface.

For whom? This site is intended for people interested in AI generally, who may not be sure why there are substantive issues beyond bias and privacy, which are already well covered. These might include students, writers, artists, and other cultural producers.

By whom? The site is maintained by Kanad Chakrabarti (he/him), an artist and writer interested in how technology, interests, and power structures become entangled, potentially leading to suboptimal long-term outcomes for societies. His other projects can be found here.

Declaration of interest: At present (January 2022), neither this project nor the author is funded by any company or organisation involved in AI research.

Where now? Here are a few motivating questions and a glossary to help with jargon. For fun, an encyclopaedic dataset from the Future of Life Institute has been repurposed into a spidergraph, to the detriment of readability and usefulness.

name

airo*ne stands for Artificial Intelligence Risk Observatory for the Novel Epoch.

Currently, the project manifests as an online research archive. In time, it will expand into a platform with print publication, moving-image, and discursive elements, potentially embedded in an architectural intervention. The result is an installation that can expand and contract as needed, serving as a teaching tool for audiences outside, and occasionally within, the white cube.

license

The source code of this website is under the MIT License, while the assets and text content are under the CC BY-NC-SA 4.0 License.

Most of the general code for the site is from Hundred Rabbits, and the licenses above derive from theirs.

A specific exception is the data on topics in AI safety, which was compiled by Richard Mallah for the Future of Life Institute's Value Alignment Research Landscape map and is utilised on a fair-use basis. The code for the network graph is from the 3D-Force-Graph library. Some animations come from the p5.js library, a JavaScript port of the awesome Processing library. Most code has been substantially modified by the author.
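By way of illustration only, here is a minimal sketch of how a topic network might be rendered with 3D-Force-Graph. The nodes, links, and the #graph container below are hypothetical placeholders, not the site's actual code or the FLI dataset:

```js
// Minimal illustrative sketch using the 3D-Force-Graph library.
// The nodes, links, and '#graph' container are placeholder assumptions,
// not the actual FLI Value Alignment Landscape data.
import ForceGraph3D from '3d-force-graph';

const data = {
  nodes: [
    { id: 'value alignment' },
    { id: 'corrigibility' },
    { id: 'interpretability' },
  ],
  links: [
    { source: 'value alignment', target: 'corrigibility' },
    { source: 'value alignment', target: 'interpretability' },
  ],
};

// Mount the graph on a container element and label each node by its id.
ForceGraph3D()(document.getElementById('graph'))
  .graphData(data)
  .nodeLabel('id');
```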

contact

If you see any errors or omissions in the content, or have concerns about misinterpretation or infohazards, please get in touch. If you'd like to fund this project, to collaborate, or have ideas to make it better (for the AI Safety or Artworld communities), that would be great! Contact via Instagram @ai_r__o___n____e or kanad dot chakrabarti at ukc10014.org.