airo * ne

questions

A reasonable number of people who have spent a lot of time thinking about the direction AI is going believe there is at least the possibility of a problem. The easiest place to start is the 2021 Reith Lectures on BBC Radio 4 by Professor Stuart Russell, who has also written a book called Human Compatible. This Vanity Fair article covers Elon Musk's concerns, while the late Stephen Hawking warned about AI on the BBC.

This is, sadly, a several-part answer. This is my take; see also the Reith Lectures and Russell's book (links above).

This piece on Vox also explains the problem pretty well.

No, but they are related. Algorithmic bias has been a huge issue, covered in the general media as well as by artists and art writers. It affects hiring decisions, surveillance, and policing in the carceral state. It has been recognised as a problem within the machine learning community, and the technical reasons it happens are specifically covered in Chapters 1 and 2 of 'The Alignment Problem' (2020). Briefly, it arises from the fact that the algorithms were trained on datasets that, owing to how they were collected, misrepresented the societies in which the algorithms would be used; in some cases, the algorithms encoded, sometimes in subtle or unforeseen ways, biases that already existed in parts of society. To an extent, the under-representation of women and people of colour in the companies and research organisations that developed the algorithms made the problem worse.
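As a toy illustration of that mechanism (my own sketch, not an example from 'The Alignment Problem', using an entirely made-up task and made-up groups), the following trains a simple classifier on data collected mostly from one group; the under-represented group ends up with noticeably worse accuracy, even though nobody intended that outcome.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, threshold):
    # Hypothetical screening task: one numeric feature, and the 'true'
    # cut-off for a positive label differs between the two groups.
    x = rng.normal(size=(n, 1))
    y = (x[:, 0] > threshold).astype(int)
    return x, y

# Training data collected mostly from group A (95%) and only a little from group B (5%).
x_a, y_a = make_group(1900, threshold=0.0)
x_b, y_b = make_group(100, threshold=1.0)
model = LogisticRegression().fit(np.vstack([x_a, x_b]), np.concatenate([y_a, y_b]))

# Evaluate on fresh, equally sized samples from each group.
for name, threshold in [("group A", 0.0), ("group B", 1.0)]:
    x_test, y_test = make_group(5000, threshold)
    print(f"{name}: accuracy = {model.score(x_test, y_test):.2f}")

# The learned decision boundary sits close to group A's cut-off, so the
# under-represented group B sees markedly lower accuracy.
```

The point is not the specific numbers but the shape of the failure: the model fits whatever the collected data over-represents, and the gap only becomes visible if someone thinks to measure performance per group.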

AI Safety is, however, a broader, distinct, and (potentially) more severe problem than bias. Bias can be traced back to datasets and can (mostly) be corrected in real time. The issues around AI Safety, by contrast, are more foundational and subtle: their root is how massive, complicated, recursive, and non-transparent systems process data and behave. Moreover, AI systems can, if they are allowed to, replicate quickly and cheaply, making problems more widespread. AI systems may also be embedded in, or connected with, vital infrastructure such as power grids or medical facilities.

No! The Terminator had a killer robot that looked sort of like a human and was intentionally hunting down people, not to mention travelling to 1984 from the future (2029). AI can be a problem without anyone intentionally programming or designing it to harm people. It can be a problem owing to our incompetence in designing powerful technology, or to our simply failing to foresee very non-obvious problems in something that humans, as a species, have never done before. It is a little like building the first bridge, airplane, or nuclear power station: we would try to be careful in all those cases, but bad things could still happen.

However, there are such things as Lethal Autonomous Weapons Systems (LAWS), which are essentially drones trained to kill humans in a largely automated fashion, but they generally don't look like Arnold Schwarzenegger and aren't good at time travel. They are still a huge and near-term concern, as this video on Slaughterbots vividly illustrates.

There is a wide range of views on whether there is a problem and, if so, when and how it might manifest. Many AI researchers think there is an issue worth investigating, and many major companies doing AI research nominally have a safety or ethics oversight function. Some companies, such as Google/DeepMind and Microsoft, seem to be more committed, publishing papers specifically focused on safety, or funding academic or hybrid academic-private research into AI Safety. An outside observer could, not unreasonably given Big Tech's recent history of self-regulation, view these efforts cynically as a move to stave off stricter governmental regulation.

There are a number of academic or non-profit organisations dedicated to this topic, and an ecosystem of blogs, mailing lists, career development opportunities, and curricula that try to build expertise, consensus, and a useful range of views/approaches. See links.

That said, there are divergent views on how fast human-level AI will arrive, how much of a problem it really will be, whether we will have advance warning, and whether there will be opportunities to 'dial back and regroup' if things go badly. There is a tension between the Silicon Valley ethos of 'move fast and break things' and the possibility that AI Safety will not allow such an approach. As an analogy, consider nuclear power versus the latest computer game. Nuclear power is quite unforgiving: we should prefer to prevent accidents, because fixing them after the fact is difficult. The average computer game is different: it is mostly fine to release it into the world and then iterate or tweak it based on user feedback. AI risk should be viewed more like nuclear power than games, in that we should adopt a precautionary principle even if that slows down 'progress' or reduces potential profits.

Even within the AI Safety community, there are divergent views on how to approach the problem. See these comments Stuart Russell made about his disagreement (on the severity of AI risk) with Steven Pinker; for context, the original Pinker-Russell conversation; and this more technical interview with Ben Garfinkel, who scrutinises some of the classic arguments about AI risk and finds them somewhat wanting (though, to be clear, he still thinks AI Safety is worth focusing on).

If we, intentionally or accidentally, create an AI that is unsafe (or unaligned, the more precise term used in the field) in certain ways, some of which are known or foreseen by us and others of which are unknown, this AI could undertake actions that put humanity's present existence and its future at risk. An obvious example is if an AI were hooked up to the nuclear command and control system of the US, Russia, or China, and that led to a global thermonuclear exchange (basically the script of the 1970 film Colossus: The Forbin Project). The resulting nuclear war would probably kill or injure hundreds of millions, and the likely nuclear winter may set back civilisation significantly.

Importantly, unlike The Terminator, such an AI needn't particularly 'hate us', or even have any sense of what that means. It may simply act in a way that takes no account of our existence or interests.

This quote by researcher Eliezer Yudkowsky captures the point: 'The AI does not hate you, nor does it love you, but you are made of atoms which it can use for something else.'

This 1965 quote from British mathematician I.J. Good is less pithy but apt:

'Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. It is curious that this point is made so seldom outside science fiction.'

More sobering is his view in unpublished notes dating from 1998 (writing in the third person):

'“The survival of man depends on the early construction of an ultra-intelligent machine.” Those were his [Good’s] words during the Cold War, and he now suspects that “survival” should be replaced by “extinction.” He thinks that, because of international competition, we cannot prevent the machines from taking over. He thinks we are lemmings. He said also that “probably Man will construct the deus ex machina in his own image.”'

No: there is a taxonomy of risks, not all of which would wipe out all humans alive in a short period (i.e. the Hollywood version). The brief answer is that Catastrophic Risks are really bad (say the global power grid goes out owing to a naturally occurring but large solar storm, and we have a blackout spanning weeks or months), while Existential Risks involve the vast majority of humans being wiped out. A more detailed analysis is presented in the very readable 2020 book by Toby Ord, 'The Precipice'. Importantly, many risks, such as climate change or, as we have seen, a modestly lethal and transmissible pandemic, have highly unequal impacts from a distributional perspective: poor people often fare worse. So a risk can be non-global or non-existential, yet be awful for very many people. Certain extreme AI risk scenarios do, however, seem to fit the definition of existential risk; see plateauing.

Some Existential Risks are pretty tangible and/or may manifest in the near term, over our lifetimes or those of our (grand-)children, and it seems self-evident that we should care about these. Hence climate change, engineered pandemics, and nuclear war seem like things everyone should care about, as they are, or could quickly become, immediate threats.

Other risks, such as AI-related ones, are more complicated or subtle, and may not materialise in the near future. To the extent that these risks do not greatly affect us or our near descendants, it is more a matter of taste whether one cares about them, but some philosophers and moral systems hold that we should try to leave the world in a fit state even for our distant descendants. Some of these considerations are lumped under the term 'Longtermism' (see glossary). An accessible starting point is Toby Ord's 'The Precipice' (2020), as well as this short article about the late Derek Parfit and this longer piece in The New Yorker.

Other people could very reasonably take the view that longer-run, non-obvious risks are less important than immediate issues (climate change, inequality, discrimination, poverty, disease, war, etc.), because we can do less about the former. Even so, there is a case that people shouldn't dismiss Catastrophic or Existential Risks as exotic, fringe, or parochial: Stuart Russell's 2021 Reith Lectures present the case for AI Safety as a 'right-now' problem pretty powerfully; a slightly wonkier perspective comes from this discussion with researcher Carl Shulman.

Increasingly powerful AI could be a great benefit to humanity, for instance by massively improving medical science and drug development, radically speeding up scientific progress in general, boosting economic growth, and potentially making deep-space exploration more feasible. Whether these are judged to be good things depends on one's philosophical perspective on growth and progress. Importantly, certain AI scenarios may actually impoverish biological humans, which would need to be mitigated by massive basic-income support, as Stuart Russell writes in 'Human Compatible' (2019).

They all refer to slightly different things, but the glossary can help.

A high-level view from Holden Karnofsky, who has reviewed and summarised a number of detailed forecasts, is: 'I estimate that there is more than a 10% chance we'll see transformative AI within 15 years (by 2036); a ~50% chance we'll see it within 40 years (by 2060); and a ~2/3 chance we'll see it this century (by 2100).'

The term 'AI winter' refers to two boom-bust cycles, 1974-1980 and 1987-1993, when funding and interest in AI dried up as radically inflated expectations of what AI could do were dashed. There is some sense amongst insiders that another such cycle might be on its way, not least because equity market valuations are so stretched (December 2020), but also possibly for technical reasons. While it's hard to say anything concrete about timing, historical AI winters have lasted less than a decade, so another one may not drastically affect the risk landscape.

Those are indeed potentially more pressing concerns, particularly at the moment; they should continue to get the lion's share of resources and remain a primary focus of philanthropic efforts. However, someone should spend some time and money thinking about AI Safety, which is probably somewhat under-resourced (in terms of capable researchers, if not money).

The short answer is 'no'. AI comes in various flavours, corresponding to how 'powerful' it is, and ML mostly fits into a narrow definition of AI: good at specific tasks for which models have been trained on large datasets using large amounts of computing power. Although this is changing, many of these models are less good at generalising to other tasks, even very similar ones. See Moravec's Paradox, the TLDR of which is: AI is good at things we do badly (like maths), and bad at things we do well (like being creative). AGI, or, in Nick Bostrom's 2014 terminology, Superintelligence (see links), corresponds to intelligence that is at or exceeds human level. Finally, TAI is the notion that some variety of AI, presumably closer to AGI than contemporary ML, has dramatic effects on our world, comparable to the Industrial Revolution. See here for more background and here for a more technical discussion of the relative timelines of TAI and AGI.