Jul. 10th, 2013


Of the (admittedly not many) papers I’ve written so far, this is the one that I’m the proudest of:

Responses to Catastrophic AGI Risk: A Survey

Abstract: Many researchers have argued that humanity will create artificial general intelligence (AGI) within the next twenty to one hundred years. It has been suggested that AGI may pose a catastrophic risk to humanity. After summarizing the arguments for why AGI may pose such a risk, we survey the field’s proposed responses to AGI risk. We consider societal proposals, proposals for external constraints on AGI behaviors, and proposals for creating AGIs that are safe due to their internal design.

Cite as: Sotala, Kaj and Roman V. Yampolskiy. 2013. Responses to Catastrophic AGI Risk: A Survey. Technical Report, 2013-2. Machine Intelligence Research Institute, Berkeley, CA.

It’s the first comprehensive survey of what’s been said in the AGI risk field, starting from the arguments for AGI being a risk and then moving on to the various reactions that different people have had: arguments that AGI isn’t a risk in the first place or that working on it is a wasted effort, various proposals for regulating AGI or even banning it altogether, “Oracle AIs”, different high-level approaches for building safe AGIs, and so on. At least every major kind of proposal should be in there, even if not every particular example of them is. (And we did try to find most of the particular examples, too.)

It clocks in at 82 pages and over 250 references. It took a while to write, and I did the vast majority of the writing.

Download link here.

Originally published at Kaj Sotala. You can comment here or there.
