Re: obviousness of possible failure modes

Date: 2010-05-15 05:16 am (UTC)
SIAI is doing great things. But I can point out three obvious failure modes that SIAI and/or LW are already in:

- Being exclusively human-centric. This is the elephant in the room that nobody will talk about, for fear of scaring off the donors. Humans aren't that great. I look forward to a future where I don't have to deal with them on a regular basis. Understanding the possibilities ahead of us, and yet trying to keep the future safe for humans anyway, is the greatest evil anyone has ever attempted. I study history and I still mean that literally.

- Being super-secretive and paranoid. SIAI says it wants to make tools for AI researchers, yet Eliezer doesn't trust even the visiting fellows with what he's working on. Do it open-source, or don't do it.

- Not gathering the data and building the models needed to understand the phenomena they talk about, or to enumerate possible futures and assign them a probability distribution. Maybe this falls outside their mission.

Which brings up a failure mode that the rest of us have fallen into:

- Placing the burden of planning for the Singularity entirely on SIAI.