As promised, here's the first part of my interview series. It's meant to give you a feeling of what kinds of people I'm spending my time with in the Visiting Fellows program.

Alicorn ([livejournal.com profile] alicorn24 on LJ) was already here when I got here, and will in all likelihood still be here after I leave. Less Wrongers will know her as being one of the people with the highest karma on the site. I was kinda surprised when I met her, because somehow I'd gotten the impression that she'd be really shy and stuff, but she turned out to be really outgoing and extroverted and not shy after all. Her school allowed people to make up their own degrees so long as they also completed some 'real' degree, and as a result she has degrees both in Philosophy and World-Building, which is totally awesome.

She writes serial fiction together with Tethys at elcenia.com, and has her own webcomic (which is cute and which I like) at htht.elcenia.com. Alicorn also makes good food (see her cooking blog) and likes petting people's hair, if they allow it.

Kaj: So, tell our readers, how did you come to be here?
Alicorn: I originally sent an e-mail last fall, asking about the summer, because at the time I expected to be in grad school for the foreseeable future. I didn't get a firm response because there were so many summer applications to sort through and no clear idea of how many spots there were. Then, come the spring semester, I decided I wasn't happy, discerned no school-compatible way to fix that, and asked Anna if I could come out if I were able to leave right away instead of at the end of the school year. After some consideration and discussion, the answer turned out to be "yes"; I withdrew from school, packed up, flew out here, and proved useful enough to be kept around.
Kaj: 'Useful enough to be kept around' leads us pretty naturally to the next question, which is, what are the things that you do around here?
Alicorn: I write Less Wrong posts sometimes, although lately while I have lots of ideas, they aren't gelling properly. I've started doing a lot of outreach, because I love to chat with people, including the people SIAI wants someone to stay in touch with. I've also been doing some human capital development projects, absorbing more content and developing new skills.
Kaj: There are too many potential lines of interr... uhh, interviewing that I could pursue; I have difficulty picking which ones. The outreach thing sounds interesting - do you generally get to talk with people about Singularity-type stuff a lot, or is it more general conversation? What kinds of people do you talk with that count as outreach?
Alicorn: I have a pretty low ratio of Singularity-stuff to general conversation. For one thing, this probably increases my long-term quantity of Singularity conversations: people will be more willing to listen to me pontificate on that sometimes if it's not all I ever talk about! A lot of my contacts are people I was already friends with before I got involved with SIAI - some through Less Wrong, some not. In order to count for outreach at all, they have to have relevant interests, though - I can't include every one of my friends on my list of contacts for this reason.
Kaj: That makes sense. How do people in your experience generally react to Singularity-type stuff when it does come up? And do you actively seek out new contacts?
Alicorn: I do actively seek new contacts, although I prefer not to "cold call" - or rather, "cold e-mail" - I like to know a little about who I'm talking to first. People have surprised me with their reactions to Singularity type stuff. Some people reject it so thoroughly - even if they usually seem willing to listen to what I have to say and think I rarely have stupid ideas! - and others seem to follow everything I present, but don't find it at all motivating. People who don't fall into one of those categories, I've typically met through Less Wrong or the SIAI - so I can't claim to have converted anyone.
Kaj: Alright. You also mentioned that you've been doing human capital development and acquiring new skills. What kinds of skills in particular?
Alicorn: Since I seem to have comparative advantage at luminosity, I've been putting extra effort into verbalizing how I do that - the luminosity sequence wasn't as good as I think it could have been, and I'll probably give the topic another crack on LW in a few months. I attend some of the workshops that people in the house give, which are on all kinds of topics. And of course I read books and articles.
Kaj: Say a few words about luminosity, for those readers of mine who aren't LW regulars?
Alicorn: Luminosity is self-knowledge: the ability to monitor what's going on in your mind, predict what you're going to do next, and find the best ways to change these if you want to.
Kaj: And here's the link to Alicorn's Luminosity sequence, for anyone who's interested. So how do you like living and working here in the house?
Alicorn: I like it a lot! All of the people here are really great. There are some challenges associated with living in a large group, but we navigate them pretty effectively. It's easy to wander around and find an interesting conversation if I have nothing to do, and there are lots of people to feed my delicious food to.
Kaj: Cool. I occasionally have the feeling that the opportunity to talk with all these people so easily gets me distracted and prevents me from getting things done. Do you manage to avoid that?
Alicorn: I'm highly interruptible. While it costs me time to get sidetracked, it doesn't tend to make it much harder to pick up the project that was interrupted later. But if I need not to be disturbed, I can go in my room and close the door - or, for a less heavy-duty solution, put on my headphones.
Kaj: That works. What else. Oh yeah, how did you originally hear about the Singularity and all this stuff?
Alicorn: I was aware of the Singularity as a background idea, but considered it a science fiction trope more than something that might actually happen, for a long time - I assume I picked it up from all the fiction I read. I started taking it seriously after I found Less Wrong, which I discovered via the Overcoming Bias link after having found OB through StumbleUpon.
Kaj: I thought it'd be something like that. So what are your own thoughts about the Singularity and our posthuman future? Do you think we'll just inevitably end up welcoming our robot overlords, for instance?
Alicorn: Can you rephrase that question, please?
Kaj: Sure. Basically I just meant to ask what your views were on things like the path to the Singularity, the likely timeframes and our chances of making it through intact. For instance, I'd personally be surprised if we didn't have real AI in say fifty years, and I suspect humanity has a pretty low chance of surviving the transition in a way that we'd consider positive (though I'd love to be proven wrong on that, obviously).
Alicorn: Hm. I'm not absolutely convinced we'll encounter a Singularity at all. I think it's entirely possible that there's some bottleneck in how fast technology can progress that we haven't hit yet, which, when it manifests itself, will smooth out all our further advancement and have us moving forward in a distinctly non-Singularityesque way. We could also all die, which would be bad. I'm skeptical that, if the Singularity happens, it will happen in fifty years or less: estimates for when things happen are often pushed back and virtually never pushed forward. The good part is that this gives us lots of runway space in which to steer, insofar as we can steer. But I'm very dubious about CEV as a solution to fragility of value, and I think there are far more and deeper differences in human moral beliefs and human preferences than any monolithic solution can address. That doesn't mean we can't *drastically* improve things, though - or at least wind up with something that *I* like!
Kaj: Alright. I think I'm starting to run out of questions... oh yeah. With degrees in philosophy and world-building, you're somewhat different from the average Visiting Fellow, with a lot of people here tending to have more mathy or computer science-ish backgrounds. Do you think that's led to any situations where the difference is clearly noticeable, or is it something that hardly ever comes up? Do you think there should be more diversity of backgrounds here?
Alicorn: I stay out of the really technical discussions, generally. I don't think it comes up too much apart from that. I think cognitive diversity is underrated in general. My post "Epistemic Luck" mentions ideological families within a discipline - I don't find it remotely hard to believe that similar things could happen to disciplines entire, which seems a dangerous thing not to guard against. The Less Wrong/SIAI community is quite homogeneous in more ways than just aptitudes for science and math, and I worry that we're missing some gigantic, obvious failure mode that someone from a different background would spot at once.
Kaj: That does sound worryingly plausible. Alright, I'm out of real questions, so it's time to go meta. Is there any question you'd have liked me to ask you? If so, what would it be and how would you answer it - and no, a meta-response like "how would you like me to answer that question" isn't allowed.
Alicorn: Gosh, I don't know, I'm just so relentlessly fascinating, how can I pick a single one of the arbitrarily large number of things you could have asked me as the best one when any of them would have gloriously entertained your readers?
Kaj: :D Roll a die?
Alicorn: My dice are in the mail from my old apartment and won't be here for a couple days.
Kaj: Ah well. Well, aside from answers to difficult meta-questions, is there anything else you'd like to say before we're done with the interview?
Alicorn: Um, if anybody wants to be my friend, they should send me an e-mail or IM me. I am very approachable and nobody should find me intimidating at all.
Kaj: Cool. If anyone wants to do that, Alicorn's AIM name is alicorn24, and which e-mail addy should I mention?
Alicorn: alicorn@singinst.org works fine. But if someone uses gChat, my ID there is elcenia@gmail.com, and my MSN address is alicorn@elcenia.com :)
Kaj: :) And that's it, I think. Thanks for the interview!
Alicorn: You're welcome!

Date: 2010-05-12 09:49 pm (UTC)
From: [identity profile] ciphergoth.livejournal.com
Thanks for this!

additional LW link

Date: 2010-05-13 06:45 am (UTC)
From: [identity profile] alekseiriikonen.livejournal.com
Where you write "fragility of value", a link to the Less Wrong post defining the expression might be useful:

http://lesswrong.com/lw/y3/value_is_fragile/

Re: additional LW link

Date: 2010-05-13 07:14 am (UTC)
From: [identity profile] xuenay.livejournal.com
Hmm, you're right. Added that.

obviousness of possible failure modes

Date: 2010-05-13 10:17 am (UTC)
From: [identity profile] alekseiriikonen.livejournal.com
Alicorn: "I worry that we're missing some gigantic, obvious failure mode that someone from a different background would spot at once."


If so, I doubt that the failure mode can be *very* obvious. There's a substantial number of people from very-different-to-SIAI-folks backgrounds who have expressed that they think SIAI folks are obviously insane (Dale Carrico is an example). But these people tend not to be honest or knowledgeable in their characterizations of SIAI folks.

So SIAI folks' possible failure modes can't be extremely obvious, or these people would have made more sense than they have.

Watching out for less obvious failure modes is of course important.

Re: obviousness of possible failure modes

Date: 2010-05-15 05:16 am (UTC)
From: [identity profile] shagbark.livejournal.com
SIAI is doing great things. But I can point out 3 obvious failure modes SIAI and/or LW are already in:

- Being exclusively human-centric. This is the elephant in the room that nobody will talk about, for fear of scaring off the donors. Humans aren't that great. I look forward to a future where I don't have to deal with them on a regular basis. Understanding the possibilities ahead of us, and yet trying to keep the future safe for humans anyway, is the greatest evil anyone has ever attempted. I study history and I still mean that literally.

- Being super-secretive and paranoid. SIAI says they want to make tools for AI researchers; yet Eliezer doesn't trust even the visiting fellows with what he's working on. Do it open-source, or don't do it.

- Not gathering the data and making the models needed to understand the phenomena they talk about, and to enumerate and build a probability distribution over possible futures. Maybe this falls outside their mission.

Which brings up a failure mode that the rest of us have fallen into:

- Placing the burden of planning for the Singularity entirely on SIAI.

Re: obviousness of possible failure modes

Date: 2010-05-15 08:22 am (UTC)
From: (Anonymous)
"- Not gathering the data and making the models needed to understand the phenomena they talk about, and to enumerate and build a probability distribution over possible futures. Maybe this falls outside their mission."

Are you aware of the Uncertain Future project? (http://www.singinst.org/blog/2009/12/12/the-uncertain-future/)

Re: obviousness of possible failure modes

Date: 2010-05-17 09:20 pm (UTC)
From: [identity profile] shagbark.livejournal.com
Yes! That's nice, actually.

I had different things in mind, but the list of things we'd like to model is so broad that I guess it's silly of me to fault them for not tackling my particular list.

Re: obviousness of possible failure modes

Date: 2010-05-15 08:58 am (UTC)
From: [identity profile] alekseiriikonen.livejournal.com
"Being exclusively human-centric. This is the elephant in the room that nobody will talk about, for fear of scaring off the donors. Humans aren't that great. I look forward to a future where I don't have to deal with them on a regular basis. Understanding the possibilities ahead of us, and yet trying to keep the future safe for humans anyway, is the greatest evil anyone has ever attempted. I study history and I still mean that literally."


What you mean by "exclusively human-centric" seems quite unusual and weird, if merely "trying to keep the future safe for humans" is enough to count as that.

What would be evil would be *not* wanting to see to it that humans *also* will be safe in the future. Seeing to it that humans are safe will in no way limit the possibilities for "more developed" lifeforms, unless you have childish misanthropic fantasies about punishing all of the human race or something.

The future should be safe for humans, just as it should be safe for future more complicated lifeforms and currently existing non-human animals. This thought isn't human-centric; it's just the responsible and emotionally mature thing to want to take humans into account just like all the other lifeforms.

Re: obviousness of possible failure modes

Date: 2010-05-17 09:16 pm (UTC)
From: [identity profile] shagbark.livejournal.com
So, does that mean you want to make the world safe for cats and dogs?

At a bare minimum, you must outlaw cars.

I believe what I said. But I should have said that "trying to reserve the future for humans" - which is what Eliezer wants to do - is closer to what most people automatically think they want to do, and is even more evil.

Re: obviousness of possible failure modes

Date: 2010-05-18 04:11 am (UTC)
From: [identity profile] shagbark.livejournal.com
Or, an even more conservative position: without saying whether it's right or wrong to make the future safe for humans, you can't both say that and eat hamburgers. That's a blatant hypocrisy that AIs will see right through.

Re: obviousness of possible failure modes

Date: 2010-05-18 06:48 am (UTC)
From: [identity profile] alekseiriikonen.livejournal.com
I'm pretty familiar with animal rights thinking, and used to be rather militant in that regard (I also btw was the spokesperson for the Animal Liberation Front (http://en.wikipedia.org/wiki/Animal_Liberation_Front) Support Group in my country for a short while).

I however think it's very wise that SIAI is not preaching vegetarianism/etc very much (Anissimov has actually done some of that on his blog, and if you look closely, you find a lot of vegetarianism among SIAI folks even though they don't usually make a big deal of it). Myself, I have eaten something like 4 hamburgers in the last ~5 years, so I'm indeed not 100% vegan anymore.

The reason is that paying attention to existential risks is so astronomically more important that, insofar as being (or becoming) vegan/etc requires *effort*, one in most cases shouldn't do it, or require others to do it. Essentially all effort we're able to spend on being ethical should be spent on trying to minimize existential risks, since the stakes are so unbelievably high there.

So I'm not vegan in situations where it requires effort (like looking for a new place to eat at) or makes life difficult for people I'm visiting or something, and SIAI shouldn't expect veganism from its employees -- it would be about as silly, from an ethics POV, as requiring people to save every earthworm they see drying to death on asphalt. There unfortunately are greater threats to tackle, and we all have limited effort to spend. One thing that we really should avoid is becoming like those animal rights people who spend much time trying to go from 99.9% vegan to 100% vegan, and debating what perfect veganism actually is.

Not requiring a particular diet is a very good anti-cultishness measure, it has been said. This argument I also support wholeheartedly. I was quite uncomfortable with the amount of cultishness I saw when I was part of the radical animal rights movement.

Re: obviousness of possible failure modes

Date: 2010-05-18 09:31 am (UTC)
From: [identity profile] alekseiriikonen.livejournal.com
"Not requiring a particular diet is a very good anti-cultishness measure, it has been said."


Hmm, since I feel that was an exceptionally poor choice of words, I'll clarify what I meant:

"Not requiring a particular diet" is one thing that one really should do to avoid cultishness and insularity, but of course this "measure" alone doesn't guarantee much at all. It's necessary, but very very far from sufficient.

Re: obviousness of possible failure modes

Date: 2010-05-18 02:45 pm (UTC)
From: [identity profile] shagbark.livejournal.com
What I meant is that the principle by which you think AIs should give humans a safe environment after the singularity, also dictates that humans should give cows a safe environment today. Not leave it up to individual choice - meat-eating should be outlawed. Possibly with an exemption for seafood, worms, and other creatures without a limbic system.

Re: obviousness of possible failure modes

Date: 2010-05-18 04:32 pm (UTC)
From: [identity profile] alekseiriikonen.livejournal.com
If there was a referendum on whether to globally outlaw meat-eating (the kind that actually causes harm to animals), I would vote for the ban.

This is a different question from whether I should spend my effort on campaigning for such a referendum, or otherwise be aggressive in promoting vegetarianism.

So long as there are much more pressing concerns than animal rights concerns, I should focus my effort on those other concerns. If in the future there no longer are more pressing concerns, I'll be happy to return to semi-professional animal rights activism.

It's fine by me if future AIs act similarly, i.e. don't respect human rights maximally if that would diminish the effort they're able to spend on combating astronomically more relevant threats. I can't really picture a realistic scenario where this principle would cause great harm, at least not if we suppose a successful Singularity.

Re: obviousness of possible failure modes

Date: 2010-05-18 04:41 pm (UTC)
From: [identity profile] xuenay.livejournal.com
Cloned meat doesn't seem very far away. We might shift to eating meat grown in a dish even before we have AIs. (Not to mention that any AI worth its salt will be able to come up with methods for simulating the sensory experience of eating meat via proper brain stimulation, if people want that.)

Re: obviousness of possible failure modes

Date: 2010-05-15 09:00 am (UTC)
From: [identity profile] alekseiriikonen.livejournal.com
"Do it open-source, or don't do it."


What if you were developing nukes that didn't require uranium, just software? Would you also want to do that open-source?

You don't seem to understand the risks involved in AGI development.

Re: obviousness of possible failure modes

Date: 2010-05-17 09:17 pm (UTC)
From: [identity profile] shagbark.livejournal.com
You don't seem to understand the risks involved in Eliezer having the only AGI.

Re: obviousness of possible failure modes

Date: 2010-05-18 04:09 am (UTC)
From: [identity profile] shagbark.livejournal.com
Actually, the more important reply is this: Regardless of whether you believe that Eliezer has both good intentions and a good plan - I consider the first more probable than the second - the problem is so complex that Eliezer can't solve it on his own. To believe that a group of two, or ten, people on their own, crippled by groupthink, will solve this problem in isolation and get it right the first time, is not just hubris, it's full-blown megalomania.

Re: obviousness of possible failure modes

Date: 2010-05-18 07:06 am (UTC)
From: [identity profile] alekseiriikonen.livejournal.com
That doesn't sound like an argument for making it open-source. Designing a nuke was also difficult back in the day, but the building of potentially very destructive things shouldn't be made easier by making any and all information public.

Incidentally, I *do* agree that SIAI's current primary strategy will likely fail (though I'm not certain that it'll fail, and think effort should be put into trying). SIAI doesn't make bold statements regarding the likelihood of success either. But still, the scenario SIAI is currently primarily striving for would be vastly preferable to the realistic alternatives, and needs to be tried. (I think your description of what SIAI is doing isn't entirely accurate, but I won't go into that now.)

The alternatives are things like working in cooperation with the military/etc, which I think is more likely to happen than SIAI's primary plan working. At some point, military/etc folks will fully wake up to the importance of AGI, and they'll start going around making offers people can't refuse. SIAI needs to try to have a good deal of knowledge and competence by then that they can bargain with.

Re: obviousness of possible failure modes

Date: 2010-05-18 02:49 pm (UTC)
From: [identity profile] shagbark.livejournal.com
We need to get it right, and to get it right, we need a community of people engaging with each other, discussing, criticizing. There's no point talking about whether what SIAI is striving for would be preferable; SIAI is not going to get it right on their own. Your only choices are

1. More open discussion involving hundreds or thousands of people, leading to an understanding of the problem, leading to somebody, somewhere, having a chance of getting it right, or

2. No open discussion, SIAI gets it wrong and either produces no AI, or an AI that does horrible things, or

3. No discussion, skynet.

Re: obviousness of possible failure modes

Date: 2010-05-18 04:54 pm (UTC)
From: [identity profile] alekseiriikonen.livejournal.com
I agree, and it seems to me that SIAI is putting a lot of effort into making that discussion happen. That's what the Singularity Summits are for, for example, and Less Wrong is kinda also working for that.

Needing to have lots of discussion however doesn't mean that all code should be open-source. Very few if any of the things that currently need to be discussed are such that the code level is where they should be articulated.

Or would you like to point to specific individuals who are very willing and able to contribute to Singularity discussion, but whose ability to do so is significantly handicapped because of lack of access to code that SIAI may or may not have?

(My impression is that there isn't much yet done on the code level anyway, since the big problems still are on a higher level of abstraction.)

Re: obviousness of possible failure modes

Date: 2010-05-18 06:05 pm (UTC)
From: [identity profile] shagbark.livejournal.com
> Very few if any of the things that currently need to be discussed are such that the code level is where they should be articulated.

That may be true. But:

1. SIAI's website says that part of its core mission is to "Provide the AI community at large with conceptual, mathematical, and software tools that they can use to move and accelerate their AI R&D work toward the direction of safe and beneficial general intelligence."

2. In very difficult mathematical problems, you often have huge misconceptions about what you're talking about that aren't apparent until you spell it out at the code level. (For example, Russell's paradox showed that existing concepts of "formal mathematics" weren't formal enough.) So a useful discussion is necessarily at the code level.

> (My impression is that there isn't much yet done on the code level anyway, since the big problems still are on a higher level of abstraction.)

Eliezer's been working at the code level for years, but AFAIK no one but Marcello knows what he's doing, because he won't tell people, even within SIAI.

Re: obviousness of possible failure modes

Date: 2010-05-19 07:57 am (UTC)
From: (Anonymous)
> Eliezer's been working at the code level for years

Math, not code.

Date: 2010-05-13 10:28 am (UTC)
From: [identity profile] nancylebov.livejournal.com
Thank you both for doing this. What are the human capital development projects?

Date: 2010-05-13 02:37 pm (UTC)
From: [identity profile] squid314.livejournal.com
Degree in world-building? As in, like, con-worlding? How did I not go to this college??!

(oh, yeah, I applied to Amherst as my first choice and got rejected. @$#%.)

Date: 2010-05-17 09:42 pm (UTC)
From: [identity profile] alicorn24.livejournal.com
The school that let me do the worldbuilding degree was http://simons-rock.edu/, not UMass (the latter was where I went to grad school, and I only studied philosophy there).
