Yesterday, I:

- Wrote a blog post ( http://lesswrong.com/lw/23r/the_concepts_problem/ )
- Again wrote some personal e-mails
- Got my intro speech from Justin

Today, I:

- Will either make a new LW post or research one

From today onwards, I will attempt to put up a new LW post on average once every two days. We'll see how long that lasts.


Yesterday, I realized one cause of my insecurities here. I hadn't been given any guidelines on how much I should be achieving, so to make up for that, I was imposing requirements on myself that were possibly way too strict. So during our daily ten-minute meeting, I asked Anna about it. She was reluctant to give a direct answer ("you should do the best you can - what use would it be to place a minimum requirement on you?"), so I reworded the question as "well, the Singularity Institute did have some goal in mind when the Visiting Fellows program was instituted, right?" That got me a longer answer (disclaimer: I've probably forgotten some parts of it already). Part of the purpose is simply to bring together people with potential and an interest in SIAI/existential risk and to improve their skills, so that they can benefit the organization's cause even after having left the program. On the other hand, it would also be good if we "did stuff". She specifically mentioned that some people thought Less Wrong was dying (a claim which surprised me, personally) and that it'd be good to get more quality posts up there, especially ones of a slightly more technical nature. Furthermore, we should try to look productive, both to inspire new visitors to be productive as well and to build a growth atmosphere in general.

Justin also explained to me his conception of what SIAI's long-term strategy should look like. Briefly: growth -> (global) intelligence enhancement -> uploads -> Friendly AI. Right now, the organization should concentrate on outreach and teaching activities and seek to grow, then attempt to leverage its size and resources for a general raising of the sanity waterline as well as for global intelligence enhancement. Eventually, we should get the technology for uploads and for uploading FAI programmers, who could then hopefully build FAI. That's a rather ambitious plan, which I found myself mostly agreeing with. I do think that IA methods are sorely needed in general, and that a partially upload-mediated Singularity would be the safest kind, if it's possible. Notably, the plan makes the operational assumption that real AI is still several decades away. That may or may not be true, but if somebody does have an almost-finished AI project in their basement, there probably isn't very much we can do in any case. Justin's going to discuss his plans with more of the SIAI leadership.

People are optimistic about SIAI's growth prospects. Michael Vassar was here some days back, and he mentioned 40% yearly growth as a realistic goal. On the downside, the rapid growth SIAI has had so far has also left things in a somewhat chaotic state, with no cohesive large-scale strategy and with different members of the organization somewhat out of touch with what the others are doing. Justin is hoping to get that fixed.

We finally shared our mindmaps. The instructions for those had been rather loose, so everyone's looked rather different. (Alicorn didn't have a mindmap at all, but rather a long outline and a list of questions.) Jasen's was probably the scariest. He discussed the threat of bioterrorism, and thought it possible that in some decades, synthetic biology might allow the creation of diseases that no human immune system can defend against. Furthermore, combining e.g. a disease that is very good at infecting people with one that has a long incubation period might become possible and easy even before that. Until now, biowarfare has been relatively hard, but future breakthroughs could make it possible even for your basic graduate student to create such nightmare combinations. Also, there apparently are real death cults (or at least individuals) out there, which doesn't exactly help me feel safer.

I thought the presentations were good, though, and got a bunch of ideas for various things I could write about. For now, I've set myself the goal of just writing a lot here. We'll see how that goes.

Before I came here, I was feeling rather burnt out on my studies. I figured that I'd spend several months abroad, concentrate purely on whatever I'd be doing there, and not think about my studies. Then I'd come back home, spend one month doing mostly nothing but relaxing, and then return to my studies filled with energy. Unfortunately, as good as that plan sounded, it doesn't seem to be working so far. I'm spending a lot of time worrying about which courses I should be taking after I get home, whether I should switch majors to CS after my bachelor's or stay in cognitive science, and whether it was a mistake to come here, forgoing the chance to finish some more courses and maybe net a summer job... meh.

Previously (like two paragraphs ago - this entry was composed over a period of several hours), I was thinking that I'd be spending most of my time here trying to get some academic writing done, in the hope that I could get enough publications together to pass them off as my Master's thesis in a year or two. But now I'm increasingly getting the feeling that I really don't want to do a Master's degree after finishing the Bachelor's. Unfortunately, the Master's is the norm in Finland, so trying to get some kind of a job with just a Bachelor's is going to be tricky. So maybe I should concentrate more on deepening my programming skills and maybe contributing to some open source project while here, to get something to show on a resume...

Re: intelligence augmentation

Date: 2010-04-17 08:52 pm (UTC)
From: [identity profile] vnesov.livejournal.com
Value drift is a form of being wiped out, a slow and non-obvious one. It should be seen as an existential threat, not a technology to be embraced (and I don't need to explain here how some advanced technologies offer similar seduction, wanted and liked, but potentially harmful beyond reason). Which preference do you think the IA-affected people will prefer to instantiate in the FAI? It might be in the interest of present humanity to avoid involving IA-affected people in FAI research, to the extent it will be possible.

That something is inevitable is not a moral argument for it being good, and I'm arguing specifically that it's not a good idea to use mind modification on the path to FAI.

Re: intelligence augmentation

Date: 2010-04-17 09:11 pm (UTC)
From: [identity profile] xuenay.livejournal.com
We experience value drift constantly, both as individuals and as societies. The values of 2000s society are a lot different from those of 1700s society, and the values I have today are a lot different from the ones I had as a newborn. I fail to see this as a threat. Aside from a few core values (like not wanting to cause unnecessary suffering to anyone), which I find unlikely to change even if we did have IA, I have no particular interest in freezing my current set of values as permanent, any more than I have an interest in permanently freezing my set of memories and skills in their current state. (I would probably have chosen to merge with the baby-eaters and superhappies.)

Furthermore, if I experience value drift as a consequence of IA, that implies that my increased intelligence causes me to see inconsistencies in my previous values that I didn't see before. I would welcome that kind of value revision.

Re: intelligence augmentation

Date: 2010-04-17 09:28 pm (UTC)
From: [identity profile] vnesov.livejournal.com
> We experience value drift constantly, both as individuals and societies

Of course, but again, not a moral argument. From the point of view of given preference, any drift away from that preference is a bad thing.

> I fail to see this as a threat.

Humans are still humans: our preference is reset to more or less the same thing with each new generation, since each has the same genetically determined construction. Culture has some influence, but given that preference is what you want in the limit of reflection, you'd probably be able to reinvent all the existing cultures twice over (metaphorically speaking), thus making the distinction between the different environments in which different people happen to be brought up insignificant.

Changing the architecture of the human mind is a whole new level of modification. Compare this with using an argument about humans of different IQ in a discussion of superintelligent AIs, or using an argument about religious zealots in a discussion about AGI values. It's just not the right order of variation from which to model the implications of the order of variation under discussion.

> I have no particular interest in freezing my current set of values as permanent, any more than I have an interest in permanently freezing my set of memories and skills to their current state

This statement shows either that you don't understand the idea of fixed preference (more likely), or that you are talking about humans as stuff that gets optimized, rather than as agents that do the optimizing. Preference is *defined* as that which you won't ever want changed, because it talks about what the world should actually be, and there is only one world, which can't ever be changed (in the timeless sense). You should read my blog (go through the current sequence, ask questions, discuss with people at SIAI -- I expect agreement on the major issues).

> Furthermore, if I experience value drift as a consequence of IA, that implies that my increased intelligence causes me to see inconsistencies in my previous values that I didn't see before.

You might get better at implementing your values, but the values themselves can also change. You'll then be better at implementing the changed values, not the original ones. And the changed values will be different for reasons other than having become more consistent: you can't hold a magical property (see "magical categories" on LW) fixed while varying an object that has that property, without a rigorous idea of what that property is, exactly. You can't change some property of the human mind without altering preference, unless you know exactly what preference is. And we don't.

Re: intelligence augmentation

Date: 2010-04-17 09:48 pm (UTC)
From: [identity profile] xuenay.livejournal.com
Oh, I certainly understand the idea of fixed preference (or at least I think I do, and yes, I've read the current sequence in your blog). What I'm saying is that I don't have fixed preferences outside a very narrow set. I would certainly be very cautious about using any IA that would threaten to change the preferences falling into that set.

Though I feel this discussion is getting rather abstract. Of course we should consider the pros and cons of each individual IA technique as they come along. But I don't think saying "we shouldn't use IA because it might change some of our values" is very useful when we don't know what realistic IA techniques might actually be or how they would work. Certainly none of the techniques that are currently available, or that will be available in, say, 15 years, will be capable of radically changing our values.

Re: intelligence augmentation

Date: 2010-04-17 10:06 pm (UTC)
From: [identity profile] vnesov.livejournal.com
> Though I feel this discussion is getting rather abstract.

It's no more abstract than asserting that all AGIs, except very specific FAIs constructed with rigorous understanding of preference, are fatal.

> What I'm saying is that I don't *have* fixed preferences outside a very narrow set.

Translated into my terminology, this is still an assertion about your fixed preference (even Microsoft Word gets a fixed preference), namely that your preference involves a lot of indifference to detail. But why would it be this way? And how could you possibly know? We don't know our preference; we can only use it (rather inaptly). Even if preference significantly varies during one's life (varying judgment doesn't imply varying preference!), that's a statement independent of how it can be characterized at specific moments.

Re: intelligence augmentation

Date: 2010-04-18 05:12 am (UTC)
From: [identity profile] xuenay.livejournal.com
Okay, I re-read your posts on preference to see if there was something to the definition that I missed, but I still don't see how your questions make sense.

I would understand the "how could you know your preference" question if it were used in the context of "we're going to alter the world in way X, how can you know that you'll actually like it". In that case, it would mean that the model I have of my preferences is incorrect, and that if I actually experienced that world, I'd find I preferred the unaltered world. But that's not the question you're asking. You're asking "we're going to alter your preferences in way X, how can you know that you won't disapprove of being changed". If I have no reason to assume beforehand that I'd disapprove, then I don't disapprove beforehand, and afterwards I presumably won't disapprove either, because I will like having my new preferences.
