> We experience value drift constantly, both as individuals and societies
Of course, but again, that's not a moral argument. From the point of view of a given preference, any drift away from that preference is a bad thing.
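A minimal toy sketch of why this is so (my own illustration, with made-up outcomes and utility numbers, not anything from the original exchange): an agent that scores futures by its current preference U will generally score a future steered by a drifted preference U' strictly lower, simply because U' selects different actions.

```python
# Toy example: evaluate both the original and the drifted agent's choice
# by the ORIGINAL preference U. The outcomes and numbers are invented.

outcomes = ["preserve_forests", "pave_everything", "mixed_policy"]

U  = {"preserve_forests": 10, "pave_everything": 0, "mixed_policy": 6}  # current preference
U_ = {"preserve_forests": 1,  "pave_everything": 9, "mixed_policy": 5}  # drifted preference

best_by_U  = max(outcomes, key=U.get)    # what the original agent would choose
best_by_U_ = max(outcomes, key=U_.get)   # what the drifted agent would choose

print(U[best_by_U])   # 10 -- no drift: the choice is optimal under U
print(U[best_by_U_])  # 0  -- drift: judged by U, the drifted choice loses 10 points
```

Nothing about the drifted agent is "broken" on its own terms; the loss only appears when the outcome is judged by the preference it drifted away from, which is exactly the point of view the argument adopts.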
> I fail to see this as a threat.
Humans are still humans: our preference is reset to more or less the same thing with each new generation, since every generation has the same genetically determined construction. Culture has some influence, but given that preference is what you want in the limit of reflection, you'd probably be able to reinvent all the existing cultures twice over (metaphorically speaking), which makes the distinction between the different environments in which different people happen to be brought up insignificant.
Changing the architecture of the human mind is a whole new level of modification. Compare this with using an argument about humans of different IQ in a discussion of superintelligent AIs, or using an argument about religious zealots in a discussion of AGI values. It's just not the right order of variation from which to model the implications of the order of variation under discussion.
> I have no particular interest in freezing my current set of values as permanent, any more than I have an interest in permanently freezing my set of memories and skills to their current state
This statement shows that either you don't understand the idea of fixed preference (more likely), or you are talking about humans as stuff that gets optimized, rather than as agents that do the optimizing. Preference is *defined* as that which you won't ever want changed, because it talks about what the world should actually be, and there is only one world, which can't ever be changed (in the timeless sense). You should read my blog (go through the current sequence, ask questions, discuss with people at SIAI -- I expect agreement on the major issues).
> Furthermore, if I experience value drift as a consequence of IA, that implies that my increased intelligence causes me to see inconsistencies in my previous values that I didn't see before.
You might get better at implementing your values, but the values themselves can also change, in which case you'll be better at implementing the changed values, not the original ones. The changed values will be different for reasons other than becoming more consistent: you can't hold a magical property (see "magical categories" on LW) fixed while varying an object that has that property, without a rigorous idea of what that property is, exactly. You can't modify some property of the human mind while leaving preference unaltered unless you know exactly what preference is. And we don't.