Re: Re (2): intelligence augmentation
Date: 2010-05-19 12:04 am (UTC)

Let me try to start by summarizing the way I understand your definitions of "preference" and "like", and you can correct me if I got anything wrong.
The difference between a like and a preference is that a like is merely a description of how I react to things, while a preference is an ordering over possible worlds. I may like eating ice cream, but that only means that there are some situations in which doing so feels pleasurable. There is no unambiguous translation of likes into preferences. A liking for eating ice cream might suggest a preference for a world where there is ice cream, or where there exist people (including me) who have a chance to eat ice cream. Or it might have no direct effect on the preference at all. I may merely prefer a world where there exist sensory experiences that are at least as enjoyable as eating ice cream, even if those experiences are entirely different in kind.
If so, I agree that this definition of preference sounds good and valuable in theory. But I'm confused about how you are managing the translation between preference-in-general and human preference. (Even after your brief discussion of this in "Preference of programs".) You say that since a preference is an ordering of worlds, if A and B are contradictory it's impossible to prefer both A and B. Given your definition, I agree; however, it doesn't seem obvious that humans are consistent enough to have just a single set of preferences that fits your criteria. A human may in fact have one preference at 3 PM and another at 10 AM. (As just one datapoint, I notice that my mood seems to have a major impact on whether I lean more towards negative or positive utilitarianism.) Are you using a CEV-style "the preference that you most wanted to be taken into account" criterion, or something similar?
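
To make the consistency worry concrete, here is a minimal Python sketch. It is entirely my own illustration (the world names, rankings, and functions are made up, not taken from your post): a "like" as a reaction to a situation, a "preference" as an ordering over whole worlds, and two time-indexed snapshots of "my" preference that cannot be merged into one consistent ordering.

from itertools import combinations

# Hypothetical possible worlds, named only for illustration.
WORLDS = ["ice_cream_exists", "equal_joy_without_ice_cream", "neither"]

def like_ice_cream(situation):
    # A "like": just a description of how I react in a particular situation.
    return situation == "eating_ice_cream"

# A "preference": a ranking over whole worlds (lower rank = more preferred).
# Two snapshots of the same human at different times of day.
preference_at_10am = {"ice_cream_exists": 0, "equal_joy_without_ice_cream": 1, "neither": 2}
preference_at_3pm = {"equal_joy_without_ice_cream": 0, "ice_cream_exists": 1, "neither": 2}

def consistent(p, q):
    # True only if the two snapshots order every pair of worlds the same way.
    return all((p[a] < p[b]) == (q[a] < q[b]) for a, b in combinations(WORLDS, 2))

print(consistent(preference_at_10am, preference_at_3pm))
# False: no single ordering reproduces both snapshots.

Obviously a toy, but that is the shape of the obstacle I have in mind when asking whether you need something CEV-like to pick out the single ordering.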