> Though I feel this discussion is getting rather abstract.
It's no more abstract than asserting that all AGIs, except very specific FAIs constructed with a rigorous understanding of preference, are fatal.
> What I'm saying is that I don't *have* fixed preferences outside a very narrow set.
Translated to my terminology, this is still an assertion about your fixed preference (even Microsoft Word gets a fixed preference), namely that your preference involves a lot of indifference to detail. But why would it be this way? And how could you possibly know? We don't know our preference, we can only use it (rather inaptly). Even if preference varies significantly during one's life (varying judgment doesn't imply varying preference!), it remains a notion independent of how it can be characterized at any specific moment.
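To make the parenthetical concrete — varying judgment doesn't imply varying preference — here is a minimal sketch (my own illustration in Python; the umbrella scenario and all names are hypothetical, not from this discussion). A single fixed utility function induces different judgments as the agent's beliefs change, so observed variation in judgment is fully compatible with an unchanging preference:

```python
# Minimal sketch: a fixed preference whose induced judgments vary
# with the agent's information state. All names are hypothetical.

# The fixed preference: a utility function over outcomes. It never changes.
UTILITY = {
    "umbrella_and_rain": 1.0,
    "umbrella_no_rain": -0.2,
    "no_umbrella_and_rain": -1.0,
    "no_umbrella_no_rain": 0.5,
}

def expected_utility(action: str, p_rain: float) -> float:
    """Expected utility of an action under the current belief about rain."""
    if action == "take_umbrella":
        return (p_rain * UTILITY["umbrella_and_rain"]
                + (1 - p_rain) * UTILITY["umbrella_no_rain"])
    return (p_rain * UTILITY["no_umbrella_and_rain"]
            + (1 - p_rain) * UTILITY["no_umbrella_no_rain"])

def judgment(p_rain: float) -> str:
    """The judgment (chosen action) induced by the fixed preference."""
    return max(["take_umbrella", "leave_umbrella"],
               key=lambda a: expected_utility(a, p_rain))

# The same fixed preference yields different judgments as beliefs change:
print(judgment(p_rain=0.9))  # take_umbrella
print(judgment(p_rain=0.1))  # leave_umbrella
```

The judgment flips between the two calls, yet nothing about UTILITY changed — only the belief did. That is the distinction being drawn: what varies over a life may be the information state, not the underlying preference.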