ext_321706 ([identity profile] squid314.livejournal.com) wrote in [personal profile] xuenay 2010-05-28 10:10 pm (UTC)

Yeah, I think so. The whole point with consequentialism is that it's got math in it so different goods can be compared. As long as two people have similar utility functions and similar factual beliefs, in theory they can determine whether an action generally increases or decreases utility.
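To make that concrete, here's a minimal sketch of what "having math in it" can mean: if two agents share the same utility function and the same factual beliefs (here, the same probability/utility pairs), expected-utility comparison forces them to the same answer. All names and numbers below are mine, purely illustrative:

```python
def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

# Both agents assign the same beliefs (probabilities) and the same
# utilities to outcomes, so they must rank the actions identically.
action_a = [(0.9, 10.0), (0.1, -50.0)]  # usually helps, rarely backfires
action_b = [(1.0, 2.0)]                 # a safe but small good

best = "a" if expected_utility(action_a) > expected_utility(action_b) else "b"
```

With these numbers, action A comes out ahead (expected utility 4 versus 2), and anyone sharing the same inputs gets the same verdict — which is the sense in which consequentialism "has a right answer."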

Can you imagine a friendly AI built around virtue ethics? If not, but you can imagine one built around consequentialism, that's what I mean by consequentialism actually having a right answer where the other two don't.

For a good example of an attempt to actually solve a moral problem with this kind of reasoning, see http://notsneaky.blogspot.com/2007/05/how-much-of-jerk-do-you-have-to-be-to.html
