Yeah, I think so. The whole point of consequentialism is that it has math in it, so different goods can be compared on a common scale. As long as two people have similar utility functions and similar factual beliefs, they can in principle work out whether an action increases or decreases utility overall.
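To make that concrete, here's a minimal sketch in Python (the utility values and outcome probabilities are made up purely for illustration): once two people share a utility function and the same beliefs about outcome probabilities, ranking actions reduces to arithmetic.

    def expected_utility(outcome_probs, utility):
        """Sum of probability-weighted utilities over an action's possible outcomes."""
        return sum(p * utility[outcome] for outcome, p in outcome_probs.items())

    # Shared utility function over outcomes (illustrative numbers).
    utility = {"status_quo": 0, "mild_harm": -10, "large_benefit": 50}

    # Shared factual beliefs: outcome probabilities for each candidate action.
    beliefs = {
        "do_nothing": {"status_quo": 1.0},
        "intervene":  {"mild_harm": 0.3, "large_benefit": 0.7},
    }

    for action, outcome_probs in beliefs.items():
        print(action, expected_utility(outcome_probs, utility))
    # intervene scores 0.3 * -10 + 0.7 * 50 = 32 versus 0 for do_nothing,
    # so anyone sharing these inputs reaches the same answer.

The point isn't that the numbers are right; it's that once the inputs are agreed on, the disagreement disappears.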
Can you imagine a friendly AI built around virtue ethics? If not, and you can imagine one built around consequentialism, that's what I mean by consequentialism actually having a right answer where the other two don't.
For a good example of an attempt to actually solve a moral problem with this kind of reasoning, see http://notsneaky.blogspot.com/2007/05/how-much-of-jerk-do-you-have-to-be-to.html