Date: 2010-05-28 10:10 pm (UTC)
Yeah, I think so. The whole point with consequentialism is that it's got math in it so different goods can be compared. As long as two people have similar utility functions and similar factual beliefs, in theory they can determine whether an action generally increases or decreases utility.
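To make that concrete, here's a minimal sketch of the kind of comparison I mean; the actions, outcomes, utilities, and probabilities are all made up purely for illustration:

# Toy expected-utility comparison: every number and name here is invented
# just to illustrate the point, not to settle any real moral question.

def expected_utility(action, utility, beliefs):
    """Sum of utility(outcome) weighted by the believed probability
    of that outcome given the action."""
    return sum(p * utility[outcome] for outcome, p in beliefs[action].items())

# A shared utility function over outcomes (made-up numbers).
utility = {"both_fed": 10, "one_fed": 4, "neither_fed": 0}

# Shared factual beliefs: probability of each outcome under each action.
beliefs = {
    "donate":     {"both_fed": 0.7, "one_fed": 0.2, "neither_fed": 0.1},
    "do_nothing": {"both_fed": 0.1, "one_fed": 0.3, "neither_fed": 0.6},
}

for action in beliefs:
    print(action, expected_utility(action, utility, beliefs))

Given the same utility function and the same beliefs, both people get the same ranking of actions, which is the sense in which there's a right answer to compute.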

Can you imagine a friendly AI built around virtue ethics? If not, and you can imagine one built around consequentialism, that's what I mean by consequentialism actually having a right answer where the other two don't.

For a good example of an attempt to actually solve a moral problem with this kind of reasoning, see http://notsneaky.blogspot.com/2007/05/how-much-of-jerk-do-you-have-to-be-to.html