If there were a referendum on whether to globally outlaw meat-eating (the kind that actually causes harm to animals), I would vote for the ban.
This is a different question from whether I should spend my effort on campaigning for such a referendum, or otherwise be aggressive in promoting vegetarianism.
So long as there are much more pressing concerns than animal rights concerns, I should focus my effort on those other concerns. If in the future there no longer are more pressing concerns, I'll be happy to return to semi-professional animal rights activism.
It's fine by me if future AIs act similarly, i.e., don't respect human rights maximally if doing so would diminish the effort they're able to spend on combating astronomically more relevant threats. I can't really picture a realistic scenario where this principle would cause great harm, at least not if we suppose a successful Singularity.