[personal profile] xuenay
I was going to write about the new points system we have here and stuff, but yesterday there were a bunch of things that triggered a weird change in me. I'm still not entirely sure of what's happening, so I'll try to document it here.

It all started when Michael Vassar was talking about his take on the Twelve Virtues of Rationality. He was basically saying that many of the initial virtues (curiosity, relinquishment, lightness, evenness) are variants of the same thing: not being attached to particular states of the world. If you do not have an emotional preference about what the world should be like, then it's also easier to perfectly update your beliefs whenever you encounter new information.

As he was talking about it, he also made roughly the following comment: "Pain is not suffering. Pain is just an attention signal. Suffering is when one neural system tells you to pay attention, and another says it doesn't want the state of the world to be like this." At some point he also mentioned that the ideal would be for a person's motivations to be tied not directly to states of the world, but rather to their own actions. If you tie your feelings to states of the world, you risk suffering needlessly over things not under your control. On the other hand, if you tie your feelings to your actions, your feelings are created by something that is always under your control. And once you stop having an emotional attachment to the way the world is, actually changing the world becomes much easier. Things like caring about what others think of you cease to be a concern, paradoxically making you much more at ease in social situations.

I thought this through, and it seemed to make a lot of sense. As Louie would comment later on, it was basically the old "attachment is suffering" line from Buddhism, but that's a line one has heard over and over so many times that it's ceased to have much significance and become just a phrase. Reframing it as "suffering is conflict between two neural systems" somehow made it far more concrete.

An early objection that came to mind was: if pain is not suffering, why does physical pain feel like suffering? My intuition is that if this hypothesis is correct, then humans have strong inborn desires not to experience pain, which leads to the mistaken impression that pain is suffering. If you break your leg, your brain is flooded with pain signals, and it's built to prefer states of the world where there isn't pain. But it's possible to react indifferently to your own sensation of pain. Pain asymbolia, according to Wikipedia, is "a condition in which pain is perceived, but does not cause suffering ... patients report that they have pain but are not bothered by it, they recognize the sensation of pain but are mostly or completely immune to suffering from it". Further support comes from the fact that our emotional states and knowledge often have a big influence on how painful (sufferful?) something feels. You can sometimes sustain an injury that doesn't feel very bad until you actually look at it and see how bad it is. Being afraid also makes pain worse, while a feeling of being in control makes it feel less bad.

On a more emotional front, I discovered a long time ago that trying to avoid thinking about unpleasant memories was a bad idea. The negative affect faded a lot quicker if I didn't try to push the memories out of my mind, but instead let them come and envelop me over and over until they no longer bothered me.

So I started wondering about how to apply this in practice. For a long time, things such as worry about my friends ending up in accidents and anguish over the fact that there is so much suffering in the world have seriously reduced my happiness. I've felt a strong moral obligation to work towards improving the world, and felt guilty at the times when I've been unable to, e.g., study as hard as conceivably possible. If I could shift my motivations away from states of the world, that could make me considerably happier and therefore help me actually improve the world.

But shifting the focus to actions instead of consequences sounded like getting dangerously close to deontology. Since a deontologist judges actions irrespective of their consequences, they might e.g. consider it wrong to kill a person even if that ended up saving a hundred others. I still wanted my actions to do the most good possible, and that isn't possible if you don't evaluate the effects your actions have on the world-state. So I would have to develop a line of thought that avoided the trap of deontology while still shifting the focus to actions. That seemed tricky, but not impossible. I could still be motivated to take the actions that caused the most good and shifted the world-state the most towards my preferred direction, while at the same time not being overly attached to any particular state of the world.

While I was still thinking about this, I went ahead and finished reading The Happiness Hypothesis, a book about research on morality and happiness that I'd started reading previously. One of the points the book makes is that we're divided beings: to use the book's metaphor, there is an elephant and there is the rider. The rider is the conscious self, while the elephant consists of all the low-level, unconscious processes. The unconscious processes actually carry out most of what we do, and the rider trains them and tells them what they should be doing. Think of e.g. walking or typing on a computer, where you don't explicitly think about every footstep or every press of a button, but instead just decide to walk somewhere or type something.

Readers familiar with PJ Eby will recognize this as the same as his Multiple Self philosophy (my previous summary, original article). What I had not thought of before was that this also applies to ethics. Formal theories of ethics are known by the rider, but not by the elephant, which leads to a conflict between what people know to be right and what they actually do. On these grounds, The Happiness Hypothesis critiques the way Western ethics, in both the deontological tradition started by Immanuel Kant and the consequentialist tradition started by Jeremy Bentham, has become increasingly reason-based:

The philosopher Edmund Pincoffs has argued that consequentialists and deontologists worked together to convince Westerners in the twentieth century that morality is the study of moral quandaries and dilemmas. Where the Greeks focused on the character of a person and asked what kind of person we should each aim to become, modern ethics focuses on actions, asking when a particular decision is right or wrong. Philosophers wrestle with life-and-death dilemmas: Kill one to save five? Allow aborted fetuses to be used as a source of stem cells? [...] This turn from character ethics to quandary ethics has turned moral education away from virtues and towards moral reasoning. If morality is about dilemmas, then moral education is training in problem solving. Children must be taught how to think about moral problems, especially how to overcome their natural egoism and take into their calculations the needs of others.

[...] I believe that this turn from character to quandary was a profound mistake, for two reasons. First, it weakens morality and limits its scope. Where the ancients saw virtue and character at work in everything a person does, our modern conception confines morality to a set of situations that arise for each person only a few times in any given week [...] The second problem with the turn to moral reasoning is that it relies on bad psychology. Many moral education efforts since the 1970s take the rider off the elephant and train him to solve problems on his own. After being exposed to hours of case studies, classroom discussions about moral dilemmas, and videos about people who faced dilemmas and made the right choices, the child learns how (not what) to think. Then class ends, the rider gets back on the elephant, and nothing changes at recess. Trying to make children behave ethically by teaching them to reason well is like trying to make a dog happy by wagging its tail. It gets causality backwards.

Reading that critique, together with the chapter's description of how people like Benjamin Franklin made cultivating their various virtues, one at a time, into an explicit project, I could feel a very peculiar transformation take place within me. The best way I can describe it is that it felt like a part of my decision-making or world-evaluating machinery separated itself from the rest and settled into a new area of responsibility that I had previously not recognized as a separate one. While I had previously been primarily a consequentialist, that newly-specialized part declared its allegiance to virtue ethics, even though the rest of the machinery remained consequentialist. (I actually saw this in my mind's eye. First a flat plane with a uniform texture, floating in some vaguely brown-tinted space. Then, from the left end of the plane, a part maybe one fourth the surface area of the whole thing breaks off and moves up and to the left. It settles at about a 45-degree angle above and to the left of the point where it broke off from the old plane, and acquires a slightly different texture. You have to marvel at whatever visualization algorithm my brain was using to generate that image from what had to be really fuzzy input.)

What has this meant in practice? Well, I'm not quite sure of the long-term effects yet, but I think that my emotional machinery kind of separated from my general decision-making and planning machinery. Think of "emotional machinery" as a system that takes various sorts of information as input and produces different emotional states as output. Optimally, your emotional machinery should attempt to create emotions that push you towards taking the kinds of actions that are most appropriate given your goals. Previously I was sort of embedded in the world and the emotional system was taking its input from the entire whole: the way I was, the way the world was, and the way that those were intertwined. It was simultaneously trying to optimize for all three, with mixed results.

But now my self-model had been set apart from the world-model, and my emotional machinery started running its evaluations primarily on the self-model. The main questions became "how could I develop myself", "how could I be more virtuous" and "how could I best act to improve the world". From the last bit, you can see that I haven't lost the consequentialist layer in my decision-making: I am still trying to act in ways that improve the world. But now it's more like my emotional systems take input from the consequentialist planning system to figure out which virtues to concentrate on, instead of the consequentialist reasoning being completely intertwined with my emotional systems.

So far, I'm not sure of the permanence of this effect. I've previously had feelings of major personal change that sooner or later ended up fading (several of which are chronicled in this LJ). The rider may get what feels like a major revelation, but the elephant is still running the show, and it needs to be trained over an extended period for there to be any lasting change. So since yesterday, I've been doing my best to keep watch over my thoughts and practice detachment from world-states.

I have the questionable luck of having an easy way to practice this: I have rashes that frequently make my skin itch. On a couple of occasions, I've tried meditation and the practice of simply passively observing any thoughts and feelings that come to mind until they go away on their own. I began applying that technique to the feeling of itchy skin, and it felt like I was able to ignore the feeling for longer. During the night, I woke up to an itch; on previous nights when that happened, I'd been forced to either scratch my skin half to death or get up and apply several layers of moisturizer. This time around, even though I did end up scratching a bit, I was eventually able to fall back asleep without doing either of those. I also believe I was able to detach myself, to some degree, from the discomfort I felt while jogging this morning and getting physically tired. (Not completely, mind you, but to some degree.)

On the less physical front, I've been trying to keep an eye on my thoughts and modify them whenever they didn't really suit the new scheme I'm trying to run. For instance, I noticed that one of my motivations for writing this post was to win the approval of other people who might be interested in this kind of thing, or who might admire my skill in introspection or detachment. When I noticed that thought pattern, I attempted to modify it to become more rooted in personal virtue: I am writing this post in order to gain better insight into my transformation, to provide useful or interesting data for others, and so forth. Both introspective insight and voluntarily contributing to humanity's shared reserves of information are virtuous in themselves. I do not need to involve the "people's evaluation of me" part, which belongs to my model of the external world rather than to my model of myself.

It might just be a placebo effect, but if I can keep this up, I expect to see a considerable reduction in both my social anxiety and the worries I've been having about future careers. On the other hand, I need to be careful not to end up in a semi-solipsistic state where I don't care about the fortunes or misfortunes of others and become unable to truly enjoy the company of my friends. Fortunately, things such as empathy, caring, and the ability to enjoy company are easily construed as virtues themselves, so I don't think I need to worry about that too much. I suspect that framing my thoughts in this way will also considerably help in combating the fixed mindset.

I don't yet have an explicit set of virtues that I'm using, and am not sure if I even need one. Still, there's one list offered in The Happiness Hypothesis which I'm finding useful. Based on research from the positive psychology movement, it lists the virtues (and their sub-virtues) of Wisdom (Curiosity, Love of learning, Judgment, Ingenuity, Emotional intelligence, Perspective), Courage (Valor, Perseverance, Integrity), Humanity (Kindness, Loving), Justice (Citizenship, Fairness, Leadership), Temperance (Self-control, Prudence, Humility) and Transcendence (Appreciation of beauty and excellence, Gratitude, Hope, Spirituality, Forgiveness, Humor, Zest). That should be an okay basis to draw inspiration from.

Date: 2010-05-28 01:33 am (UTC)
From: [identity profile] xuenay.livejournal.com
Virtue ethics can't actually solve moral problems. If I support abortion and you oppose abortion, I'm sure we could both come up with virtues that support our own positions. I could say that I'm supporting the virtue of compassion, or mercy. You could say you're supporting the virtue of respect, or justice, or whatever. We can both keep saying whatever the heck we wanted to say before we thought about ethics, we can both leave feeling righteous and like we've gained status from the interaction, and the only people we don't help are the ones actually trying to figure out whether to get an abortion or not, who are just as lost as before.

By the way - is this really any different in consequentialism or deontology?

Date: 2010-05-28 10:10 pm (UTC)
From: [identity profile] squid314.livejournal.com
Yeah, I think so. The whole point with consequentialism is that it's got math in it so different goods can be compared. As long as two people have similar utility functions and similar factual beliefs, in theory they can determine whether an action generally increases or decreases utility.

Can you imagine a friendly AI built around virtue ethics? If not, and you can imagine one built around consequentialism, that's what I mean by consequentialism actually having a right answer where the other two don't.

For a good example of an attempt to actually solve a moral problem with this kind of reasoning, see http://notsneaky.blogspot.com/2007/05/how-much-of-jerk-do-you-have-to-be-to.html
