xuenay: (Default)

Much of relationship compatibility comes down to a fuzzy concept that’s variously referred to as “chemistry”, “clicking”, or just feeling good and comfortable in the other’s presence. This is infamously difficult to predict by any other means than actually spending time around the other. OKCupid-style dating sites, with their extensive batteries of questions about values and preferences, are good at predicting a match in values and preferences but almost useless at predicting this kind of compatibility.

What I think is largely going on is that it’s about compatible patterns of emotional association. Each of us carries deep within them various patterns of emotional association that are hard for an outsider to predict, because they follow little “sensible” logic: rather, they are formed by a combination of a person’s life experiences and their inborn temperament. One person fears abandonment and will freak out whenever they hear an expression that their parent used when angry; for another person that very same expression was used as one of affection, and has the opposite meaning. (Or the same expression might be associated with both fighting and affection: there’s a possibly apocryphal tale about a couple who made sure that whenever they’d been fighting in front of their children, they’d afterwards call each other “love” and “dear” to let the children know that they still cared about each other. This lasted until the day that their kids came running to them, complaining that “He called me ‘love’!” “She started it, she called me ‘dear’!”…)

These are relatively superficial examples: typically the patterns go deeper and are subtler. In retrospect, I’ve noticed that some of the people with whom I’ve had mutual attraction have exhibited sub-clinical signs of something like avoidant personality disorder, and I suspect the same has been true of myself. There were few obvious signs of this at the time, but whatever those subtle signs were, some intuitive part of each of us picked up on them, thought this person is like me, and felt attracted without the rest of our minds knowing more than I feel good around this person.

Many failed relationships can be explained as a pattern of emotional compatibility that was a match in one situation (such as when you were going out on dates) but a mismatch in another (such as when you tried living together). Sometimes exactly the same traits cause opposite emotional reactions in different situations. Someone who is hard-working and has lots of impressive achievements can feel like a very appealing partner when you’re just getting to know each other, but feel much less desirable when you realize that they will never have much time for you and that their work will always come first.

The discouraging implication – for those of us who are single or otherwise looking – is that even if you manage to hit it off on a date, that’s no guarantee of long-term compatibility. The encouraging implication is that we may already happen to be friends with someone who could be our dream partner: we just haven’t realized it yet. The yet-again-discouraging implication is that it’s pretty hard to find out who that hidden dream partner might be, without spending a lot of time in their presence.

“Love” is a word with many meanings, but maybe the deepest form of love is when you come to genuinely care about the other, in the same way as you care about yourself. Not just caring about the other so that they’ll like you in return, but putting intrinsic value on their well-being, the same way you put intrinsic value on your own well-being.

You ultimately get here, I suspect, by having enough smoothly-going interactions to experience increasing synchrony. Situations where your patterns of emotional association are so compatible that each of you intuitively acts in a way that promotes positive feelings in the other. My guess is that you start caring about the other as much as you care about yourself because some part of your mind comes to actually believe, on a level of emotional logic if not fact, that the two of you are the same.

This feeling of two people becoming one may actually be correct in a very concrete sense, as studies of people who co-operate and like each other show that their behavioral patterns, body language and spoken language, and neural patterns tend to become synchronized with each other. I am once again reminded of this quote from Michael “Vassar” Arc:

> In real-time domains, one rapidly assesses the difficulty of a challenge. If the difficulty seems manageable, one simply does, with no holding back, reflecting, doubting, or trying to figure out how one does. Figuring out how something is done implicitly by a neurological process which is integrated with doing. […] People with whom you are interacting […] depend on the fact that you and they are in a flow-state together. In so far as they and you become an integrated process, your actions flow from their agency as well as your own[.]

The opposite of synchrony, when things get really bad, is described as “walking on eggshells” or “being constantly unsure of what the other wants”. It is when the other person’s emotional associations are so out of sync with yours that it feels like anything you say or do may trigger a negative response, or when they really crave from you some behavior that would trigger in them a specific positive response – but you don’t know what that desired behavior would be. Because your patterns of emotional association are dissimilar, you have no idea of what is expected of you, and have no way of intuitively simulating it. “Put yourself in the other’s shoes” does not work because the two of you have different-sized feet: the kinds of shoes that feel comfortably tight to you feel excruciatingly small for your partner, and vice versa.

If a situation gets described as walking on eggshells, it likely has to do with a pattern of mutual incompatibilities that has become self-reinforcing and spiraled out of control. He is expecting a bit of peace and quiet and time for himself; she does not realize this and seeks his company. He tries to make her back away but she doesn’t understand the signals, until he lashes out in frustration. She experiences this seemingly-out-of-nowhere reaction as inexplicable rejection and is shocked to silence for a while, until she can no longer hold it in and bursts out – at which point he is shocked by this seemingly inexplicable hostile reaction that to him came out of nowhere. Afterwards, she feels insecure about their relationship so she pursues mutual closeness more aggressively, while he feels like his independence is at risk so he tries to get more distance. The pattern repeats, getting worse each time.

It does not help that having a negative emotional association triggered is experienced as a threat: it is not actually a matter of life and death, but the way people often react, it might as well be. The ideal thing to do at this point would be for both to draw deep breaths, mutually work to dispel each other’s reactions of panic, and figure out what actually happened and what both meant. What commonly happens instead is that both are in too much pain to think clearly and do everything they can to just make it hurt less. This often includes blaming the other and trying to make them admit that they were in the wrong, so that the other would promise to never do anything like that again.

Besides the other obvious problems with using this as a persuasion tactic, there is the fact that even if one partner did manage to force such a promise out of the other, the promising partner still does not know what exactly triggered the reaction. In other words, they have promised to avoid doing something, but they don’t actually know what it is that they’ve promised not to do. They may know some specific things that they should avoid, but not understanding the emotional logic behind that rule, they are likely to do something else – to them seemingly different – that will trigger the same reaction. And when that happens, their partner will be even more upset at them, because “they broke their promise”.

This is why some people feel that a relationship having explicit rules is a warning sign. Not because having rules would be a bad thing by itself, but because needing to have codified rules means that one of the partners doesn’t understand the other’s emotions well enough to be able to avoid trouble just on an intuitive basis. In the worst case, the number of rules will bloat and get out of hand, as more and more of them will need to be added to cover all the different eventualities.

On a more encouraging note, it’s not actually necessary to solve all the incompatibilities. It’s possible to get away with just accepting that in some situations you will always have incompatible emotional patterns, and then have both partners tacitly avoid getting into such situations. Successful couples don’t actually resolve all of their problems: rather they just get good at dealing with them. Meanwhile, couples who feel that they should be able to agree on everything end up worse and worse off.

Many if not most people crave a feeling of being understood. They want to feel that their desires and emotions are both understood and also accepted by the people who are important to them. Possibly this desire is so strong in us because of everything above: mutual emotional understanding allows us to have relationships (romantic or otherwise) where things just work, and where each partner can trust the other to understand the emotional logic driving them and can trust the other not to accidentally set off any emotional landmines. It may also be the reason for the thing I mentioned at the beginning of the article, where I’ve experienced mutual attraction with people who share some of my psychological issues: an intuitive part of our minds looks for emotionally similar people.

Originally published at Kaj Sotala. You can comment here or there.


A model that I’ve found very useful is that pain is an attention signal. If there’s a memory or thing that you find painful, that’s an indication that there’s something important in that memory that your mind is trying to draw your attention to. Once you properly internalize the lesson in question, the pain will go away.

That’s a good principle, but often hard to apply in practice. In particular, several months ago there was a social situation that I screwed up big time, and which was quite painful to think of afterwards. And I couldn’t figure out just what the useful lesson was there. Trying to focus on it just made me feel like a terrible person with no social skills, which didn’t seem particularly useful.

Yesterday evening I again discussed it a bit with someone who’d been there, which helped relieve the pain a bit, enough that the memory wasn’t quite as aversive to look at. Which made it possible for me to imagine myself back in that situation and ask, what kinds of mental motions would have made it possible to salvage the situation? When I first saw the shocked expressions of the people in question, instead of locking up and reflexively withdrawing into an emotional shell, what kind of an algorithm might have allowed me to recover?

Answer to that question: when you see people expressing shock in response to something that you’ve said or done, realize that they’re interpreting your actions way differently than you intended them. Starting from the assumption that they’re viewing your action as bad, quickly pivot to figuring out why they might feel that way. Explain what your actual intentions were and that you didn’t intend harm, apologize for any hurt you did cause, use your guess of why they’re reacting badly to acknowledge your mistake and own up to your failure to take that into account. If it turns out that your guess was incorrect, let them correct you and then repeat the previous step.

That’s the answer in general terms, but I didn’t actually generate that answer by thinking in general terms. I generated it by imagining myself back in the situation, looking for the correct mental motions that might have helped out, and imagining myself carrying them out, saying the words, imagining their reaction. So that the next time that I’d be in a similar situation, it’d be associated with a memory of the correct procedure for salvaging it. Not just with a verbal knowledge of what to do in abstract terms, but with a procedural memory of actually doing it.

That was a painful experience to simulate.

But it helped. The memory hurts less now.


The prevalent wisdom about why social media is distracting is that it provides a constant opportunity for immediate distraction. Whenever your work feels even slightly unsatisfying, there’s the temptation to get a momentary break by looking at Facebook, and then you’ve spent fifteen minutes chatting away when you should have been working.

There’s a lot of truth to this. I’ve experienced it first-hand many times, and talked a lot about it in my essay about the addiction economy.

But I find that’s only a part of the problem. I find that in addition to sapping short-term attention, social media also damages long-term attention. (I’m focusing on social media here, because it’s the one that I’m the most hooked on myself – but any other source of quick, immediate reward would also have the same effect.)

Take a day when I don’t have access to social media, and don’t have anything else in particular to do, either. My typical behavior on such days is that I might be bored for a while, maybe take a walk, and then gradually, over some time, get ideas for projects that I could be doing, and start working on them.

In contrast, on a day when I do have access to Facebook, say, at the point when I start growing bored I’ll glance at Facebook, because hey, why not? I’m just taking a quick look to see if there are any updates or new notifications, I’ll get offline right after that.

And maybe I do. Often I do succeed in just checking the updates and notifications, maybe briefly commenting on something, then closing Facebook again. But what then happens is that sometime later, I’ll take another quick look at Facebook. And again. And again.

And then that period of idle, slightly bored mind-wandering never gets to the point where I start gathering the motivation to work on my own project. Because at the point when I start feeling bored, my default action is to look at Facebook, filling my mind with whatever is happening there rather than letting it come up with new things to do. Even when I close the browser tab, the gradually forming idea of “hey, maybe I could do X” has been flushed away by whatever was in the window, meaning that it needs more time to reform.

Sometimes I take longer breaks from social media, after having used it quite heavily on previous days. On such occasions, it’s often been my experience that it takes a day for my mind to recalibrate its expectations – on the first day I’m constantly anxious to go on Facebook, but after that my creativity starts to return. It is written:

> Complex systems learn by adjusting to feedback, and feedback that is sufficiently loud and frequent will oversaturate the system’s inputs, leading it to reduce its overall sensitivity in order to register changes. When instant and immediate gratification becomes the norm, more subtle forms of feedback become harder to register. Getting engrossed in a book becomes increasingly difficult. The same goes for different kinds of stories: it’s easier to sit through an action movie than a drama because the story is simple and the movie is mostly comprised of satisfying bits of conflict resolution in the simple form of karate chops and shootouts. We might force ourselves to sit through a few chapters of Tolstoy, but the real issue is that we ultimately have to re-calibrate our receptivity to feedback in order to gain interest in more subtle flavors of experience.

Subtle flavors of experience, like the barely noticeable sensation in your mind that’s the stirring of a new idea, which you could allow to grow and develop.

Studies suggest that the mental effort involved in a task may be proportional to the opportunity cost of not doing something else. In other words, things aren’t so much intrinsically appealing or unappealing, but more appealing or unappealing relative to the appealingness of the best thing that you could be doing instead. If you have constant access to video games, going outside for a walk may seem like something pretty boring, but if you don’t have anything better to do, you may notice that going for a walk actually feels like a pretty nice idea.

Presumably this works for unconscious task-selection, too. If social media is always available as an option, then momentarily checking it may be treated by your unconscious brain as something that has a higher reward than starting to think about something with a more long-term payoff, such as a creative project.

The insidious thing here is that you may not notice the effect this has on you. From your perspective, yeah, you’re looking at social media every now and then, but it’s always just short moments, and you’re spending the vast majority of your time not on social media. So why are you still feeling listless and easily distracted?

Because it isn’t enough to spend the majority of your time away from distractions, if that time isn’t also spent continuously away from them.

As it happens, I had been thinking about this topic for a while, but only wrote up this essay on an occasion when I’d decided to spend the rest of the day off social media. Then this essay started formulating itself in my mind, and I wrote it up in pretty much one go, to be posted at a later time.


I just got home from a four-day rationality workshop in England that was organized by the Center For Applied Rationality (CFAR). It covered a lot of content, but if I had to choose a single theme that united most of it, it was listening to your emotions.

That might sound like a weird focus for a rationality workshop, but cognitive science has shown that the intuitive and emotional part of the mind (”System 1”) is both in charge of most of our behavior, and also carries out a great deal of valuable information-processing of its own (it’s great at pattern-matching, for example). Much of the workshop material was aimed at helping people reach a greater harmony between their System 1 and their verbal, logical System 2. Many of people’s motivational troubles come from the goals of their two systems being somehow at odds with each other, and we were taught ways of getting our two systems into a better dialogue with each other, harmonizing their desires and making it easier for information to cross from one system to the other and back.

To give a more concrete example, there was the technique of goal factoring. You take a behavior that you often do but aren’t sure why, or which you feel might be wasted time. Suppose that you spend a lot of time answering e-mails that aren’t actually very important. You start by asking yourself: what’s good about this activity, that makes me do it? Then you try to listen to your feelings in response to that question, and write down what you perceive. Maybe you conclude that it makes you feel productive, and it gives you a break from tasks that require more energy to do.

Next you look at the things that you came up with, and consider whether there’s a better way to accomplish them. There are two possible outcomes here. Either you conclude that the behavior is an important and valuable one after all, meaning that you can now be more motivated to do it. Alternatively, you find that there would be better ways of accomplishing all the goals that the behavior was aiming for. Maybe taking a walk would make for a better break, and answering more urgent e-mails would provide more value. If you were previously using two hours per day on the unimportant e-mails, possibly you could now achieve more in terms of both relaxation and actual productivity by spending an hour on a walk and an hour on the important e-mails.

At this point, you consider your new plan, and again ask yourself: does this feel right? Is this motivating? Are there any slight pangs of regret about giving up my old behavior? If you still don’t want to shift your behavior, chances are that you still have some motive for doing this thing that you have missed, and the feelings of productivity and relaxation aren’t quite enough to cover it. In that case, go back to the step of listing motives.

Or, if you feel happy and content about the new direction that you’ve chosen, victory!

Notice how this technique is all about moving information from one system to another. System 2 notices that you’re doing something but it isn’t sure why that is, so it asks System 1 for the reasons. System 1 answers, ”here’s what I’m trying to do for us, what do you think?” Then System 2 does what it’s best at, taking an analytic approach and possibly coming up with better ways of achieving the different motives. Then it gives that alternative approach back to System 1 and asks, would this work? Would this give us everything that we want? If System 1 says no, System 2 gets back to work, and the dialogue continues until both are happy.

Again, I emphasize the collaborative aspect between the two systems. They’re allies working for common goals, not enemies. Too many people tend towards one of two extremes: either thinking that their emotions are stupid and something to suppress, or completely disdaining the use of logical analysis. Both extremes miss out on the strengths of the system that is neglected, and make it unlikely for the person to get everything that they want.

As I was heading back from the workshop, I considered doing something that I noticed feeling uncomfortable about. Previous meditation experience had already made me more likely to just attend to the discomfort rather than trying to push it away, but inspired by the workshop, I went a bit further. I took the discomfort, considered what my System 1 might be trying to warn me about, and concluded that it might be better to err on the side of caution this time around. Finally – and this wasn’t a thing from the workshop, it was something I invented on the spot – I summoned a feeling of gratitude and thanked my System 1 for having been alert and giving me the information. That might have been a little overblown, since neither system should actually be sentient by itself, but it still felt like a good mindset to cultivate.

Although it was never mentioned in the workshop, what comes to mind is the concept of wu-wei from Chinese philosophy, a state of ”effortless doing” where all of your desires are perfectly aligned and everything comes naturally. In the ideal form, you never need to force yourself to do something you don’t want to do, or to expend willpower on an unpleasant task. Either you want to do something and do, or don’t want to do it, and don’t.

A large number of the workshop’s classes – goal factoring, aversion factoring and calibration, urge propagation, comfort zone expansion, inner simulation, making hard decisions, Hamming questions, againstness – were aimed at more or less this. Find out what System 1 wants, find out what System 2 wants, dialogue, aim for a harmonious state between the two. Then there were a smaller number of other classes that might be summarized as being about problem-solving in general.

The classes about the different techniques were interspersed with ”debugging sessions” of various kinds. In the beginning of the workshop, we listed different bugs in our lives – anything about our lives that we weren’t happy with, with the suggested example bugs being things like ”every time I talk to so-and-so I end up in an argument”, ”I think that I ‘should’ do something but don’t really want to”, and ”I’m working on my dissertation and everything is going fine – but when people ask me why I’m doing a PhD, I have a hard time remembering why I wanted to”. After we’d had a class or a few, we’d apply the techniques we’d learned to solving those bugs, either individually, in pairs, or in small groups with a staff member or volunteer TA assisting us. Then a few more classes on techniques and more debugging, classes and debugging, and so on.

The debugging sessions were interesting. Often when you ask someone for help on something, they will answer with direct object-level suggestions – if your problem is that you’re underweight and you would like to gain some weight, try this or that. Here, the staff and TAs would eventually get to the object-level advice as well, but first they would ask – why don’t you want to be underweight? Okay, you say that you’re not completely sure but based on the other things that you said, here’s a stupid and quite certainly wrong theory of what your underlying reasons for it might be, how does that theory feel? Okay, you said that it’s mostly on the right track, so now tell me what’s wrong with it? If you feel that gaining weight would make you more attractive, do you feel that this is the most effective way of achieving that?

Only after you and the facilitator had reached some kind of consensus on why you thought that something was a bug, and made sure that the problem you were discussing was actually the best way to address those reasons, would it be time for the more direct advice.

At first, I had felt that I didn’t have very many bugs to address, and that I had mostly gotten reasonable advice for them that I might try. But then the workshop continued, and there were more debugging sessions, and I had to keep coming up with bugs. And then, under the gentle poking of others, I started finding the underlying, deep-seated problems, and some things that had been motivating my actions for the last several months without me always fully realizing it. At the end, when I looked at my initial list of bugs that I’d come up with in the beginning, most of the first items on the list looked hopelessly shallow compared to the later ones.

Often in life you feel that your problems are silly, and that you are affected by small stupid things that ”shouldn’t” be a problem. There was none of that at the workshop: it was tacitly acknowledged that being unreasonably hindered by ”stupid” problems is just something that brains tend to do. Valentine, one of the staff members, gave a powerful speech about ”alienated birthrights” – things that all human beings should be capable of engaging in and enjoying, but which have been taken from people because they have internalized beliefs and identities that say things like ”I cannot do that” or ”I am bad at that”. Things like singing, dancing, athletics, mathematics, romantic relationships, actually understanding the world, heroism, tackling challenging problems. To use his analogy, we might not be good at these things at first, and may have to grow into them and master them the way that a toddler grows to master her body. And like a toddler who’s taking her early steps, we may flail around and look silly when we first start doing them, but these are capacities that – barring any actual disabilities – are a part of our birthright as human beings, which anyone can ultimately learn to master.

Then there were the people, and the general atmosphere of the workshop. People were intelligent, open, and motivated to work on their problems, help each other, and grow as human beings. After a long, cognitively and emotionally exhausting day at the workshop, people would then shift to entertainment ranging from wrestling to telling funny stories of their lives to Magic: the Gathering. (The game of ”bunny” was an actual scheduled event on the official agenda.) And just plain talk with each other, in a supportive, non-judgemental atmosphere. It was the people and the atmosphere that made me the most reluctant to leave, and I miss them already.

Would I recommend CFAR’s workshops to others? Although my above description may sound rather gushingly positive, my answer still needs to be a qualified ”mmmaybe”. The full price tag is quite hefty, though financial aid is available and I personally got a very substantial scholarship, with the agreement that I would pay it at a later time when I could actually afford it.

Still, the biggest question is, will the changes from the workshop stick? I feel like I have gained a valuable new perspective on emotions, a number of useful techniques, made new friends, strengthened my belief that I can do the things that I really set my mind on, and refined the ways by which I think of the world and any problems that I might have – but aside from the new friends, all of that will be worthless if it fades away in a week. If it does, I would have to judge even my steeply discounted price as ”not worth it”. That said, the workshops do have a money-back guarantee if you’re unhappy with the results, so if it really feels like it wasn’t worth it, I can simply choose to not pay. And if all the new things do end up sticking, it might still turn out that it would have been worth paying even the full, non-discounted price.

CFAR does have a few ways by which they try to make the things stick. There will be Skype follow-ups with their staff, for talking about how things have been going since the workshop. There is a mailing list for workshop alumni, and occasional events, though the physical events are very US-centric (and in particular, San Francisco Bay Area-centric).

The techniques that we were taught are still all more or less experimental, and are being constantly refined and revised according to people’s experiences. I have already been thinking of a new skill that I had been playing with for a while before the workshop, and which has a bit of that ”CFAR feel” – I will aim to have it written up soon and sent to the others, and maybe it will eventually make its way to the curriculum of a future workshop. That should help keep me engaged as well.

We shall see. Until then, as they say in CFAR – to victory!


Whatever next? Predictive brains, situated agents, and the future of cognitive science (Andy Clark 2013, Behavioral and Brain Sciences) is an interesting paper on the computational architecture of the brain. It’s arguing that a large part of the brain is made up of hierarchical systems, where each system uses an internal model of the lower system in an attempt to predict the next outputs of the lower system. Whenever a higher system mispredicts a lower system’s next output, it will adjust itself in an attempt to make better predictions in the future.

So, suppose that we see something, and this visual data is processed by a low-level system (call it system L). A higher-level system (call it system H) attempts to predict what L’s output will be and sends its prediction down to L. L sends back a prediction error, indicating the extent to which H’s prediction matches L’s actual activity and processing of the visual stimulus. H will then adjust its own model based on the prediction error. By gradually building up a more accurate model of the various regularities behind L’s behavior, H is also building up a model of the world that causes L’s activity. At the same time, systems H+, H++ and so on that are situated “above” H build up still more sophisticated models.

So the higher-level systems have some kind of model of what kind of activity to expect from the lower-level systems. Of course, different situations elicit different kinds of activity: one example given in the paper is that of an animal “that frequently moves between a watery environment and dry land, or between a desert landscape and a verdant oasis”. The kinds of visual data that you would expect in those two situations differs, so the predictive systems should adapt their predictions based on the situation.

And apparently, that is what happens – when salamanders and rabbits are moved between differing environments, half of their retinal ganglion cells rapidly adjust their behavior to match the statistics of the new scene. Presumably, if the change of scene was unanticipated, the higher-level systems making predictions about the ganglion cells will then quickly get an error signal indicating that the ganglion cells are now behaving differently from what was expected based on how they acted just a moment ago; this should also cause those systems to adjust their predictions, and data about the scene change gets propagated up through the hierarchy.

This process involves the development of “novelty filters”, which learn to recognize and ignore the features of the input that most commonly occur together within some given environment. Thus, things that are “familiar” (based on previous experience) and behave in expected ways aren’t paid attention to.
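One could caricature a novelty filter along these lines (again a sketch of my own, not a model from the paper): keep running statistics of the input stream, and only flag observations that deviate from what has become familiar.

```python
import statistics

# Toy "novelty filter" (my own illustration): an observation is novel if it
# deviates from the running statistics of everything seen so far.

history = []

def novel(x, threshold=3.0):
    # with too little history, everything counts as novel
    if len(history) < 10:
        history.append(x)
        return True
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history) or 1e-9  # guard against zero spread
    history.append(x)
    return abs(x - mean) > threshold * stdev

# A long run of similar input stops registering as novel...
familiar = [novel(10.0 + 0.01 * i) for i in range(30)]
# ...but a sudden large change does register.
surprise = novel(25.0)
print(familiar[-1], surprise)
```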

So far we’ve discussed a low-level system sending the higher-level an error signal when the predictions of the higher-level system do not match the activity of the lower-level system. But the predictions sent by the higher-level system also serve a function, by acting as Bayesian priors for the lower-level systems.

Essentially, high up in the hierarchy we have high-level models of how the world works, and what might happen next based on those models. The highest-level system, call it H+++, makes a prediction of what the next activity of H++ is going to be like, and the prediction signal biases the activity of H++ in that direction. Now the activity of H++ involves making a prediction of H+, so this also causes H++ to bias the activity of H+ in some direction, and so on. When the predictions of the high-level models are accurate, this ends up minimizing the amount of error signals sent up, as the high-level systems adjust the expectations of the lower-level systems to become more accurate.
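One standard way of making “predictions acting as priors” concrete is the precision-weighted Gaussian update; this is my own illustrative example rather than anything spelled out in the paper. A confident top-down prediction pulls the inferred value toward itself, while an uncertain one lets the sensory evidence dominate.

```python
# Combining a Gaussian prior (the top-down prediction) with a Gaussian
# likelihood (the sensory evidence) by precision weighting. My own toy
# example; the numbers are arbitrary.

def combine(prior_mean, prior_var, obs_mean, obs_var):
    # standard Gaussian posterior: a precision-weighted average
    k = prior_var / (prior_var + obs_var)
    post_mean = prior_mean + k * (obs_mean - prior_mean)
    post_var = prior_var * obs_var / (prior_var + obs_var)
    return post_mean, post_var

# A confident top-down prediction (small prior variance) dominates...
m1, _ = combine(prior_mean=40.0, prior_var=1.0, obs_mean=20.0, obs_var=16.0)
# ...while a weak prior (large prior variance) lets the evidence dominate.
m2, _ = combine(prior_mean=40.0, prior_var=100.0, obs_mean=20.0, obs_var=16.0)
print(round(m1, 1), round(m2, 1))
```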

Let’s take a concrete example (this one’s not from the paper but rather one that I made up, so any mistakes are my own). Suppose that I am about to take a shower, and turn on the water. Somewhere in my brain there is a high-level world model which says that turning on the shower faucet will lead to water pouring out, and because I’m standing right below it, the model also predicts that the water will soon be falling on my body. This prediction is expressed in terms of the expected neural activity of some (set of) lower-level system(s). So the prediction is sent down to the lower systems, each of which has its own model of what it means for water to fall on my body, and each of which send that prediction down to yet more lower-level systems.

Eventually we reach some pretty low-level system, like one predicting the activity of the pressure- and temperature-sensing cells on my skin. Currently there isn’t yet water falling down on me, and this system is a pretty simple one, so it is currently predicting that the pressure- and temperature-sensing cells will continue to have roughly the same activity as they do now. But that’s about to change, and if the system did continue predicting “no change”, then it would end up being mistaken. Fortunately, the prediction originating from the high-level world-model has now propagated all the way down, and it ends up biasing the activity of this low-level system, so that the low-level system now predicts that the sensors on my skin are about to register a rush of warm water. Because this is exactly what happens, the low-level system generates no error signal to be sent up: everything happened as expected, and the overall system acted to minimize the overall prediction error.

If the prediction from the world-model had been mistaken – if the water had been cut off, or I had accidentally turned on cold water when I was expecting warm water – then the biased low-level prediction would also have been mistaken, and an error signal would have propagated upwards, possibly causing an adjustment to the overall world-model.

This ties into a number of interesting theories that I’ve read about, such as the one about conscious attention as an “error handler”: as long as things follow their familiar routines, no error signals come up, and we may become absent-minded, just carrying out familiar habits and routines. It is when something unexpected happens, or when something occurs that we don’t have a strong prediction for, that we are jolted out of our thoughts and forced to pay attention to our surroundings.

This would also help explain why meditation is so notoriously hard: it involves paying attention to a single unchanging stimulus whose behavior is easy to predict, and our brains are hardwired to filter any unchanging stimulus whose behavior is easy to predict out of our consciousness. Interestingly, extended meditation seems to bring some of the lower-level predictions into conscious awareness. And what I said about predicting short-term sensory stimuli ties nicely into the things I discussed back in anticipation and meditation. Savants also seem to have access to lower-level sensory data. Another connection is the theory of autism as weakened priors for sensory data, i.e. as a worsened ability of the higher-level systems to either predict the activity of the lower-level ones, or to bias their activity as a consequence.

The paper has a particularly elegant explanation of how this model would explain binocular rivalry, a situation where a test subject is shown one image (for example, a house) to their left eye and another (for example, a face) to their right eye. Instead of seeing two images at once, people report seeing one at a time, with the two images alternating. Sometimes elements of the unseen image are perceived as “breaking through” into the seen one, after which the perceived image flips.

The proposed explanation is that there are two high-level hypotheses of what the person might be seeing: either a house or a face. Suppose that the “face” hypothesis ends up dominating the high-level system, which then sends its prediction down the hierarchy, suppressing activity that would support the “house” interpretation. This decreases the error signal from the systems which support the “face” interpretation. But even as the error signal from those systems decreases, the error signal from the systems which are seeing the “house” increases, as their activity does not match the “face” prediction. That error signal is sent to the high-level system, decreasing its certainty in the “face” prediction until it flips its best guess prediction to be one of a house… propagating that prediction down, which eliminates the error signal from the systems making the “house” prediction but starts driving up the error from the systems making the “face” prediction, and soon the cycle repeats again. No single hypothesis of the world-state can account for all the existing sensory data, so the system ends up alternating between two conflicting hypotheses.
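The alternation dynamic can be caricatured in a few lines of code (my own toy model, not the paper’s): the suppressed image’s unexplained evidence accumulates as error until it overwhelms the currently dominant hypothesis, at which point the percept flips and the roles reverse.

```python
# Toy binocular-rivalry dynamics (my own illustrative sketch). Each eye
# provides steady evidence for one hypothesis; the dominant hypothesis
# suppresses the other, but the suppressed channel's unexplained evidence
# accumulates as error until the best-guess percept flips.

support = {"face": 1.0, "house": 1.0}  # steady evidence from each eye
dominant = "face"
accumulated_error = 0.0
flip_times = []

for t in range(100):
    other = "house" if dominant == "face" else "face"
    # evidence for the suppressed image goes unexplained, so error accrues
    accumulated_error += support[other] * 0.2
    if accumulated_error > 1.0:   # error overwhelms the current hypothesis
        dominant = other          # the percept flips
        accumulated_error = 0.0
        flip_times.append(t)

print(flip_times[:5])  # → [5, 11, 17, 23, 29]: regular alternation
```

With constant, equal evidence from both eyes the flips come at a fixed period; noisier evidence would give the irregular switching times actually observed in rivalry experiments.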

One particularly fascinating aspect of the whole “hierarchical error minimization” theory as presented so far is that it covers not only perception, but also action! As hypothesized in the theory, when we decide to do something, we are creating a prediction of ourselves doing it. The fact that we are not actually doing anything yet causes an error signal, which in turn ends up modifying the activity of our various motor systems so as to bring about the predicted behavior.

As strange as it sounds, when your own behaviour is involved, your predictions not only precede sensation, they determine sensation. Thinking of going to the next pattern in a sequence causes a cascading prediction of what you should experience next. As the cascading prediction unfolds, it generates the motor commands necessary to fulfill the prediction. Thinking, predicting, and doing are all part of the same unfolding of sequences moving down the cortical hierarchy.

Everything that I’ve written here so far only covers approximately the first six pages of the paper: there are 18 more pages of it, as well as plenty of additional commentaries. I haven’t yet had the time to read the rest, so I recommend checking out the paper itself if this seemed interesting to you.

xuenay: (Default)

Germund Hesslow’s paper Conscious thought as simulation of behaviour and perception, which I first read maybe three months back, has an interesting discussion about anticipations.

I was previously familiar with the idea of conscious thought involving simulation of behavior. Briefly, the idea was that when you plan an action, you are simulating (imagining) various courses of action and evaluating their possible outcomes in your head. So you imagine bringing your boyfriend some flowers, think of how he’d react to that, and then maybe decide to buy him chocolate instead. Imagining things is a process of constructing a simulation of them. Nothing too surprising in that idea. Here’s how Hesslow puts it:

What we perceive is quite often determined by our own behaviour: visual input is changed when we move our head or eyes; tactile stimulation is generated by manipulating objects in the hands. The sensory consequences of behaviour are to a large extent predictable (Fig. 2a). The simulation hypothesis postulates the existence of an associative mechanism that enables the preparatory stages of an action to elicit sensory activity that resembles the activity normally caused by the completed overt behaviour (Fig. 2b). A plausible neural substrate for such a mechanism is the extensive fibre projection from the frontal lobe to all parts of sensory cortex. Very little is known about the function of these pathways, but there is physiological evidence from monkeys that neurons in polysensory cortex can be modulated by movement[33].

But the “buy flowers or chocolate?” example concerns relatively long-term decision-making. We also simulate the short-term consequences of our actions (or at least try to). And what I had not consciously realized before, but what was implied in the excerpt above, was that very immediate consequences will be simulated as well.

Discussing this paper with a friend, we considered the subjective experience of such anticipatory simulations. Suppose that I want to open a door, and start pushing down the handle. Even before I’ve pushed it all the way down, I seem to already experience a mild foretaste of what having pushed it down feels like. I know what it will feel like to have completed the action, a fraction of a second before actually having completed that action, and it feels faintly pleasing when that anticipation is realized.

Which was interesting to realize, but not particularly earth-shattering by itself. But the real discovery came soon after reading the paper. I was doing some vipassana-style meditation, focusing on the feeling of discomfort that came from wanting to swallow as there was excess saliva gathering in my mouth. I realized that what I thought of as “discomfort” was actually a denied anticipation. I wanted to swallow, and there was already in my mind a simulation of what swallowing would feel like. I was already experiencing some of the pleasure that I would get from swallowing, and my discomfort came from the fact that I wanted to experience the rest of that pleasure. When I realized this, I focused on that anticipated pleasure, trying to either make it stop feeling pleasant, or alternatively, strengthen the pleasure so that I could enjoy it without actually swallowing. My alarm rang before I could fully succeed in either, but I did notice that it made it considerably easier to resist the urge.

On my way to town, I started observing my mental processes and noticed that that tiny anticipation of pleasure was everywhere. Coming to the train station, there was an anticipation of not needing to wait for long. Using a machine to buy more time on my train card, there was an anticipation of the machine working. Waiting for the train, there was an anticipation of seeing the train arrive and getting to board it. And each time that I experienced discomfort, it was from that subtle anticipation being denied. Anticipating the experience of seeing the train being there on time could have led to frustration if it was running late. Anticipating the experience of boarding the train led to impatience as the train wasn’t there yet, and that sequence of planned action that had already been partially initiated couldn’t finish. Suddenly I was seeing the anticipatory component in every feeling of discomfort I had.

When I realized that, I started writing an early draft of this post, which contained the following rather excited paragraph:

That’s what “letting go of attachments” refers to. That’s what “living in the moment” refers to. Letting go of the attachment to all predictions and anticipations, even ones that extend only seconds into the future. If one doesn’t do that, they will constantly be awaiting what happens in some future moment, and will experience constant frustrations. On some intellectual level I already understood that, but I needed to develop the skill for actually noticing all my split-second anticipations before I could really get it.

Unfortunately, what often happens with insights gained from meditation is that one simply forgets to apply them. Or if one does, in principle, remember that they should apply the insights, they’ll have forgotten how. Being able to isolate the anticipation from the general feeling of frustration, and then knowing how to let go of the attachment to it, is a tricky skill. And I ended up mostly just forgetting about it, especially once my established routine of meditating once per day got interrupted for a month or so.

I did some meditation today, and finally remembered to try out this technique again. I started looking for such anticipations whenever I experienced a feeling of discomfort, and when I found any, I just observed them and let go of them. And it worked – I was capable of meditating for a total of 70 minutes in one sitting, and got myself to a pleasant state of mind where everything felt good. That feeling persisted for most of the rest of the day.

But after that session, it feels like my earlier characterization of the technique as “a cessation of attachments to predictions” would be a little off. That description feels clunky, and like it doesn’t properly describe the experience. “Letting go of a desire for sensations to feel different” sounds more like it, but I’m not sure of what exactly the difference is.

This probably also relates to another meditation experience, which I had about two months back. I was concentrating on my breath, and again, I noticed that the sensation of saliva in my mouth was bothering me. At first I tried to just ignore it and keep my attention on my breath; or alternatively, to let go of the feeling of distraction so that the sensation of saliva wouldn’t bother me anymore. When neither worked, I essentially just thought “oh, screw it” and accepted the sensation just as it was, as well as accepting the fact that it would continue to bother me. And then, once I had accepted that it would bother me… the feeling of it bothering me melted away, and vanished from my consciousness entirely. I was left with a warm, strongly pleasant feeling that lasted for many hours after I’d stopped meditating.

I haven’t been able to put myself back into that exact state, because as far as I can tell, getting into it requires you to genuinely accept the fact that you’re feeling uncomfortable. In other words, you cannot use the acceptance as a means to an end, thinking that “I’ll now accept this unpleasantness so that I’ll get back to that nice state where it doesn’t feel unpleasant anymore”. That’s not genuine acceptance anymore, and therefore it doesn’t work.

Anyway, it feels like the “isolate anticipations and let go of them” and “accept your feelings and discomforts exactly as they are” techniques would be two different ways of achieving the same end. The feeling of pleasure I got today wasn’t as strong as the feeling of pleasure I got when I managed to accept my discomforts as they were, but it seemed to have much of the same character.

Some – though not all – meditators report a loss of achievement drive after reaching high levels of skill. They’re just happy with doing whatever, with no need to accomplish more things. And after meditating today, I too felt happy with whatever would happen, with no urgency to accomplish (nor avoid!) any of the things that I had planned for today. There seems to be a fine line between “use meditation to get rid of your disinclination for doing the things you want to do” and “use meditation and get rid of your inclination to do anything”.

In any case, I will have to try to remember this technique from now on, and keep experimenting with it. Hopefully, having written this post will help.

xuenay: (Default)
Our moral reasoning is ultimately grounded in our moral intuitions: instinctive "black box" judgements of what is right and wrong. For example, most people would think that needlessly hurting somebody else is wrong, just because. The claim doesn't need further elaboration, and in fact the reasons for it can't be explained, though people can and do construct elaborate rationalizations for why everyone should accept the claim. This makes things interesting when people with different moral intuitions try to debate morality with each other.


Why do modern-day liberals (for example) generally consider it okay to say "I think everyone should be happy" without offering an explanation, but not okay to say "I think I should be free to keep slaves", regardless of the explanation offered? In an earlier age, the second statement might have been considered acceptable, while the first one would have required an explanation.

In general, people accept their favorite intuitions as given and require people to justify any intuitions which contradict those. If people have strongly left-wing intuitions, they tend to consider right-wing intuitions arbitrary and unacceptable, while considering left-wing intuitions so obvious as to not need any explanation. And vice versa.

Of course, you will notice that in some cultures specific moral intuitions tend to dominate, while other intuitions dominate in other cultures. People tend to pick up the moral intuitions of their environment: some claims go so strongly against the prevailing moral intuitions of my social environment that if I were to even hypothetically raise the possibility of them being correct, I would be loudly condemned and feel bad for even thinking that way. (Related: Paul Graham's What you can't say.) "Culture" here is to be understood as being considerably more fine-grained than just "the culture in Finland" or "the culture in India" - there are countless subcultures even within a single country.


Social psychologists distinguish between two kinds of moral rules: ones which people consider absolute, and ones which people consider to be social conventions. For example, if a group of people all bullied and picked on one of them, this would usually be considered wrong, even if everyone in the group (including the bullied person) thought it was okay. But if there's a rule that you should wear a specific kind of clothing while at work, then it's considered okay not to wear those clothes if you get special permission from your boss, or if you switch to another job without that rule.

The funny thing is that many people don't realize that the distinction of which is which is itself a moral intuition, one which varies from person to person and from culture to culture. Jonathan Haidt writes in The Righteous Mind: Why Good People Are Divided by Politics and Religion about his finding that while the upper classes in both Brazil and the USA were likely to find violations of harmless taboos to be violations of social convention, the lower classes in both countries were more likely to find them violations of absolute moral codes. At the time, moral psychology had mistakenly thought that "moving on" to a conception of right and wrong that was only grounded in concrete harms would be the way that children's morality naturally develops, and that children discover morality by themselves instead of learning it from others.

So moral psychologists had mistakenly been thinking of some moral intuitions as absolute instead of relative. But we can hardly blame them, for it's common to fail to notice that the distinction between "social convention" and "moral fact" is variable. Sometimes this is probably done on purpose, for rhetorical reasons - it's a much more convincing speech if you can appeal to ultimate moral truths rather than to social conventions. But just as often people simply don't seem to notice the distinction.

(Note to international readers: I have been corrupted by the American blogosphere and literature, and will therefore be using "liberal" and "conservative" mostly to denote their American meanings. I apologize profusely to my European readers for this terrible misuse of language and for not using the correct terminology like God intended it to be used.)

For example, social conservatives sometimes complain that liberals are pushing their morality on them, by requiring things such as not condemning homosexuality. To liberals, this is obviously absurd - nobody is saying that the conservatives should be gay, people are just saying that people shouldn’t be denied equal rights simply because of their sexual orientation. From the liberal point of view, it is the conservatives who are pushing their beliefs on others, not vice versa.

But let's contrast "oppressing gays" to "banning polluting factories". Few liberals would be willing to accept the claim that if somebody wants to build a factory that causes a lot of harm to the environment, he should be allowed to do so, and to ban him from doing it would be to push the liberal ideals on the factory-owner. They might, however, protest that to prevent them from banning the factory would be pushing (e.g.) pro-capitalism ideals on them. So, in other words:

Conservatives want to prevent people from being gay. They think that this just means upholding morality. They think that if somebody wants to prevent them from doing so, that somebody is pushing their own ideals on them.

Liberals want to prevent people from polluting their environment. They think that this just means upholding morality. They think that if somebody wants to prevent them from doing so, that somebody is pushing their own ideals on them.

Now my liberal readers (do I even have any socially conservative readers?) will no doubt be rushing to point out the differences in these two examples. Most obviously the fact that pollution hurts other people than just the factory owner, like people on their nearby summer cottages who like seeing nature in a pristine and pure state, so it's justified to do something about it. But conservatives might also argue that openly gay behavior encourages being openly gay, and that this hurts those in nearby suburbs who like seeing people act properly, so it's justified to do something about it.

It's easy to say that "anything that doesn't harm others should be allowed", but it's much harder to rigorously define harm, and liberals and conservatives differ in when they think it's okay to cause somebody else harm. And even this is probably conceding too much to the liberal point of view, as it accepts a position where the morality of an act is judged primarily in the form of the harms it causes. Some conservatives would be likely to argue that homosexuality just is wrong, the way that killing somebody just is wrong.

My point isn't that we should accept the conservative argument. Of course we should reject it - my liberal moral intuitions say so. But we can't in all honesty claim an objective moral high ground. If we are to be honest with ourselves, we will accept that yes, we are pushing our moral beliefs on them - just as they are pushing their moral beliefs on us. And we will hope that our moral beliefs win.

Here's another example of "failing to notice the subjectivity of what counts as social convention". Many people are annoyed by aggressive vegetarians, who think anyone who eats meat is a bad person, or by religious people who are actively trying to convert others. People often say that it's fine to be vegetarian or religious if that's what you like, but you shouldn't push your ideology onto others and require them to act the same.

Compare this to saying that it's fine to refuse to send Jews to concentration camps, or to let people die in horrible ways when they could have been saved, but you shouldn't push your ideology onto others and require them to act the same. I expect that would sound absurd to most of us. But if you accept a certain vegetarian point of view, then killing animals for food is exactly equivalent to the Holocaust. And if you accept a certain religious view saying that unconverted people will go to Hell for an eternity, then not trying to convert them is even worse than letting people die in horrible ways. To say that these groups shouldn't push their morality onto others is to already push your own ideology - which says that decisions about what to eat and what to believe are just social conventions, while decisions about whether to kill humans and save lives are moral facts - onto them.

So what use is there in debating morality, if we have so divergent moral intuitions? In some cases, people have such widely differing intuitions that there is no point. In other cases, their intuitions are similar enough that they can find common ground, and in that case discussion can be useful. Intuitions can clearly be affected by words, and sometimes people do shift their intuitions as a result of having debated them. But this usually requires appealing to, or at least starting out from, some moral intuition that they already accept. There are inferential distances involved in moral claims, just as there are inferential distances involved in factual claims.

So what about the cases when the distance is too large, when the gap simply cannot be bridged? Well, in those cases we will simply have to fight to push our own moral intuitions onto as many people as possible, and hope that they will end up having more influence than the unacceptable intuitions. Many liberals probably don't want to admit to themselves that this is what we should do in order to beat the conservatives - it goes so badly against the liberal rhetoric. It would be much nicer to pretend that we are simply letting everyone live the way they want to, and that we are fighting to defend everyone's right to that.

But it would be more honest to admit that we actually want to let everyone live the way they want to, as long as they don't do things we consider "really wrong", such as discriminating against gays. And that in this regard we're no different from the conservatives, who would likewise let everyone live the way they wanted to, as long as they don't do things the conservatives consider "really wrong".

Of course, whether or not you'll want to be that honest depends on what your moral intuitions have to say about honesty.
xuenay: (Default)
Today was the second session of the Neuroinformatics 4 course that I'm taking. Each participant has been assigned some paper from this list, and we're all supposed to have a presentation summarizing the paper. We're also supposed to write a diary about each presentation and hand it in in the end, which is the reason why I'm typing this entry. I figure that if I'm going to keep a diary about this, I might as well make it public.

Session I: Global Workspace Theory. I held the first presentation, which covered Global Workspace Theory as explained by Baars (2002, 2004). You can read about it in those papers, but the general idea of GWT is that that which we experience as conscious thought is actually information that's being processed in a "global workspace", through which various parts of the brain communicate with each other.

Suppose that you see in front of you a delicious pie. Some image-processing system in your brain takes that information, processes it, and sends that information to the global workspace. Now some attentional system or something somehow (insert energetic waving of hands) decides whether that stimulus is something that you should become consciously aware of. If it is, then that stimulus becomes the active content of the global workspace, and information about it is broadcast to all the other systems that are connected to the global workspace. Our conscious thoughts are that information which is represented in the global workspace.
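For what it's worth, here is a minimal sketch of how I picture the architecture (my own illustration; Baars's papers don't specify an implementation): specialist modules submit candidate contents, an attentional step picks the most salient one, and the winner is broadcast to all modules connected to the workspace.

```python
# Minimal sketch of the global-workspace broadcast (my own illustration).
# How salience is actually computed is exactly the hand-wavy part of the
# theory, so here it is just a number attached to each candidate.

class Module:
    def __init__(self, name):
        self.name = name
        self.received = []

    def receive(self, content):
        self.received.append(content)  # every specialist gets a copy

modules = [Module("vision"), Module("memory"),
           Module("motor"), Module("speech")]

# candidate contents competing for the workspace, with salience scores
candidates = [("pie in view", 0.9), ("itch on arm", 0.4)]

winner, _ = max(candidates, key=lambda c: c[1])  # attentional selection
for m in modules:                                # global broadcast
    m.receive(winner)

print(all(m.received == ["pie in view"] for m in modules))
```

Note that this sketch dodges the hard question raised below: it broadcasts a string, saying nothing about how a representation could be made intelligible to systems as different as vision, motor control, and speech.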

There exists some very nice experimental work which supports this theory. For instance, Dehaene (2001) showed experimental subjects various words for a very short while (29 milliseconds each). Then, for the next 71 milliseconds, the subjects either saw a blank screen (the "visible" condition) or a geometric shape (the "masking" condition). Previous research had shown that in such an experiment, the subjects will report seeing the "visible" words and can remember what they said, while they will fail to notice the "masked" words. That was also the case here. In addition, fMRI scans seemed to show that the "visible" words caused considerably wider activation in the brain than the "masked" words, which mainly just produced minor activation in areas relating to visual processing. The GWT interpretation of these results would be that the "visible" words made their way to the global workspace and activated it. For the "masked" words there was no time for that to happen, since the sight of the masking shape "overwrote" the contents of the visual system before the sight of the word had had time to activate the global workspace.

That's all fine and good, but Baars's papers were rather vague on a number of details, like "how is this implemented in practice"? If information is represented in the global workspace, what does that actually mean? Is there a single representation of the concept of a pie in the global workspace, which all the systems manipulate together? Or is information in the global workspace copied to all of the systems, so that they are all manipulating their own local copies and somehow synchronizing their changes through the global workspace? How can an abstract concept like "pie" be represented in such a way that systems as diverse as those for visual processing, motor control, memory, and the generation of speech (say) all understand it?

Session II: Global Neuronal Workspace. Today's presentation attempted to be a little more specific. Dehaene (2011) discusses the Global Neuronal Workspace model, based on Baars's Global Workspace model.

The main thing that I got out of today's presentation was the idea of the brain being divisible into two parts. The processing network is a network of tightly integrated, specialized processing units that mostly carry out non-conscious computation. For instance, early processing stages of the visual system, carrying out things like edge detection, would be part of the processing network. The "processors" of the processing network typically have "highly specific local or medium range connections" - in other words, the processors in a specific region mostly talk with their close neighbors and nobody else.

The various parts of the processing network are connected by the Global Neuronal Workspace, a set of cortical neurons with long-range axons. The impression I got was that this is something akin to a set of highways between cities, or different branches of a post office. Or planets (processing network areas) joined together by a network of Hyperpulse Generators (the Global Neuronal Workspace). You get the idea. I believe that it's some sort of a small-world network.

Note that contrary to intuition and folk psychology (but consistent with the hierarchical consciousness hypothesis), this means that there is no single brain center where conscious information is gathered and combined. Instead, as the paper states, there is "a brain-scale process of conscious synthesis achieved when multiple processors converge to a coherent metastable state". Which basically means that consciousness is created by various parts of the brain interacting and exchanging information with each other.

Another claim of GNW is that sensory information is basically processed in a two-stage manner. First, a sensory stimulus causes activation in the sensory regions and begins climbing up the processor hierarchy. Eventually it reaches a stage where it may somehow be selected to be consciously represented, with the criterion being "its adequacy to current goals and attention state" (more waving of hands). If it is selected, it becomes represented in the GNW. It "is amplified in a top-down manner and becomes maintained by sustained activity of a fraction of GNW neurons": this might re-activate the stimulus signal in the sensory regions, where its activation might have already been declining. Something akin to this model has apparently been verified in a number of computer simulations and brain imaging studies.
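As a toy illustration of that two-stage story: local processors produce candidate contents, the candidates are weighted by their adequacy to current goals, and at most one crosses an "ignition" threshold and gets globally broadcast. This is my own simplification for intuition only - the names (`gnw_cycle`, `ignition_threshold`) are made up, and the real model is a dynamical neural simulation, not a one-line argmax:

```python
def gnw_cycle(stimuli, goal_relevance, ignition_threshold=1.0):
    """One toy workspace cycle: stimuli compete, at most one 'ignites'."""
    # Stage 1: non-conscious processing. Each stimulus is scored by its
    # bottom-up strength weighted by its relevance to current goals.
    scored = {s: strength * goal_relevance.get(s, 0.0)
              for s, strength in stimuli.items()}
    winner = max(scored, key=scored.get)
    # Stage 2: if the winner crosses the ignition threshold, it is
    # amplified top-down, sustained, and broadcast to all processors.
    if scored[winner] >= ignition_threshold:
        return winner  # the globally broadcast (conscious) content
    return None        # nothing becomes conscious this cycle

# A goal-relevant face out-competes a stronger but task-irrelevant noise:
conscious = gnw_cycle(stimuli={"face": 0.6, "noise": 0.9},
                      goal_relevance={"face": 2.0, "noise": 0.5})
```

The "adequacy to current goals and attention state" hand-waving corresponds to the `goal_relevance` weights here: the same stimulus can win or lose depending on the current task set.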

Which sounds interesting and promising, though this still leaves a number of questions unclear. For instance, the paper claims that only one thing at a time can be represented in the GNW. But apparently the thing that gets represented in the GNW is partially selected by conscious attention, and the paper that I previously posted about placed the attentional network in the prefrontal cortex (i.e. not in the entire brain). So doesn't the content in the sensory regions then need to first be delivered to the attentional networks (via the GNW) so that the attentional networks can decide whether that content should be put into the GNW? Either there's something wrong with this model, or I'm not understanding it correctly. I should probably dig into the references. And again, there's the question of just what kind of information is actually put into the GNW in such a manner that all of the different parts of the brain can understand it.

(Yes, I realize that my confusion may seem incongruent with the fact that I just co-authored a paper where we said that we "already have a fairly good understanding on how the cerebral cortex processes information and gives rise to the attentional processes underlying consciousness". My co-author's words, not mine: he was the neuroscience expert on that paper. I should probably ask him when I get the chance.)

The paper proposes that what we experience as consciousness is built up in a hierarchical process, with various parts of the brain doing further processing on the flow of information and contributing their own part to the "feel" of consciousness. It's possible to subtract various parts of the process, thereby leading to an altered state of consciousness, without consciousness itself disappearing.

The prefrontal cortex is usually associated with "higher-level" tasks, including emotional regulation, but the authors suggest that this is due to the prefrontal cortex refining the outputs of the earlier processing stages, rather than inhibiting them:

"In such a view, the prefrontal cortex does not represent a supervisory or control system. Rather, it actively implements higher cognitive functions. It is further suggested that the prefrontal cortex does not act as an inhibitory agent of older, more primitive brain structures. The prefrontal cortex restrains output from older structures not by suppressing their computational product directly but by elaborating on it to produce more sophisticated output. If the prefrontal cortex is lost, the person simply functions on the next highest layer that remains. The structures implementing these next highest layers are not disinhibited by the loss of the prefrontal cortex. Rather, their processing is unaffected except that no more sophistication is added to their processing before a motor output occurs."

Their theory is that several altered states of consciousness involve a reduction in the activity of the prefrontal cortex:

"It is proposed in this article that altered states of consciousness are due to transient prefrontal deregulation. Six conscious states that are considered putative altered states (dreaming, the runner's high, meditation, hypnosis, daydreaming, and various drug-induced states) are briefly examined. These altered states share characteristics whose proper function are regulated by the prefrontal cortex such as time distortions, disinhibition from social constraints, or a change in focused attention. It is further proposed that the phenomenological uniqueness of each state is the result of the differential viability of various [dorsolateral] circuits. To give one example, the sense of self is reported to be lost to a higher degree in meditation than in hypnosis; whereas, the opposite is often reported for cognitive flexibility and willed action, which are absent to a higher degree in hypnosis. The neutralization of specific prefrontal contributions to consciousness has been aptly called "phenomenological subtraction" by Allan Hobson (2001). The individual in such an altered state operates on what top layers remain. In altered states that cause severe prefrontal hypofunction, such as non-lucid dreaming or various drug states, the resulting phenomenological awareness is extraordinarily bizarre. In less dramatic altered states, such as long-distance running, the change is more subtle."

And about meditation in particular, they hypothesize that it involves a general lowered prefrontal activity, with the exception of increased activation in the prefrontal attentional network:

"It is evident that more research is needed to resolve the conflicting EEG and neuroimaging data. Reinterpreting and integrating the limited data from existing studies, it is proposed that meditation results in transient hypofrontality with the notable exception of the attentional network in the prefrontal cortex. The resulting conscious state is one of full alertness and a heightened sense of awareness, but without content. Since attention appears to be a rather global prefrontal function (e.g., Cabeza & Nyberg, 2000), PET, SPECT, and fMRI scans showed an overall increase in DL activity during the practice of meditation. However, the attentional network is likely to overlap spatially with modules subserving other prefrontal functions and an increase as measured by fMRI does not inevitably signify the activation of all of the region's modules. Humans appear to have a great deal of control over what they attend to (Atkinson & Shiffrin, 1968), and in meditation, attentional resources are used to actively amplify a particular event such as a mantra until it becomes the exclusive content in the working memory buffer. This intentional, concentrated effort selectively disengages all other cognitive capacities of the prefrontal cortex, accounting for the α-activity. Phenomenologically, meditators report a state that is consistent with decreased frontal function such as a sense of timelessness, denial of self, little if any self-reflection and analysis, little emotional content, little abstract thinking, no planning, and a sensation of unity. The highly focused attention is the most distinguishing feature of the meditative state, while other altered states of consciousness tend to be more characterized by aimless drifting."

They do not discuss permanent changes caused by meditation in the paper, but if the prefrontal cortex is involved with last-stage processing of incoming sensory data, then reduced prefrontal regulation would fit together with meditators' reports of being able to experience sensory information in a more "raw", unprocessed form. Likewise, if the prefrontal cortex unifies and integrates information from earlier processing stages, then meditation revealing the unity of self to be an illusion would be consistent with reduced prefrontal activity.

Vipassana jhanas, or other forms of meditation aimed towards reaching enlightenment, would then somehow involve permanently reducing or at least changing the nature of prefrontal processing. Meditation practitioners speak of "the Dark Night", an intermediate stage during the search for enlightenment, which is experienced as strongly unpleasant and where "our dark stuff tends to come bubbling up to the surface with a volume and intensity that we may never have known before". This stage is reached after making sufficient progress in meditation, and will continue until the practitioner makes enough further progress to make it go away.

Under the model suggested by the paper, the Dark Night would then be an intermediate stage where the activity of the prefrontal cortex had been reduced/changed to such an extent that it was no longer capable of moderating the output of the various earlier emotional systems. Resolving the Dark Night would involve somehow finding a new balance where the outputs of any systems involved with negative emotions could be better handled again, but I have no idea of how that happens.
One of the many things that I've learned from theferrett is that I don't have to believe in my emotions.

Here's an example of what I mean. Last night, I was suffering from insomnia. As frequently happens when I do, I got frustrated and started worrying about everything. It did not take long before this proceeded into severe self-doubt issues: will I ever amount to anything, will any of my projects actually succeed, et cetera. I was quickly - as usual - becoming convinced that the answer was no, and I should just stop being ambitious and settle for some safe but boring lifepath while I still had the chance.

Now, previously I'd only thought of two options in this kind of a situation:

A) Get rid of the thoughts by distracting myself or finding something that will cheer me up and get me out of that mood.
B) Fail to get out of the mood, keep thinking these thoughts.

For some reason, it had never occurred to me that there could also exist a third option:

C) Keep feeling miserable, but stop thinking those thoughts.

So that's what I did. I thought, "I'm feeling miserable because I can't sleep and I'm frustrated, but that has nothing to do with whether my projects and ambitions will be successful or not. My current emotions convey me no information about that topic. So it's pointless to doubt myself because of these emotions." (Not in so many words, but that was the general idea.)

So I stopped thinking those thoughts. And while I still felt generally miserable, the thoughts stopped making me feel even worse.

Previously I had thought that emotions and thoughts were connected in such a way that in some kinds of bad moods, you had no choice but to think negative thoughts. Now it appears that this isn't the case. Is this something that everyone but me knew already, or is it something that should be talked about a lot more?

Cross-posted: G+, FB.
Sturgeon's law: 90% of everything is crap.
Political Sturgeon's law: 90% of all the arguments offered in support of any given ideology or movement are crap.
The fish-shooting corollary: 99% of all the arguments made against any ideology or movement attack the crappy 90% of arguments, never bothering to engage with the quality ones.
The ignorance corollary: You have never even heard the good reasons for supporting most of the ideologies or movements you've rejected as obviously wrong.
Recently I've been trying to become consistently more happy and suffer less. An important component of this involves reaching a state of equanimity - "a state of mental or emotional stability or composure arising from a deep awareness and acceptance of the present moment", to use the Wikipedia definition. Although I have several techniques for overcoming negative feelings, it often happens that I'm simply not motivated enough to use them. In other words, I feel bad, and I know I could make myself feel better with some effort, but I just don't feel like mustering that effort.

By contrast, if I've managed to reach a state of equanimity, managing and dissolving negative feelings is something that happens almost on its own. While I'm not immune to emotional hurt, say, it's much easier to take care of. Things like practicing mindfulness on any sort of discomfort become almost automatic when I'm in that state.

Getting into equanimity isn't always easy, even when I want to. Exercise and cold showers help in making me feel physically good, which helps. Ultimately, though, I need to think the right way.

There are a number of thoughts that I've noticed help me get into a state of equanimity. Not every one always works, which is why I've developed a number of them. If I have access to them all, usually at least one will work.

During the last month or two, my Enlightenment progress has stalled, and on most days I haven't been equanimous at all. Part of this, I think, has been because I forgot pretty much all of these thoughts. Every now and then some of them have come back to me, and sometimes that has helped for a day or two before it stopped working again. I finally realized I needed to compile a list of all the thoughts that I've used. This should help me to always have available *some* thought that might work in that state of mind.

I've divided these in three categories.

No self: Negative emotions arise from drawing a boundary between self and non-self. When one abandons the thought of a separate self that has to be defended from a hostile external world, emotions such as fear or uncertainty vanish.

No time: The need to defend yourself only exists if there is a chance that things will get worse in the future. Likewise, being impatient about something, or wanting desperately to experience something, only makes sense if it is combined with a notion of time passing. When one abandons a time-centered perspective and concentrates on the present, emotions such as fear or impatience vanish. When the present is the only moment that exists, my thought often goes, I should take heed and enjoy it.

No care: Suffering arises from identifying so strongly with your emotions that you cannot resolve attention-allocation conflicts. If you have a strong emotional attachment to eating expensive chocolate bananas on one hand, and on principle avoiding all chocolate on the other, you cannot reason your way out of such a conflict. When one stops identifying with their emotions but instead embraces them as useful feedback, the suffering related to negative emotions vanishes.

And here are the actual thoughts. Although listed as separate, some of these are overlapping and some build on each other. In particular, several of the "no time" theories presume parts of the "no self" theories. Some might also seem to somewhat contradict each other, but I don't think they ultimately do: they're simply based on different levels of analysis.

I don't really have the space or energy to comprehensively explain these all, so I'm not sure how much sense they will make to people. Still, maybe someone will find something useful here nonetheless.

- No self, psychological: There is no Cartesian Theater or homunculus, sitting in the center of the brain and running things. To take some specific part of the brain and call it "THE self" is not scientifically justified. Instead, there is only a vast collection of different subsystems, producing quite a variety of selves.

- No self, Occam's Razorical: It makes little sense to talk of an observer in the brain that is the one that observes everything. What would the positing of such an observer add to any theories? It makes more sense to say that there are various cognitive algorithms, which produce qualia as a side-effect of being run. Instead of there existing somebody who observes all the qualia produced by the brain, there are only the qualia which observe themselves and then cease to exist. If so, it makes little sense to identify with the qualia produced by my brain in particular. Instead I can identify with the qualia of all life everywhere. (I previously wrote about this view here, under "the self as how the algorithm feels from the inside".)

- No self, system-theoretical: To speak of a 'self' as separate from the environment makes little sense. My identity is defined by my environment. If all of my physical properties were held constant, you could make me think or do anything by choosing an appropriate environment to match. I'm part of a vast distributed cognitive system, and drawing the boundaries of self strictly around the physical shell housing my body makes little sense. (I previously wrote about this view here, under "the self as lack of personal boundaries".)

- No time, psychological: My mind can only act in the present. I can imagine the future, or remember the past, but both of these involve thought processes that operate in the now. I live in an eternal present.

- No time, physical multiverse: Depending on which Tegmarkian multiverses are real, all physically possible worlds exist or all logically possible worlds exist. Then, no matter what I wish to experience or what I fear, in some part of the multiverse I am already experiencing it. If I identify not with a specific observer but with qualia, then I'll know that I already have everything I could ever wish for, as well as already suffering from everything I could ever dread.

- No time, physical block: In a block universe conception of time, the whole universe already exists as an unmoving four-dimensional block. Time does not pass in the sense of the current me ceasing to exist and being replaced with another me after a moment passes: instead, this me, and all the other mes, exist eternally.

- No time, logical: If I identify with specific qualia instead of specific observers, then the qualia that "I am experiencing" (rather, the qualia which I am) at this very moment is the only qualia which I can be. Anything else would be a different qualia. Therefore, the me that exists at this very moment is the only logically possible one that I can be.

- No care, psychological: Our emotional reactions to anything are just an interpretative layer imposed by our brain, our emotions in general a mechanism to guide our action. They do not exist outside our brain. There is no inherent reason for why I should react to something with anger, and to something else with fear, and to something else with joy. In principle, I can choose to feel any emotion in conjunction with anything that I do or experience. (I previously discussed this view here.)

- No care, projective: All emotions exist within me. To think that somebody external pressures me, say, is incorrect to the extent that it assumes an external force. What is actually happening is that others are activating processes that reside within me, and describing that as them pressuring me is projection.

- No care, philosophical: I can dis-identify with any thoughts or emotions that come into my mind. Instead of saying "I am angry", I can say "I'm hosting a feeling of anger as a visitor in my mind right now". I have desires, emotions and thoughts, but I am not my desires, emotions or thoughts. (This is the basis of at least some sort of mindfulness practice, which I previously discussed here.)


These might give you the impression that nothing matters and you might as well lie in bed until you die. Not so. Even if every possible experience exists, not all of them exist in the same proportion. If they did, we would not observe the kind of regular, ordered universe that we do, but instead a chaotic, unpredictable one [1]. Therefore our actions still matter and have consequences - it all adds up to normality.

It is still meaningful for me to have goals which I seek to accomplish - even if it were logically, psychologically and physically impossible for "this" particular entity to experience their completion, some "other" entity will still reap their benefits. (Our language is not very well designed to handle self-lessness.) And of course, if I identify with all the qualia experienced by all sentient life everywhere in the world, the fact that this particular set of qualia will only be this set forever doesn't matter. I want my efforts to promote happiness and freedom from suffering to have as big an effect as possible.

I think I'll stop here, in case I still have the occasional reader or two who considers me somehow sane.

[1] I should be more specific here. Yes, if all possible experiences exist, then it is logically necessary that *some* of those experiences would still be about a regular, predictable universe, regardless of whether the universe actually was chaotic or not. But there would only exist a small number of such experiences, while far more of them would exist if there were a more regular weighting. Therefore, given that "I" observe a regular universe, the subjective probability that I exist in one is higher.

Regardless of what kind of a theory we select, it has to be one that still allows probability theory to be meaningful. If it didn't, then nothing we did would matter, and we don't want that, now do we? Again, it should still add up to normality.

See e.g. here or here for views on how to make probability theory function even in a Big universe.
Cross-posted from Less Wrong.

Follow-up to: Suffering as attention-allocational conflict.

In many cases, it may be possible to end an attention-allocational conflict by looking at the content of the conflict and resolving it. However, there are also many cases where this simply won't work. If you're afraid of public speaking, say, the "I don't want to do this" signal is going to keep repeating itself regardless of how you try to resolve the conflict. Instead, you have to treat the conflict in a non-content-focused way.

In a nutshell, this is just the map-territory distinction as applied to emotions. Your emotions have evolved as a feedback and attention control mechanism: their purpose is to modify your behavior. If you're afraid of a dog, this is a fact about you, not about the dog. Nothing in the world is inherently scary, bad or good. Furthermore, emotions aren't inherently good or bad either, unless we choose to treat them as such.

We all know this, right? But we don't consistently apply it to our thinking about emotions. In particular, this has two major implications:

1. You are not the world: It's always alright to feel good. Whether you're feeling good or bad won't change the state of the world: the world is only changed by the actual actions you take. You're never obligated to feel bad, or guilty, or ashamed. In particular, since you can only influence the world through your actions, you will accomplish more and be happier if your emotions are tied to your actions, not states of the world.
2. Emotional acceptance: At the same time, "negative" emotions are not something to suppress or flinch away from. They're a feedback mechanism which imprints lessons directly into your automatic behavior (your elephant). With your subconsciousness having been trained to act better in the future, your conscious mind is free to concentrate on other things. If the feedback system is broken and teaching you bad lessons, then you should act to correct it. But if the pain is about some real mistake or real loss you suffered, then you should welcome it.

Internalizing these lessons can have some very powerful effects. I've been making very good progress on consistently feeling better after starting to train myself to think like this. But some LW posters are even farther along; witness Will Ryan:
I internalized a number of different conclusions during this period, although piecing together the exact time frame is somewhat difficult without rereading all of my old notes. The biggest conclusion was probably acceptance of the world as it is, or eliminating affective judgments about reality as a whole. I wanted to become an agent who was never harmed by receiving true information. Denying reality does not change it, only decreases our effectiveness in interacting with the world. An important piece of this acceptance is that the past is immutable, I realized that I should only have prospective emotions, since they are there to guide our future behavior. [...]
More things fell into place in early 2010, during a period in which I was breaking up with Katie at the same time our cat was dying of cancer. I learned to only have emotions about situations that were within my immediate control - between calls with the vet making life-or-death decisions about my pet, I was going to parties and incredibly enjoying myself. This immediately eliminated chronic stress of any kind, which has been greatly beneficial for my overall happiness and effectiveness. I felt alive in a way that I hadn't experienced before, living every moment with its own intensity. I don't (yet) experience this state constantly, but it does seem to happen much more frequently than it used to. This moment-intensity also induces incredible subjective time dilation, which I appreciate quite a bit.
I am not sure exactly when, but sometime during this period I began to develop sadness asymbolia - sadness lost its negative affect, and so I no longer avoided experiencing it. I came to the realization that sadness was precisely the right emotion I needed to internalize negative updates! Being able to internalize bad news about the world without fear or suffering is one of my biggest hacks to date, as far as I am concerned. I think this was related to internalizing the general idea of emotions as feedback, instead of some kind of intrinsic truth about the world.
In April 2010 I came to the realization that my systematic avoidance of certain things was in fact the emotion of fear. It seems obvious when stated this way in retrospect, but for most of my life I had prioritized thought over emotion, and at that time did not have particularly good access to my emotional state. Once I came to this realization, I also realized that my fear pointed towards my biggest areas of potential growth. Although I have not yet developed fear asymbolia, I have developed a habit of directing myself straight towards my biggest fears as soon as I recognize them. [...]
...my subjective experience of the pain-sensation did not seem to change much, what changed was a mental aversion to the stimulus.
Sadness... oh such sweet sadness! My enjoyment of all emotions scales with its intensity, so I actively try to cultivate more sadness when it occurs. I long for the feeling of warm tears rolling down my cheeks, my breath and body racked with sobbing... Emotional pain now feels euphorically pleasurable to release, and in its aftermath I am left with warmth and contentment. It is almost as though the pain realizes I have incorporated its lessons through my acknowledgment and expression, and then no longer demands my attention. The sensation itself is difficult to describe... it is definitely painful, but in no way aversive.
Some other LW posters who've made considerable progress on this are Jasen Murray, Frank Adamek and Michael Vassar.

How does one actually achieve emotional acceptance? It is a way of thought that has to be learned with practice. There are various techniques which help in this: I will cover one in this post, and others in future ones.

Mindfulness practice
"Many [mindfulness exercises] encourage individuals to attend to the internal experiences occurring in each moment, such as bodily sensations, thoughts, and emotions. Others encourage attention to aspects of the environment, such as sights and sounds ... All suggest that mindfulness should be practiced with an attitude of nonjudgmental acceptance. That is, phenomena that enter the individual’s awareness during mindfulness practice, such as perceptions, cognitions, emotions, or sensations, are observed carefully but are not evaluated as good or bad, true or false, healthy or sick, or important or trivial ... Thus, mindfulness is the nonjudgmental observation of the ongoing stream of internal and external stimuli as they arise." -- Mindfulness Training as a Clinical Intervention: A Conceptual and Empirical Review (R.A. Baer 2003, in Clinical psychology: Science and practice).
Mindfulness techniques are very useful in realizing that your thoughts and emotions are just things constructed by your mind:
"Several authors have noted that the practice of mindfulness may lead to changes in thought patterns, or in attitudes about one’s thoughts. For example, Kabat-Zinn (1982, 1990) suggests that nonjudgmental observation of pain and anxiety-related thoughts may lead to the understanding that they are “just thoughts,” rather than reflections of truth or reality, and do not necessitate escape or avoidance behavior. Similarly, Linehan (1993a, 1993b) notes that observing one’s thoughts and feelings and applying descriptive labels to them encourages the understanding that they are not always accurate reflections of reality. For example, feeling afraid does not necessarily mean that danger is imminent, and thinking “I am a failure” does not make it true. Kristeller and Hallett (1999), in a study of MBSR in patients with binge eating disorder, cite Heatherton and Baumeister’s (1991) theory of binge eating as an escape from self-awareness and suggest that mindfulness training might develop nonjudgmental acceptance of the aversive cognitions that binge-eaters are thought to be avoiding, such as unfavorable comparisons of self to others and perceived inability to meet others’ demands."
"All of the treatment programs reviewed here include acceptance of pain, thoughts, feelings, urges, or other bodily, cognitive, and emotional phenomena, without trying to change, escape, or avoid them. Kabat-Zinn (1990) describes acceptance as one of several foundations of mindfulness practice. DBT provides explicit training in several mindfulness techniques designed to promote acceptance of reality. Thus, it appears that mindfulness training may provide a method for teaching acceptance skills."
It also has clear promise in reducing suffering:
"According to Salmon, Santorelli, and Kabat-Zinn (1998), over 240 hospitals and clinics in the United States and abroad were offering stress reduction programs based on mindfulness training as of 1997. ... The empirical literature on the effects of mindfulness training contains many methodological weaknesses, but it suggests that mindfulness interventions may lead to reductions in a variety of problematic conditions, including pain, stress, anxiety, depressive relapse, and disordered eating."
I recommend the linked paper for a good survey about various therapies utilizing mindfulness, their effects and theoretical explanations for how they work.

While I haven't personally looked at any of the referenced therapies, I've found great benefit from the simple practice of turning my attention to any source of physical or emotional discomfort and simply nonjudgementally observing it. Frequently, this changes the pain from something that feels negative to something that feels neutral. My hypothesis is that this eliminates an attention-allocational conflict. The pain acts as a signal to concentrate on and pay attention to this source of discomfort, and once I do so, the signal has accomplished its purpose.

However, often I can do even better than just making the sensation neutral. If I make a conscious decision to experience this now-neutral sensation as something actively positive, that often works. Obviously, there are limits to the degree to which I can do this - the stronger the discomfort, the harder it is to just passively observe it and experience it as neutral. So far my accomplishments have been relatively mild, such as carrying several heavy bags and changing it from something uncomfortable to something enjoyable. But I keep becoming better at it with practice.
xuenay: (Default)
I previously characterized Michael Vassar's theory on suffering as follows: "Pain is not suffering. Pain is just an attention signal. Suffering is when one neural system tells you to pay attention, and another says it doesn't want the state of the world to be like this." While not too far off the mark, it turns out this wasn't what he actually said. Instead, he said that suffering is a conflict between two (or more) attention-allocation mechanisms in the brain.

I have been successful at using this different framing to reduce the amount of suffering I feel. The method goes like this. First, I notice that I'm experiencing something that could be called suffering. Next, I ask, what kind of an attention-allocational conflict is going on? I consider the answer, attend to the conflict, resolve it, and then I no longer suffer.

An example is probably in order, so here goes. Last Friday, there was a Helsinki Less Wrong meetup with Patri Friedman present. I had organized the meetup, and wanted to go. Unfortunately, I already had other obligations for that day, ones I couldn't back out from. One evening, I felt considerable frustration over this.

Noticing my frustration, I asked: what attention-allocational conflict is this? It quickly became obvious that two systems were fighting it out:

* The Meet-Up System was trying to convey the message: ”Hey, this is a rare opportunity to network with a smart, high-status individual and discuss his ideas with other smart people. You really should attend.”
* The Prior Obligation System responded with the message: ”You've already previously agreed to go somewhere else. You know it'll be fun, and besides, several people are expecting you to go. Not going bears an unacceptable social cost, not to mention screwing over the other people's plans.”

Now, I wouldn't have needed to consciously reflect on the messages to be aware of them. It was hard to not be aware of them: it felt like my consciousness was in a constant crossfire, with both systems bombarding it with their respective messages.

But there's an important insight here, one which I originally picked up from PJ Eby. If a mental subsystem is trying to tell you something important, then it will persist in doing so until it's properly acknowledged. Trying to push away the message means it has not been properly addressed and acknowledged, meaning the subsystem has to continue repeating it.

Imagine you were in the wilderness, and knew that if you weren't back in your village by dark you probably wouldn't make it. Now suppose a part of your brain was telling you that you had to turn back now, or otherwise you'd still be out when it got dark. What would happen if you just decided that the thought was uncomfortable, successfully pushed it away, and kept on walking? You'd be dead, that's what.

You wouldn't want to build a nuclear reactor that allowed its operators to just override and ignore warnings saying that their current course of action will lead to a core meltdown. You also wouldn't want to build a brain that could just successfully ignore critical messages without properly addressing them, basically for the same reason.

So I addressed the messages. I considered them and noted that they both had merit, but that honoring the prior obligation was more important in this situation. Having done that, the frustration mostly went away.

Another example: this is the second time I'm writing this post. The last time, I tried to save it when I'd gotten to roughly this point, only to have my computer crash. Obviously, I was frustrated. Then I remembered to apply the very technique I was writing about.

The Crash Message: You just lost a bunch of work! You should undo the crash to make it come back!
The Realistic Message: You were writing that in Notepad, which has no auto-save feature, and the computer crashed just as you were about to save the thing. There's no saved copy anywhere. Undoing the crash is impossible: you just have to write it again.

Attending to the conflict, I noted that the realistic message had it right, and the frustration went away.

It's interesting to note that it probably doesn't matter whether my analysis of the sources of the conflict is 100% accurate. I've previously used some rather flimsy evpsych just-so stories to explain the reasons for my conflicts, and they've worked fine. What's probably happening is that the attention-allocation mechanisms are too simple to actually understand the analysis I apply to the issues they bring up. If they were that smart, they could handle the issue on their own. Instead, they just flag the issue as something that higher-level thought processes should attend to. The lower-level processes are just serving as messengers: it's not their task to evaluate whether the verdict reached by the higher processes was right or wrong.

But at the same time, you can't cheat yourself. You really do have to resolve the issue, or otherwise it will come back. For instance, suppose you didn't have a job and were worried about getting one before you ran out of money. This isn't an issue where you can just say, ”oh, the system telling me I should get a job soon is right”, and then do nothing. Genuinely committing to do something does help; pretending to commit to something and then forgetting about it does not. Likewise, you can't say that "this isn't really an issue" if you know it is an issue.

Still, my experience so far seems to suggest that this framework can be used to reduce any kind of suffering. To some extent, it seems to even work on physical pain and discomfort. While simply acknowledging physical pain doesn't make it go away, making a conscious decision to be curious about the pain seems to help. Instead of flinching away from the pain and trying to avoid it, I ask myself, ”what does this experience of pain feel like?” and direct my attention towards it. This usually at least diminishes the suffering, and sometimes makes it go away if the pain was mild enough.

An important, related caveat: don't make the mistake of thinking that you could use this to replace all of your leisure with work, or anything like that. Mental fatigue will still happen. Subjectively experienced fatigue is a persistent signal to take a break which cannot be resolved other than by actually taking a break. Your brain still needs rest and relaxation. Also, if you have multiple commitments and are not sure that you can handle them all, then that will be a constant source of stress regardless. You're better off using something like Getting Things Done to handle that.

So far I have described what I call the ”content-focused” way to apply the framework. It involves mentally attending to the content of the conflicts and resolving them, and is often very useful. But as we already saw with the example of physical pain, not all conflicts are so easily resolved. A ”non-content-focused” approach – a set of techniques that are intended to work regardless of the content of the conflict in question – may prove even more powerful. I'll cover it in a separate post, though the example of dealing with physical pain is already getting into non-content-focused territory.

I'm unsure of exactly how long I have been using this particular framework, as I've been experimenting with a number of related content- and non-content-focused methods since February. But I believe that I consciously and explicitly started thinking of suffering as ”conflict between attention-allocation mechanisms” and began applying it to everything maybe two or three weeks ago. So far, either the content- or non-content-focused method has always seemed to at least alleviate suffering: the main problem has been in remembering to use it.

On the Self

May. 2nd, 2011 06:17 pm
xuenay: (Default)
I've gone through a variety of theories about the self and continuity of consciousness. Here they are.

1: The self as something arbitrary. Essentially the view I held at the time of writing this post from 2007. I thought that there is no inherent reason to think that the consciousness that now inhabits my brain will be the same one as the one inhabiting my brain tomorrow. Our minds and bodies change all the time: what's the thing that makes the me of today the same as the me of 20 years ago? Certainly one can come up with all sorts of definitions, but from a scientific point of view they're all unnecessary. One simply doesn't need to postulate a specific consciousness, a soul of sorts, in order to explain human behavior or thought. Noting that our memories create an illusion of continuity, and that this illusion is useful for maintaining useful things such as the self-preservation instinct, is more than enough as an explanation.

A thought experiment in philosophy asks: if you stepped into a Star Trek-style transporter that disassembled you into your component parts and reassembled you somewhere else (possibly from different raw materials), would it still be you, or would it be killing you and creating a copy of you? Another: if the neurons in your brain were gradually replaced with ones running in a computer, and the original brain was then shut down, would it still be you? Yet another: if you had been translated into software, and then fifteen copies of that mindfile were made and run, would they all be you?

To all of these questions, "the self as something arbitrary" replies: there's no inherent reason why they wouldn't be you. The difference between them would be less than that between you now and you tomorrow. Of course, for psychological reasons, it is necessary for us to still believe to some degree that we're still the same person tomorrow as we are today. For this purpose, we're free to use pretty much any criteria we prefer: it's not like one of them would be wrong. One such criterion, suggested by Derek Parfit, is Relation R: psychological connectedness (namely, of memory and character) and continuity (overlapping chains of strong connectedness). This works fine for most purposes.

In practice, while I had this view, I tended to forget about the whole thing a lot. The illusion is built into us quite strongly, and the intellectual understanding of it is easy to forget.

2: The self as lack of personal boundaries. Upon reading Ken Wilber's No Boundary, I realized the following. Suppose that I choose to reject any criteria creating a relation between the me of now and the me of tomorrow, seeing them all as arbitrary. It follows that all consciousness-moments are separate beings. But symmetrically, one can take this to imply that all consciousness-moments are the same being. In other words: there is only one consciousness which experiences everything, instantiated in a wide variety of information-processing systems.

This point of view also gains support from noting that to a large degree, our behavior is determined by our environment. The people you hang around with have an immense impact on what you do and what you are. I might define myself using the word "student", which signifies a certain role within society - studying at a university run by other people, from books written by others, my studies funded by money which the state gets by taxing my country's inhabitants. Or I might say that a defining aspect of myself is that I want to help avert existential risk. This is so because I happened to encounter writings about it at an appropriate point in my life, and it is a motivation which is constantly reinforced by being in contact with like-minded folks. On the other hand, it is a drive which is also constantly weakened by the lures of hedonism and affiliating with people who don't think such things are truly that important.

I'm only exaggerating a little if I say that basically everything in our personality is defined by our environment, and particularly the people within our environment. Change the environment I'm in, and you quite literally change what I am. Certainly I have lots of relatively stable personality traits that affect my behavior, but my environment defines the meaning those traits take. If I change my environment, I'll also change my own behavior. Looked at in this light, the self/non-self boundary becomes rather arbitrary and somewhat meaningless.

So now I was presuming that there was only one consciousness, instantiated in every possible body. All of these bodies and instantiations, taken together, make up a vast system that is me. I (in the sense of the specific brain-body now writing this) am part of the system in the same way that individual cells are parts of my body, or individual subprocesses in my brain are parts of my psyche. My personal accomplishments or my personal pride don't really matter that much: what matters is how I contribute to the overall system, and whether parts of the system are harmonious or conflicted between each other. Doing things like befriending new people means forging new connections between parts of myself. Learning to know people better means strengthening such connections.

Thinking like this felt good, and it worked for a while. But I had difficulty keeping up that line of thought. Again, the illusion of separateness is built strongly into us. On an intellectual level, I could easily think of myself as part of a vast system, with only a loose boundary between me and not-me. But since each brain can only access memories of being itself, and is strongly biased towards thinking itself separate, this was hard to really believe in on an emotional level. Frequently, I found myself thinking of myself as separate again.

3: The self as how the algorithm feels from the inside. The next step came when I realized that the notion of a consciousness experiencing things is an unnecessary element as well. Instead of saying that there are lots of different consciousnesses, or one consciousness instantiated in a lot of bodies, we can just note that we don't really need to presume any specific entity which observes various sensations. Instead, there are only the sensations themselves. A "consciousness" is simply a collection of sensations that are being observed within an organism at a specific time.

Putting this another way: there are a variety of processes running within our brains. As a side-effect of their functioning, they produce a stream of sensations (qualia, to use the technical term). There is no observer which observes or experiences these qualia: they simply occur. To the extent that there can be said to be an observer or a watcher, each sensation observes itself and then ceases to exist.

Of necessity, all of the qualia-producing algorithms we know of are located within information-processing systems which have a memory and are in some way capable of reporting that they have subjective experiences. Humans can verbalize or otherwise communicate being in pain; dogs can likewise behave in ways that sufficiently resemble our is-in-pain behaviors that we presume them to have qualia. As an animal's resemblance to a human grows smaller, we become more unsure of whether they have qualia. In principle, my computer could also have qualia, but if so it would have no way of reporting it, and I would have no way of knowing it. Because an entity needs to be able to somehow communicate having qualia in order for us to know about it, we've mistakenly begun thinking that all qualia must by nature be observed by a consciousness. But the qualia observe themselves, which is enough. There is no Cartesian Theater, but rather something like multiple drafts.

So there is no "me" in the continuity of consciousness sense, nor is there any unified consciousness which experiences everything. Instead there are only ephemeral sensations, which vanish as soon as they've come into existence (though if eternalism is right, every moment may exist forever, and there may be an infinite number of copies of each "unique" sensation if multiverses are real). This may seem like a very unsettling theory from a psychological point of view, as it would seem to make it harder to e.g. care about the next day. While both "the self as something arbitrary" and "the self as a lack of personal boundaries" allowed one to construct a definition of self extending in time - even if one acknowledged it to be arbitrary - this view makes that rather impossible.

And at first, it was rather unsettling. After a while, however, I managed to come to grips with it. The important point to note is that even if there is no continuity of consciousness, the concept of "me" still makes sense. It's simply referring to the information-processing system in which all of these algorithms are running. I can still meaningfully talk about my experience or about making plans. I'm simply referring to the experiences which will be produced by the algorithms running within this brain, and the plans which that brain will make. And there is no reason why I shouldn't feel pleasure from anticipation of future experiences, if those are good experiences to have.

I desire to reduce the number of negative qualia in the world and increase the number of positive ones. Positive qualia are correlated with positive feedback within the information-processing system; negative qualia, with negative feedback. In other words, the system/organism will tend to repeat the things it felt good about, as it gets wired to repeat those behaviors. (Though one should note here that the circuits for "wanting" and "liking" are actually different.) It is good for me to feel good about doing and behaving in ways which will make me more likely to achieve these goals. It is good for me to feel pleasure from the anticipation of doing good things, for this will cause me to actually do them. It is also good for me to feel happy: not only does feeling happy instead of unhappy make me more capable of doing things, it also directly serves my goal of increasing the amount of positive qualia in the world. This line of thought seems like a very successful way of fitting together utilitarianism and virtue ethics, the process of which I began a year ago and which has considerably contributed to my increased happiness of late.

Again, this is easy to think about on an intellectual level, but we're wired to think differently. I've been having more success consistently training myself to think like this than I had with the previous theories, however. Of course, I still frequently forget, but I'm making progress. Various meditation traditions seem to be aimed at helping grok something like this at an emotional level, and I'm dedicating an hour a day to meditation practice aimed at following the progression described in this book. I haven't really gotten any results so far, though.

I was going to also write more about the nature of suffering and how these shifts in thought have helped me become happier and suffer less. However, looking at how long this post got, I think I'll do that in a separate post.
xuenay: (Default)
I previously hypothesized that happiness might be an evolutionary mechanism that made us take more risks when we had spare resources and could risk doing so. As someone pointed out, the opposite interpretation sounds just as plausible, if not more so. That is, when you have lots of resources you should concentrate on not losing them, and when you have few, you should take more risks until you're in safer waters.

Now it seems to me that the opposite interpretation was indeed the correct one. The following is excerpted from Schwarz (2000):

As a large body of experimental research documents, individuals who are in a happy mood are more likely to adopt a heuristic processing strategy that is characterised by top-down processing, with high reliance on pre-existing knowledge structures and relatively little attention to the details at hand. In contrast, individuals who are in a sad mood are more likely to adopt a systematic processing strategy that is characterised by bottom-up processing, with little reliance on preexisting knowledge structures and considerable attention to the details at hand (for a review see Schwarz & Clore, 1996). [...] Consistent with the more detail-oriented processing style fostered by negative moods, Luce, Bettman, and Payne (1997, p. 384) observed that "decision processing under increasing negative emotion both becomes more extensive and proceeds more by focusing on one attribute at a time".

These differences in processing style presumably reflect that our thought processes are tuned to meet the requirements of the current situation, which are in part signalled by our affective states (Schwarz, 1990). In a nutshell, we usually feel bad when things go wrong and feel good when we face no particular problems. Hence, negative affective states may signal that the current situation is problematic and may hence elicit a processing style that pays close attention to the specifics of the apparently problematic situation. In contrast, a positive affective state may signal a benign environment that allows us to rely on our usual routines and preexisting knowledge structures. [...]

Hertel, Neuhof, Theuer, and Kerr (this issue) extend this line of research by addressing the impact of moods on individuals’ decision behaviour in a chicken dilemma game. Consistent with previous theorising, their findings suggest that individuals in a happy mood are likely to heuristically imitate the behaviour of other players, whereas individuals in a sad mood based their moves on a systematic analysis of the structure of the game.

The existence of the hedonic treadmill, I believe, supports this version of the hypothesis. Beyond a certain point, increases in e.g. income affect happiness only in relation to the income of others. Once your basic needs have been met, getting more money makes you happier only if it makes you better off as compared to others. People want to "keep up with the Joneses", which makes sense from an evolutionary perspective. For your genes to spread in a population, it is not enough to manage: your genes must spread more than those of your competitors.

Of course, the usual evolutionary psychology caveats must be kept in mind. The "keeping up with the Joneses" phenomenon is a very Western one, and Westerners have been shown not to be representative of the human race as a whole. It is my understanding that the Eastern cultures have traditionally been considerably less materialistic, which undermines the support that the Joneses provide to this hypothesis.

Curiously, Clore & Huntsinger (2007) summarize various studies as indicating that "participants in positive moods tend to rely more on stereotypes to guide their thinking about members of various social groups than do those in negative moods, who tend to rely on individuating information". This doesn't fit the usual intuitions of racism being caused by fear or adverse circumstances.

Contrary to most people’s intuitions, happy moods promote group stereotyping, whereas sad moods promote a focus on individuals [27,28]. One relevant study involved a mock trial in which a Latino student was accused of a stereotype-consistent offense. The results showed that individuals in happy moods were more likely than those in sad moods to have their verdicts influenced by the stereotype [29].

In this experiment, the stereotyping seems to reflect a general cognitive style rather than prejudice as such. Indeed, similar findings come from marketing and political science studies showing that happy moods promote reliance on brand names as opposed to product attributes among consumers [30], and a reliance on political party as opposed to candidate positions among voters [31].

In addition, a surprising result in the mock jury study [29] was that angry jurors responded like happy jurors, rather than like sad ones. This finding is consistent with affect-as-information logic, which always asks about the information inherent in affective states. Despite being a negative emotion, anger carries positive information about one’s own position. When angry, one believes oneself to be correct, which should increase confidence in one’s own cognitions. Thus, anger would be expected to show the same processing effects as happiness [5].


G.L. Clore & J.R. Huntsinger (2007). How emotions inform judgment and regulate thought. Trends in Cognitive Sciences, 11 (9), 393-399. doi:10.1016/j.tics.2007.08.005

N. Schwarz (2000). Emotion, cognition, and decision making. Cognition and Emotion, 14 (4), 433-440.
xuenay: (Default)
Yesterday evening, I pasted to two IRC channels an excerpt of what someone had written. In the context of the original text, that excerpt had seemed to me like harmless if somewhat raunchy humor. What I didn't realize at the time was that by removing the context, the person writing it came off looking like a jerk, and by laughing at it I came off looking like something of a jerk as well.

Two people, both of whom I have known for many years now and whose opinions I value, approached me by private message and pointed out that that may not have been the smartest thing to do. My initial reaction was defensive, but I soon realized that they were right and thanked them for pointing it out to me. Putting on a positive growth mindset, I decided to treat this event as a positive one, as in the future I'd know better.

Later that evening, as I lay in bed waiting to fall asleep, the episode replayed itself in my mind. I learnt long ago that trying to push such replays out of my mind would just make them take longer and make them feel worse. So I settled back to just observing the replay and waiting for it to go away. As I waited, I started thinking about what kind of lower-level neural process this feeling might be a sign of.

Artificial neural networks use what is called a backpropagation algorithm to learn from mistakes. First the network is provided some input, then it computes some value, and then the obtained value is compared to the expected value. The difference between the obtained and expected value is the error, which is then propagated back from the end of the network to the input layer. As the error signal works its way through the network, neural weights are adjusted in such a fashion as to produce a different output the next time.
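To make the idea concrete, here's a toy sketch of that learning loop: a single linear neuron rather than a full network, with made-up inputs, target, and learning rate chosen purely for illustration. The point is just the shape of the algorithm - compute an output, compare it to the expected value, and push the error back into the weights.

```python
def train_neuron(inputs, target, weights, lr=0.1, steps=200):
    """Fit one linear neuron by propagating its output error back to the weights."""
    for _ in range(steps):
        # Forward pass: compute the obtained value.
        output = sum(w * x for w, x in zip(weights, inputs))
        # The error is the difference between the obtained and the expected value.
        error = output - target
        # Backward pass: adjust each weight in proportion to how much
        # its input contributed to the error.
        weights = [w - lr * error * x for w, x in zip(weights, inputs)]
    return weights

# Invented example: two inputs, expected value 2.0, weights starting at zero.
weights = train_neuron(inputs=[1.0, 0.5], target=2.0, weights=[0.0, 0.0])
output = sum(w * x for w, x in zip(weights, [1.0, 0.5]))
print(round(output, 3))  # → 2.0: the neuron has learned to produce the expected value
```

After enough repetitions of the same error signal, the weights settle into values that no longer produce the mistake - which is the analogy being drawn to the replaying episode.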

Backprop is known to be biologically unrealistic, but there are more realistic algorithms that work in a roughly similar manner. The human brain seems to be using something called temporal difference learning. As Roko described it: "Your brain propagates the psychological pain 'back to the earliest reliable stimulus for the punishment'. If you fail or are punished sufficiently many times in some problem area, and acting in that area is always preceded by [doing something], your brain will propagate the psychological pain right back to the moment you first begin to [do that something]".
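Roko's description can be sketched as a tiny TD(0) example. Everything here - the two-state chain, the reward of -1, the learning rate - is invented for illustration; the point is how repeated punishment at the end of a chain pushes negative value back to the earliest stimulus.

```python
# Two-step chain: "begin to do the something" (state 0) always precedes
# "act in the problem area" (state 1), which is always followed by
# punishment (reward -1).
values = [0.0, 0.0]      # learned (expected) value of each state
alpha, gamma = 0.5, 1.0  # learning rate and discount factor

for _ in range(50):  # fifty repeated failures in the problem area
    # TD(0) update: move each state's value toward (reward + value of successor).
    values[0] += alpha * (0.0 + gamma * values[1] - values[0])  # no reward yet here
    values[1] += alpha * (-1.0 - values[1])                     # terminal punishment

print([round(v, 2) for v in values])  # → [-1.0, -1.0]
```

The pain has propagated back: the earliest state now predicts the full punishment, so merely beginning the action already feels bad - just as the quote describes.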

As I lay there in bed, I couldn't help the feeling that something similar to those two algorithms was going on. The main thing that kept repeating itself was not the actual action of pasting the quote to the channel or laughing about it, but the admonishments from my friends. Being independently rebuked for something by two people I considered important: a powerful error signal that had to be taken into account. Their reactions filling my mind: an attempt to re-set the network to the state it was in soon after the event. The uncomfortable feeling of thinking about that: negative affect flooding the network as it was in that state, acting as a signal to re-adjust the neural weights that had caused that kind of an outcome.

After those feelings had passed, I thought about the episode again. Now I felt silly for committing that faux pas, for now it felt obvious that the quote would come across badly. For a moment I wondered if I had just been unusually tired, or distracted, or otherwise out of my normal mode of thought to not have seen that. But then it occurred to me - the judgment of this being obviously a bad idea was produced by the network that had just been rewired in response to social feedback. The pain of the feedback had been propagated back to the action that caused it, so just thinking about doing that (or thinking about having done that) made me feel stupid. I have no way of knowing whether the "don't do that, idiot" judgment is something that would actually have been produced had I been paying more attention, or if it's a genuinely new judgment that wouldn't have been produced by the old network.

I tend to be somewhat amused by the people who go about claiming that computers can never be truly intelligent, because a computer doesn't genuinely understand the information it's processing. I think they're vastly overestimating how smart we are, and that a lot of our thinking is just relatively crude pattern-matching, with various patterns (including behavioral ones) being labeled as good or bad after the fact, as we try out various things.

On the other hand, there would probably have been one way to avoid that incident. We do have the capacity for reflective thought, which allows us to simulate various events in our heads without needing to actually undergo them. Had I actually imagined the various ways in which people could interpret that quote, I would probably have relatively quickly reached the conclusion that yes, it might easily be taken as jerk-ish. Simply imagining that reaction might then have provided the decision-making network with a similar, albeit weaker, error signal and taught it not to do that.

However, there's the question of combinatorial explosions: any decision could potentially have countless consequences, and we can't simulate them all. (See the epistemological frame problem.) So in the end, knowing the answer to the question of "which actions are such that we should pause to reflect upon their potential consequences" is something we need to learn by trial and error as well.

So I guess the lesson here is that you shouldn't blame yourself too much if you've done something that feels obviously wrong in retrospect. That decision was made by an earlier version of you. Although it feels obvious now, that version of you might literally have had no way of knowing that it was making a mistake, as it hadn't been properly trained yet.
xuenay: (Default)
I'm currently reading Awaken the Giant Within by Anthony Robbins, which I'm finding a very promising book for permanent mood improvement. One thing that struck me as particularly useful was using specific questions on a constant basis to permanently improve your happiness and mood.

He suggests starting each morning by asking and answering a set of seven questions:

1. What am I happy about in my life right now?
- What about that makes me happy? How does that make me feel?

2. What am I excited about in my life right now?
- What about that makes me excited? How does that make me feel?

3. What am I proud about in my life right now?
- What about that makes me proud? How does that make me feel?

4. What am I grateful about in my life right now?
- What about that makes me grateful? How does that make me feel?

5. What am I enjoying most in my life right now?
- What about that do I enjoy? How does that make me feel?

6. What am I committed to in my life right now?
- What about that makes me committed? How does that make me feel?

7. Who do I love? Who loves me?
- What about that makes me loving? How does that make me feel?

By making a habit of asking and answering these questions, you train your brain to actively look for the good things in your life. Depressed people tend to spend all of their time thinking about the bad things in their life, which is part of the reason why they're depressed. Similarly, actively spending time thinking about the good things in life will improve your mood. This set of questions can also be used to improve your mood when you're feeling down about something, as they will break the pattern where your thoughts keep getting stuck on the negative things, and reorient your thinking.

The important thing is that if no obvious answer to these questions comes to your mind, you should keep looking until you come up with something! Only that way can your mind be trained to see the positive side of everything. That advice is particularly true for the three evening questions:

1. What have I given today?
- In what ways have I been a giver today?

2. What did I learn today?

3. How has today added to the quality of my life or how can I use today as an investment in the future?

Back when I was in the SIAI house, we talked a lot about achieving a growth mindset, but it seemed to me like we didn't have very many ways of actually fostering it. If you're forced each day to come up with answers to both "what did I learn today" and "how has today added to the quality of my life", you should begin to see yourself as a constantly growing system instead of a static one. Furthermore, knowing that you'll need to answer these questions in the evening can also motivate you to challenge yourself more during the day, so that you'll have your answers ready.

Robbins says that the best part about the technique is that the questions can be asked anywhere, whether you are in the shower or making breakfast. Personally I think the technique probably works best for me if I first state my replies out loud and then also write them down. That'll engage more parts of the brain than just thinking about the answers would.
xuenay: (Default)
I was going to cross-post here a copy of my Less Wrong essay "problems in evolutionary psychology", but LJ seems to break the entry somehow. Ah well, you can always look at the original post.
xuenay: (Default)
In my experience, happy people tend to be more optimistic and more willing to take risks than sad people. This makes sense, because we tend to be happier when things are generally going well for us: that is when we can afford to take risks. I speculate that the emotion of happiness has evolved for this very purpose, as a mechanism that regulates our risk aversion and makes us more willing to risk things when we have the resources to spare.

Incidentally, this would also explain why people falling in love tend to be intensely happy at first. In order to get and keep a mate, you need to be ready to take risks. Also, if happiness is correlated with resources, then being happy signals having lots of resources, increasing your prospective mate's chances of accepting you.

We can actually find at least three levels of granularity at which evolution might optimize the level of risk aversion that organisms exhibit. Take a species of organism that ends up in various kinds of environments. Some of these environments are rich in resources, and in those it pays off to take a lot of risks. Some of the environments are poor in resources, and in those it's more important to hang on to what you already have. If an organism was only found in one kind of environment, then over time it would become genetically predisposed towards only exhibiting the level of risk aversion most optimal for that environment. But suppose that the organism was found in a varying range of environments. An individual member of the species might spend its whole life in a resource-rich environment, and its offspring in a resource-poor environment. In this case, evolution wouldn't have the time to tailor the whole species to match the environment.

In such a scenario, genes for both high and low risk aversion would be found in the population. When members with genes for low risk aversion ended up in a resource-rich environment, they would gain a fitness advantage, only to lose it when they ended up in a resource-poor environment. Because environments vary, both as a function of time and location, neither gene variant would be strictly superior and therefore both would be maintained in the population.
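The logic of this argument can be illustrated with a toy simulation. The model below is my own minimal sketch, not anything from the literature: each organism repeatedly chooses between a safe action with a guaranteed small payoff and a risky action with a larger payoff that succeeds more often in a resource-rich environment. Under these assumed numbers, the risk-taker outperforms the cautious organism in rich environments and underperforms it in poor ones, so neither strategy dominates across all environments.

```python
import random

def lifetime_payoff(risk_taker: bool, rich_environment: bool, rounds: int = 5000) -> float:
    """Average payoff per round for one organism in one environment.

    Safe action: always yields 1 unit.
    Risky action: yields 3 units on success, 0 on failure; success is
    likelier in a resource-rich environment (probabilities are arbitrary
    illustrative choices).
    """
    p_success = 0.6 if rich_environment else 0.2
    total = 0.0
    for _ in range(rounds):
        if risk_taker:
            total += 3 if random.random() < p_success else 0
        else:
            total += 1
    return total / rounds

random.seed(0)
# Risk-taker in a rich environment: expected payoff 3 * 0.6 = 1.8, beats the safe 1.0.
print(lifetime_payoff(True, True))
# Risk-taker in a poor environment: expected payoff 3 * 0.2 = 0.6, loses to the safe 1.0.
print(lifetime_payoff(True, False))
# Cautious organism: exactly 1.0 regardless of environment.
print(lifetime_payoff(False, True))
```

Since each strategy wins in one environment type and loses in the other, a population that migrates between environment types would keep both variants around, which is the point of the paragraph above.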

But this first level of granularity is still rather coarse-grained. The risk aversion of any particular organism (and at least 50% of its offspring) is determined for life, and there is no way of revising it for exceptional circumstances. Now, evolution has a trick for situations like these. An organism can have an inborn capability for several different approaches, one of which is locked in based on the environment the organism spends its early life in. Members of the species could have a genetic predisposition towards a certain level of risk aversion, but have that be modified by the amount of resources available in the first environment encountered.

Yet even this second level of granularity determines an organism's level of risk aversion for its whole life. Even in a relatively stable environment, the organism might have good or bad luck at times. Therefore it makes sense to have part of the risk aversion level be determined by a cognitive process that runs throughout the organism's life. When it notices it's doing well, it is inspired to strive for even higher levels of accomplishment. When it notices it's doing badly, it is discouraged.

All three levels of granularity seem to be present in humans. We know that happiness has a major genetic component, and some people are just born happier than others. On the other hand, childhood experiences seem to have a major impact on how optimistically one ends up looking at the world. And of course, we all experience periodic ups and downs in our happiness throughout life.

I was previously talking with Will about the degree to which people's happiness might affect their tendency to lean towards negative or positive utilitarianism. We came to the conclusion that people who are naturally happy might favor positive utilitarianism, while naturally unhappy people might favor negative utilitarianism. If this theory of happiness is true, then that makes perfect sense: risk aversion and a desire to avoid pain correspond to negative utilitarianism, and willingness to tolerate pain corresponds to positive utilitarianism.

Note that most Western humans have a far greater access to resources than our ancestors did, so we are likely all far more risk-averse than would be optimal given the environment.
