xuenay: (Default)

Mindmelding: Consciousness, Neuroscience, and the Mind’s Privacy. William Hirstein. Oxford University Press.

I found this book by accident, when somebody on Facebook happened to share a link to its Amazon page. I was intrigued to read the title, and even more intrigued to read the Amazon blurb:

William Hirstein argues that it is indeed possible for one person to directly experience the conscious states of another, by way of what he calls mindmelding. This would involve making just the right connections in two peoples’ brains, which he describes in detail. He then follows up the many other consequences of the possibility that what appeared to be a wall of privacy can actually be breached. Drawing on a range of research from neuroscience and psychology, and looking at executive functioning, mirror neuron work, as well as perceptual phenomena such as blind-sight and filling-in, this book presents a highly original new account of consciousness.

This description sounded very similar to my and Harri Valpola’s paper Coalescing Minds: Brain Uploading-Related Group Mind Scenarios, which was published last year. In that paper, we argued that it would be possible to join two minds together by creating artificial connections between their brains, and that this could allow anything ranging from mere improved communication to a full-blown merger between two minds. Since it seemed like Hirstein was talking about the same thing, I got curious – had this book, published a few months before our paper, already said everything that we argued for, and more?

Fortunately, it turns out that the book and the paper are actually rather nicely complementary. To briefly summarize the main differences, we intentionally skimmed over many neuroscientific details in order to establish mindmelding as a possible future trend, while Hirstein extensively covers the neuroscience but is mostly interested in mindmelding as a thought experiment. We seek to predict a possible future trend, while Hirstein seeks to argue a philosophical position; Hirstein focuses on philosophical implications, while we focus on societal implications. Hirstein talks extensively about the possibility of one person perceiving another’s mental states while both remaining distinct individuals, while we mainly discuss the possibility of two distinct individuals coalescing together into one.

The main purpose of Hirstein’s book is to argue against a position in philosophy of mind which holds that conscious states are necessarily private, that is, only available to a single person. If conscious states were private, that could also be used to argue against materialism, the position that everything is physical, by the following privacy argument:

Premise 1: No physical states are private.

Premise 2: All conscious states are private.

Conclusion: No conscious states are physical states.
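(For the formally inclined, the argument really is valid in the logical sense. Below is a minimal sketch in Lean; the predicate names are my own shorthand, not Hirstein’s, and the point is only that the conclusion follows from the two premises.)

-- A minimal formalization of the privacy argument; the predicate names are mine.
-- The proof term shows that the conclusion follows from the two premises.
theorem privacy_argument {State : Type}
    (Physical Conscious Private : State → Prop)
    (p1 : ∀ s, Physical s → ¬ Private s)    -- Premise 1: no physical states are private
    (p2 : ∀ s, Conscious s → Private s)     -- Premise 2: all conscious states are private
    : ∀ s, Conscious s → ¬ Physical s :=    -- Conclusion: no conscious states are physical
  fun s hConscious hPhysical => p1 s hPhysical (p2 s hConscious)

So the only way to resist the conclusion is to reject one of the premises, which is exactly what Hirstein sets out to do.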

Hirstein seeks to use the possibility of mindmelding to refute this argument. He proposes that it should be possible to link the brains of two people together so that when A experienced something, that experience could be relayed to the brain of B, who would then also experience essentially the same thing. Thus, premise 2 of the privacy argument would be shown to be false.

To support his proposal, Hirstein arrays an impressive amount of neuroscience. I would briefly summarize his argument as follows: the brain employs what are called executive processes, which are responsible for dealing with novel or unanticipated situations:

There is an ongoing debate about what exactly is in the set of executive functions, but the following tend to appear in most lists: attention, remembering, decision-making, planning, task-switching, intending, and inhibiting. Executive processes play a part in our non-routine actions. When we attempt something new, executive processes are required. They are needed when there are no effective learned input-output links. As we get better at a new task, processing moves to other brain areas that specialize in effectively performing routine actions without conscious interruption. Gilbert and Burgess say that, ‘executive functions are the high-level cognitive processes that facilitate new ways of behaving, and optimise one’s approach to unfamiliar circumstances’ (2008, p.110). As Miller and Wallis pithily state it, ‘You do not need executive control to grab a beer, but you will need it to finish college’ (2009, p.99). According to Gilbert and Burgess, ‘we particularly engage such processes when, for instance, we make a plan for the future, or voluntarily switch from one activity to another, or resist temptation: in other words, whenever we do many of the things that allow us to lead independent, purposeful lives’ (2008, p.110). (p. 87)

In order for the executive processes to be able to do their job correctly, they need just the right kind of information. For this purpose, the brain carries out an extensive amount of processing on all the sensory information it receives, creating a kind of “executive summary” of the most relevant content of that information. Executive processes then use that highly-preprocessed data in order to make their decisions. Essentially, conscious states are this “executive summary”, and all the decisions that we consciously choose to make are made by the executive processes, which are the ones perceiving the conscious states.

Colors are one example of the kind of preprocessing that’s done on the sensory data before it’s presented to the executive processes. Light hits our eyes on a variety of different wavelengths, giving our visual system information about the way that light is reflected off various objects. The data about these various reflectance profiles then undergoes a complicated transformation in which it is simplified, and the different objects are labeled with colors that summarize their reflectance profiles. This data, in turn, is useful for making sense of the things that we see: it allows us to tell different objects apart with considerable ease.
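(To make the idea of such a summary concrete, here is a toy sketch in Python. It is purely illustrative and my own invention, not anything from the book: the wavelength bands and the labeling rule are made up, and the point is only that a detailed reflectance profile goes in and a single coarse label comes out.)

    # Toy illustration only: collapse a detailed reflectance profile into a single
    # coarse label, the kind of "executive summary" that downstream processes can
    # act on without ever seeing the raw data.

    BANDS = {"blue": (450, 495), "green": (495, 570), "red": (620, 750)}  # nm, hypothetical

    def summarize_reflectance(profile):
        """profile maps wavelength (nm) -> reflectance in [0, 1]."""
        def band_energy(lo, hi):
            return sum(r for wl, r in profile.items() if lo <= wl < hi)
        # Keep only the label of the strongest band; throw the rest of the detail away.
        return max(BANDS, key=lambda name: band_energy(*BANDS[name]))

    # A surface reflecting mostly long wavelengths gets summarized simply as "red".
    apple = {460: 0.05, 530: 0.10, 640: 0.80, 700: 0.75}
    print(summarize_reflectance(apple))  # -> red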

Emotions are another possible example of the kind of preprocessing that our brains carry out on sensory data before it’s presented to the executive processes. Hirstein doesn’t discuss emotions very much, but my “Avoid misinterpreting your emotions” article from some time back discussed this theory of emotion:

The Information Principle says that emotional feelings provide conscious information from unconscious appraisals of situations. Your brain is constantly appraising the situation you happen to be in. It notes things like a passerby having slightly threatening body language, or conversation with some person being easy and free of misunderstandings. There are countless such evaluations going on all the time, and you aren’t consciously aware of them because you don’t need to be. Your subconscious mind can handle them just fine on its own. The end result of all those evaluations is packaged into a brief summary, which is the only thing that your conscious mind sees directly. That “executive summary” is what you experience as a particular emotional state. The passerby makes you feel slightly nervous and you avoid her, or your conversational partner feels pleasant to talk with and you begin to like him, even though you don’t know why.

Surveying neuroscientific data, Hirstein proposes that the temporal lobes seem to hold the “final stage” of conscious states – data that has undergone all the preprocessing steps, and which is ready to be presented to the executive processes. The executive processes, in turn, are located in the prefrontal cortex, and access the data via thick fiber tracts connecting the two parts of the brain. Hirstein’s mindmelding proposal, then, is that if we could connect the temporal lobes of person A with the prefrontal cortex of person B, A and B could then simultaneously perceive A’s conscious states.

One can compare this to our paper, in which we discussed the possibility of a “reverse split-brain operation”: it is known that severing the axons which connect the two hemispheres of a human brain will produce two different conscious minds, one for each hemisphere. Presumably, if such severed connections could be recreated, the two consciousnesses would merge back together. More speculatively, if artificial connections could be created between the hemispheres of two (or more) distinct humans, then the consciousnesses of those two people would eventually also merge together.

Of course, two people merging together to have only a single consciousness would probably be less useful than having two people who had merged together and had access to each other’s information and knowledge, but also had two separate streams of consciousness. So we postulated that one might construct an exocortex, a prosthetic which mimicked the functions of the brain and which would gradually integrate to become a seamless part of its user’s brain. Once this had happened, the exocortex could be connected to the exocortices of other people, with the exocortex having been built to manage the connection in a way that allowed for information-sharing but prevented the consciousnesses from becoming completely merged. We based our argument for the feasibility of the exocortex on the following three assumptions:

1. There seems to be a relatively unified cortical algorithm which is capable of processing different types of information. The brain seems to start out with a general-purpose algorithm which will gradually specialize for the kind of data it receives. Implement that general-purpose algorithm in an exocortex, and with enough time, it could learn to understand the thoughts of the brains that it was linked to. It could act as a kind of translator between the “mental language” of its user, and the “mental language” employed in other exocortices.

2. We already have a fairly good understanding of how the cerebral cortex processes information and gives rise to the attentional processes underlying consciousness. We have good reason to believe that an exocortex would be compatible with the existing cortex and would integrate with the mind.

3. The cortical algorithm has an inbuilt ability to transfer information between cortical areas. Information is known to move around in the brain. Long-term memories are first formed in the hippocampus but then gradually consolidated in the cerebral cortex; gradual damage to the cortex can cause it to shrink while the patient retains the ability to act normally, as damaged functions are relocated. Once a person was equipped with an exocortex, many of their existing memories and knowledge might gradually move over to it.

Hirstein’s work and ours, then, are nicely complementary: Hirstein does not really cover a full mindmeld at all, while we only briefly touch upon the mere sharing of access to another’s conscious states without a full mindmeld.

The societal implications of mind coalescence were one of the main focuses of our paper: we argued that it might lead to evolutionary scenarios in which individual minds would end up outcompeted, with all of the power accumulating to different group minds. We also suggested that exocortices might allow for mind uploading, transferring a human mind to run on a digital computer. As one’s biological brain gradually degraded and died, its functions could increasingly be transferred to the exocortex, until the individual’s mind was located solely in the exocortex.

In contrast, Hirstein seems content to treat mindmelding as a pure thought experiment, saying nothing about the consequences of the technology actually being developed. Perhaps this is because Hirstein wishes to present mindmelding as a serious philosophical argument, and avoid the stigma of being associated with science fictional speculation. Nonetheless, the style of mindmelding that he presents would have plenty of interesting consequences on its own.

Most obviously, if another person’s conscious states could be recorded and replayed, it would open the doors for using this as entertainment. If it turned out that you couldn’t just record and replay anyone’s conscious experience, but that learning to correctly interpret the data from another brain instead required time and practice, then individual method actors capable of immersing themselves in a wide variety of emotional states might become the new movie stars. Once your brain learned to interpret their conscious states, you could follow them in a wide variety of movie-equivalents, with new actors being hampered by the fact that learning to interpret the conscious states of someone who had only appeared in one or two productions wouldn’t be worth the effort. If mind uploading was available, this might give considerable power to a copy clan consisting of copies of the same actor, each participating in different productions but each having a similar enough brain that learning to interpret one’s conscious states would be enough to give access to the conscious states of all the others.

The ability to perceive various drug- or meditation-induced states of altered consciousness while still having one’s executive processes unhindered and functional would probably be fascinating for consciousness researchers and the general public alike. At the same time, the ability for anyone to experience happiness or pleasure by just replaying another person’s experience of it might finally bring wireheading within easy reach, with all the dangers associated with that.

A Hirstein-style mind meld might possibly also be used as an uploading technique. Some upload proposals suggest compiling a rich database of information about a specific person, and then later using that information to construct a virtual mind whose behavior would be consistent with the information about that person. While creating such a mind based on just behavioral data makes it questionable to what extent the new person would really be a copy of the original, the skeptical argument loses some of its force if we can also include in the data a recording of all the original’s conscious states during various points in their life. If we are able to use the data to construct a mind that would react to the same sensory inputs with the same conscious states as the original did, whose executive processes would manipulate those states in the same ways as the original, and who would take the same actions as the original did, would that mind then not essentially be the same mind as the original mind?

Hirstein’s argumentation is also relevant for our speculations concerning the evolution of mind coalescences. We spoke abstractly about the “preferences” of a mind, suggesting that it might be possible for one mind to extract the knowledge from another mind without inheriting its preferences, and noting that conflicting preferences would be one reason for two minds to avoid coalescing together. However, we did not say much about where in the brain preferences are produced, and what would actually be required for e.g. one mind to extract another’s knowledge without also acquiring its preferences. As the above discussion hopefully shows, some of our preferences are implicit in our automatic habits (the things that we show we value with our daily routines), some in the preprocessing of sensory data that our brains carry out (the things and ideas that are “painted with” positive associations or feelings), and some in the configuration of our executive processes (the actions we actually end up taking in response to novel or conflicting situations). (See also.) This kind of a breakdown seems like very promising material for some neuroscience-aware philosopher to tackle in an attempt to figure out just what exactly preferences are; maybe someone has already done so.

Getting back to the topic of the book itself, a considerable part of Hirstein’s argumentation is focused on things that are probably not of much interest to people who haven’t delved deep into the things that philosophers of mind care about. For example, it is important for Hirstein’s argument that A and B actually have access to the same conscious state, as opposed to B only having a copy of A’s conscious state, so he spends time establishing this, which I personally felt was somewhat uninteresting. Considerable attention is also given to other similar technical points of philosophy throughout the book. Some of these I did find rather interesting: for instance, I had previously been rather persuaded by the “there is no such thing as a self” school of thought, but Hirstein makes a convincing argument for identifying the self with the executive functions, and also mounts a good defense against the homunculus accusations that this might invite. Others will probably find this whole line of argument meaningless.

The bulk of the book, however, is focused on establishing the philosophical and neuroscientific plausibility of mindmelding. So I would in any case recommend this book for anyone interested in seeing a detailed argument for how one variety of mindmelding could be accomplished. And if you already have a strong interest in philosophy of mind, all the better.


xuenay: (sonictails)

Do you feel like your books are static, passive objects, just sitting on a shelf and waiting for you to bring them to life? Think again.

As long as there is a light source in your room, light is constantly being reflected off any exposed books in the room – from their covers if they’re closed, their pages if they’re open. That light hits the surface of the book in a constant stream, and the surface transforms it, encoding the information contained within the surface into a signal as it selectively absorbs some of the light and reflects the rest away.

The form and shape of the book’s letters are now contained within the light that gets reflected off, broadcast all across the room. If you are in a room with many books, they are all constantly bombarding you with their message, all the different waves of information hitting you countless times per second. Like a radio station that keeps transmitting whether or not anyone tunes in, those signals keep coming even if you don’t pay attention to them. When you finally do, your eyes transform one of the patterns of light into a pattern of electricity, the raw signal undergoing a series of further transformations as your visual cortex extracts the information the light contained. Like a truck containing boxed goods, from which one first unloads the boxes and then opens the boxes to reveal their content, the signal of light first gives up the information about the letters, and then the information about the shapes and forms of the letters gives way to reveal the semantic content of the writing, the actual meaning of the words. You might never even consciously see the physical form of the writing as that meaning comes to life within your brain, igniting intricate networks of memories and associations, plunging you into a different world.

Our ancestors – both humans and the early creatures which eventually evolved into humans – lived a life of predator and prey, a life where some objects in our environment were dangerous or at least capable of running away, requiring us to take immediate action. It is because of the need to instantly know whether we should consider acting that we automatically classify everything as either alive or dead, animated or static.

But “animated or static” is just an abstraction that our brain imposes on its model of reality, a classification scheme that has often been useful for our purposes. Look closer, at atomic and subatomic levels, and everything is in perpetual motion: the universe is constantly recomputing itself, as the laws of physics dictate. Tiny particles are dancing and vibrating, information is being transmitted, received and transformed. The world around us, even so-called dead matter, is ever alive.


xuenay: (Default)

Unfit for the Future: The Need for Moral Enhancement. Ingmar Persson & Julian Savulescu. Oxford University Press.

The core thesis of Unfit for the Future is that human morality evolved to allow cooperation and altruism in small groups, but that we today face challenges requiring extensive global coordination. Challenges such as weapons of mass destruction and climate change require both individual humans and nation-states to make various kinds of sacrifices for the benefit of all, but actually getting everyone to make such sacrifices is currently very unlikely. Humans do have moral emotions such as a sense of justice and fairness that cause them to willingly make sacrifices in order to benefit those they know, but international cooperation requires trusting and helping faceless strangers – and humans have also evolved to be naturally suspicious or even xenophobic towards people outside their tribe. Since traditional moral education isn’t enough to overcome these challenges, we need to engage in “moral enhancement” and alter our biological moral dispositions.

The tone of the book is very academic and rational: there are few if any appeals to emotion, and the style of argument is almost purely logical reasoning from first principles. This makes the authors’ train of thought relatively clear to follow, though it also makes for rather dry reading, and things are occasionally expressed in needlessly convoluted ways.

The best part of the book is the explanation of the coordination challenges involved with international cooperation, of why rational self-interest isn’t enough to overcome the challenges, and how our commonsense morality has evolved to solve some of these problems. The reader is assumed to already be mostly on board with the notion of risks from climate change and WMDs: some time is spent on explaining these risks, but probably not enough to sell the topic as a really extreme one for someone new to it.

Surprisingly, the book spends relatively little time (one chapter) talking about actual moral enhancement, and few concrete enhancement methods are proposed. Rather, there are a few examples of developing technologies that could be useful for moral enhancement, and a suggestion that more research be dedicated to developing more enhancement methods. Some criticisms of moral enhancement are discussed and argued against. The book concentrates on establishing the need for moral enhancement, not on proposing specific enhancement methods.

The main weakness of the book is that it does not always seem to engage with the strongest possible opposing arguments. A minor thesis that’s offered is that we should be ready to give up our privacy in order to prevent terrorists with WMDs, because of the untold damage that those terrorists could cause. The authors move to dismiss people having any moral right to privacy in only four (!) pages, and do so by considering two possible defenses for privacy: that violating privacy requires violating property rights, and that having one’s privacy violated makes one uncomfortable. The former is rendered irrelevant by the possibility of privacy violations that do not violate property rights (such as mind-reading devices or scanners that could see and hear through walls). The latter is rejected on the grounds that if you could forbid people from knowing something about you simply because it made you feel uncomfortable, “you could acquire very extensive rights against others just by being extremely sensitive about what others think of you”.

Leaving aside the fact that the latter argument is excessively simplistic, there is absolutely no discussion of the fact that privacy gives people the chance to do harmless things for which they might nonetheless be discriminated against. Homosexual acts are the classic example, but even if one made the (false) assumption that liberal democracies – in the context of which the authors mostly frame their discussion – no longer exhibited homophobia, there are plenty more examples to be found. Perhaps a person became sexually aroused by looking at (entirely non-sexual) pictures of children or animals, or enjoyed violent pornography, but would nevertheless never harm a soul. Ironically, a major part of the authors’ argument is that it is easier to destroy than to create, and that we find potential harms to be more pressing than an equivalent amount of potential gain. It is exactly because of such reasons that people who were thought to be possibly dangerous would be harshly and unfairly discriminated against – because even if the risk of them actually harming anyone were small, few people would be willing to take that risk.

Nor do the authors discuss the fact that a lack of privacy could lead to excessive self-censorship, with even people who wouldn’t be discriminated against for acting according to their desires restricting their behavior just in case (again, the potential for harms outweighing the potential for gains). And once people could perfectly observe the behavior of everyone else, and see that everybody was acting conservatively, then even behavior that was previously within normal bounds might come to be seen as suspicious, leading to an ever-more conformist, cautious, and unhappy society. The human suffering of such a development gives reason to believe in a strong moral right to privacy, and the suffering in question might easily outweigh the suffering from even several nuclear terrorist attacks. But aside from briefly mentioning that a fear of terrorism might cause some ethnic minorities to be unfairly discriminated against, the authors consider none of this.

It might also be somewhat distracting for some that the authors are clearly left-wing, which leads them to occasionally make ideological claims which are not very well defended. For example, the authors briefly mention prevailing economic inequality as an example of one of humanity’s moral failings, citing differences between the poorest and richest nations as well as the poorest and richest people within some Western countries. None of the arguments that economic inequality of this form is not necessarily a bad thing are addressed. Fortunately, for the most part the left-wing digressions are minor points, and possible disagreement with them does not detract from the book’s major theses.

Overall, the book makes a nice argument for its core thesis, but could have been made much stronger by improving the strawmannish discussion of privacy, removing or better supporting the ideologically contentious points, arguing more thoroughly for the risk from WMDs, and spending more time discussing moral enhancement itself, not just the need for it.


xuenay: (Default)
Our moral reasoning is ultimately grounded in our moral intuitions: instinctive "black box" judgements of what is right and wrong. For example, most people would think that needlessly hurting somebody else is wrong, just because. The claim doesn't need further elaboration, and in fact the reasons for it can't be explained, though people can and do construct elaborate rationalizations for why everyone should accept the claim. This makes things interesting when people with different moral intuitions try to debate morality with each other.

---

Why do modern-day liberals (for example) generally consider it okay to say "I think everyone should be happy" without offering an explanation, but not okay to say "I think I should be free to keep slaves", regardless of the explanation offered? In an earlier age, the second statement might have been considered acceptable, while the first one would have required an explanation.

In general, people accept their favorite intuitions as given and require people to justify any intuitions which contradict those. If people have strongly left-wing intuitions, they tend to consider right-wing intuitions arbitrary and unacceptable, while considering left-wing intuitions so obvious as to not need any explanation. And vice versa.

Of course, you will notice that in some cultures specific moral intuitions tend to dominate, while other intuitions dominate in other cultures. People tend to pick up the moral intuitions of their environment: some claims go so strongly against the prevailing moral intuitions of my social environment that if I were to even hypothetically raise the possibility of them being correct, I would be loudly condemned and feel bad for even thinking that way. (Related: Paul Graham's What you can't say.) "Culture" here is to be understood as being considerably more fine-grained than just "the culture in Finland" or "the culture in India" - there are countless subcultures even within a single country.

---

Social psychologists distinguish between two kinds of moral rules: ones which people consider absolute, and ones which people consider to be social conventions. For example, if a group of people all bullied and picked on one of them, this would usually be considered wrong, even if everyone in the group (including the bullied person) thought it was okay. But if there's a rule that you should wear a specific kind of clothing while at work, then it's considered okay not to wear those clothes if you get special permission from your boss, or if you switch to another job without that rule.

The funny thing is that many people don't realize that the distinction of which is which is itself a moral intuition which varies from person to person, and from culture to culture. Jonathan Haidt writes in The Righteous Mind: Why Good People Are Divided by Politics and Religion of his finding that while the upper classes in both Brazil and the USA were likely to find violations of harmless taboos to be violations of social convention, the lower classes in both countries were more likely to find them violations of absolute moral codes. At the time, moral psychology had mistakenly thought that "moving on" to a conception of right and wrong that was only grounded in concrete harms would be the way that children's morality naturally develops, and that children discover morality by themselves instead of learning it from others.

So moral psychologists had mistakenly been thinking about some moral intuitions as absolute instead of relative. But we can hardly blame them, for it's common to fail to notice that the distinction between "social convention" and "moral fact" is variable. Sometimes this is probably done on purpose, for rhetorical reasons - it's a much more convincing speech if you can appeal to ultimate moral truths rather than to social conventions. But just as often people simply don't seem to realize the distinction.

(Note to international readers: I have been corrupted by the American blogosphere and literature, and will therefore be using "liberal" and "conservative" mostly to denote their American meanings. I apologize profusely to my European readers for this terrible misuse of language and for not using the correct terminology like God intended it to be used.)

For example, social conservatives sometimes complain that liberals are pushing their morality on them, by requiring things such as not condemning homosexuality. To liberals, this is obviously absurd - nobody is saying that the conservatives should be gay; people are just saying that people shouldn’t be denied equal rights simply because of their sexual orientation. From the liberal point of view, it is the conservatives who are pushing their beliefs on others, not vice versa.

But let's contrast "oppressing gays" to "banning polluting factories". Few liberals would be willing to accept the claim that if somebody wants to build a factory that causes a lot of harm to the environment, he should be allowed to do so, and to ban him from doing it would be to push the liberal ideals on the factory-owner. They might, however, protest that to prevent them from banning the factory would be pushing (e.g.) pro-capitalism ideals on them. So, in other words:

Conservatives want to prevent people from being gay. They think that this just means upholding morality. They think that if somebody wants to prevent them from doing so, that somebody is pushing their own ideals on them.

Liberals want to prevent people from polluting their environment. They think that this just means upholding morality. They think that if somebody wants to prevent them from doing so, that somebody is pushing their own ideals on them.

Now my liberal readers (do I even have any socially conservative readers?) will no doubt be rushing to point out the differences in these two examples. Most obviously the fact that pollution hurts other people than just the factory owner, like people at their nearby summer cottages who like seeing nature in a pristine and pure state, so it's justified to do something about it. But conservatives might also argue that openly gay behavior encourages being openly gay, and that this hurts those in nearby suburbs who like seeing people act properly, so it's justified to do something about it.

It's easy to say that "anything that doesn't harm others should be allowed", but it's much harder to rigorously define harm, and liberals and conservatives differ in when they think it's okay to cause somebody else harm. And even this is probably conceding too much to the liberal point of view, as it accepts a position where the morality of an act is judged primarily in terms of the harms it causes. Some conservatives would be likely to argue that homosexuality just is wrong, the way that killing somebody just is wrong.

My point isn't that we should accept the conservative argument. Of course we should reject it - my liberal moral intuitions say so. But we can't in all honesty claim an objective moral high ground. If we are to be honest with ourselves, we will accept that yes, we are pushing our moral beliefs on them - just as they are pushing their moral beliefs on us. And we will hope that our moral beliefs win.

Here's another example of "failing to notice the subjectivity of what counts as social convention". Many people are annoyed by aggressive vegetarians, who think anyone who eats meat is a bad person, or by religious people who are actively trying to convert others. People often say that it's fine to be vegetarian or religious if that's what you like, but you shouldn't push your ideology onto others and require them to act the same.

Compare this to saying that it's fine to refuse to send Jews to concentration camps, or to let people die in horrible ways when they could have been saved, but you shouldn't push your ideology onto others and require them to act the same. I expect that would sound absurd to most of us. But if you accept a certain vegetarian point of view, then killing animals for food is exactly equivalent to the Holocaust. And if you accept a certain religious view saying that unconverted people will go to Hell for an eternity, then not trying to convert them is even worse than letting people die in horrible ways. To say that these groups shouldn't push their morality onto others is to already push your own ideology - which says that decisions about what to eat and what to believe are just social conventions, while decisions about whether to kill humans and save lives are moral facts - onto them.

So what use is there in debating morality, if we have so divergent moral intuitions? In some cases, people have such widely differing intuitions that there is no point. In other cases, their intuitions are similar enough that they can find common ground, and in that case discussion can be useful. Intuitions can clearly be affected by words, and sometimes people do shift their intuitions as a result of having debated them. But this usually requires appealing to, or at least starting out from, some moral intuition that they already accept. There are inferential distances involved in moral claims, just as there are inferential distances involved in factual claims.

So what about the cases when the distance is too large, when the gap simply cannot be bridged? Well, in those cases, we will simply have to fight to keep pushing our own moral intuitions onto as many people as possible, and hope that they will end up having more influence than the unacceptable intuitions. Many liberals probably don't want to admit to themselves that this is what we should do in order to beat the conservatives - it goes so badly against the liberal rhetoric. It would be much nicer to pretend that we are simply letting everyone live the way they want to, and that we are fighting to defend everyone's right to do so.

But it would be more honest to admit that we actually want to let everyone live the way they want to, as long as they don't do things we consider "really wrong", such as discriminating against gays. And that in this regard we're no different from the conservatives, who would likewise let everyone live the way they wanted to, as long as they didn't do things the conservatives consider "really wrong".

Of course, whether or not you'll want to be that honest depends on what your moral intuitions have to say about honesty.
xuenay: (Default)
"...some method for discounting future and distant consequences is necessary. It is possible, perhaps, that the degree of discounting would exactly correspond to the increasing degree of uncertainty that goes with predicting remote events. But there is no simple formula that relates time or distance to uncertainty—some events a year from now or 5,000 kilometers from here may be much more predictable than other events only one week from now or 100 meters away." (Wallach & Allen, Moral Machines)

This bit made me think. Wallach & Allen state correctly that the relation between (physical or temporal) distance and uncertainty is not a simple one, and some things which happen far away are more predictable than some things that happen nearby. But in what kind of a universe would that statement be incorrect? If it always made sense to consider events happening a week from now 20% more predictable than events happening a year from now, or events happening 100 meters away 20% more predictable than events happening 5,000 kilometers away, regardless of the type of the event, what would that imply about the world one lived in?
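(Just to make the hypothetical concrete, here is a toy sketch in Python of what such a "simple formula" might look like. The constants and the functional form are made up by me; the only point is that in this imagined universe, predictability would depend on nothing but temporal and spatial distance.)

    # Toy sketch with made-up constants: in the hypothetical universe, predictability
    # is a function of temporal and spatial distance alone, regardless of what kind
    # of event is being predicted.

    def predictability(days_ahead, km_away):
        # Each year of temporal distance and each 5,000 km of spatial distance
        # knock predictability down by a fixed factor.
        return (0.8 ** (days_ahead / 365.0)) * (0.8 ** (km_away / 5000.0))

    # In our actual universe this fails, because the same inputs can describe both
    # a solar eclipse a year from now (very predictable) and a coin flip next week (not).
    print(predictability(7, 0.1))       # "one week from now, 100 meters away"
    print(predictability(365, 5000))    # "a year from now, 5,000 kilometers away"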

It seems to me that this would have to be a world where the laws of nature were highly local. Things such as the speed of light and the boiling point of water would have to vary smoothly, depending on where and when you were. Actually, whether the universe had things such as "light", "water", or "boiling" in the first place would also vary by location. There are some things that we probably need to keep constant in order to avoid a logical contradiction, though. For instance, since we stipulated that events happening 100 meters away should be 20% more predictable than events happening 5,000 kilometers away, the geometry of the universe should not change as to make the concept of distance meaningless, nor should the axioms of arithmetic be changed at random.

Would it be possible to live in such a universe? Certainly if the discount rate were high enough, the universe would be so chaotic as to make all advance planning pointless. Not to mention the fact that organisms wouldn't live for very long if their blood might literally begin to boil at any time. But let's stipulate that the rate of change was slow, and that the changes to the organisms living in the universe were generally such that they didn't outright kill them or drive them insane. Note that we are now again forced to introduce some predictability that is not a direct function of distance.

Things would perhaps get easier if we dropped the bit about change over time. The laws of nature would be spatially local, so traveling 5,000 kilometers would get you to a place where things worked quite differently, but things wouldn't change if you stayed put. For this, we'll obviously have to limit ourselves to laws of physics which allow for time and space to be separated in such a way. Now we don't need to protect the organisms living in this universe from its changes anymore - organisms in different regions will simply evolve to exploit their local laws of nature, and to avoid going into places where they cannot survive anymore.

Some of the regions in such a universe would be teeming with life (though whether we'd recognize it as life is another matter), while other regions would be desolate, incapable of supporting any kind of complex structure. Journeying far from your home would let you see things that were literally impossible back at your place of birth, but to travel far enough would mean certain death. Although you could never directly witness the wonders of the regions that were too different from yours, you might find creatures that lived at the borders of such regions. They could travel farther away than you, although they could not come to the place you were from; and if you could find a way to communicate, the two of you might be able to swap tales. You could tell them of the things you had seen, and in turn, be told of wonders you could imagine, but never quite comprehend.
xuenay: (Default)
During my more pessimistic moments, I grow increasingly skeptical about our ability to know anything.

Take science. Academia is supposed to be our most reliable source of knowledge, right? And yet, a number of fields seem to be failing us. No results should really be believed before they've been replicated several times. Yet, of the 45 most highly regarded studies within medicine suggesting effective interventions, 11 haven't been retested, and 14 have been shown to be convincingly wrong or exaggerated. John Ioannidis suggests that up to 90 percent of the published medical information that doctors rely on is flawed - and the medical community has for the most part accepted his findings. ( http://www.theatlantic.com/magazine/archive/2010/11/lies-damned-lies-and-medical-science/8269/ ) His most cited paper, "Why Most Published Research Findings Are False", has been cited almost a thousand times.

Psychology doesn't seem to be doing that much better. Last May, the Journal of Personality & Social Psychology refused to publish ( http://psychsciencenotes.blogspot.com/2011/05/failing-to-replicate-bems-ability-to.html ) a failed replication of the parapsychology paper they published earlier. "The reason Smith gives is that JPSP is not in the business of publishing mere replications - it prioritises novel results, and he suggests the authors take their work to other (presumably lesser) journals. This is nothing new - flagship journals like JPSP all have policies in place like this. [...] ...major journals simply won't publish replications. This is a real problem: in this age of Research Excellence Frameworks and other assessments, the pressure is on people to publish in high impact journals. Careful replication of controversial results is therefore good science but bad research strategy under these pressures, so these replications are unlikely to ever get run. Even when they do get run, they don't get published, further reducing the incentive to run these studies next time. The field is left with a series of "exciting" results dangling in mid-air, connected only to other studies run in the same lab."

This problem is not unique to psychology - all fields suffer from it. But while we are on the subject of psychology, the majority of its results are from studies conducted on Western college students, who have been presumed to be representative of humanity. "A recent survey by Arnett (2008) of the top journals in six sub-disciplines of psychology revealed that 68% of subjects were from the US and fully 96% from ‘Western’ industrialized nations (European, North American, Australian or Israeli). That works out to a 96% concentration on 12% of the world’s population (Henrich et al. 2010: 63). Or, to put it another way, you’re 4000 times more likely to be studied by a psychologist if you’re a university undergraduate at a Western university than a randomly selected individual strolling around outside the ivory tower." Yet cross-cultural studies indicate a number of differences between industrialized and "small-scale" societies, in areas such as "visual perception, fairness, cooperation, folkbiology, and spatial cognition". There are also a number of contrasts between "Western" and "non-Western" populations "on measures such as social behaviour, self-concepts, self-esteem, agency (a sense of having free choice), conformity, patterns of reasoning (holistic v. analytic), and morality" ( http://neuroanthropology.net/2010/07/10/we-agree-its-weird-but-is-it-weird-enough/ ; http://journals.cambridge.org/action/displayAbstract?fromPage=online&aid=7825833 ). Many supposedly "universal" psychological results may actually only be "universal" to US college students.

In any field, quantitative studies require intricate knowledge about statistics and a lot of care to get right. Academics are pressed to publish things at a fast pace, and the reviewers of scientific journals often have relatively low standards. The net result is that the researchers have neither the time nor the incentive to conduct their research with the necessary care.

Qualitative research doesn't suffer from this problem, but it suffers from the obvious problem of often having a limited sample group and difficult-to-generalize findings. Many social sciences that are heavily based on qualitative methods outright state that carrying out an objective analysis, where the researcher's personal attributes and opinions don't influence the results, is not just difficult but impossible in principle. At least with quantitative sciences, it may be possible to convincingly prove results wrong. With qualitative sciences, there is much more wiggle room.

And there's plenty of room for the wiggling to do a lot of damage even in the quantitative sciences. From the previous article on John Ioannidis:

"Simply put, if you’re attracted to ideas that have a good chance of being wrong, and if you’re motivated to prove them right, and if you have a little wiggle room in how you assemble the evidence, you’ll probably succeed in proving wrong theories right. His model predicted, in different fields of medical research, rates of wrongness roughly corresponding to the observed rates at which findings were later convincingly refuted: 80 percent of non-randomized studies (by far the most common type) turn out to be wrong, as do 25 percent of supposedly gold-standard randomized trials, and as much as 10 percent of the platinum-standard large randomized trials. The article spelled out his belief that researchers were frequently manipulating data analyses, chasing career-advancing findings rather than good science, and even using the peer-review process, in which journals ask researchers to help decide which studies to publish, to suppress opposing views. "You can question some of the details of John’s calculations, but it’s hard to argue that the essential ideas aren’t absolutely correct," says Doug Altman, an Oxford University researcher who directs the Centre for Statistics in Medicine."

Of course, all of this is not to say that science wouldn't be good for anything. I'm typing this on a computer that obviously works, in an apartment built by human hands, surrounded by countless technological widgets. The more closely related a science is to a branch of engineering, the more likely it is that it is basically right. Its ideas are constantly and rigorously being tested in a way that actually incentivizes being right, not just publishing impressive-looking studies. The farther a science is from engineering and from having practical applications that can be tested at once, the more likely it is that it's just full of nonsense.

Take governmental institutions. Academia, at least, still has some incentive to seek the truth. Meanwhile, politicians have an incentive to look good to voters, who by and large do not care about the truth. The issues that citizens care the most strongly about tend to be the issues that they know the least about, and often they do not even know the political agendas of the parties or politicians that they vote for. For the average voter, who has very little influence on actual decisions but who can take a lot of pleasure from believing things that are actually pleasant to believe, remaining ignorant is actually a rational course of action. Statements that sound superficially good or that appeal to the prejudices of a certain segment of the population are much more important for politicians than actually caring about the truth. Often, even considering a politically unpopular opinion to be possibly true is thought to be immoral and suggestive of a suspicious character.

And various governmental institutions, from academics funded by government money to supposedly neutral public agencies, are all susceptible to pressures from above to sound good and produce pleasing results. The official recommendations of any number of government agencies can be the result of political compromise as much as anything else, and researchers are routinely hired to act as the politicians' warriors ( http://www.overcomingbias.com/2011/01/academics-as-warriors.html ). Even seemingly apolitical institutions like schools and the police may fall victim to the pressure to produce good results and start reporting statistics and results that do not reflect reality. (For a particularly good illustration of this, watch all five seasons of The Wire, possibly the best television series ever made.)

Take the media. Is there any reason to expect the media to do much better? I don't see why there would be. Compared to academia, journalists are under even more time pressure to produce articles, have even less in the way of rigorous controls on truthfulness, and have even more of an incentive to focus on big eye-catching headlines. Even for the journalists who actually follow strict codes of ethics, the incentives for sloppy work are strong. Anybody who has expertise in pretty much any field that's been reported on will know that what's written often bears very little resemblance to reality.

Some time ago, there were big claims about how Twitter was powering revolutions and protests in a number of authoritarian countries. Many of us have probably accepted those claims as fact. But how true are they, really?

"In the Iranian case, meanwhile, the people tweeting about the demonstrations were almost all in the West. 'It is time to get Twitter’s role in the events in Iran right,' Golnaz Esfandiari wrote, this past summer, in Foreign Policy. 'Simply put: There was no Twitter Revolution inside Iran.' The cadre of prominent bloggers, like Andrew Sullivan, who championed the role of social media in Iran, Esfandiari continued, misunderstood the situation. 'Western journalists who couldn’t reach - or didn’t bother reaching? - people on the ground in Iran simply scrolled through the English-language tweets post with tag #iranelection,' she wrote. 'Through it all, no one seemed to wonder why people trying to coordinate protests in Iran would be writing in any language other than Farsi.'" ( http://www.newyorker.com/reporting/2010/10/04/101004fa_fact_gladwell )

Take the Internet. Online, we are increasingly living in filter bubbles ( http://www.ted.com/talks/eli_pariser_beware_online_filter_bubbles.html ; https://en.wikipedia.org/wiki/Filter_bubble ), where the services we use attempt to personalize the information we read to what they think we want to see. Maybe you've specifically gone to the effort of including both liberals and conservatives as your Facebook friends, as you want to be exposed to the opinions of both. But if you predominantly click on the liberal links, then eventually the conservative updates will be invisibly edited out by Facebook's algorithms, and you will only see liberal updates in your feed. Various sites are increasingly using personalization techniques, trying to only offer us content they think we want to see - which is often the content most likely to appeal to our existing opinions.
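(As a toy sketch of the kind of click-driven filtering described above, consider a feed that simply ranks items by how often you've clicked on their source. This is my own illustration in Python, not a description of any real site's actual algorithm.)

    # Toy sketch: rank feed items purely by how often you have clicked each source.
    from collections import defaultdict

    clicks = defaultdict(int)  # source -> number of times you've clicked it

    def record_click(source):
        clicks[source] += 1

    def rank_feed(items, top_n=5):
        """items is a list of (source, headline) pairs; show only the top_n."""
        ranked = sorted(items, key=lambda item: clicks[item[0]], reverse=True)
        return [headline for _, headline in ranked[:top_n]]

    # If you mostly click the liberal links, the conservative updates quietly drop
    # out of the visible part of the feed, even though you never asked for that.
    for _ in range(10):
        record_click("liberal_friend")
    record_click("conservative_friend")
    feed = [("liberal_friend", "liberal post %d" % i) for i in range(5)]
    feed += [("conservative_friend", "conservative post %d" % i) for i in range(5)]
    print(rank_feed(feed))  # only the liberal posts make the cut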

Take yourself. Depressed by all of the above? Think you should only trust yourself? Unfortunately, that might very well produce even worse results than trusting science. We are systematically biased to favorably misremember events, only seek evidence confirming our beliefs, and interpret everything in our own favor. Our conscious minds may not be evolved to look for the truth at all, but to choose, out of the various defensible positions, the one that most favors ourselves. ( http://lesswrong.com/lw/8gv/the_curse_of_identity/ ; http://lesswrong.com/tag/whyeveryonehypocrite ) Our minds run on corrupted hardware: even as we think we are trying to impartially look for the truth, other parts of our brains are working hard to give us that impression while hiding the actual biased thought processes we engage in. We have conscious access to only a small part of our thought processes, and have to rely on vast amounts of information prepared by cognitive mechanisms whose accuracy we have no way of verifying directly. Science, at least, has _some_ safeguards in place that attempt to counter such mechanisms - in most cases, we will still do best by relying on expert opinion.

"But if you plan to mostly ignore the experts and base your beliefs on your own analysis, you need to not only assume that ideological bias has so polluted the experts as to make them nearly worthless, but you also need to assume that you are mostly immune from such problems!" ( http://www.overcomingbias.com/2011/02/against-diy-academics.html )

----

Most of the things I know are probably wrong: with each thing I think I learn, I might be learning falsehoods instead. Because the criteria for an idea catching on and for an idea being true are different, the ideas that a person is more likely to hear about are the ones that are more likely to be wrong. Thus most of the things I run across in my life (and accept as facts) will be wrong.

And of course, I'm quite aware of the irony in that I have here appealed to a number of sources, all of which might very well be wrong. I hope I'm wrong about being wrong, but I can't count on it.

(Essay also cross-posted to Google Plus.)
xuenay: (Default)
Recently I've been trying to become consistently more happy and suffer less. An important component of this involves reaching a state of equanimity - "a state of mental or emotional stability or composure arising from a deep awareness and acceptance of the present moment", to use the Wikipedia definition. Although I have several techniques for overcoming negative feelings, it often happens that I'm simply not motivated enough to use them. In other words, I feel bad, and I know I could make myself feel better with some effort, but I just don't feel like mustering that effort.

By contrast, if I've managed to reach a state of equanimity, managing and dissolving negative feelings is something that happens almost on its own. While I'm not immune to emotional hurt, say, it's much easier to take care of. Things like practicing mindfulness on any sort of discomfort become almost automatic when I'm equanimous.

Getting into equanimity isn't always easy, even when I want to. Exercise and cold showers make me feel physically good, which helps. Ultimately, though, I need to think the right way.

There are a number of thoughts that I've noticed help me get into a state of equanimity. Not every one always works, which is why I've developed a number of them. If I have access to them all, usually at least one will work.

During the last month or two, my Enlightenment progress has stalled, and on most days I haven't been equanimous at all. Part of this, I think, has been because I forgot pretty much all of these thoughts. Every now and then some of them have come back to me, and sometimes this has helped for a day or two before it stopped working again. I finally realized I needed to compile a list of all such thoughts that I've used. This should help me to always have available *some* thought that might work in that state of mind.

I've divided these in three categories.

No self: Negative emotions arise from drawing a boundary between self and non-self. When one abandons the thought of a separate self that has to be defended from a hostile external world, emotions such as fear or uncertainty vanish.

No time: The need to defend yourself only exists if there is a chance that things will get worse in the future. Likewise, being impatient about something, or wanting desperately to experience something, only makes sense if it is combined with a notion of time passing. When one abandons a time-centered perspective and concentrates on the present, emotions such as fear or impatience vanish. When the present is the only moment that exists, my thought often goes, I should take heed and enjoy it.

No care: Suffering arises from identifying so strongly with your emotions that you cannot resolve attention-allocation conflicts. If you have a strong emotional attachment to eating expensive chocolate bananas on the one hand, and to avoiding all chocolate on principle on the other, you cannot reason your way out of such a conflict. When one stops identifying with one's emotions and instead embraces them as useful feedback, the suffering related to negative emotions vanishes.

And here are the actual thoughts. Although listed as separate, some of these are overlapping and some build on each other. In particular, several of the "no time" theories presume parts of the "no self" theories. Some might also seem to somewhat contradict each other, but I don't think they ultimately do: they're simply based on different levels of analysis.

I don't really have the space or energy to comprehensively explain these all, so I'm not sure how much sense they will make to people. Still, maybe someone will find something useful here nonetheless.

- No self, psychological: There is no Cartesian Theater or homunculus sitting in the center of the brain and running things. To take some specific part of the brain and call it "THE self" is not scientifically justified. Instead, there is only a vast collection of different subsystems, producing quite a variety of selves.

- No self, Occam's Razorical: It makes little sense to talk of an observer in the brain that is the one that observes everything. What would the positing of such an observer add to any theories? It makes more sense to say that there are various cognitive algorithms, which produce qualia as a side-effect of being run. Instead of there existing somebody who observes all the qualia produced by the brain, there are only the qualia which observe themselves and then cease to exist. If so, it makes little sense to identify with the qualia produced by my brain in particular. Instead I can identify with the qualia of all life everywhere. (I previously wrote about this view here, under "the self as how the algorithm feels from the inside".)

- No self, system-theoretical: To speak of a 'self' as separate from the environment makes little sense. My identity is defined by my environment. If all of my physical properties were held constant, you could make me think or do anything by choosing an appropriate environment to match. I'm part of a vast distributed cognitive system, and drawing the boundaries of self strictly around the physical shell housing my body makes little sense. (I previously wrote about this view here, under "the self as lack of personal boundaries".)

- No time, psychological: My mind can only act in the present. I can imagine the future, or remember the past, but both of these involve thought processes that operate in the now. I live in an eternal present.

- No time, physical multiverse: Depending on which Tegmarkian multiverses are real, all physically possible worlds exist or all logically possible worlds exist. Then, no matter what I wish to experience or what I fear, in some part of the multiverse I am already experiencing it. If I identify not with a specific observer but with qualia, then I'll know that I already have everything I could ever wish for, as well as already suffering from everything I could ever dread.

- No time, physical block: In a block universe conception of time, the whole universe already exists as an unmoving four-dimensional block. Time does not pass in the sense of the current me ceasing to exist and being replaced with another me after a moment passes: instead, this me, and all the other mes, exist eternally.

- No time, logical: If I identify with specific qualia instead of specific observers, then the qualia that "I am experiencing" (rather, the qualia which I am) at this very moment is the only qualia which I can be. Anything else would be a different qualia. Therefore, the me that exists at this very moment is the only logically possible one that I can be.

- No care, psychological: Our emotional reactions to anything are just an interpretative layer imposed by our brain, our emotions in general being a mechanism to guide our actions. They do not exist outside our brain. There is no inherent reason why I should react to something with anger, to something else with fear, and to something else with joy. In principle, I can choose to feel any emotion in conjunction with anything that I do or experience. (I previously discussed this view here.)

- No care, projective: All emotions exist within me. To think that somebody external pressures me, say, is incorrect to the extent that it assumes an external force. What is happening is that others are activating processes that reside within me, and to describe that as them pressuring me is projection.

- No care, philosophical: I can dis-identify with any thoughts or emotions that come into my mind. Instead of saying "I am angry", I can say "I'm hosting a feeling of anger as a visitor in my mind right now". I have desires, emotions and thoughts, but I am not my desires, emotions or thoughts. (This is the basis of at least some sort of mindfulness practice, which I previously discussed here.)

----

These might give you the impression that nothing matters and you might as well lie in bed until you die. Not so. Even if every possible experience exists, not all of them exist in the same proportion. If they did, we would not observe the kind of regular, ordered universe that we do, but instead a chaotic, unpredictable one [1]. Therefore our actions still matter and have consequences - it all adds up to normality.

It is still meaningful for me to have goals which I seek to accomplish - even if it were logically, psychologically and physically impossible for "this" particular entity to experience their completion, some "other" entity will still reap their benefits. (Our language is not very well designed to handle self-lessness.) And of course, if I identify with all the qualia experienced by all sentient life everywhere in the world, the fact that this particular set of qualia will only ever be this set doesn't matter. I want my efforts to be happy and free of suffering to have as big an effect as possible.

I think I'll stop here, in case I still have the occasional reader or two who considers me somehow sane.



[1] I should be more specific here. Yes, if all possible experiences exist, then it is logically necessary that *some* of those experiences would still be about a regular, predictable universe, regardless of whether the universe actually was chaotic or not. But only a small number of such experiences would exist, whereas far more of them would exist under a more regular weighting. Therefore, given that "I" observe a regular universe, the subjective probability that I exist in one is higher.
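To spell out the step from "more such experiences exist" to "higher subjective probability", here's a rough sketch of the same argument in odds form - the notation is entirely mine and purely illustrative, not something taken from the sources linked below:

```latex
% Treat "the universe is regular" and "the universe is chaotic" as hypotheses,
% and "I am having an orderly, predictable experience" as the observation.
\[
\frac{P(\text{regular}\mid\text{orderly experience})}
     {P(\text{chaotic}\mid\text{orderly experience})}
= \frac{P(\text{orderly experience}\mid\text{regular})}
       {P(\text{orderly experience}\mid\text{chaotic})}
\times
  \frac{P(\text{regular})}{P(\text{chaotic})}
\]
% If nearly all experiences in a regular universe are orderly, while under a
% chaotic weighting only a tiny fraction are, the likelihood ratio is large,
% so observing order shifts the odds strongly toward the regular hypothesis.
```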

Regardless of what kind of theory we select, it has to be one that still allows probability theory to be meaningful. If it didn't, then nothing we did would matter - and we don't want that, now do we? Again, it should all add up to normality.

See e.g. here or here for views on how to make probability theory function even in a Big universe.
xuenay: (Default)
I previously characterized Michael Vassar's theory on suffering as follows: "Pain is not suffering. Pain is just an attention signal. Suffering is when one neural system tells you to pay attention, and another says it doesn't want the state of the world to be like this." While not too far off the mark, it turns out this wasn't what he actually said. Instead, he said that suffering is a conflict between two (or more) attention-allocation mechanisms in the brain.

I have been successful at using this different framing to reduce the amount of suffering I feel. The method goes like this. First, I notice that I'm experiencing something that could be called suffering. Next, I ask, what kind of an attention-allocational conflict is going on? I consider the answer, attend to the conflict, resolve it, and then I no longer suffer.

An example is probably in order, so here goes. Last Friday, there was a Helsinki Less Wrong meetup with Patri Friedman present. I had organized the meetup, and wanted to go. Unfortunately, I already had other obligations for that day, ones I couldn't back out from. One evening, I felt considerable frustration over this.

Noticing my frustration, I asked: what attention-allocational conflict is this? It quickly became obvious that two systems were fighting it out:

* The Meet-Up System was trying to convey the message: ”Hey, this is a rare opportunity to network with a smart, high-status individual and discuss his ideas with other smart people. You really should attend.”
* The Prior Obligation System responded with the message: ”You've already previously agreed to go somewhere else. You know it'll be fun, and besides, several people are expecting you to go. Not going bears an unacceptable social cost, not to mention screwing over the other people's plans.”

Now, I wouldn't have needed to consciously reflect on the messages to be aware of them. It was hard to not be aware of them: it felt like my consciousness was in a constant crossfire, with both systems bombarding it with their respective messages.

But there's an important insight here, one which I originally picked up from PJ Eby. If a mental subsystem is trying to tell you something important, then it will persist in doing so until it's properly acknowledged. Trying to push away the message means it has not been properly addressed and acknowledged, meaning the subsystem has to continue repeating it.

Imagine you were in the wilderness, and knew that if you weren't back in your village by dark you probably wouldn't make it. Now suppose a part of your brain was telling you that you had to turn back now, or otherwise you'd still be out when it got dark. What would happen if you just decided that the thought was uncomfortable, successfully pushed it away, and kept on walking? You'd be dead, that's what.

You wouldn't want to build a nuclear reactor that allowed its operators to just override and ignore warnings saying that their current course of action will lead to a core meltdown. You also wouldn't want to build a brain that could just successfully ignore critical messages without properly addressing them, basically for the same reason.

So I addressed the messages. I considered them and noted that they both had merit, but that honoring the prior obligation was more important in this situation. Having done that, the frustration mostly went away.

Another example: this is the second time I'm writing this post. The last time, I tried to save it when I'd gotten to roughly this point, only to have my computer crash. Obviously, I was frustrated. Then I remembered to apply the very technique I was writing about.

The Crash Message: You just lost a bunch of work! You should undo the crash to make it come back!
The Realistic Message: You were writing that in Notepad, which has no auto-save feature, and the computer crashed just as you were about to save the thing. There's no saved copy anywhere. Undoing the crash is impossible: you just have to write it again.

Attending to the conflict, I noted that the realistic message had it right, and the frustration went away.

It's interesting to note that it probably doesn't matter whether my analysis of the sources of the conflict is 100% accurate. I've previously used some rather flimsy evpsych just-so stories to explain the reasons for my conflicts, and they've worked fine. What's probably happening is that the attention-allocation mechanisms are too simple to actually understand the analysis I apply to the issues they bring up. If they were that smart, they could handle the issue on their own. Instead, they just flag the issue as something that higher-level thought processes should attend to. The lower-level processes are just serving as messengers: it's not their task to evaluate whether the verdict reached by the higher processes was right or wrong.

But at the same time, you can't cheat yourself. You really do have to resolve the issue, or otherwise it will come back. For instance, suppose you didn't have a job and were worried about getting one before you ran out of money. This isn't an issue where you can just say, ”oh, the system telling me I should get a job soon is right”, and then do nothing. Genuinely committing to do something does help; pretending to commit to something and then forgetting about it does not. Likewise, you can't say that "this isn't really an issue" if you know it is an issue.

Still, my experience so far seems to suggest that this framework can be used to reduce any kind of suffering. To some extent, it seems to even work on physical pain and discomfort. While simply acknowledging physical pain doesn't make it go away, making a conscious decision to be curious about the pain seems to help. Instead of flinching away from the pain and trying to avoid it, I ask myself, ”what does this experience of pain feel like?” and direct my attention towards it. This usually at least diminishes the suffering, and sometimes makes it go away if the pain was mild enough.

An important, related caveat: don't make the mistake of thinking that you could use this to replace all of your leisure with work, or anything like that. Mental fatigue will still happen. Subjectively experienced fatigue is a persistent signal to take a break which cannot be resolved other than by actually taking a break. Your brain still needs rest and relaxation. Also, if you have multiple commitments and are not sure that you can handle them all, then that will be a constant source of stress regardless. You're better off using something like Getting Things Done to handle that.

So far I have described what I call the ”content-focused” way to apply the framework. It involves mentally attending to the content of the conflicts and resolving them, and is often very useful. But as we already saw with the example of physical pain, not all conflicts are so easily resolved. A ”non-content-focused” approach – a set of techniques that are intended to work regardless of the content of the conflict in question – may prove even more powerful. I'll cover it in a separate post, though the example of dealing with physical pain is already getting into non-content-focused territory.

I'm unsure of exactly how long I have been using this particular framework, as I've been experimenting with a number of related content- and non-content-focused methods since February. But I believe that I consciously and explicitly started thinking of suffering as ”conflict between attention-allocation mechanisms” and began applying it to everything maybe two or three weeks ago. So far, either the content- or non-content-focused method has always seemed to at least alleviate suffering: the main problem has been in remembering to use it.

On the Self

May. 2nd, 2011 06:17 pm
xuenay: (Default)
I've gone through a variety of theories about the self and continuity of consciousness. Here they are.

1: The self as something arbitrary. Essentially the view I held at the time of writing this post from 2007. I thought that there is no inherent reason to think that the consciousness that now inhabits my brain will be the same one as the one inhabiting my brain tomorrow. Our minds and bodies change all the time: what's the thing that makes the me of today the same as the me of 20 years ago? Certainly one can come up with all sorts of definitions, but from a scientific point of view they're all unnecessary. One simply doesn't need to postulate a specific consciousness, a soul of sorts, in order to explain human behavior or thought. Noting that our memories create an illusion of continuity, and that this illusion is useful for maintaining useful things such as the self-preservation instinct, is more than enough as an explanation.

A thought experiment in philosophy asks: if you stepped into a Star Trek-style transporter, which disassembled you into your component parts and reassembled you somewhere else (possibly from different raw materials), would it still be you, or would it be killing you and creating a copy of you? Another: if the neurons in your brain were gradually replaced with ones running in a computer, and the original brain was then shut down, would it still be you? Yet another: if you had been translated into software, and then fifteen copies of that mindfile were made and run, would they all be you?

To all of these questions, "the self as something arbitrary" replies: there's no inherent reason why they wouldn't be you. The difference between them would be less than that between you now and you tomorrow. Of course, for psychological reasons, it is necessary for us to still believe to some degree that we're the same person tomorrow as we are today. For this purpose, we're free to use pretty much any criteria we prefer: it's not like one of them would be wrong. One such criterion, suggested by Derek Parfit, is Relation R: psychological connectedness (namely, of memory and character) and continuity (overlapping chains of strong connectedness). This works fine for most purposes.

In practice, while I had this view, I tended to forget about the whole thing a lot. The illusion is built into us quite strongly, and the intellectual understanding of it is easy to forget.

2: The self as lack of personal boundaries. Upon reading Ken Wilber's No Boundary, I realized the following. Suppose that I choose to reject any criteria creating a relation between the me of now and the me of tomorrow, seeing them all as arbitrary. It follows that all consciousness-moments are separate beings. But symmetrically, one can take this to imply that all consciousness-moments are the same being. In other words: there is only one consciousness which experiences everything, instantiated in a wide variety of information-processing systems.

This point of view also gains support from noting that to a large degree, our behavior is determined by our environment. The people you hang around with have an immense impact on what you do and what you are. I might define myself using the word "student", which signifies a certain role within society - studying at a university run by other people, from books written by others, my studies funded by money which the state gets by taxing my country's inhabitants. Or I might say that a defining aspect of myself is that I want to help avert existential risk. This is so because I happened to encounter writings about it at an appropriate point in my life, and it is a motivation which is constantly reinforced by being in contact with like-minded folks. On the other hand, it is a drive which is also constantly weakened by the lures of hedonism and by affiliating with people who don't think such things are truly that important.

I'm only exaggerating a little if I say that basically everything in our personality is defined by our environment, and particularly by the people within our environment. Change the environment I'm in, and you quite literally change what I am. Certainly I have lots of relatively stable personality traits that affect my behavior, but my environment defines the meaning those traits take. If I change my environment, I'll also change my own behavior. Looked at in this light, the self/non-self boundary becomes rather arbitrary and somewhat meaningless.

So now I was presuming that there was only one consciousness, instantiated in every possible body. All of these bodies and instantiations, taken together, make up a vast system that is me. I (in the sense of the specific brain-body now writing this) am part of the system in the same way that individual cells are parts of my body, or individual subprocesses in my brain are parts of my psyche. My personal accomplishments or my personal pride don't really matter that much: what matters is how I contribute to the overall system, and whether parts of the system are harmonious or conflicted between each other. Doing things like befriending new people means forging new connections between parts of myself. Learning to know people better means strengthening such connections.

Thinking like this felt good, and it worked for a while. But I had difficulty keeping up that line of thought. Again, the illusion of separateness is built strongly into us. On an intellectual level, I could easily think of myself as part of a vast system, with only a loose boundary between me and not-me. But since each brain can only access memories of being itself, and is strongly biased towards thinking itself separate, this was hard to really believe in on an emotional level. Frequently, I found myself thinking of myself as separate again.

3: The self as how the algorithm feels from the inside. The next step came when I realized that the notion of a consciousness experiencing things is an unnecessary element as well. Instead of saying that there are lots of different consciousnesses, or one consciousness instantiated in a lot of bodies, we can just note that we don't really need to presume any specific entity which observes various sensations. Instead, there are only the sensations themselves. A "consciousness" is simply a collection of sensations that are being observed within an organism at a specific time.

Putting this another way: there are a variety of processes running within our brains. As a side-effect of their functioning, they produce a stream of sensations (qualia, to use the technical term). There is no observer which observes or experiences these qualia: they simply occur. To the extent that there can be said to be an observer or a watcher, each sensation observes itself and then ceases to exist.

Of necessity, all of the qualia-producing algorithms we know of are located within information-processing systems which have a memory and are in some way capable of reporting that they have subjective experiences. Humans can verbalize or otherwise communicate being in pain; dogs can likewise behave in ways that sufficiently resemble our is-in-pain behaviors that we presume them to have qualia. As an animal's resemblance to a human grows smaller, we become more unsure of whether it has qualia. In principle, my computer could also have qualia, but if so it would have no way of reporting it, and I would have no way of knowing it. Because an entity needs to be able to somehow communicate having qualia in order for us to know about it, we've mistakenly begun thinking that all qualia must by nature be observed by a consciousness. But the qualia observe themselves, which is enough. There is no Cartesian Theater, but rather something like multiple drafts.

So there is no "me" in the continuity-of-consciousness sense, nor is there any unified consciousness which experiences everything. Instead there are only ephemeral sensations, which vanish as soon as they've come into existence (though if eternalism is right, every moment may exist forever, and there may be an infinite number of copies of each "unique" sensation if multiverses are real). This may seem like a very unsettling theory from a psychological point of view, as it would seem to make it harder to e.g. care about the next day. While both "the self as something arbitrary" and "the self as a lack of personal boundaries" allowed one to construct a definition of self extending in time - even if one acknowledged it to be arbitrary - this view makes that rather impossible.

And at first, it was rather unsettling. After a while, however, I managed to come to grips with it. The important point to note is that even if there is no continuity of consciousness, the concept of "me" still makes sense. It's simply referring to the information-processing system in which all of these algorithms are running. I can still meaningfully talk about my experience or about making plans. I'm simply referring to the experiences which will be produced by the algorithms running within this brain, and the plans which that brain will make. And there is no reason why I shouldn't feel pleasure from the anticipation of future experiences, if those are good experiences to have.

I desire to reduce the number of negative qualia in the world and increase the number of positive ones. Positive qualia are correlated with positive feedback within the information-processing system; negative qualia, with negative feedback. In other words, the system/organism will tend to repeat the things it felt good about, as it gets wired to repeat those behaviors. (Though one should note here that the circuits for "wanting" and "liking" are actually different.) It is good for me to feel good about doing and behaving in ways which will make me more likely to achieve these goals. It is good for me to feel pleasure from the anticipation of doing good things, for this will cause me to actually do them. It is also good for me to feel happy: not only does feeling happy instead of unhappy make me more capable of doing things, it also directly serves my goal of increasing the amount of positive qualia in the world. This line of thought seems like a very successful way of fitting together utilitarianism and virtue ethics, the process of which I began a year ago and which has considerably contributed to my increased happiness of late.

Again, this is easy to think about on an intellectual level, but we're wired to think differently. I've been having more success consistently training myself to think like this than I had with the previous theories, however. Of course, I still frequently forget, but I'm making progress. Various meditation traditions seem to be aimed at helping one grok something like this on an emotional level, and I'm dedicating an hour a day to meditation practice aimed at following the progression described in this book. I haven't really gotten any results so far, though.

I was going to also write more about the nature of suffering and how these shifts in thought have helped me become happier and suffer less. However, looking at how long this post got, I think I'll do that in a separate post.
xuenay: (Default)
Yesterday evening, I pasted to two IRC channels an excerpt of what someone had written. In the context of the original text, that excerpt had seemed to me like harmless if somewhat raunchy humor. What I didn't realize at the time was that with the context removed, the person who wrote it came off looking like a jerk, and by laughing at it I came off looking like something of a jerk as well.

Two people, both of whom I have known for many years now and whose opinions I value, approached me by private message and pointed out that that may not have been the smartest thing to do. My initial reaction was defensive, but I soon realized that they were right and thanked them for pointing it out to me. Putting on a positive growth mindset, I decided to treat this event as a positive one, as in the future I'd know better.

Later that evening, as I lay in bed waiting to fall asleep, the episode replayed itself in my mind. I learnt long ago that trying to push such replays out of my mind would just make them take longer and make them feel worse. So I settled back to just observing the replay and waiting for it to go away. As I waited, I started thinking about what kind of lower-level neural process this feeling might be a sign of.

Artificial neural networks use what is called a backpropagation algorithm to learn from mistakes. First the network is provided some input, then it computes some value, and then the obtained value is compared to the expected value. The difference between the obtained and expected value is the error, which is then propagated back from the end of the network to the input layer. As the error signal works its way through the network, neural weights are adjusted in such a fashion as to produce a different output the next time.
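To make that description a bit more concrete, here's a minimal toy sketch of backpropagation: a tiny two-layer network nudged toward a fixed target. The dimensions, learning rate and choice of numpy are my own illustrative choices, not anything from the neuroscience being discussed:

```python
# Toy backpropagation sketch (illustrative only): a two-layer network is shown
# an input, its output is compared to an expected value, and the error is
# propagated backwards to adjust the weights.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=3)            # input
target = np.array([1.0, 0.0])     # expected output
W1 = rng.normal(size=(4, 3))      # input -> hidden weights
W2 = rng.normal(size=(2, 4))      # hidden -> output weights
lr = 0.5                          # learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(200):
    # Forward pass: compute the network's current output for the input.
    hidden = sigmoid(W1 @ x)
    output = sigmoid(W2 @ hidden)

    # The error: difference between the obtained and the expected value.
    error = output - target

    # Backward pass: the error signal works its way from the output layer
    # towards the input layer, and the weights are adjusted so that the
    # same input produces an output closer to the target next time.
    delta_out = error * output * (1.0 - output)
    delta_hidden = (W2.T @ delta_out) * hidden * (1.0 - hidden)
    W2 -= lr * np.outer(delta_out, hidden)
    W1 -= lr * np.outer(delta_hidden, x)

print(output)   # after training, close to the expected [1.0, 0.0]
```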

Backprop is known to be biologically unrealistic, but there are more realistic algorithms that work in a roughly similar manner. The human brain seems to be using something called temporal difference learning. As Roko described it: "Your brain propagates the psychological pain 'back to the earliest reliable stimulus for the punishment'. If you fail or are punished sufficiently many times in some problem area, and acting in that area is always preceded by [doing something], your brain will propagate the psychological pain right back to the moment you first begin to [do that something]".
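A minimal sketch of that temporal difference idea (TD(0)) might look something like the following; the state names and numbers are purely illustrative inventions of mine, not anything Roko or the literature specifies. A punishment arriving at the end of a short chain of states gradually gets propagated back to the earliest state that reliably precedes it:

```python
# Toy TD(0) sketch (illustrative only): only the final state in a short chain
# is punished, but over repeated episodes the negative value propagates back
# to the earlier states that reliably lead to it.
states = ["consider_pasting", "paste_quote", "get_rebuked"]
value = {s: 0.0 for s in states}    # learned value estimate for each state
alpha, gamma = 0.5, 0.9             # learning rate, discount factor

for episode in range(30):
    for i, s in enumerate(states):
        reward = -1.0 if s == "get_rebuked" else 0.0          # the punishment
        next_value = value[states[i + 1]] if i + 1 < len(states) else 0.0
        # TD error: how much worse (or better) this moment turned out
        # than the current estimate predicted.
        td_error = reward + gamma * next_value - value[s]
        value[s] += alpha * td_error

print(value)   # all three states now carry negative value, including the first
```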

As I lay there in bed, I couldn't help the feeling that something similar to those two algorithms was going on. The main thing that kept repeating itself was not the actual action of pasting the quote to the channel or laughing about it, but the admonishments from my friends. Being independently rebuked for something by two people I considered important: a powerful error signal that had to be taken into account. Their reactions filling my mind: an attempt to re-set the network to the state it was in soon after the event. The uncomfortable feeling of thinking about that: negative affect flooding the network as it was in that state, acting as a signal to re-adjust the neural weights that had caused that kind of an outcome.

After those feelings had passed, I thought about the episode again. Now I felt silly for committing that faux pas, for now it felt obvious that the quote would come across badly. For a moment I wondered if I had just been unusually tired, or distracted, or otherwise out of my normal mode of thought to not have seen that. But then it occurred to me - the judgment of this being obviously a bad idea was produced by the network that had just been rewired in response to social feedback. The pain of the feedback had been propagated back to the action that caused it, so just thinking about doing that (or thinking about having done that) made me feel stupid. I have no way of knowing whether the "don't do that, idiot" judgment is something that would actually have been produced had I been paying more attention, or if it's a genuinely new judgment that wouldn't have been produced by the old network.

I tend to be somewhat amused by the people who go about claiming that computers can never be truly intelligent, because a computer doesn't genuinely understand the information it's processing. I think they're vastly overestimating how smart we are, and that a lot of our thinking is just relatively crude pattern-matching, with various patterns (including behavioral ones) being labeled as good or bad after the fact, as we try out various things.

On the other hand, there would probably have been one way to avoid that incident. We do have the capacity for reflective thought, which allows us to simulate various events in our heads without needing to actually undergo them. Had I actually imagined the various ways in which people could interpret that quote, I would probably have relatively quickly reached the conclusion that yes, it might easily be taken as jerk-ish. Simply imagining that reaction might then have provided the decision-making network with a similar, albeit weaker, error signal and taught it not to do that.

However, there's the question of combinatorial explosions: any decision could potentially have countless consequences, and we can't simulate them all. (See the epistemological frame problem.) So in the end, knowing the answer to the question of "which actions are such that we should pause to reflect upon their potential consequences" is something we need to learn by trial and error as well.

So I guess the lesson here is that you shouldn't blame yourself too much if you've done something that feels obviously wrong in retrospect. That decision was made by an earlier version of you. Although it feels obvious now, that version of you might literally have had no way of knowing that it was making a mistake, as it hadn't been properly trained yet.
xuenay: (Default)
I was going to write about the new points system we have here and stuff, but yesterday there were a bunch of things that triggered a weird change in me. I'm still not entirely sure of what's happening, so I'll try to document it here.

It all started when Michael Vassar was talking about his take on the Twelve Virtues of Rationality. He was basically saying that a lot of the initial virtues (curiosity, relinquishment, lightness, evenness) were variants of the same thing, that is, not being attached to particular states of the world. If you do not have an emotional preference on what the world should be like, then it's also easier to perfectly update your beliefs whenever you encounter new information.

As he was talking about it, he also made roughly the following comment: "Pain is not suffering. Pain is just an attention signal. Suffering is when one neural system tells you to pay attention, and another says it doesn't want the state of the world to be like this." At some point he also mentioned that the ideal would be for a person's motivations not to be directly tied to states of the world, but rather to their own actions. If you tie your feelings to states of the world, you risk suffering needlessly about things not under your control. On the other hand, if you tie your feelings to your actions, your feelings are created by something that is always under your control. And once you stop having an emotional attachment to the way the world is, actually changing the world becomes much easier. Things like caring about what others think of you cease to be a concern, paradoxically making you much more at ease in social situations.

I thought this through, and it seemed to make a lot of sense. As Louie would comment later on, it was basically the old "attachment is suffering" line from Buddhism, but that's a line one has heard over and over so many times that it's ceased to have much significance and become just a phrase. Reframing it as "suffering is conflict between two neural systems" somehow made it far more concrete.

An early objection that came to mind was that, if pain is not suffering, why does physical pain feel like suffering? My intuition would be that if this hypothesis is correct, then humans have strong inborn desires not to experience pain (which leads to the mistaken impression that pain is suffering). If you break your leg, your brain is flooded with pain signals, and it's built to prefer states of the world where there isn't pain. But it's possible to react indifferently to your own sensation of pain. Pain asymbolia, according to Wikipedia, is "a condition in which pain is perceived, but does not cause suffering ... patients report that they have pain but are not bothered by it, they recognize the sensation of pain but are mostly or completely immune to suffering from it". Further support comes from the fact that our emotional states and the knowledge we have may often have a big influence on how painful (sufferful?) something feels. You can sometimes sustain an injury that doesn't feel very bad until you actually look at it and see how badly it's hurt. Being afraid also makes pain worse, while a feeling of being in control makes pain feel less bad.

On a more emotional front, I discovered a long time ago that trying to avoid thinking about unpleasant memories was a bad idea. The negative affect would fade a lot quicker if I didn't even try to push the memories out of my mind, but rather let them come and let them envelop me over and over until they didn't bother me anymore.

So I started wondering about how to apply this in practice. For a long time, things such as worry for my friends ending up in accidents and anguish for the fact that there is so much suffering in the world have seriously reduced my happiness. I've felt a strong moral obligation to work towards improving the world, and felt guilty at the times when I've been unable to e.g. study as hard as conceivably possible. If I could shift my motivations away from states of the world, that could make me considerably happier and therefore help me to actually improve the world.

But shifting the focus to actions instead of consequences sounded like getting dangerously close to deontology. Since a deontologist judges actions irrespective of their consequences, they might e.g. consider it wrong to kill a person even if that ended up saving a hundred others. I still wanted my actions to do the most good possible, and that isn't possible if you don't evaluate the effects your actions have on the world-state. So I would have to develop a line of thought that avoided the trap of deontology, while still shifting the focus to actions. That seemed tricky, but not impossible. I could still be motivated to do the actions that caused the most good and shifted the world-state the most towards my preferred direction, while at the same time not being overly attached to any particular state of the world.

While I was still thinking about this, I went ahead and finished reading The Happiness Hypothesis, a book about research on morality and happiness that I'd started reading previously. One of the points the book makes is that we're divided beings: to use the book's metaphor, there is an elephant and there is the rider. The rider is the conscious self, while the elephant consists of all the low-level, unconscious processes. Unconscious processes actually carry out most of what we do; the rider trains them and tells them what they should be doing. Think of e.g. walking or typing on the computer, where you don't explicitly think about every footstep or every press of a button, but instead just decide to walk somewhere or type something.

Readers familiar with PJ Eby will recognize this to be the same as his Multiple Self philosophy (my previous summary, original article). What I had not thought of before was that this also applies to ethics. Formal, virtuous theories of ethics are known by the rider, but not by the elephant, which leads to a conflict between what people know to be right and what they actually do. On these grounds, The Happiness Hypothesis critiqued the way Western ethics, both in the deontologist tradition started by Immanuel Kant and in the consequentialist tradition started by Jeremy Bentham, has become increasingly reason-based.
xuenay: (sonictails)
What is transhumanism? It is often described as the philosophy that we should use technology to transcend our current physical and mental limitations, but it is more than just that. Transhumanists keep tabs on emerging technologies and debate their risks and benefits; they promote public awareness of the topics and help divert funding to research; they work to make sure that humanity is better off because of new technology. There exist transhumanist think-tanks like the Institute for Ethics and Emerging Technologies, transhumanist research groups devoted to bringing forth new technologies, like the Methuselah Foundation, and transhumanist research groups devoted to the risks of new technologies, like the Singularity Institute for Artificial Intelligence. I am a transhumanist - I believe that one of the major ways of making humanity better off is by developing new technology.

Around two years back, I was talking with somebody who asked - "but what does all of this have to do with happiness? Technology doesn't make people happier." Now, this is a serious point. There have been studies showing that people in more advanced countries aren't necessarily happier, and that the effect of any new technological gadget fades away as soon as people get used to it. Technology has advanced, but the things making people happy are mostly the ones they've always been - friends and family, achievement, religion.

Currently, humanity is trapped in a cruel cycle. We yearn for bigger houses, higher-paying jobs and new gadgets, but not because those things would really make us happier. We crave them because craving such things helped spread our genes in our ancestral environment. But evolution does not optimize for happiness, nor does it bargain with the creatures it has created. Evolution does not say, "okay, you have the greatest fitness in this population, now you get to be happy". We're driven to develop better communications, better television, overall better standards of living - and in the end, little of that really seems to matter when it comes to happiness. We might strive for luxurious living, because luxurious living meant you'd survive better in the past, but then feel bored once we do have all the luxuries. But we still want them.

So is technology really such a great thing? Is researching it really one of our highest priorities, if it doesn't even make people happier?

In a word, yes. For there is a way out. Not every technology is meaningless - technology has indirectly made our societies more open-minded, helping members of different minorities feel more accepted. Happiness studies suggest that health is a major component of one's happiness, so improvements in healthcare help as well. The key seems to be that technology needs to primarily modify not our environment, but ourselves. If evolution has given us such a crappy deal, where we keep striving for externalities that don't make us any happier, let's beat evolution and modify the internalities.

That, of course, is what transhumanism is all about. In fact, since technology has such a great capability for making people happier, I would argue that anyone who cares about the happiness of others has a moral responsibility to be a transhumanist.

So, just what sort of technology exactly is there that we should be working on to develop? Glad that you asked.


  • Cognitive enhancement will be a boon for those with below-average intelligence. Having your intelligence enhanced might not necessarily make you any happier if you already have normal or above-normal intelligence, but it's not at all fun to be stupid. Having a low IQ gives you a serious handicap both in social life and in handling everyday life. It's easy to find anecdotal evidence for stupidity causing unhappiness - how many people do you know who enjoy hanging around those they consider imbeciles? - but the effect of intelligence on life has also been documented by actual studies:

    [IQ 75 and below] is the "high risk" zone: high risk of failing elementary school, being unmasked as incompetent in daily affairs (making change, reading a letter, filling out a job application, understanding doctors’ instructions, monitoring one’s young children), being cheated by merchants and exploited by friends and relatives, remaining unemployed, dependent, and socially isolated, and 'consistently fail[ing] to understand certain important aspects of the world in which they live, and so regularly find[ing] themselves unable to cope with some demands of this world' (Edgerton, 1993, p. 222). Many eventually lead satisfying lives, but only with the help of a benefactor or strong social support network or only after a long struggle to find a self-affirming social niche. -- Linda Gottfredson (1997). Why G matters. Intelligence, 24, p. 79-132.

  • Elimination of old age. Currently, growing old is not a very enjoyable thing - your health begins to fail, your close ones start dying off, you become too tired to create new social circles when you lose the old ones. It is no wonder that there have been reports of disproportionately high suicide rates among the elderly. All of this could be avoided if we could eliminate aging and prevent all age-related decline, so people would stay healthy and physically young forever. This is a project many transhumanists are actively working on or supporting by donating to the Methuselah Foundation - we already know what causes old age, so all that remains is fixing it.

  • Mentally becoming what you want to be. Many people are conflicted between competing desires - a desire to be a good person competing with a very short temper, or a desire to be a good lover competing with an unhealthy jealousy. I would like to be a good transhumanist and help improve the world, but I frequently grow lazy and end up wasting time doing something else when I could be studying things that might help me in this. As we learn to better understand the workings of our brains, we can start modifying them. Oxytocin is a chemical which has been suggested to make people more trusting of each other, and there exist concentration-improving chemicals which could help me study (were they not currently prescription drugs). Eventually, such treatments will become more elegant and more accepted, and we'll be able to make ourselves exactly what we want to be. The Cyborg Buddha project of the Institute for Ethics and Emerging Technologies is an effort to promote awareness of these possibilities.

  • Physically becoming what you want to be. Sex-reassignment surgery is the most obvious example of this category. People might be unhappy with the physical shape they are in - be it their sex or their weight. Biotechnological and nanotechnological advances will help in this category, with improved virtual reality setups helping people forget their current shape until the technology arrives to make actual shape changes a reality.

  • Baseline modification. It appears that a large part of happiness is genetic - some people are born naturally happy, and others are naturally unhappy. If the exact factors making some people happier than others can be isolated, everybody could potentially have their brain chemistry tweaked so that their baseline emotional state would be that of greater happiness.

  • Continued existence. Finally, one can't be happy if one's dead. There are problems ranging from meteorite impacts to global pandemics - existential risks - which might either kill all of us or a considerable portion of us. Technology such as cognitive enhancement or artificial intelligence will give us a better grasp of our problems, helping ward off such dangers.


All six categories - and others I have not mentioned - increase equality. People are not randomly condemned to be stupid. People are not condemned to be unhealthy simply because they're old. As with intelligence, some people are naturally more talented at self-control than others: by increasing our control over our own brains, these inequalities diminish. Some people are no longer condemned to live in bodies they're unhappy with while others get great ones. People are not condemned to be naturally less happy than others. People are not wiped out of existence when they'd still rather live. All of this increases people's happiness, and giving people control over these things gives them more choice. These are some of the core values that transhumanism is all about: Happiness, Equality and Choice.

Of course, transhumanism is not about embracing new technologies unthinkingly or without question. Every new technology carries with it new risks as well as new opportunities. Nanotechnology, one of transhumanists' favorite technologies, carries within it the potential to do vast damage in addition to vast good. Regulation will undoubtedly be needed - and transhumanists will be at the forefront of that as well, evaluating emerging technologies and bringing up the issues that might be involved.

Like all movements, transhumanism isn't something that just happens. It isn't obvious that technological progress will happen as fast as we like, that needless fears won't ruin it, or that the appropriate safeguards are taken. Transhumanism isn't a reason to go "cool, let's wait for these new toys". Transhumanism is a rallying cry for everyone who cares about humanity - to get up to date, to do something to help. Personally I blog and try to promote awareness about transhumanist issues, donate to valued organizations like the Singularity Institute, and work on my cognitive science degree. Everyone can help out somehow, by spreading the word if by nothing else. (Some other suggestions of how one might help can be found at Accelerating Future.)

In his book Our Posthuman Future, Francis Fukuyama worries about biotechnology reducing humanity's diversity. It is certainly a fact that in some ways, advancing transhumanism will reduce diversity - it will reduce the diversity of suffering, the diversity of unhappiness, the diversity of inequality. Instead, as people become more capable of changing into what they really want to be, it will increase diversity of expression, diversity of thought and diversity of mind.

It is entirely understandable that people might feel resistance to transhumanist aims and goals. Even I sometimes wonder if this is what I really want - having lived an entire life in a certain sort of society, one naturally gets attached to it. I wonder if the problems caused when true mindcrafting becomes possible will be worth it, feel annoyance at the thought of a world where I might have no unique talents that everybody couldn't obtain via technology. It is only human to grow much too attached to the ills of the world, only human to prefer a safe status quo instead of healthy change. It is human, just as it is human to grow fragile and mentally sluggish with age, to lack intelligence and to discriminate against others, to suffer and to be unhappy. I recognize my flaws for what they are - the worse part of my human nature, the one that diminishes where it could ennoble.

Something to transcend.
xuenay: (Default)
Lately, the excellent blog Overcoming Bias has had discussion about the rationality and psychology of disagreement. I admit that I don't entirely understand everything that's discussed there - apparently in 1976, a Nobel prize winner published a paper which says roughly that, in theory, people who have the same information and who talk to each other long enough cannot agree to disagree. This has led to a large number of subsequent papers, some of which discuss the issue in rather abstract terms and long mathematical proofs, considering mostly perfectly rational creatures, leaving their exact relevance to the field of human thought a bit unclear. Nevertheless, the bits that I've gleaned from some of the blog posts discussing this subject have been most interesting.

Let's discuss the issue in plain English, without invoking any math or formal logic. The principle is simple, almost obvious when you think about it. Let's assume that we have two people who have a shared goal, and who disagree about how to best reach that goal. Since they disagree, that must mean they have different information - the information person A has says that approach X is better, while the information person B has says that approach Y is better. Now, when they sit down to discuss the issue, they start sharing the information they have with each other, until finally both know exactly the same things. Since they both now know the same things, logically they should also both draw the same conclusions from this material. Thus, assuming they're perfectly rational and have enough time to discuss the issue, in the end they cannot agree to disagree about it. They may have reasons to interpret the same information differently, but if so, there is still something they haven't shared with each other - for presumably the reason they interpret the information differently is itself a piece of information that can be shared.

Now, of course we all know that humans aren't perfectly rational creatures. Still, this is a subject that I've thought about every now and then - in just about every field of human behavior, huge disagreements persist about things that have been debated back and forth for ages, with plenty of experimental evidence to go around (consider, for one, the divisions between the political right and the left). Even though I know that people don't really think rationally about most things, this still strikes me as somewhat strange - typically both sides have plenty of really smart people arguing their cases, and often people devote practically their entire lives to the study of these things. There are no doubt plenty of folks who are just biased beyond belief, but nevertheless there should be enough people who really want to find out the truth that these things would get resolved relatively quickly. So what causes might there be for all of these persisting disagreements?

* People might actually have different goals. By Hume's Guillotine, moral rules cannot be directly derived from physical facts. One person can believe that positive rights are inherently the most important things to achieve, while another believes that negative rights are more important. (This is what I suspect is behind a lot of the right-left dispute.) One person can believe that maximizing humanity's happiness is the most important thing, while another can believe that living a pure and sinless life in the eyes of God is the most important. For as long as these underlying moral beliefs are axiomatic and not based on any other information (and some beliefs must be, if a person is to have any at all), they cannot be challenged by learning new things.
* People might treat the same information differently based on extra-informational factors. For instance, they might have a genetic disposition towards optimism or pessimism. Also, the human mind is built so that if one learns of something that conflicts with something one already knows, one is more likely to discredit the new information than the old. Thus, simply changing the order in which information is received may alter how it is processed, even if ultimately one has all the same information as somebody else. Somebody who is first trained as an engineer and then majors in the humanities will view both of their educations entirely differently than somebody who first gets their humanities degree and then goes to study engineering.
* The issue may be too complex to be comprehended fully, or just simply so hard to understand that human minds can never hope to fully grasp it. Of course, in this situation, the most rational choice would be to just accept that it's impossible to really know or that more research into the matter is needed, not to simply cling to the side you'd wish to win more.
* The sides discussing the matter may both have so much information that they cannot hope to ever share all of it in the limited time that they have, or it's not worth spending so much time on to share it all. This argument works on some issues, but it's more dubious on things like politics that are extensively debated - after all, if you're politically on the left at the age of 20, it's not very plausible to assume that you can't communicate all the information that's led to you to this stance, even if you spend the rest of your life talking about it.
* Different ways of communicating information vary in effectiveness, and some things can't be communicated with speech alone. You can spend a whole day being told about the economist's mindset by a professor with PhDs in both economics and pedagogy, but even then you still won't internalize all of it as well as somebody who has spent five years in university studying economics. Also, people tend to give more weight to things they have experienced themselves than to things they have merely heard about from somebody else.
* One does not always know why one knows something. You can have beliefs that are well-founded in facts you know, but when asked to explain them, you no longer remember the original evidence that convinced you. Various incidents where you've seen certain behavior can compress themselves in your mind until it's obvious to you that something works a certain way, and when somebody disagrees you think he's being silly, without being able to prove yourself right.
* One can be affected by a large amount of other biases. Just look at Wikipedia's list of cognitive biases. It's depressingly long.
* Finally, one might simply not care about the truth, and be uninterested in encountering conflicting points of view. An interesting question is how often this might actually be a good thing - there is a concept known as rational irrationality (HTML version), which basically states that in many situations the benefit people would derive from knowing the truth is practically nonexistent (believing or not believing in evolution doesn't directly influence your life in any way, regardless of which way you swing). Thus, spending even a minimal amount of effort trying to find out the truth might be pointless - irrational, even. And sometimes unfounded beliefs might even benefit you (religion is a good example).

Let's assume, though, that you are a seeker of the truth - if not entirely, then at least in part. You want to know how things are in reality. What lessons should we draw from all of this, and how should you act? Here are my personal suggestions, though I do not claim that I would yet follow them all to the letter. Still, they are things to strive for.

* Study things from as many points of view as possible, and try to understand as many models of thought as you can. This way, you can better understand the behavior of other people, and how people can think in ways that seem incomprehensible to you. If you're an atheist, talk to religious people until you understand them well enough not to consider them silly; if you're religious, talk to atheists until you understand them in the same way. Get at least passingly familiar with all the existing genres of fiction, and especially study science fiction - the good sort of science fiction, the kind that isn't just "laser guns for revolvers and spaceships for horses" but instead builds on premises and settings that are as bizarre and unusual to us as possible. At the same time, beware the fallacy of generalizing from fictional evidence, and always keep in mind that you are reading fiction, not scientific studies. Fiction is just stuff that someone has invented. It doesn't prove that things would go that way in real life, and you should be very cautious about letting the images painted in fictional works color your concept of what the world is really like.
* Become interdisciplinary. Do for science what you did for fiction, for you never know which branch of human thought might grant you the knowledge you need to understand a phenomenon. Where fiction can lead you to misguided conclusions, science will give you the methods you need to truly understand the world - even methods that feel counterintuitive to those not skilled in them. Study mathematics, economics, history, psychology, physics, everything.
* Recognize your fallibility. Realize that in a quest for the truth, your own biases become your worst enemy. To defeat your enemy you must understand it, so set forth to study it. Follow blogs like Overcoming Bias. Read up on the field of heuristics and biases - the book Judgment Under Uncertainty: Heuristics and Biases comes highly recommended, and though I haven't read it yet, I plan to do so soon. Find the time to peruse articles like Wikipedia's list of cognitive biases and Cognitive Biases Potentially Affecting Judgment of Global Risks. In your interdisciplinary studies, especially emphasize the sciences that help you understand and combat your biases, and the ones that allow you to think clearly - in his Twelve Virtues of Rationality (which is required reading for you), Eliezer Yudkowsky recommends evolutionary psychology, heuristics and biases, social psychology, probability theory and decision theory. Read texts that are obviously biased, so that you become better at spotting the milder biases. Bookmark lists of debating fallacies. Practice the Art of Rationality in whatever ways you can.
* Actively adjust your thoughts and hypotheses based on the information you have about yourself and about others. In Uncommon Priors Require Origin Disputes (it has some formal logic, but you can just read the plain-English summaries and skip the formal bits - that's what I did), Robin Hanson discusses the example of two astronomers with differing opinions about the size of the universe. He notes that they cannot justify their difference of opinion by genetic differences influencing optimism and pessimism, because the laws of heredity work independently of the size of the universe - a person inheriting a gene for optimism does not alter the size of the universe, nor does the size of the universe determine who inherits the gene - so the astronomer should seek to remove the effect that gene has on his estimate. Find out which influences on your thought are correlated with a better understanding of the world, and eliminate the others. Having an IQ different from others is relevant to whether or not your hypotheses are accurate, but having been born in a geographical location where a certain point of view tends to be favored is not.
* Remember how, in my last point, I said that I skipped the formal bits of a paper and just read the summaries? Don't do that. Strive for a technical understanding of things, as is explained in A Technical Explanation of Technical Explanation. If you know that "everything is waves" but don't understand the mathematical and physical concepts behind that sentence, then you don't really know anything but a fancy phrase. You cannot use it to make valid predictions, and if you can't use it to make predictions, it's useless to you. Strive for the ability to make testable predictions, not the ability to explain anything you encounter.
* Discuss the same subjects repeatedly, even with the same people. If you are losing a debate but still cannot admit you're wrong, ask for time to ponder it. Decide whether your hesitation came from being too caught up in defending a position, in which case you only need time to get over it and accept your opponent's arguments, or from there being relevant information in your mind that you couldn't recall at the moment, in which case you need time for your subconscious to bring it to the surface. Be very sceptical of yourself if you disagree with something but cannot justify the disagreement even with time - you might be dealing with a bias instead of forgotten knowledge. If questioned, be prepared to double-check your intuitions about what is obvious against scientific studies, and be ready to discard those intuitions if necessary.
* Avoid certainty, and of all people, be the harshest on yourself. 80% of drivers think they belong to the top 30% of all drivers, and even people aware of cognitive biases often seem to think those biases don't apply to them. People tend to find in ambiguous texts the points that support their opinions, while discounting the ones that disagree with them. Question yourself, and recognize that if you want your theories to find the truth, you can never be the only one to evaluate them. Subject them to criticism and peer review, and find the people with the most conflicting views to look them over. Never think that you have found the final truth, for when you do, you stop looking for other explanations. Remember the scientists behind the Castle Bravo nuclear test, who thought their yield calculations were complete when they had in fact left out a reaction they assumed was negligible - and got an explosion far larger than predicted. Consider impossible scenarios. Meditate on the mantra "nothing is impossible, only extremely unlikely". Think of the world in terms of probabilities, not certainties - a small worked sketch of this follows below.
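Since the list above ends with an exhortation to think in probabilities, here is a minimal Bayes' rule sketch in Python. The scenario and the numbers are invented for illustration only: the point is that confidence should shift gradually as evidence comes in, rather than flipping between certainty and dismissal.

    # Minimal Bayes' rule sketch: update a degree of belief instead of flipping
    # between "certainly true" and "certainly false". Numbers are illustrative.
    def posterior(prior, p_evidence_if_true, p_evidence_if_false):
        """Return P(hypothesis | evidence) from a prior and the two likelihoods."""
        numerator = prior * p_evidence_if_true
        return numerator / (numerator + (1 - prior) * p_evidence_if_false)

    # You start out 90% sure your pet theory is right, then run into an
    # objection that is three times likelier to come up if the theory is wrong.
    print(posterior(0.9, 0.2, 0.6))  # 0.75: still probable, but less certain

The exact numbers matter less than the habit: each piece of evidence moves the probability somewhat, and no single update takes you to 0 or 1.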
