xuenay: (sonictails)

I just got home from a four-day rationality workshop in England that was organized by the Center for Applied Rationality (CFAR). It covered a lot of content, but if I had to choose a single theme that united most of it, it was listening to your emotions.

That might sound like a weird focus for a rationality workshop, but cognitive science has shown that the intuitive and emotional part of the mind ("System 1") is both in charge of most of our behavior and carries out a great deal of valuable information-processing of its own (it's great at pattern-matching, for example). Much of the workshop material was aimed at helping people reach a greater harmony between their System 1 and their verbal, logical System 2. Many motivational troubles come from the goals of the two systems being somehow at odds with each other, and we were taught ways to get the two systems into a better dialogue with each other, harmonizing their desires and making it easier for information to cross from one system to the other and back.

To give a more concrete example, there was the technique of goal factoring. You take a behavior that you often do but aren’t sure why, or which you feel might be wasted time. Suppose that you spend a lot of time answering e-mails that aren’t actually very important. You start by asking yourself: what’s good about this activity, that makes me do it? Then you try to listen to your feelings in response to that question, and write down what you perceive. Maybe you conclude that it makes you feel productive, and it gives you a break from tasks that require more energy to do.

Next you look at the things that you came up with, and consider whether there's a better way to accomplish them. There are two possible outcomes here. Either you conclude that the behavior is an important and valuable one after all, meaning that you can now be more motivated to do it. Alternatively, you find that there would be better ways of accomplishing all the goals that the behavior was aiming for. Maybe taking a walk would make for a better break, and answering more urgent e-mails would provide more value. If you were previously spending two hours per day on the unimportant e-mails, you could now achieve more in terms of both relaxation and actual productivity by spending an hour on a walk and an hour on the important e-mails.

At this point, you consider your new plan, and again ask yourself: does this feel right? Is this motivating? Are there any slight pangs of regret about giving up my old behavior? If you still don't want to shift your behavior, chances are that you've missed some motive for doing this thing, and the feelings of productivity and relaxation aren't quite enough to cover it. In that case, go back to the step of listing motives.

Or, if you feel happy and content about the new direction that you’ve chosen, victory!

Notice how this technique is all about moving information from one system to another. System 2 notices that you're doing something but isn't sure why, so it asks System 1 for the reasons. System 1 answers, "here's what I'm trying to do for us, what do you think?" Then System 2 does what it's best at, taking an analytic approach and possibly coming up with better ways of achieving the different motives. Then it gives that alternative approach back to System 1 and asks, would this work? Would this give us everything that we want? If System 1 says no, System 2 gets back to work, and the dialogue continues until both are happy.

Again, I emphasize the collaborative aspect between the two systems. They’re allies working for common goals, not enemies. Too many people tend towards one of two extremes: either thinking that their emotions are stupid and something to suppress, or completely disdaining the use of logical analysis. Both extremes miss out on the strengths of the system that is neglected, and make it unlikely for the person to get everything that they want.

As I was heading back from the workshop, I considered doing something that I noticed feeling uncomfortable about. Previous meditation experience had already made me more likely to just attend to the discomfort rather than trying to push it away, but inspired by the workshop, I went a bit further. I took the discomfort, considered what my System 1 might be trying to warn me about, and concluded that it might be better to err on the side of caution this time around. Finally – and this wasn't a thing from the workshop, it was something I invented on the spot – I summoned a feeling of gratitude and thanked my System 1 for having been alert and giving me the information. That might have been a little overblown, since neither system should actually be sentient by itself, but it still felt like a good mindset to cultivate.

Although it was never mentioned in the workshop, what comes to mind is the concept of wu-wei from Chinese philosophy, a state of "effortless doing" where all of your desires are perfectly aligned and everything comes naturally. In the ideal form, you never need to force yourself to do something you don't want to do, or to expend willpower on an unpleasant task. Either you want to do something and do it, or you don't want to do it, and don't.

A large number of the workshop’s classes – goal factoring, aversion factoring and calibration, urge propagation, comfort zone expansion, inner simulation, making hard decisions, Hamming questions, againstness – were aimed at more or less this. Find out what System 1 wants, find out what System 2 wants, dialogue, aim for a harmonious state between the two. Then there were a smaller number of other classes that might be summarized as being about problem-solving in general.

The classes about the different techniques were interspersed with "debugging sessions" of various kinds. At the beginning of the workshop, we listed different bugs in our lives – anything about our lives that we weren't happy with, with the suggested example bugs being things like "every time I talk to so-and-so I end up in an argument", "I think that I 'should' do something but don't really want to", and "I'm working on my dissertation and everything is going fine – but when people ask me why I'm doing a PhD, I have a hard time remembering why I wanted to". After we'd had a class or a few, we'd apply the techniques we'd learned to solving those bugs, either individually, in pairs, or in small groups, with a staff member or volunteer TA assisting us. Then a few more classes on techniques and more debugging, classes and debugging, and so on.

The debugging sessions were interesting. Often when you ask someone for help on something, they will answer with direct object-level suggestions – if your problem is that you're underweight and you would like to gain some weight, try this or that. Here, the staff and TAs would eventually get to the object-level advice as well, but first they would ask: why don't you want to be underweight? Okay, you say that you're not completely sure, but based on the other things that you said, here's a stupid and quite certainly wrong theory of what your underlying reasons might be – how does that theory feel? Okay, you said that it's mostly on the right track, so now tell me, what's wrong with it? If you feel that gaining weight would make you more attractive, do you feel that this is the most effective way of achieving that?

Only after you and the facilitator had reached some kind of consensus on why you felt that something was a bug, and made sure that solving it was actually the best way to address those underlying reasons, would it be time for the more direct advice.

At first, I had felt that I didn’t have very many bugs to address, and that I had mostly gotten reasonable advice for them that I might try. But then the workshop continued, and there were more debugging sessions, and I had to keep coming up with bugs. And then, under the gentle poking of others, I started finding the underlying, deep-seated problems, and some things that had been motivating my actions for the last several months without me always fully realizing it. At the end, when I looked at my initial list of bugs that I’d come up with in the beginning, most of the first items on the list looked hopelessly shallow compared to the later ones.

Often in life you feel that your problems are silly, and that you are affected by small stupid things that "shouldn't" be a problem. There was none of that at the workshop: it was tacitly acknowledged that being unreasonably hindered by "stupid" problems is just something that brains tend to do. Valentine, one of the staff members, gave a powerful speech about "alienated birthrights" – things that all human beings should be capable of engaging in and enjoying, but which have been taken from people because they have internalized beliefs and identities that say things like "I cannot do that" or "I am bad at that". Things like singing, dancing, athletics, mathematics, romantic relationships, actually understanding the world, heroism, tackling challenging problems. To use his analogy, we might not be good at these things at first, and may have to grow into them and master them the way that a toddler grows to master her body. And like a toddler who's taking her early steps, we may flail around and look silly when we first start doing them, but these are capacities that – barring any actual disabilities – are a part of our birthright as human beings, which anyone can ultimately learn to master.

Then there were the people, and the general atmosphere of the workshop. People were intelligent, open, and motivated to work on their problems, help each other, and grow as human beings. After a long, cognitively and emotionally exhausting day at the workshop, people would then shift to entertainment ranging from wrestling to telling funny stories of their lives to Magic: the Gathering. (The game of "bunny" was an actual scheduled event on the official agenda.) And just plain talk with each other, in a supportive, non-judgemental atmosphere. It was the people and the atmosphere that made me the most reluctant to leave, and I miss them already.

Would I recommend CFAR's workshops to others? Although my above description may sound rather gushingly positive, my answer still needs to be a qualified "mmmaybe". The full price tag is quite hefty, though financial aid is available, and I personally got a very substantial scholarship, with the agreement that I would pay it later, once I could actually afford it.

Still, the biggest question is: will the changes from the workshop stick? I feel like I have gained a valuable new perspective on emotions and a number of useful techniques, made new friends, strengthened my belief that I can do the things that I really set my mind on, and refined the ways by which I think of the world and any problems that I might have – but aside from the new friends, all of that will be worthless if it fades away in a week. If it does, I would have to judge even my steeply discounted price as "not worth it". That said, the workshops do have a money-back guarantee if you're unhappy with the results, so if it really feels like it wasn't worth it, I can simply choose to not pay. And if all the new things do end up sticking, it might still turn out that it would have been worth paying even the full, non-discounted price.

CFAR does have a few ways by which they try to make the things stick. There will be Skype follow-ups with their staff, for talking about how things have been going since the workshop. There is a mailing list for workshop alumni, and occasional events, though the physical events are very US-centric (and in particular, San Francisco Bay Area-centric).

The techniques that we were taught are still all more or less experimental, and are being constantly refined and revised according to people’s experiences. I have already been thinking of a new skill that I had been playing with for a while before the workshop, and which has a bit of that ”CFAR feel” – I will aim to have it written up soon and sent to the others, and maybe it will eventually make its way to the curriculum of a future workshop. That should help keep me engaged as well.

We shall see. Until then, as they say at CFAR – to victory!

Originally published at Kaj Sotala. You can comment here or there.

During my more pessimistic moments, I grow increasingly skeptical about our ability to know anything.

Take science. Academia is supposed to be our most reliable source of knowledge, right? And yet, a number of fields seem to be failing us. Results shouldn't really be believed before they've been replicated several times. Yet, of the 45 most highly regarded studies within medicine suggesting effective interventions, 11 haven't been retested, and 14 have been shown to be convincingly wrong or exaggerated. John Ioannidis suggests that up to 90 percent of the published medical information that doctors rely on is flawed - and the medical community has for the most part accepted his findings. ( http://www.theatlantic.com/magazine/archive/2010/11/lies-damned-lies-and-medical-science/8269/ ) His most cited paper, "Why Most Published Research Findings Are False", has been cited almost a thousand times.

Psychology doesn't seem to be doing that much better. Last May, the Journal of Personality & Social Psychology refused to publish ( http://psychsciencenotes.blogspot.com/2011/05/failing-to-replicate-bems-ability-to.html ) a failed replication of the parapsychology paper they published earlier. "The reason Smith gives is that JPSP is not in the business of publishing mere replications - it prioritises novel results, and he suggests the authors take their work to other (presumably lesser) journals. This is nothing new - flagship journals like JPSP all have policies in place like this. [...] ...major journals simply won't publish replications. This is a real problem: in this age of Research Excellence Frameworks and other assessments, the pressure is on people to publish in high impact journals. Careful replication of controversial results is therefore good science but bad research strategy under these pressures, so these replications are unlikely to ever get run. Even when they do get run, they don't get published, further reducing the incentive to run these studies next time. The field is left with a series of "exciting" results dangling in mid-air, connected only to other studies run in the same lab."

This problem is not unique to psychology - all fields suffer from it. But while we are on the subject of psychology, the majority of its results are from studies conducted on Western college students, who have been presumed to be representative of humanity. "A recent survey by Arnett (2008) of the top journals in six sub-disciplines of psychology revealed that 68% of subjects were from the US and fully 96% from ‘Western’ industrialized nations (European, North American, Australian or Israeli). That works out to a 96% concentration on 12% of the world’s population (Henrich et al. 2010: 63). Or, to put it another way, you’re 4000 times more likely to be studied by a psychologist if you’re a university undergraduate at a Western university than a randomly selected individual strolling around outside the ivory tower." Yet cross-cultural studies indicate a number of differences between industrialized and "small-scale" societies, in areas such as "visual perception, fairness, cooperation, folkbiology, and spatial cognition". There are also a number of contrasts between "Western" and "non-Western" populations "on measures such as social behaviour, self-concepts, self-esteem, agency (a sense of having free choice), conformity, patterns of reasoning (holistic v. analytic), and morality" ( http://neuroanthropology.net/2010/07/10/we-agree-its-weird-but-is-it-weird-enough/ ; http://journals.cambridge.org/action/displayAbstract?fromPage=online&aid=7825833 ). Many supposedly "universal" psychological results may actually only be "universal" to US college students.

In any field, quantitative studies require intricate knowledge about statistics and a lot of care to get right. Academics are pressed to publish at a fast pace, and the reviewers of scientific journals often have relatively low standards. The net result is that researchers have neither the time nor the incentive to conduct their research with the necessary care.

Qualitative research doesn't suffer from this problem, but it does suffer from the obvious problems of limited sample groups and difficult-to-generalize findings. Many social sciences that are heavily based on qualitative methods outright state that carrying out an objective analysis, where the researcher's personal attributes and opinions don't influence the results, is not just difficult but impossible in principle. At least with the quantitative sciences, it may be possible to convincingly prove results wrong. With the qualitative sciences, there is much more wiggle room.

And there's plenty of room for the wiggling to do a lot of damage even in the quantitative sciences. From the previous article on John Ioannidis:

"Simply put, if you’re attracted to ideas that have a good chance of being wrong, and if you’re motivated to prove them right, and if you have a little wiggle room in how you assemble the evidence, you’ll probably succeed in proving wrong theories right. His model predicted, in different fields of medical research, rates of wrongness roughly corresponding to the observed rates at which findings were later convincingly refuted: 80 percent of non-randomized studies (by far the most common type) turn out to be wrong, as do 25 percent of supposedly gold-standard randomized trials, and as much as 10 percent of the platinum-standard large randomized trials. The article spelled out his belief that researchers were frequently manipulating data analyses, chasing career-advancing findings rather than good science, and even using the peer-review process, in which journals ask researchers to help decide which studies to publish, to suppress opposing views. "You can question some of the details of John’s calculations, but it’s hard to argue that the essential ideas aren’t absolutely correct," says Doug Altman, an Oxford University researcher who directs the Centre for Statistics in Medicine."
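The argument above is ultimately just arithmetic about conditional probabilities. The sketch below is my own simplified toy version of that logic, not Ioannidis's actual model, and the parameter names are my own: it computes what fraction of "positive" findings are true, given how plausible the tested hypotheses were to begin with, how powerful the studies are, and how much analytic wiggle room researchers allow themselves.

```python
def ppv(prior, power, alpha, bias=0.0):
    """Fraction of 'positive' findings that are actually true.

    prior: probability that a tested hypothesis is true
    power: probability that a real effect is detected
    alpha: false-positive rate of the significance test
    bias:  fraction of would-be negative results that analytic
           'wiggle room' turns into positives anyway
    """
    true_pos = prior * (power + bias * (1.0 - power))
    false_pos = (1.0 - prior) * (alpha + bias * (1.0 - alpha))
    return true_pos / (true_pos + false_pos)

# Well-powered tests of plausible hypotheses mostly yield truths:
print(round(ppv(prior=0.5, power=0.8, alpha=0.05), 2))  # -> 0.94

# Underpowered tests of long shots, plus a little wiggle room in the
# analysis, and most "discoveries" come out false:
print(round(ppv(prior=0.1, power=0.4, alpha=0.05, bias=0.2), 2))  # -> 0.19
```

Note that nobody needs to commit fraud for the second number to happen: testing unlikely ideas with flexible methods is enough.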

Of course, none of this is to say that science isn't good for anything. I'm typing this on a computer that obviously works, in an apartment built by human hands, surrounded by countless technological widgets. The more closely related a science is to a branch of engineering, the more likely it is to be basically right: its ideas are constantly and rigorously being tested in a way that actually incentivizes being right, not just publishing impressive-looking studies. The farther a science is from engineering and from having practical applications that can be tested at once, the more likely it is to be just full of nonsense.

Take governmental institutions. Academia, at least, still has some incentive to seek the truth. Politicians, meanwhile, have an incentive to look good to voters, who by and large do not care about the truth. The issues that citizens care most strongly about tend to be the issues that they know the least about, and often they do not even know the political agendas of the parties or politicians that they vote for. For the average voter, who has very little influence on actual decisions but who can take a lot of pleasure from believing whatever is pleasant to believe, remaining ignorant is actually a rational course of action. Statements that sound superficially good or that appeal to the prejudices of a certain segment of the population are much more important to politicians than actually caring about the truth. Often, even considering a politically unpopular opinion to be possibly true is thought to be immoral and suggestive of a suspicious character.

And various governmental institutions, from government-funded academics to supposedly neutral public agencies, are all susceptible to pressure from above to sound good and produce pleasing results. The official recommendations of any number of government agencies can be the result of political compromise as much as anything else, and researchers are routinely hired to act as the politicians' warriors ( http://www.overcomingbias.com/2011/01/academics-as-warriors.html ). Even seemingly apolitical institutions like schools and the police may fall victim to the pressure to produce good results, and start reporting statistics and results that do not reflect reality. (For a particularly good illustration of this, watch all five seasons of The Wire, possibly the best television series ever made.)

Take the media. Is there any reason to expect the media to do much better? I don't see why there would be. Compared to academics, journalists are under even more time pressure to produce articles, have even fewer rigorous controls on truthfulness, and have even more of an incentive to focus on big eye-catching headlines. Even for journalists who actually follow strict codes of ethics, the incentives for sloppy work are strong. Anybody with expertise in pretty much any field that's been reported on will know that what's written often bears very little resemblance to reality.

Some time ago, there were big claims about how Twitter was powering revolutions and protests in a number of authoritarian countries. Many of us have probably accepted those claims as fact. But how true are they, really?

"In the Iranian case, meanwhile, the people tweeting about the demonstrations were almost all in the West. 'It is time to get Twitter’s role in the events in Iran right,' Golnaz Esfandiari wrote, this past summer, in Foreign Policy. 'Simply put: There was no Twitter Revolution inside Iran.' The cadre of prominent bloggers, like Andrew Sullivan, who championed the role of social media in Iran, Esfandiari continued, misunderstood the situation. 'Western journalists who couldn’t reach - or didn’t bother reaching? - people on the ground in Iran simply scrolled through the English-language tweets posted with tag #iranelection,' she wrote. 'Through it all, no one seemed to wonder why people trying to coordinate protests in Iran would be writing in any language other than Farsi.'" ( http://www.newyorker.com/reporting/2010/10/04/101004fa_fact_gladwell )

Take the Internet. Online, we are increasingly living in filter bubbles ( http://www.ted.com/talks/eli_pariser_beware_online_filter_bubbles.html ; https://en.wikipedia.org/wiki/Filter_bubble ), where the services we use attempt to personalize the information we read to what they think we want to see. Maybe you've specifically gone to the effort of including both liberals and conservatives as your Facebook friends, as you want to be exposed to the opinions of both. But if you predominantly click on the liberal links, then eventually the conservative updates will be invisibly edited out by Facebook's algorithms, and you will only see liberal updates in your feed. Various sites are increasingly using personalization techniques, trying to only offer us content they think we want to see - which is often the content most likely to appeal to our existing opinions.
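The feedback loop described above is easy to simulate. Here's a toy sketch — entirely my own invention, not any real site's actual ranking algorithm — of a feed that shows sources in proportion to your past clicks:

```python
import random

random.seed(0)  # fixed seed for reproducibility

SOURCES = ["liberal", "conservative"]
clicks = {source: 1 for source in SOURCES}           # smoothed click history
click_prob = {"liberal": 0.8, "conservative": 0.2}   # your actual habits

def build_feed(n=10):
    """Show each source in proportion to how often you've clicked on it."""
    total = sum(clicks.values())
    weights = [clicks[source] / total for source in SOURCES]
    return random.choices(SOURCES, weights=weights, k=n)

for day in range(50):
    for item in build_feed():
        if random.random() < click_prob[item]:
            clicks[item] += 1

total = sum(clicks.values())
print({source: round(clicks[source] / total, 2) for source in SOURCES})
# The conservative share of the feed shrinks steadily, even though
# you never asked to stop seeing it.
```

Note that in this sketch you explicitly subscribed to both sources; the narrowing comes purely from the ranking rule reinforcing your own click habits.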

Take yourself. Depressed by all of the above? Think you should only trust yourself? Unfortunately, that might very well produce even worse results than trusting science. We are systematically biased to favorably misremember events, to seek only evidence confirming our beliefs, and to interpret everything in our own favor. Our conscious minds may not have evolved to look for the truth at all, but to choose, out of the various defensible positions, the one that most favors ourselves. ( http://lesswrong.com/lw/8gv/the_curse_of_identity/ ; http://lesswrong.com/tag/whyeveryonehypocrite ) Our minds run on corrupted hardware: even as we think we are trying to impartially look for the truth, other parts of our brains are working hard to give us that impression while hiding the actual biased thought processes we engage in. We have conscious access to only a small part of our thought processes, and have to rely on vast amounts of information prepared by cognitive mechanisms whose accuracy we have no way of verifying directly. Science, at least, has _some_ safeguards in place that attempt to counter such mechanisms - in most cases, we will still do best by relying on expert opinion.

"But if you plan to mostly ignore the experts and base your beliefs on your own analysis, you need to not only assume that ideological bias has so polluted the experts as to make them nearly worthless, but you also need to assume that you are mostly immune from such problems!" ( http://www.overcomingbias.com/2011/02/against-diy-academics.html )

----

Most of the things I know are probably wrong: with each thing I think I learn, I might be learning falsehoods instead. Because the criteria for an idea catching on and the criteria for an idea being true are different, the ideas that a person is most likely to hear about are also the ones more likely to be wrong. Thus most of the things I run across in my life (and accept as facts) will be wrong.

And of course, I'm quite aware of the irony in that I have here appealed to a number of sources, all of which might very well be wrong. I hope I'm wrong about being wrong, but I can't count on it.

(Essay also cross-posted to Google Plus.)
Lately, the excellent blog Overcoming Bias has had discussion about the rationality and psychology of disagreement. I admit that I don't entirely understand everything that's discussed there - apparently in 1976, Robert Aumann (a later Nobel prize winner) published a paper which says roughly that, in theory, people who have the same information and who talk to each other long enough cannot agree to disagree. This has led to the release of a large number of subsequent papers, some of which discuss the issue in rather abstract terms and long mathematical proofs, considering mostly perfectly rational creatures, leaving their exact relevance to the field of human thought a bit unclear. Nevertheless, the bits that I've gleaned from some of the blog posts discussing this subject have been most interesting.

Let's discuss the issue in plain English, without invoking any math or formal logic. The principle is simple, almost obvious when you think about it. Let's assume that we have two people who have a shared goal, and who disagree about how to best reach that goal. Since they disagree, they must have different information - the information person A has says that approach X is better, while the information person B has says that approach Y is better. Now, when they sit down to discuss the issue, they start sharing the information they have with each other, until finally both know exactly the same things. Since they both now know the same things, logically they should also draw the same conclusions. Thus, assuming they're perfectly rational and have enough time to discuss the issue, in the end they cannot agree to disagree about it. They may have reasons to interpret the same information differently, but if so, those reasons are themselves information that hasn't yet been shared.
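Here's a toy illustration of that process — my own example, not anything from the actual papers. Two Bayesians share a prior over a coin's bias but have each seen different private flips; while their evidence differs, their posteriors differ, but once all the evidence is pooled, their conclusions necessarily coincide:

```python
from math import prod

# Two candidate hypotheses about a coin's bias, with equal prior odds.
HYPOTHESES = {0.3: 0.5, 0.7: 0.5}

def posterior(flips):
    """P(bias = 0.7) after observing the given flips (1 = heads)."""
    weight = {
        bias: prior * prod(bias if f else 1 - bias for f in flips)
        for bias, prior in HYPOTHESES.items()
    }
    return weight[0.7] / sum(weight.values())

a_flips = [1, 1, 1, 0]  # A's private evidence: mostly heads
b_flips = [0, 0, 0, 1]  # B's private evidence: mostly tails

# Before talking, they disagree:
print(round(posterior(a_flips), 2), round(posterior(b_flips), 2))  # -> 0.84 0.16

# After sharing all their evidence, they can't:
print(round(posterior(a_flips + b_flips), 2))  # -> 0.5
```

The pooled evidence here happens to be four heads and four tails, so both end up back at even odds; the point is only that whatever the shared data says, they say the same thing about it.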

Now, of course we all know that humans aren't perfectly rational creatures. Still, this is a subject that I've thought about every now and then - in just about every field of human behavior, huge disagreements persist about things that have been debated back and forth for ages, with plenty of experimental evidence to go around (consider, for one, the divisions between the political right and the left). Even though I know that people don't really think rationally about most things, this still strikes me as somewhat strange - typically both sides have plenty of really smart people arguing their cases, and often people devote practically their entire lives to the study of these things. There are no doubt plenty of folks who are just biased beyond belief, but nevertheless there should be enough people who really want to find out the truth that these things would get resolved relatively quickly. So what causes might there be for all of these persisting disagreements?

* People might actually have different goals. By Hume's Guillotine, moral rules cannot be directly derived from physical facts. One person can believe that positive rights are inherently the most important things to achieve, while another believes that negative rights are more important. (This is what I suspect is behind a lot of the right-left dispute.) One person can believe that maximizing humanity's happiness is the most important thing, while another can believe that living a pure and sinless life in the eyes of God is. For as long as these underlying moral beliefs are axiomatic and not based on any other information (and some beliefs must be, if a person is to have any at all), they cannot be challenged by learning new things.
* People might treat the same information differently based on extra-informational factors. For instance, they might have a genetic disposition towards optimism or pessimism. Also, the human mind is built so that if one learns of something that conflicts with something one already knows, one is more likely to discredit the new information than the old. Thus, simply changing the order in which information is received may alter how it is processed, even if one ultimately has all the same information as somebody else. Somebody who is first trained as an engineer and then majors in the humanities will view both of their educations entirely differently than somebody who first gets their humanities degree and then goes to study engineering.
* The issue may be too complex to be comprehended fully, or just simply so hard to understand that human minds can never hope to fully grasp it. Of course, in this situation, the most rational choice would be to just accept that it's impossible to really know or that more research into the matter is needed, not to simply cling to the side you'd wish to win more.
* The sides discussing the matter may both have so much information that they cannot hope to ever share all of it in the limited time that they have, or sharing it all isn't worth the time. This argument works for some issues, but it's more dubious for things like politics that are extensively debated - after all, if you're politically on the left at the age of 20, it's not very plausible that you couldn't communicate all the information that's led you to this stance, even if you spent the rest of your life talking about it.
* Different ways of communicating information vary in their effectiveness, and some things can't be communicated with speech alone. You can spend a whole day being told about the economist's mindset by a professor with PhDs in both economics and pedagogy, but even then you still won't internalize it as well as somebody who has spent five years at university studying economics. Also, people tend to give more weight to things they have experienced themselves than to things they have merely heard about from others.
* One does not always know what one knows. You can have beliefs that are well-founded in facts, but when asked to explain them, you no longer remember the original evidence that convinced you. Various incidents where you've seen certain behavior can compress themselves in your mind until it's obvious to you that something works a certain way, and when somebody disagrees, you think they're being silly without being able to prove yourself right.
* One can be affected by a large amount of other biases. Just look at Wikipedia's list of cognitive biases. It's depressingly long.
* Finally, one might simply not care about the truth, and be uninterested in encountering conflicting points of view. An interesting question is how often this might actually be a good thing - there is a concept known as rational irrationality (HTML version), which basically states that in many situations the benefit people would derive from knowing the truth is practically nonexistent (believing or not believing in evolution doesn't directly influence your life in any way, regardless of which way you swing). Thus, spending even a minimal amount of effort trying to find out the truth might be pointless - irrational, even. And sometimes unfounded beliefs might even benefit you (religion is a good example).

Let's assume, though, that you are a seeker of the truth - if not entirely, then at least in part. You want to know how things are in reality. What lessons should we draw from all of this, and how should you act? Here are my personal suggestions, though I do not claim to follow them all to the letter myself yet. Still, they are things to strive for.

* Study things from as many points of view as possible, and try to understand as many models of thought as you can. This way, you can better understand the behavior of other people, and how people can think in ways that seem incomprehensible to you. If you're an atheist, talk to religious people until you understand them well enough not to consider them silly; if you're religious, talk to atheists until you understand them in the same way. Get at least passingly familiar with all the existing genres of fiction, and especially study science fiction - the good sort of science fiction, the kind that isn't just "laser guns for revolvers and spaceships for horses" but instead builds on premises and settings that are as bizarre and unfamiliar to us as possible. At the same time, beware the fallacy of generalization from fictional evidence, and always keep in mind that you are reading fiction, not scientific studies. Fiction is just stuff that someone has invented. It doesn't prove that things would go that way in real life, and you should be very cautious about letting the images painted in fictional works color your concept of what the world is really like.
* Become interdisciplinary. Do for science what you did for fiction, for you never know what branch of human thought might grant you the knowledge you need to understand a phenomenon. Where fiction could lead you to mistaken conclusions, science will give you the methods you need to truly understand the world - even the methods that might feel counterintuitive to someone not skilled in them. Study mathematics, economics, history, psychology, physics, everything.
* Recognize your fallibility. Realize that in a quest for the truth, your own biases become your worst enemy. To defeat your enemy you must understand it, so set forth to study it. Follow blogs like Overcoming Bias. Read up on the field of heuristics and biases - the book Judgment Under Uncertainty: Heuristics and Biases comes highly recommended, and though I haven't read it yet, I plan to do so soon. Find the time to peruse articles like Wikipedia's list of cognitive biases and Cognitive Biases Potentially Affecting Judgment of Global Risks. In your interdisciplinary studies, especially emphasize the sciences that help you understand and combat your biases, and the ones that allow you to think clearly - in his Twelve Virtues of Rationality (which is required reading for you), Eliezer Yudkowsky recommends evolutionary psychology, heuristics and biases, social psychology, probability theory and decision theory. Read texts that are obviously biased, so that you become better at spotting the milder biases. Bookmark lists of debating fallacies. Practice the Art of Rationality in whatever ways you can.
* Actively adjust your thoughts and hypotheses based on information you have about yourself and others. In Uncommon Priors Require Origin Disputes (it has some formal logic, but you can just read the plain English summaries and skip the formal bits - that's what I did), Robin Hanson discusses the example of two astronomers with differing opinions about the size of the universe. He notes that they cannot base their difference of opinion on genetic differences influencing optimism and pessimism, because the laws of heredity work independently of the size of the universe - a person inheriting a gene for optimism does not alter the size of the universe (or vice versa) - so each must seek to remove the effect that gene has on his thinking. Find out which influences on your thought are correlated with a better understanding of the world, and eliminate the others. Your having an IQ different from others is relevant to whether or not your hypotheses are accurate, but your having been born in a geographical location where a certain point of view tends to be favored is not.
* Remember how, in my last point, I said that I skipped the formal bits of a paper and just read the summaries? Don't do that. Strive for a technical understanding of all things, as is explained in A Technical Explanation of Technical Explanation. If you know that "everything is waves" but do not understand the mathematical and physical concepts behind that sentence, then you do not really know anything but a fancy phrase. You cannot use it to make valid predictions, and if you can't use it to make predictions, it's useless to you. Strive for an ability to make testable predictions, not an ability to explain anything you encounter.
* Discuss the same subjects repeatedly, even with the same people. If you are losing a debate but still cannot admit you're wrong, ask for time to ponder it. Decide whether your hesitation came from being too caught up in defending a position, in which case you only need time to get over it and accept your opponent's arguments, or from there being more relevant information in your mind that you couldn't recall at the moment, in which case you need time for your subconscious to bring it to mind. Be very sceptical of yourself if you disagree with something but cannot justify your disagreement even with time - you might be dealing with bias instead of forgotten knowledge. If questioned, be prepared to double-check your intuitions about what is obvious against scientific studies, and be ready to discard those intuitions if necessary.
* Avoid certainty, and of all people, be the harshest on yourself. 80% of drivers think they belong in the top 30% of all drivers, and even people aware of cognitive biases often seem to think those biases don't apply to them. People tend to find in ambiguous texts the points that support their opinions, while discounting the ones that disagree with them. Question yourself, and recognize that if you want your theories to find the truth, you can never be the only one to evaluate them. Subject them to criticism and peer review, and find the people with the most conflicting views to look them over. Never think that you have found the final truth, for when you do, you stop looking for other explanations. Remember the scientists behind the Castle Bravo nuclear test, whose mistake was believing their calculations complete when they had in fact overlooked a relevant reaction, so the test yielded far more than predicted. Consider impossible scenarios. Meditate on the mantra "nothing is impossible, only extremely unlikely". Think of the world in terms of probabilities, not certainties.
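The advice above to think in probabilities rather than certainties can be made concrete with Bayes' rule, which describes how a degree of belief should shift as evidence arrives. The following is a minimal sketch with made-up numbers, purely for illustration - it is not from the original text, and the specific probabilities are arbitrary:

```python
# A minimal sketch of "thinking in probabilities": updating a degree
# of belief with Bayes' rule instead of flipping between certainty
# and doubt. All numbers here are illustrative, not from any study.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(hypothesis | evidence) given a prior and two likelihoods."""
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1 - prior) * p_evidence_if_false
    return numerator / denominator

# Start fairly confident (90%) in some hypothesis, then repeatedly
# observe evidence that is twice as likely if the hypothesis is false.
belief = 0.9
for _ in range(3):
    belief = bayes_update(belief, p_evidence_if_true=0.3,
                          p_evidence_if_false=0.6)
    print(round(belief, 3))  # prints 0.818, then 0.692, then 0.529
```

Note how confidence erodes gradually rather than collapsing to zero: three pieces of moderately unfavorable evidence take the belief from 90% to roughly even odds, which is exactly the kind of graded revision the bullet points above recommend.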
