xuenay: (Default)
Yes, I know that I'm way behind on my reports: session III was over a month ago. Better late than never.

I've been thinking about global workspace theory on and off in the context of meditation. Haven't come up with anything particularly insightful, basically just a repetition of the argument in the Dietrich paper: in meditation, attentional resources are used to actively amplify a particular event such as a mantra until it becomes the exclusive content in the working memory buffer. This intentional, concentrated effort selectively disengages all other cognitive capacities of the prefrontal cortex.

Put into GWT terminology, normally sensory systems and "thought systems" within our brain generate a number of (bottom-up) inputs that compete for control of the global neuronal workspace (GNW), and some process of top-down attention picks the inputs that get strengthened until they dominate the neuronal workspace. In meditation, the practitioner seems to train their attentional network into only choosing a specific set of stimuli (e.g. their breath, a mantra, the sensations of their body, etc.) and ignoring all the others. As they concentrate on these stimuli, those get transmitted into all the brain regions that receive input from the GNW. Since this is an abnormal input that most of the systems can't do anything with, they gradually get turned off - especially since it doesn't matter what output they produce in response, as the successful meditation practitioner pays no attention to it. Of course, it will take a lot of practice for a practitioner to get this far, since the brain is practically built to "get sidetracked" from meditation and concentrate on something more important.

It's interesting to ask why this would lead to perceptual changes, such as an increased tolerance for pain. A straightforward guess would be that if the GW/GNW gets taken over by a very simple stimulus, and that stimulus gets broadcast into all the different systems in the brain, then there are systems related to learning that can't help but analyze the stimulus. If a meditation practitioner consciously begins to break a sensation into smaller and smaller components, or begins to note and name individual sensations, then the implicit learning systems will pick up on this and learn how to do it better. Also, as the meditator forces his brain to analyze very simple inputs, the brain allocates disproportionate computational resources into analyzing them and begins to find in them increasingly subtle hidden details - which the meditator then dismisses, forcing his brain to go to even more extreme lengths to find something. Over time and with enough practice, he learns to feel and notice these subtle sensations even when not meditating.

Of course, it's a bit of a misnomer to talk about the brain "finding" subtler sensations, since those sensations are themselves also generated by the brain. Rather, what's happening is that there is a hierarchical process in which simpler inputs get increasingly complex layers of interpretation applied to them, and meditation strips away those layers of interpretation. Thus information that's usually thrown away during earlier processing stages becomes revealed and accessible to the conscious mind. That'd be my guess, anyway. It's also interesting to note that savant abilities are also hypothesized to be created via having access to lower-level brain processing, but so far I haven't heard of anyone becoming a genius savant through meditation, even if it should be theoretically possible.

As I noted the last time, there's still the puzzle of how the attentional networks find out about an input that might be worth promoting into the GNW, if the GNW is already dominated by another input. A hypothesis that might make sense is that we're actually rapidly cycling a lot of content into and out of consciousness, and the attentional networks decide which stuff gets the most "clock cycles" (here's an obvious analogy to operating systems and multiprogramming). E.g. this text gets processed within the GNW, then I hear a sound coming from outside and that input pushes its way to the GNW for a brief moment, and then an attentional system decides that it isn't important and returns to the task of writing this text. While the outside noise has pushed the text out of the GNW, it's still locally active in the brain regions that were most heavily involved in processing it, and the attentional network can home in on the activation in those regions and strengthen it again.
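To make the multiprogramming analogy concrete, here's a toy priority scheduler sketch. Everything in it (the "contents", their salience scores, the decay-by-one rule) is my own invention for illustration; it's a caricature of the analogy, not a model from any of the papers:

```python
import heapq

def schedule(contents, total_slices):
    """Toy scheduler: contents compete for 'clock cycles' in a single
    workspace, the way inputs might compete for the GNW. Higher salience
    wins more slices, but each turn in the workspace slightly lowers a
    content's effective salience so that other inputs get brief turns."""
    # heapq is a min-heap, so negate salience to pop the strongest first.
    queue = [(-salience, name) for name, salience in contents.items()]
    heapq.heapify(queue)
    slices_used = {name: 0 for name in contents}
    trace = []
    for _ in range(total_slices):
        neg_salience, name = heapq.heappop(queue)
        trace.append(name)            # this content occupies the workspace
        slices_used[name] += 1
        # Re-queue with decayed salience so others can push their way in.
        heapq.heappush(queue, (neg_salience + 1, name))
    return trace, slices_used

trace, used = schedule({"writing": 5, "outside-noise": 2, "hunger": 1}, 8)
# "writing" dominates, but "outside-noise" and "hunger" each get brief slices.
```

The point of the sketch is just the qualitative behavior: the dominant content keeps most of the cycles, while weaker inputs still briefly enter the workspace and can be dismissed, which is roughly the cycling story above.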

Alternatively, this whole hypothesis of swapping stuff in and out might be unnecessarily complicated, and there could just be cross-region communication that isn't conscious. There are a number of results saying that cross-modality integration of sense data can happen without consciousness. E.g. in ventriloquism we see a talking puppet mouth and hear sound coming from the puppeteer's closed mouth. Somehow this conflict gets resolved into us hearing the sound as if it were coming from the puppet's mouth, without us being consciously aware of the process. Also the results of the paper below, which suggest that attention and consciousness can both occur without each other, would support that hypothesis.

None of that actually has anything to do with the third session, though - it's just stuff that occurred to me while thinking about some of the seminar papers in general. So let's get to the actual topic...

----

The third Neuroinformatics presentation covered Giulio Tononi & Christof Koch (2008) The Neural Correlates of Consciousness: An Update. Annals of the New York Academy of Sciences. The paper was pretty packed with information, and there was a lot of interesting stuff mentioned. I won't try to cover all of it, but will rather concentrate on some of the most interesting bits.

In particular, the previous Neuroinformatics papers seemed to come close to equating consciousness and attention. If input from our senses (or from internal sources like e.g. memory) becomes conscious when attentional processes choose to promote it to consciousness, does that mean that we are conscious of the things that we pay attention to? Subjectively, I'm often conscious of experiences that I try to direct my attention away from, though that might just mean that a top-down attentional mechanism is competing with a bottom-up one. Introspection is notoriously unreliable, anyway.

Tononi & Koch argue that the two are not the same, and there can be both attention without consciousness and consciousness without attention. Let's first look at attention without consciousness. Among the studies that they cite, Naccache et al. (2002) is probably the easiest to explain.

The experimental subjects were shown ("target") numbers ranging from 1 to 9, and had to say whether the number they saw was smaller or larger than 5. (They were not shown any fives.) Unknown to them, each number was preceded by another ("priming") number, hidden by a geometric masking shape. In some versions of the experiment, the subjects knew when they were going to see the number, and could pay attention around that time. In other versions, they did not, and could not focus their attention specifically at the right window in time. When the subjects were paying attention at the right time (and therefore also paying attention to the priming number), there was what's called a priming effect. Their reaction times were faster when the prime number was congruent with the target number, i.e. either both were smaller than 5 or both were larger. When the numbers were incongruent, the reaction times were slower. When the subjects couldn't focus their attention on the right time period, the priming effect didn't occur. Tononi & Koch interpret these results to mean that there can be attention without consciousness: the priming numbers were always seen too quickly to enter conscious awareness, but they caused a priming effect depending on whether or not the subjects paid attention to them.

The opposite case is consciousness without attention. There are experiments in which the subjects are made to focus their attention on the middle of their visual field, and something else is then briefly flashed in their peripheral field of vision. Subjects are often capable of reporting on the contents of the peripheral image and performing some quite complex discrimination tasks. They can tell male faces from female ones, or distinguish between famous and non-famous people, even though the image was (probably) flashed too briefly for top-down attention to kick in. At the same time, they cannot perform some much easier tasks, such as discriminating a rotated letter "L" from a rotated letter "T". So at least some kinds of consciousness-requiring tasks seem to be possible in the absence of directed attention, while others aren't.

Tononi & Koch conclude this section by summarizing their view of the differences between attention and consciousness, and by citing Baars and saying something akin to his Global Workspace Theory:

Attention is a set of mechanisms whereby the brain selects a subset of the incoming sensory information for higher level processing, while the nonattended portions of the input are analyzed at a lower band width. For example, in primates, about one million fibers leave each eye and carry on the order of one megabyte per second of raw information. One way to deal with this deluge of data is to select a small fraction and process this reduced input in real time, while the nonattended data suffer from benign neglect. Attention can be directed by bottom-up, exogenous cues or by top-down endogenous features and can be applied to a spatially restricted part of the image (focal, spotlight of attention), an attribute (e.g., all red objects), or to an entire object. By contrast, consciousness appears to be involved in providing a kind of “executive summary” of the current situation that is useful for decision making, planning, and learning (Baars).


As has often been the case lately, I wonder how much weight I should actually put on these results. A study that has not been replicated is little better than an anecdote, and while Tononi & Koch do cite several studies with similar results, there have been previous cases where the initial replications all seemed to support a theory but then stopped doing so. So for all that I know, everything in the paper (and the previous papers, of course) might turn out to be wrong within a few years. Still, it's the best that we have so far.

Like some of the GWT/GNW papers, this one also suggested that non-dreaming sleep involves reduced connectivity between cortical regions, with the regions communicating in a more local manner. That's also interesting.
----
Today was the second session of the Neuroinformatics 4 course that I'm taking. Each participant has been assigned some paper from this list, and we're all supposed to give a presentation summarizing the paper. We're also supposed to write a diary about each presentation and hand it in at the end, which is the reason why I'm typing this entry. I figure that if I'm going to keep a diary about this, I might as well make it public.

Session I: Global Workspace Theory. I held the first presentation, which covered Global Workspace Theory as explained by Baars (2002, 2004). You can read about it in those papers, but the general idea of GWT is that what we experience as conscious thought is actually information that's being processed in a "global workspace", through which various parts of the brain communicate with each other.

Suppose that you see in front of you a delicious pie. Some image-processing system in your brain takes that information, processes it, and sends that information to the global workspace. Now some attentional system or something somehow (insert energetic waving of hands) decides whether that stimulus is something that you should become consciously aware of. If it is, then that stimulus becomes the active content of the global workspace, and information about it is broadcast to all the other systems that are connected to the global workspace. Our conscious thoughts are that information which is represented in the global workspace.
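As a sketch of that compete-then-broadcast step, here's a toy caricature in code. The module names, the salience scores, and the winner-take-all selection rule are all my own placeholders (the papers are vague on exactly this point), so this illustrates the shape of the idea rather than Baars's actual proposal:

```python
# Toy Global Workspace caricature: specialist modules submit candidate
# contents with a salience score; an attention step picks one winner;
# the winner is broadcast to every subscribed module.

class Workspace:
    def __init__(self):
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def compete_and_broadcast(self, candidates):
        # candidates: {content: salience}. Winner-take-all selection
        # stands in for the (hand-wavy) attentional decision.
        winner = max(candidates, key=candidates.get)
        for notify in self.subscribers:
            notify(winner)            # global broadcast = "conscious" content
        return winner

ws = Workspace()
received = []
ws.subscribe(lambda content: received.append(("memory", content)))
ws.subscribe(lambda content: received.append(("speech", content)))

winner = ws.compete_and_broadcast({"pie": 0.9, "background-hum": 0.2})
```

Note that even this trivial sketch has to dodge the hard question below: it broadcasts an opaque label ("pie") and simply assumes every module can interpret it.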

There exists some very nice experimental work which supports this theory. For instance, Dehaene (2001) showed experimental subjects various words for a very short while (29 milliseconds each). Then, for the next 71 milliseconds, the subjects either saw a blank screen (the "visible" condition) or a geometric shape (the "masking" condition). Previous research had shown that in such an experiment, the subjects will report seeing the "visible" words and can remember what they said, while they will fail to notice the "masked" words. That was also the case here. In addition, fMRI scans seemed to show that the "visible" words caused considerably wider activation in the brain than the "masked" words, which mainly just produced minor activation in areas related to visual processing. The GWT interpretation of these results would be that the "visible" words made their way to the global workspace and activated it. For the "masked" words there was no time for that to happen, since the sight of the masking shape "overwrote" the contents of the visual system before the sight of the word had had the time to activate the global workspace.

That's all fine and good, but Baars's papers were rather vague on a number of details, like "how is this implemented in practice"? If information is represented in the global workspace, what does that actually mean? Is there a single representation of the concept of a pie in the global workspace, which all the systems manipulate together? Or is information in the global workspace copied to all of the systems, so that they are all manipulating their own local copies and somehow synchronizing their changes through the global workspace? How can an abstract concept like "pie" be represented in such a way that systems as diverse as those for visual processing, motor control, memory, and the generation of speech (say) all understand it?

Session II: Global Neuronal Workspace. Today's presentation attempted to be a little more specific. Dehaene (2011) discusses the Global Neuronal Workspace model, based on Baars's Global Workspace model.

The main thing that I got out of today's presentation was the idea of the brain being divisible into two parts. The processing network is a network of tightly integrated, specialized processing units that mostly carry out non-conscious computation. For instance, early processing stages of the visual system, carrying out things like edge detection, would be a part of the processing network. The "processors" of the processing network typically have "highly specific local or medium range connections" - in other words, the processors in a specific region mostly talk with their close neighbors and nobody else.

The various parts of the processing network are connected by the Global Neuronal Workspace, a set of cortical neurons with long-range axons. The impression I got was that this is something akin to a set of highways between cities, or different branches of a post office. Or planets (processing network areas) joined together by a network of Hyperpulse Generators (the Global Neuronal Workspace). You get the idea. I believe that it's some sort of a small-world network.
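The "mostly local wiring plus a few long-range links" picture is essentially a Watts-Strogatz-style small-world construction, which can be sketched in a few lines (the node count, neighborhood size, and rewiring probability are arbitrary toy parameters, not anything anatomical):

```python
import random

def small_world(n=20, k=4, p=0.1, seed=0):
    """Watts-Strogatz-style graph: a ring lattice of local connections
    (the tightly integrated 'processing network') plus a few randomly
    rewired long-range edges (a stand-in for long-axon GNW neurons)."""
    rng = random.Random(seed)
    edges = set()
    for i in range(n):                      # local ring lattice:
        for j in range(1, k // 2 + 1):      # each node links to k nearest
            edges.add(frozenset((i, (i + j) % n)))
    long_range = set()
    for edge in list(edges):                # rewire each edge with prob. p
        if rng.random() < p:
            i = min(edge)
            new = rng.randrange(n)
            if new != i and frozenset((i, new)) not in edges:
                edges.discard(edge)
                edges.add(frozenset((i, new)))
                long_range.add(frozenset((i, new)))
    return edges, long_range

edges, long_range = small_world()
```

The interesting property of such graphs is that a handful of rewired long-range edges drastically shortens the average path between any two regions while leaving the network overwhelmingly local, which is roughly why this wiring scheme is an appealing picture for a brain-wide workspace.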

Note that contrary to intuition and folk psychology (but consistently with the hierarchical consciousness hypothesis), this means that there is no single brain center where conscious information is gathered and combined. Instead, as the paper states, there is "a brain-scale process of conscious synthesis achieved when multiple processors converge to a coherent metastable state". Which basically means that consciousness is created by various parts of the brain interacting and exchanging information with each other.

Another claim of GNW is that sensory information is basically processed in a two-stage manner. First, a sensory stimulus causes activation in the sensory regions and begins climbing up the processor hierarchy. Eventually it reaches a stage where it may somehow be selected to be consciously represented, with the criteria being "its adequacy to current goals and attention state" (more waving of hands). If it does, it becomes represented in the GNW. It "is amplified in a top-down manner and becomes maintained by sustained activity of a fraction of GNW neurons": this might re-activate the stimulus signal in the sensory regions, where its activation might have already been declining. Something akin to this model has apparently been verified in a number of computer simulations and brain imaging studies.
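A minimal numerical caricature of that two-stage story might look like the following. The decay constant, the gain factor, and the cap are made-up parameters of mine, chosen only so the qualitative behavior matches the description: an unselected stimulus trace fades, while a selected one is sustained by top-down amplification:

```python
def simulate(initial=1.0, decay=0.7, gain=1.5, steps=6, selected=False):
    """Stage 1: bottom-up activation passively decays each time step.
    Stage 2 (if selected): top-down GNW amplification re-boosts it,
    capped at the initial level to keep the toy dynamics stable."""
    a = initial
    trace = [a]
    for _ in range(steps):
        a *= decay                        # passive decay in sensory regions
        if selected:
            a = min(initial, a * gain)    # sustained top-down amplification
        trace.append(a)
    return trace

unselected = simulate(selected=False)   # activation fades away
amplified = simulate(selected=True)     # activation is maintained
```

With these particular numbers, the unselected trace decays to near zero within a few steps while the selected trace is held at its initial level, which is the "maintained by sustained activity" behavior the paper describes, and nothing more.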

Which sounds interesting and promising, though this still leaves a number of questions unclear. For instance, the paper claims that only one thing at a time can be represented in the GNW. But apparently the thing that gets represented in the GNW is partially selected by conscious attention, and the paper that I previously posted about placed the attentional network in the prefrontal cortex (i.e. not in the entire brain). So doesn't the content in the sensory regions then need to first be delivered to the attentional networks (via the GNW) so that the attentional networks can decide whether that content should be put into the GNW? Either there's something wrong with this model, or I'm not understanding it correctly. I should probably dig into the references. And again, there's the question of just what kind of information is actually put into the GNW in such a manner that all of the different parts of the brain can understand it.

(Yes, I realize that my confusion may seem incongruent with the fact that I just co-authored a paper where we said that we "already have a fairly good understanding on how the cerebral cortex processes information and gives rise to the attentional processes underlying consciousness". My co-author's words, not mine: he was the neuroscience expert on that paper. I should probably ask him when I get the chance.)
