
Whatever next? Predictive brains, situated agents, and the future of cognitive science (Andy Clark 2013, Behavioral and Brain Sciences) is an interesting paper on the computational architecture of the brain. It argues that a large part of the brain is made up of a hierarchy of systems, where each system uses an internal model of the system below it to predict that lower system’s next outputs. Whenever a higher system mispredicts a lower system’s next output, it adjusts itself in an attempt to make better predictions in the future.

So, suppose that we see something, and this visual data is processed by a low-level system (call it system L). A higher-level system (call it system H) attempts to predict what L’s output will be and sends its prediction down to L. L sends back a prediction error, indicating the extent to which H’s prediction matches L’s actual activity and processing of the visual stimulus. H will then adjust its own model based on the prediction error. By gradually building up a more accurate model of the various regularities behind L’s behavior, H is also building up a model of the world that causes L’s activity. At the same time, systems H+, H++ and so on that are situated “above” H build up still more sophisticated models.
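
To make the scheme concrete, here is a minimal toy sketch of that predict-and-correct loop (my own illustration, not from the paper): a higher system H maintains a single scalar estimate of the hidden cause behind a lower system L’s activity, and nudges that estimate in proportion to the error signal that L sends back.

```python
# Toy two-level predictive coding loop (illustrative only, not from the paper).
# H models the hidden cause behind L's activity; on each step it predicts L's
# output, receives the prediction error, and adjusts its own model.

import random

true_cause = 3.0        # the world-state actually driving L's activity
h_estimate = 0.0        # H's current model of that cause
learning_rate = 0.1

for step in range(100):
    l_activity = random.gauss(true_cause, 0.1)   # L's (noisy) output
    h_prediction = h_estimate                    # H's guess about L
    error = l_activity - h_prediction            # error signal sent up to H
    h_estimate += learning_rate * error          # H adjusts its model

print(f"H's model of the world: {h_estimate:.2f} (true cause: {true_cause})")
```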

So the higher-level systems have some kind of model of what kind of activity to expect from the lower-level systems. Of course, different situations elicit different kinds of activity: one example given in the paper is that of an animal “that frequently moves between a watery environment and dry land, or between a desert landscape and a verdant oasis”. The kinds of visual data that you would expect in those two situations differs, so the predictive systems should adapt their predictions based on the situation.

And apparently, that is what happens – when salamanders and rabbits are put into varying environments, half of their retinal ganglion cells rapidly adjust their behavior to keep up with the changing image statistics. Presumably, if the change of scene was unanticipated, the higher-level systems making predictions of the ganglion cells will then quickly get an error signal indicating that the ganglion cells are now behaving differently from what was expected based on how they acted just a moment ago; this should also cause them to adjust their predictions, and data about the scene change gets propagated up through the hierarchy.

This process involves the development of “novelty filters”, which learn to recognize and ignore the features of the input that most commonly occur together within some given environment. Thus, things that are “familiar” (based on previous experience) and behave in expected ways aren’t paid attention to.
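
A crude way to picture a novelty filter in code (my own sketch, not from the cited work): keep a running estimate of the “familiar” input and pass on only the residual, so features that reliably occur together fade from the output while anything new stands out.

```python
# Sketch of a "novelty filter" (my construction): a running average learns
# the features that usually occur together in an environment, and only the
# residual, i.e. the novel part of the input, is passed onward.

import numpy as np

def novelty_filter(inputs, adaptation_rate=0.05):
    expected = np.zeros_like(inputs[0], dtype=float)
    residuals = []
    for x in inputs:
        residuals.append(x - expected)                 # only the unexpected part passes
        expected += adaptation_rate * (x - expected)   # adapt to the familiar
    return residuals

# A constant "familiar" background, with one novel event at step 50:
env = [np.array([1.0, 2.0])] * 100
env[50] = np.array([1.0, 5.0])
res = novelty_filter(env)
print("residual at step 49:", np.round(res[49], 2))    # small: familiar input
print("residual at step 50:", np.round(res[50], 2))    # large: novel input
```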

So far we’ve discussed a low-level system sending the higher-level system an error signal when the predictions of the higher-level system do not match the activity of the lower-level system. But the predictions sent by the higher-level system also serve a function, by acting as Bayesian priors for the lower-level systems.

Essentially, high up in the hierarchy we have high-level models of how the world works, and what might happen next based on those models. The highest-level system, call it H+++, makes a prediction of what the next activity of H++ is going to be like, and the prediction signal biases the activity of H++ in that direction. Now the activity of H++ involves making a prediction of H+, so this also causes H++ to bias the activity of H+ in some direction, and so on. When the predictions of the high-level models are accurate, this ends up minimizing the amount of error signals sent up, as the high-level systems adjust the expectations of the lower-level systems to become more accurate.
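
The standard way of cashing out “predictions as Bayesian priors” is precision weighting. The sketch below uses the textbook Gaussian-Bayes identity (my gloss, not anything specific from the paper): the lower level’s conclusion is a compromise between the top-down prior and the bottom-up evidence, each weighted by its precision (inverse variance).

```python
# Precision-weighted fusion of a top-down prior with bottom-up evidence,
# using the standard Gaussian-Bayes identity (my illustration, not the
# paper's math).

def fuse(prior_mean, prior_var, evidence_mean, evidence_var):
    """Combine two Gaussian estimates; precision = 1 / variance."""
    prior_precision = 1.0 / prior_var
    evidence_precision = 1.0 / evidence_var
    posterior_precision = prior_precision + evidence_precision
    posterior_mean = (prior_mean * prior_precision +
                      evidence_mean * evidence_precision) / posterior_precision
    return posterior_mean, 1.0 / posterior_precision

# A confident top-down prediction dominates noisy sensory evidence...
print(fuse(prior_mean=10.0, prior_var=0.1, evidence_mean=0.0, evidence_var=5.0))
# ...while a weak prior is overridden by precise evidence.
print(fuse(prior_mean=10.0, prior_var=5.0, evidence_mean=0.0, evidence_var=0.1))
```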

Let’s take a concrete example (this one’s not from the paper but rather one that I made up, so any mistakes are my own). Suppose that I am about to take a shower, and turn on the water. Somewhere in my brain there is a high-level world model which says that turning on the shower faucet will lead to water pouring out, and because I’m standing right below it, the model also predicts that the water will soon be falling on my body. This prediction is expressed in terms of the expected neural activity of some (set of) lower-level system(s). So the prediction is sent down to the lower systems, each of which has its own model of what it means for water to fall on my body, and each of which sends that prediction down to yet more lower-level systems.

Eventually we reach some pretty low-level system, like one predicting the activity of the pressure- and temperature-sensing cells on my skin. Currently there isn’t yet water falling down on me, and this system is a pretty simple one, so it is currently predicting that the pressure- and temperature-sensing cells will continue to have roughly the same activity as they do now. But that’s about to change, and if the system did continue predicting “no change”, then it would end up being mistaken. Fortunately, the prediction originating from the high-level world-model has now propagated all the way down, and it ends up biasing the activity of this low-level system, so that the low-level system now predicts that the sensors on my skin are about to register a rush of warm water. Because this is exactly what happens, the low-level system generates no error signal to be sent up: everything happened as expected, and the overall system acted to minimize the overall prediction error.

If the prediction from the world-model had been mistaken – if the water had been cut off, or I had accidentally turned on cold water when expecting warm water – then the biased low-level prediction would also have been mistaken, and an error signal would have been propagated upwards, possibly causing an adjustment to the overall world-model.

This ties into a number of interesting theories that I’ve read about, such as the one about conscious attention as an “error handler”: as long as things follow their familiar routines, no error signals come up, and we may become absent-minded, just carrying out familiar habits and routines. It is when something unexpected happens, or when we encounter something where we don’t have a strong prediction of what’s going to happen next, that we are jolted out of our thoughts and forced to pay attention to our surroundings.

This would also help explain why meditation is so notoriously hard: it involves paying attention to a single unchanging stimulus whose behavior is easy to predict, and our brains are hardwired to filter any such stimulus out of our consciousness. Interestingly, extended meditation seems to bring some of the lower-level predictions into conscious awareness. And what I said about predicting short-term sensory stimuli ties nicely into the things I discussed back in anticipation and meditation. Savants also seem to have access to lower-level sensory data. Another connection is the theory of autism as weakened priors for sensory data, i.e. as a worsened ability for the higher-level systems to either predict the activity of the lower-level ones, or to bias their activity as a consequence.

The paper has a particularly elegant explanation of how this model would explain binocular rivalry, a situation where a test subject is shown one image (for example, a house) to their left eye and another (for example, a face) to their right eye. Instead of seeing two images at once, people report seeing one at a time, with the two images alternating. Sometimes elements of the unseen image are perceived as “breaking through” into the seen one, after which the perceived image flips.

The proposed explanation is that there are two high-level hypotheses of what the person might be seeing: either a house or a face. Suppose that the “face” hypothesis ends up dominating the high-level system, which then sends its prediction down the hierarchy, suppressing activity that would support the “house” interpretation. This decreases the error signal from the systems which support the “face” interpretation. But even as the error signal from those systems decreases, the error signal from the systems which are seeing the “house” increases, as their activity does not match the “face” prediction. That error signal is sent to the high-level system, decreasing its certainty in the “face” prediction until it flips its best guess prediction to be one of a house… propagating that prediction down, which eliminates the error signal from the systems making the “house” prediction but starts driving up the error from the systems making the “face” prediction, and soon the cycle repeats again. No single hypothesis of the world-state can account for all the existing sensory data, so the system ends up alternating between two conflicting hypotheses.
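
That flip-flopping is easy to caricature in code. Here is a deliberately crude toy of my own (not the paper’s model): the suppressed interpretation keeps generating error, the error accumulates until it overwhelms the dominant hypothesis, and the percept flips.

```python
# Toy rivalry dynamics (my own caricature, not from the paper): the dominant
# hypothesis suppresses its rival, but the error signal from the suppressed
# interpretation keeps accumulating until it forces a flip.

percept = "face"
accumulated_error = 0.0      # error from whichever hypothesis is suppressed
flips = []

for t in range(1000):
    accumulated_error += 1.0      # suppressed evidence keeps arriving...
    accumulated_error *= 0.99     # ...though its influence leaks away a bit
    if accumulated_error > 50:    # error overwhelms the current hypothesis:
        percept = "house" if percept == "face" else "face"
        accumulated_error = 0.0   # the flip accounts for the old error
        flips.append((t, percept))

print(flips[:5])    # the percept alternates instead of settling
```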

One particularly fascinating aspect of the whole “hierarchical error minimization” theory as presented so far is that it covers not only perception, but also action! As hypothesized in the theory, when we decide to do something, we are creating a prediction of ourselves doing that thing. The fact that we are not actually doing anything yet causes an error signal, which in turn ends up modifying the activity of our various motor systems so as to cause the predicted behavior.

As strange as it sounds, when your own behaviour is involved, your predictions not only precede sensation, they determine sensation. Thinking of going to the next pattern in a sequence causes a cascading prediction of what you should experience next. As the cascading prediction unfolds, it generates the motor commands necessary to fulfill the prediction. Thinking, predicting, and doing are all part of the same unfolding of sequences moving down the cortical hierarchy.
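
As a minimal sketch of that idea (mine, not the paper’s): route the same prediction error that would normally update the model into a motor command instead, so that the world is changed to match the prediction.

```python
# Toy "action as prediction" loop (my own sketch): instead of updating the
# model to match the world, the motor system updates the world to match the
# prediction, driven by the very same error signal.

hand_position = 0.0
predicted_position = 1.0    # "I predict that my hand is on the door handle"
gain = 0.2

for step in range(30):
    error = predicted_position - hand_position   # proprioceptive prediction error
    motor_command = gain * error                 # the error drives movement...
    hand_position += motor_command               # ...which fulfils the prediction

print(f"final hand position: {hand_position:.3f}")   # ~1.0: prediction realized
```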

Everything that I’ve written here so far only covers approximately the first six pages of the paper: there are 18 more pages of it, as well as plenty of additional commentaries. I haven’t yet had the time to read the rest, so I recommend checking out the paper itself if this seemed interesting to you.


----

Germund Hesslow’s paper Conscious thought as simulation of behaviour and perception, which I first read maybe three months back, has an interesting discussion about anticipations.

I was previously familiar with the idea of conscious thought involving simulation of behavior. Briefly, the idea was that when you plan an action, you are simulating (imagining) various courses of action and evaluating their possible outcomes in your head. So you imagine bringing your boyfriend some flowers, think of how he’d react to that, and then maybe decide to buy him chocolate instead. Imagining things is a process of constructing a simulation of them. Nothing too surprising in that idea. Here’s how Hesslow puts it:

What we perceive is quite often determined by our own behaviour: visual input is changed when we move our head or eyes; tactile stimulation is generated by manipulating objects in the hands. The sensory consequences of behaviour are to a large extent predictable (Fig. 2a). The simulation hypothesis postulates the existence of an associative mechanism that enables the preparatory stages of an action to elicit sensory activity that resembles the activity normally caused by the completed overt behaviour (Fig. 2b). A plausible neural substrate for such a mechanism is the extensive fibre projection from the frontal lobe to all parts of sensory cortex. Very little is known about the function of these pathways, but there is physiological evidence from monkeys that neurons in polysensory cortex can be modulated by movement[33].

But the “buy flowers or chocolate?” example concerns relatively long-term decision-making. We also simulate the short-term consequences of our actions (or at least try to). And what I had not consciously realized before, but what was implied in the excerpt above, was that very immediate consequences will be simulated as well.
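
One way to picture Hesslow’s associative mechanism is as a learned action-to-sensation mapping. Here is a deliberately simple sketch (my own, with made-up numbers): overt trials teach the system which sensation follows each action, after which the preparatory stage alone can “preplay” the expected sensation.

```python
# Crude sketch of the simulation hypothesis's associative mechanism (my own
# toy): the preparatory stage of an action learns to elicit the sensory
# activity that normally follows the completed overt behaviour.

from collections import defaultdict

association = defaultdict(lambda: [0.0, 0.0])   # action -> predicted sensation

def experience(action, sensation, rate=0.3):
    """Overt behaviour: associate the action with its sensory consequence."""
    predicted = association[action]
    for i in range(len(predicted)):
        predicted[i] += rate * (sensation[i] - predicted[i])

def simulate(action):
    """Covert behaviour: elicit the predicted sensation without acting."""
    return association[action]

# Learn from a few overt trials, then simulate covertly.
for _ in range(20):
    experience("turn head left", (0.9, 0.1))    # e.g. (brightness, pressure)
print("simulated sensation:", [round(x, 2) for x in simulate("turn head left")])
```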

Discussing this paper with a friend, we considered the subjective experience of such anticipatory simulations. Suppose that I want to open a door, and start pushing down the handle. Even before I’ve pushed it all the way down, I seem to already experience a mild foretaste of what having pushed it down feels like. I know what it will feel like to have completed the action, a fraction of a second before actually having completed that action, and it feels faintly pleasing when that anticipation is realized.

Which was interesting to realize, but not particularly earth-shattering by itself. But the real discovery came soon after reading the paper. I was doing some vipassana-style meditation, focusing on the feeling of discomfort that came from wanting to swallow as excess saliva gathered in my mouth. I realized that what I thought of as “discomfort” was actually a denied anticipation. I wanted to swallow, and there was already in my mind a simulation of what swallowing would feel like. I was already experiencing some of the pleasure that I would get from swallowing, and my discomfort came from the fact that I wanted to experience the rest of that pleasure. When I realized this, I focused on that anticipated pleasure, trying to either make it stop feeling pleasant, or alternatively, strengthen the pleasure so that I could enjoy it without actually swallowing. My timer went off before I could fully succeed in either, but I did notice that this made it considerably easier to resist the urge.

On my way to town, I started observing my mental processes and noticed that that tiny anticipation of pleasure was everywhere. Coming to the train station, there was an anticipation of not needing to wait for long. Using a machine to buy more time on my train card, there was an anticipation of the machine working. Waiting for the train, there was an anticipation of seeing the train arrive and getting to board it. And each time that I experienced discomfort, it was from that subtle anticipation being denied. Anticipating the experience of seeing the train being there on time could have led to frustration if it was running late. Anticipating the experience of boarding the train led to impatience as the train wasn’t there yet, and that sequence of planned action that had already been partially initiated couldn’t finish. Suddenly I was seeing the anticipatory component in every feeling of discomfort I had.

When I realized that, I started writing an early draft of this post, which contained the following rather excited paragraph:

That’s what “letting go of attachments” refers to. That’s what “living in the moment” refers to. Letting go of the attachment to all predictions and anticipations, even ones that extend only seconds into the future. If one doesn’t do that, they will constantly be awaiting what happens in some future moment, and will experience constant frustrations. On some intellectual level I already understood that, but I needed to develop the skill for actually noticing all my split-second anticipations before I could really get it.

Unfortunately, what often happens with insights gained from meditation is that one simply forgets to apply them. Or if one does, in principle, remember that they should apply the insights, they’ll have forgotten how. Being able to isolate the anticipation from the general feeling of frustration, and then knowing how to let go of the attachment to it, is a tricky skill. And I ended up mostly just forgetting about it, especially once my established routine of meditating once per day got interrupted for a month or so.

I did some meditation today, and finally remembered to try out this technique again. I started looking for such anticipations whenever I experienced a feeling of discomfort, and when I found any, I just observed them and let go of them. And it worked – I was capable of meditating for a total of 70 minutes in one sitting, and got myself to a pleasant state of mind where everything felt good. That feeling persisted for most of the rest of the day.

But after that session, it feels like my earlier characterization of the technique as “a cessation of attachments to predictions” would be a little off. That description feels clunky, and like it doesn’t properly describe the experience. “Letting go of a desire for sensations to feel different” sounds more like it, but I’m not sure of what exactly the difference is.

This probably also relates to another meditation experience, which I had about two months back. I was concentrating on my breath, and again, I noticed that the sensation of saliva in my mouth was bothering me. At first I tried to just ignore it and keep my attention on my breath; or alternatively, to let go of the feeling of distraction so that the sensation of saliva wouldn’t bother me anymore. When neither worked, I essentially just thought “oh, screw it” and accepted the sensation just as it was, as well as accepting the fact that it would continue to bother me. And then, once I had accepted that it would bother me… the feeling of it bothering me melted away, and vanished from my consciousness entirely. I was left with a warm, strongly pleasant feeling that lasted for many hours after I’d stopped meditating.

I haven’t been able to put myself back into that exact state, because as far as I can tell, getting into it requires you to genuinely accept the fact that you’re feeling uncomfortable. In other words, you cannot use the acceptance as a means to an end, thinking that “I’ll now accept this unpleasantness so that I’ll get back to that nice state where it doesn’t feel unpleasant anymore”. That’s not genuine acceptance anymore, and therefore it doesn’t work.

Anyway, it feels like the “isolate anticipations and let go of them” and “accept your feelings and discomforts exactly as they are” techniques would be two different ways of achieving the same end. The feeling of pleasure I got today wasn’t as strong as the feeling of pleasure I got when I managed to accept my discomforts as they were, but it seemed to have much of the same character.

Some – though not all – meditators report losing their drive for achievement after reaching high levels of skill. They’re just happy doing whatever, with no need to accomplish more things. And after meditating today, I too felt happy with whatever would happen, with no urgency to accomplish (nor avoid!) any of the things that I had planned for today. There seems to be a fine line between “use meditation to get rid of your disinclination for doing the things you want to do” and “use meditation and get rid of your inclination to do anything”.

In any case, I will have to try to remember this technique from now on, and keep experimenting with it. Hopefully, having written this post will help.


----
Yes, I know that I'm way behind on my reports: session III was over a month ago. Better late than never.

I've been thinking about global workspace theory on and off in the context of meditation. Haven't come up with anything particularly insightful, basically just a repetition of the argument in the Dietrich paper: in meditation, attentional resources are used to actively amplify a particular event such as a mantra until it becomes the exclusive content in the working memory buffer. This intentional, concentrated effort selectively disengages all other cognitive capacities of the prefrontal cortex.

Put into GWT terminology, normally sensory systems and "thought systems" within our brain generate a number of (bottom-up) inputs that compete for control of the global neuronal workspace (GNW), and some process of top-down attention picks inputs that get strengthened until they dominate the neuronal workspace. In meditation, the practitioner seems to train their attentional network into only choosing a specific set of stimuli (e.g. their breath, a mantra, the sensations of their body, etc.) and ignoring all the others. As they concentrate on these stimuli, those get transmitted into all the brain regions that receive input from the GNW. Since this is an abnormal input that most of the systems can't do anything with, they gradually get turned off - especially since it doesn't matter what output they produce in response, as the successful meditation practitioner pays no attention to it. Of course, it will take a lot of practice for a practitioner to get this far, since the brain is practically built to "get sidetracked" from meditation and concentrate on something more important.

It's interesting to ask why this would lead to perceptual changes, such as an increased tolerance for pain. A straightforward guess would be that if the GW/GNW gets taken over by a very simple stimulus, and that stimulus gets broadcast into all the different systems in the brain, then there are systems related to learning that can't help but analyze the stimulus. If a meditation practitioner consciously begins to break a sensation into smaller and smaller components, or begins to note and name individual sensations, then the implicit learning systems will pick up on this and learn how to do it better. Also, as the meditator forces his brain to analyze very simple inputs, the brain allocates disproportionate computational resources into analyzing them and begins to find in them increasingly subtle hidden details - which the meditator then dismisses, forcing his brain to go to even more extreme lengths to find something. Over time and with enough practice, he learns to feel and notice these subtle sensations even when not meditating.

Of course, it's a bit of a misnomer to talk about the brain "finding" subtler sensations, since those sensations are themselves also generated by the brain. Rather, what's happening is that there is a hierarchical process in which simple inputs get increasingly complex layers of interpretation applied to them, and meditation strips away those layers of interpretation. Thus information that's usually thrown away during earlier processing stages becomes revealed and accessible to the conscious mind. That'd be my guess, anyway. It's also interesting to note that savant abilities are also hypothesized to be created via access to lower-level brain processing, but so far I haven't heard of anyone becoming a genius savant through meditation, even if it should be theoretically possible.

As I noted the last time, there's still the puzzle of how the attentional networks find out about an input that might be worth promoting into the GNW, if the GNW is already dominated by another input. A hypothesis that might make sense is that we're actually rapidly cycling a lot of content into and out of consciousness, and the attentional networks decide which stuff gets the most "clock cycles" (here's an obvious analogy to operating systems and multiprogramming). E.g. this text gets processed within the GNW, then I hear a sound coming from outside and that input pushes its way into the GNW for a brief moment, and then an attentional system decides that it isn't important and gets back to the task of writing this text. While the outside noise has pushed the text out of the GNW, the text is still locally active in the brain regions that were most heavily involved in processing it, and the attentional network can home in on the activation in those regions and strengthen it again.
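
Here is the multiprogramming analogy spelled out as a toy scheduler (my own sketch, not from the literature): several locally active contents compete, and attentional weights determine how many "clock cycles" each gets in the single workspace slot.

```python
# Toy version of the "clock cycles" idea (an analogy in code, not a model
# from any paper): attentional weights decide how often each locally active
# content gets swapped into the single workspace slot.

import random

contents = {"text I'm writing": 0.8, "noise outside": 0.15, "itch": 0.05}

def workspace_schedule(weights, cycles=10):
    """Sample which content occupies the workspace on each 'clock cycle'."""
    items, w = list(weights), list(weights.values())
    return [random.choices(items, weights=w)[0] for _ in range(cycles)]

random.seed(0)
for cycle, content in enumerate(workspace_schedule(contents)):
    print(f"cycle {cycle}: conscious of {content!r}")
```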

Alternatively, this whole hypothesis of swapping stuff in and out might be unnecessarily complicated, and there could just be cross-region communication that isn't conscious. There are a number of results saying that cross-modality integration of sense data can happen without consciousness. E.g. in ventriloquism we see a talking puppet mouth and hear sound coming from the puppeteer's closed mouth. Somehow this conflict gets resolved into us hearing the sound as if it were coming from the puppet's mouth, without us being consciously aware of the process. Also the results of the paper below, which suggest that attention and consciousness can both occur without each other, would support that hypothesis.

None of that actually has anything to do with the third session, though - it's just stuff that occurred to me while thinking about some of the seminar papers in general. So let's get to the actual topic...

----

The third Neuroinformatics presentation covered Giulio Tononi & Christof Koch (2008) The Neural Correlates of Consciousness: An Update. Annals of the New York Academy of Sciences. The paper was pretty packed with information, and there was a lot of interesting stuff mentioned. I won't try to cover all of it, but will rather concentrate on some of the most interesting bits.

In particular, the previous Neuroinformatics papers seemed to come close to equating consciousness and attention. If input from our senses (or from internal sources like e.g. memory) becomes conscious when it is chosen to be promoted to consciousness by attentional processes, does that mean that we are conscious of the things that we pay attention to? Subjectively, I'm often conscious of experiences that I try to direct my attention away from, though that might just mean that a top-down attentional mechanism is competing with a bottom-up one. Introspection is notoriously unreliable, anyway.

Tononi & Koch argue that the two are not the same, and there can be both attention without consciousness and consciousness without attention. Let's first look at attention without consciousness. Among the studies that they cite, Naccache et al. (2002) is probably the easiest to explain.

The experimental subjects were shown ("target") numbers ranging from 1 to 9, and had to say whether the number they saw was smaller or larger than 5. (They were not shown any fives.) Unknown to them, each number was preceded by another ("priming") number, hidden by a geometric masking shape. In some versions of the experiment, the subjects knew when they were going to see the number, and could pay attention around that time. In other versions, they did not, and could not focus their attention on the right window in time. When the subjects were paying attention at the right time (and therefore also paying attention to the priming number), there was what's called a priming effect. Their reaction times were faster when the prime number was congruent with the target number, i.e. either both were smaller than 5 or both were larger. When the numbers were incongruent, the reaction times were slower. When the subjects couldn't focus their attention on the right time period, the priming effect didn't occur. Tononi & Koch interpret these results to mean that there can be attention without consciousness: the priming numbers were always shown too quickly to enter conscious awareness, but they caused a priming effect depending on whether or not the subjects paid attention to them.

The opposite case is consciousness without attention. There are experiments in which the subjects are made to focus their attention on the middle of their visual field, and something else is then briefly flashed in their peripheral field of vision. Subjects are often capable of reporting on the contents of the peripheral image and performing some quite complex discrimination tasks. They can tell male faces from female ones, or distinguish between famous and non-famous people, even though the image was (probably) flashed too briefly for top-down attention to kick in. At the same time, they cannot perform some much easier tasks, such as discriminating a rotated letter "L" from a rotated letter "T". So at least some kinds of consciousness-requiring tasks seem to be possible in the absence of directed attention, while others aren't.

Tononi & Koch conclude this section by summarizing their view of the differences between attention and consciousness, and by citing Baars and saying something akin to his Global Workspace Theory:

Attention is a set of mechanisms whereby the brain selects a subset of the incoming sensory information for higher level processing, while the nonattended portions of the input are analyzed at a lower bandwidth. For example, in primates, about one million fibers leave each eye and carry on the order of one megabyte per second of raw information. One way to deal with this deluge of data is to select a small fraction and process this reduced input in real time, while the nonattended data suffer from benign neglect. Attention can be directed by bottom-up, exogenous cues or by top-down endogenous features and can be applied to a spatially restricted part of the image (focal, spotlight of attention), an attribute (e.g., all red objects), or to an entire object. By contrast, consciousness appears to be involved in providing a kind of “executive summary” of the current situation that is useful for decision making, planning, and learning (Baars).


As has often been the case lately, I wonder how much weight I should actually put on these results. A study that has not been replicated is little better than an anecdote, and while Tononi & Koch do cite several studies with similar results, there have been previous cases where the initial replications all seemed to support a theory but then stopped doing so. So for all that I know, everything in the paper (and the previous papers, of course) might turn out to be wrong within a few years. Still, it's the best that we have so far.

Like some of the GWT/GNW papers, this one also suggested that non-dreaming sleep involves reduced connectivity between cortical regions, with the regions communicating in a more local manner. That's also interesting.

----

Today was the second session of the Neuroinformatics 4 course that I'm taking. Each participant has been assigned some paper from this list, and we're all supposed to give a presentation summarizing our paper. We're also supposed to write a diary about each presentation and hand it in at the end, which is the reason why I'm typing this entry. I figure that if I'm going to keep a diary about this, I might as well make it public.

Session I: Global Workspace Theory. I held the first presentation, which covered Global Workspace Theory as explained by Baars (2002, 2004). You can read about it in those papers, but the general idea of GWT is that what we experience as conscious thought is actually information that's being processed in a "global workspace", through which various parts of the brain communicate with each other.

Suppose that you see in front of you a delicious pie. Some image-processing system in your brain takes that information, processes it, and sends that information to the global workspace. Now some attentional system or something somehow (insert energetic waving of hands) decides whether that stimulus is something that you should become consciously aware of. If it is, then that stimulus becomes the active content of the global workspace, and information about it is broadcast to all the other systems that are connected to the global workspace. Our conscious thoughts are that information which is represented in the global workspace.
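
A bare-bones sketch of that architecture (my own simplification, not Baars's formal model): specialist systems submit candidate contents with salience scores, some attentional process promotes a winner, and the winner is broadcast to every subscribed system.

```python
# Minimal global-workspace sketch (my own, heavily simplified): candidates
# compete for the workspace, and the winning content is broadcast to all
# connected systems.

class GlobalWorkspace:
    def __init__(self):
        self.subscribers = []        # systems that receive broadcasts

    def subscribe(self, system):
        self.subscribers.append(system)

    def compete(self, candidates):
        """candidates: list of (content, salience) pairs; winner takes the GW."""
        content, _ = max(candidates, key=lambda c: c[1])
        for system in self.subscribers:   # broadcast to every system,
            system(content)               # not just the one that sent it
        return content

gw = GlobalWorkspace()
gw.subscribe(lambda c: print("memory system stores:", c))
gw.subscribe(lambda c: print("speech system can report:", c))

gw.compete([("sight of a pie", 0.9), ("hum of the fridge", 0.2)])
```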

There exists some very nice experimental work which supports this theory. For instance, Dehaene (2001) showed experimental subjects various words for a very short while (29 milliseconds each). Then, for the next 71 milliseconds, the subjects either saw a blank screen (the "visible" condition) or a geometric shape (the "masking" condition). Previous research had shown that in such an experiment, the subjects will report seeing the "visible" words and can remember what they said, while they will fail to notice the "masked" words. That was also the case here. In addition, fMRI scans seemed to show that the "visible" words caused considerably wider activation in the brain than the "masked" words, which mainly just produced minor activation in areas related to visual processing. The GWT interpretation of these results would be that the "visible" words made their way to the global workspace and activated it. For the "masked" words there was no time for that to happen, since the sight of the masking shape "overwrote" the contents of the visual system before the sight of the word had had the time to activate the global workspace.

That's all fine and good, but Baars's papers were rather vague on a number of details, like "how is this implemented in practice"? If information is represented in the global workspace, what does that actually mean? Is there a single representation of the concept of a pie in the global workspace, which all the systems manipulate together? Or is information in the global workspace copied to all of the systems, so that they are all manipulating their own local copies and somehow synchronizing their changes through the global workspace? How can an abstract concept like "pie" be represented in such a way that systems as diverse as those for visual processing, motor control, memory, and the generation of speech (say) all understand it?

Session II: Global Neuronal Workspace. Today's presentation attempted to be a little more specific. Dehaene (2011) discusses the Global Neuronal Workspace model, based on Baars's Global Workspace model.

The main thing that I got out of today's presentation was the idea of the brain being divisible into two parts. The processing network is a network of tightly integrated, specialized processing units that mostly carry out non-conscious computation. For instance, early processing stages of the visual system, carrying out things like edge detection, would be a part of the processing network. The "processors" of the processing network typically have "highly specific local or medium range connections" - in other words, the processors in a specific region mostly talk with their close neighbors and nobody else.

The various parts of the processing network are connected by the Global Neuronal Workspace, a set of cortical neurons with long-range axons. The impression I got was that this is something akin to a set of highways between cities, or different branches of a post office. Or planets (processing network areas) joined together by a network of Hyperpulse Generators (the Global Neuronal Workspace). You get the idea. I believe that it's some sort of a small-world network.
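
If the small-world reading is right, the structure is easy to demonstrate with the standard Watts-Strogatz construction (my example, not from the paper; requires the networkx library): rewiring just a few local connections into long-range ones drastically shortens paths while processing stays mostly local.

```python
# Small-world demonstration with the standard Watts-Strogatz model (my
# illustration of the "local clusters plus long-range axons" picture).

import networkx as nx

local_only = nx.watts_strogatz_graph(n=200, k=6, p=0.0)             # ring lattice
small_world = nx.connected_watts_strogatz_graph(n=200, k=6, p=0.1)  # + long links

for name, g in [("local-only", local_only), ("small-world", small_world)]:
    print(f"{name}: average path length = "
          f"{nx.average_shortest_path_length(g):.1f}, "
          f"clustering = {nx.average_clustering(g):.2f}")
# Rewiring a few edges into long-range "highways" slashes the average path
# length while keeping most connections local: cities joined by highways.
```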

Note that contrary to intuition and folk psychology (but consistent with the hierarchical consciousness hypothesis), this means that there is no single brain center where conscious information is gathered and combined. Instead, as the paper states, there is "a brain-scale process of conscious synthesis achieved when multiple processors converge to a coherent metastable state". Which basically means that consciousness is created by various parts of the brain interacting and exchanging information with each other.

Another claim of GNW is that sensory information is basically processed in a two-stage manner. First, a sensory stimulus causes activation in the sensory regions and begins climbing up the processor hierarchy. Eventually it reaches a stage where it may somehow be selected to be consciously represented, with the criteria being "its adequacy to current goals and attention state" (more waving of hands). If it does, it becomes represented in the GNW. It "is amplified in a top-down manner and becomes maintained by sustained activity of a fraction of GNW neurons": this might re-activate the stimulus signal in the sensory regions, where its activation might have already been declining. Something akin to this model has apparently been verified in a number of computer simulations and brain imaging studies.
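
Here is a toy of that two-stage picture (my own sketch, not Dehaene's actual simulations): a stimulus's bottom-up activation decays on its own, but if it is selected, top-down amplification sustains it, giving "ignition" into the workspace rather than fading away.

```python
# Toy two-stage processing (my sketch): bottom-up activation decays unless
# the stimulus is selected, in which case top-down amplification sustains it.

def run_trial(selected, steps=50, decay=0.9, amplification=0.15):
    activation = 1.0                        # initial sensory activation
    for _ in range(steps):
        activation *= decay                 # the bottom-up trace decays...
        if selected and activation > 0.2:   # ...but if attended (and not yet
            activation += amplification     # faded), top-down input sustains it
    return activation

print(f"selected stimulus:   {run_trial(selected=True):.2f}")    # stays active
print(f"unselected stimulus: {run_trial(selected=False):.2f}")   # fades away
```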

Which sounds interesting and promising, though this still leaves a number of questions unclear. For instance, the paper claims that only one thing at a time can be represented in the GNW. But apparently the thing that gets represented in the GNW is partially selected by conscious attention, and the paper that I previously posted about placed the attentional network in the prefrontal cortex (i.e. not in the entire brain). So doesn't the content in the sensory regions then need to first be delivered to the attentional networks (via the GNW) so that the attentional networks can decide whether that content should be put into the GNW? Either there's something wrong with this model, or I'm not understanding it correctly. I should probably dig into the references. And again, there's the question of just what kind of information is actually put into the GNW in such a manner that all of the different parts of the brain can understand it.

(Yes, I realize that my confusion may seem incongruent with the fact that I just co-authored a paper where we said that we "already have a fairly good understanding on how the cerebral cortex processes information and gives rise to the attentional processes underlying consciousness". My co-author's words, not mine: he was the neuroscience expert on that paper. I should probably ask him when I get the chance.)

----

http://www.cogsci.ucsd.edu/~pineda/COGS175/readings/Dietrich.pdf

The linked paper, Arne Dietrich's "Functional neuroanatomy of altered states of consciousness: The transient hypofrontality hypothesis", proposes that what we experience as consciousness is built up in a hierarchical process, with various parts of the brain doing further processing on the flow of information and contributing their own part to the "feel" of consciousness. It's possible to subtract various parts of the process, thereby leading to an altered state of consciousness, without consciousness itself disappearing.

The prefrontal cortex is usually associated with "higher-level" tasks, including emotional regulation, but the author suggests that this is due to the prefrontal cortex refining the outputs of the earlier processing stages, rather than inhibiting them:

"In such a view, the prefrontal cortex does not represent a supervisory or control system. Rather, it actively implements higher cognitive functions. It is further suggested that the prefrontal cortex does not act as an inhibitory agent of older, more primitive brain structures. The prefrontal cortex restrains output from older structures not by suppressing their computational product directly but by elaborating on it to produce more sophisticated output. If the prefrontal cortex is lost, the person simply functions on the next highest layer that remains.The structures implementing these next highest layers are not disinhibited by the loss of the prefrontal cortex. Rather, their processing is unaffected except that no more sophistication is added to their processing before a motor output occurs."


Their theory is that several altered states of consciousness involve a reduction in the activity of the prefrontal cortex:

"It is proposed in this article that altered states of consciousness are due to transient prefrontal deregulation. Six conscious states that are considered putative altered states (dreaming, the runner's high, meditation, hypnosis, daydreaming, and various drug-induced states) are briefly examined. These altered states share characteristics whose proper function are regulated by the prefrontal cortex such as time distortions, disinhibition from social constraints, or a change in focused attention. It is further proposed that the phenomenological uniqueness of each state is the result of the differential viability of various [dorsolateral] circuits. To give one example, the sense of self is reported to be lost to a higher degree in meditation than in hypnosis; whereas, the opposite is often reported for cognitive flexibility and willed action, which are absent to a higher degree in hypnosis.The neutralization of specific prefrontal contributions to consciousness has been aptly called ‘‘phenomenological subtraction’’ by Allan Hobson (2001).The individual in such an altered state operates on what top layers remain. In altered states that cause severe prefrontal hypofunction, such as non-lucid dreaming or various drug states, the resulting phenomenological awareness is extraordinarily bizarre. In less dramatic altered states, such as long-distance running, the change is more subtle."


And about meditation in particular, they hypothesize that it involves a general lowered prefrontal activity, with the exception of increased activation in the prefrontal attentional network:

"It is evident that more research is needed to resolve the conflicting EEG and neuroimaging data. Reinterpreting and integrating the limited data from existing studies, it is proposed that meditation results in transient hypofrontality with the notable exception of the attentional network in the prefrontal cortex. The resulting conscious state is one of full alertness and a heightened sense of awareness, but without content. Since attention appears to be a rather global prefrontal function (e.g., Cabeza & Nyberg, 2000), PET, SPECT, and fMRI scans showed an overall increase in DL activity during the practice of meditation. However, the attentional network is likely to overlap spatially with modules subserving other prefrontal functions and an increase as measured by fMRI does not inevitably signify the activation of all of the region's modules. Humans appear to have a great deal of control over what they attend to (Atkinson & Shiffrin, 1968), and in meditation, attentional resources are used to actively amplify a particular event such as a mantra until it becomes the exclusive content in the working memory buffer. This intentional, concentrated effort selectively disengages all other cognitive capacities of the prefrontal cortex, accounting for the a-activity. Phenomenologically, meditators report a state that is consistent with decreased frontal function such as a sense of timelessness, denial of self, little if any self-reflection and analysis, little emotional content, little abstract thinking, no planning, and a sensation of unity. The highly focused attention is the most distinguishing feature of the meditative state, while other altered states of consciousness tend to be more characterized by aimless drifting."


They do not discuss permanent changes caused by meditation in the paper, but if the prefrontal cortex is involved with last-stage processing of incoming sensory data, then reduced prefrontal activity would fit together with meditators' reports of being able to experience sensory information in a more "raw", unprocessed form. Likewise, if the prefrontal cortex unifies and integrates information from earlier processing stages, then meditation revealing the unity of self to be an illusion would be consistent with reduced prefrontal activity.

Vipassana jhanas, or other forms of meditation aimed towards reaching enlightenment, would then somehow involve permanently reducing or at least changing the nature of prefrontal processing. Meditation practitioners speak of "the Dark Night", an intermediate stage during the search for enlightenment, which is experienced as strongly unpleasant and where "our dark stuff tends to come bubbling up to the surface with a volume and intensity that we may never have known before". This is reached after making sufficient progress in meditation, and will continue until the practitioner makes enough further progress to make it go away.

Under the model suggested by the paper, the Dark Night would then be an intermediate stage where the activity of the prefrontal cortex had been reduced/changed to such an extent that it was no longer capable of moderating the output of the various earlier emotional systems. Resolving the Dark Night would involve somehow finding a new balance where the outputs of any systems involved with negative emotions could be better handled again, but I have no idea of how that happens.
