
Brain Microarchitecture : Feedback from Higher-order areas to Lower-order areas

Some questions that arise in Machine Learning involve the prospect of using feedback from Higher-order areas (downstream) to Lower-order areas (upstream), and using Global Knowledge for Local Processing.  A desire to gain insight into those issues from Neuroscience ("how does the brain do it?") led me to some fascinating investigations into the Microcircuits of the Cerebral Cortex.  This blog entry is a broad review of the field, in the context of the original motivating questions from Machine Learning.
 

Starting out with a quote from the “bible of Neuroscience”:

From Principles of Neural Science, 5th edn  (Online book location 1435.3 / 5867).  Emphasis and note added by me:
Sensory pathways are not exclusively serial; in each functional pathway higher-order areas project back to the lower-order areas from which they receive input. In this way neurons in higher-order areas, sensitive to the global pattern of sensory input, can modulate the activity of neurons in lower-order areas that are sensitive to local detail.
For example, top-down signals originating in the inferotemporal cortex might help neurons in V1* to resolve a detail in a part of the face.
(*) V1 is the Primary Visual cortex


Thalamus : passageway to the Brain Cortex

Thalamus means “anteroom”.  It's a brain region that features prominently in the brain circuits described in later sections.



The thalamus sends many projections to the cerebral cortex; hence the name “anteroom”: most of the information going to the cortex passes through the thalamus.  For example, the lateral geniculate nucleus, the main relay for the optic nerve to the primary visual cortex in the occipital lobe, resides in the thalamus.

(Perhaps more familiar to many people is the “hypothalamus”, meaning “below the thalamus”, which is responsible for the central control of homeostasis in the body.)

Both the thalamus and hypothalamus are part of the Diencephalon, an inner part of the brain (considered by some, but not all, neuroscientists to be part of the brain stem).

The Cerebral Cortex

The mammalian cerebral cortex, the grey matter encapsulating the white matter, is composed of layers. The human cortex is between 2 and 3 mm thick.

The number of layers is the same in most mammals, but varies throughout the cortex. In the neocortex, six layers can be recognized, although many regions lack one or more layers.

The neocortex is the newest part of the cerebral cortex to evolve (prefix neo meaning new); the other part is the allocortex, which has just 3 or 4 cell layers.
From the perspective of ML, my take is that the brain cortex first evolved in a smaller “search space” (with fewer cell layers), and then parts of it “colonized” a larger search space of evolutionary possibilities.  This is akin to training a smaller network, and then adding extra layers and using the previously trained state as the starting point for the new training.
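The network-growing analogy can be sketched in a few lines of numpy.  This is a minimal illustration of the mechanics only (no actual training shown); the sizes and the near-identity initialization are my own illustrative choices, not anything from the neuroscience literature:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_net(layer_sizes):
    """Random weight matrices for a simple feedforward net (biases omitted for brevity)."""
    return [rng.normal(scale=0.1, size=(n_out, n_in))
            for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(weights, x):
    for W in weights:
        x = np.tanh(W @ x)
    return x

# 1. "Evolve" (train) in a small search space: a net with one hidden layer.
small_net = make_net([8, 16, 4])      # 8 inputs -> 16 hidden -> 4 outputs
# ... imagine small_net gets trained here ...

# 2. "Colonize" a larger space: insert an extra hidden layer initialized
#    close to the identity, so the grown net starts out computing nearly
#    the same function as the trained small net.
new_layer = np.eye(16) + rng.normal(scale=0.01, size=(16, 16))
grown_net = [small_net[0], new_layer, small_net[1]]

x = rng.normal(size=8)
print(forward(small_net, x))
print(forward(grown_net, x))   # nearly identical, by construction
```

The grown net then trains further from this warm start, instead of from random weights.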

The Cortical Layers

Connections "up" and "down" within the thickness of the cortex are much denser than connections that spread from side to side.



Layer IV is the main target of thalamocortical afferents (fibers arriving from the thalamus).

Layer VI sends efferent fibers to the thalamus, establishing a very precise reciprocal interconnection between the cortex and the thalamus. That is, layer VI neurons from one cortical column connect with thalamus neurons that provide input to the same cortical column. These connections are both excitatory and inhibitory.

Cortical microcircuits are grouped into cortical columns and minicolumns. It has been proposed that the minicolumns are the basic functional units of the cortex.  Functional properties of the cortex change abruptly between laterally adjacent points; however, they are continuous in the direction perpendicular to the surface. There is evidence of the presence of functionally distinct cortical columns in the visual cortex, auditory cortex, and associative cortex.




Studies mentioning “Top-down Signals”

A 2017 review article, drawing on experimental evidence (including studies of monkeys with brain lesions), notes that:
“the prefrontal cortex (PFC) has long been considered a source of top-down signals that bias selection in early visual areas in favor of the attended features”

Source:  Paneri, S., & Gregoriou, G. G. (2017). Top-Down Control of Visual Attention by the Prefrontal Cortex: Functional Specialization and Long-Range Interactions. Frontiers in Neuroscience, 11, 545. doi:10.3389/fnins.2017.00545



Selected quotes from a 2007 review article:

All cortical and thalamic levels of sensory processing are subject to powerful top-down influences, the shaping of lower-level processes by more complex information.
The general idea of top-down influence is that complex information that is represented at higher stages of processing influences simpler processes occurring at antecedent stages. Whereas some of the earlier work on spatial attention—the most studied instance of top-down modulation—suggested that significant influences of attention are found only at high levels in the visual pathway, it is becoming increasingly clear that even at the earliest stages in cortical sensory processing the functional properties of neurons are subject to influences of attention, as well as other forms of top-down modulation.
The higher-order information may include learned, internal representations of the shapes of objects and of the abstract syntax of object relationships. It may also include information about behavioral context, which would include attention, expectation, and perceptual task.
These influences may not even be specific to cortex, but wherever one sees feedback connections, including thalamus. This study showed even stronger attentional effects in the LGN* than in V1/V2**. Top-down influences are not unexpected in the LGN since it receives input from many more V1 neurons, by orders of magnitude, than it receives from the retina.
(*) LGN : the Lateral Geniculate Nucleus is a relay center in the thalamus for the visual pathway
(**) V1 is the Primary Visual Cortex

Source: Gilbert & Sigman (2007), “Brain States: Top-Down Influences in Sensory Processing.”  Neuron, Volume 54, Issue 5, 7 June 2007, Pages 677-696



From a 2017 article:

(1) “lower-order” visuotopically organized cortical areas, some of which receive their principal or a substantial, direct thalamic input from the dorsal lateral geniculate nuclei (LGNd), send numerous “feedforward” associational projections to the “higher-order” visual areas

(2) beyond the primary visual cortices, information about pattern/form vs. motion is processed along two largely parallel “quasi-hierarchical” feedforward streams

(3) higher-order areas send numerous associational “recurrent” or “feedback” projections back to lower-order areas

Source:  Huang JY, Wang C and Dreher B (2017)  Silencing “Top-Down” Cortical Signals Affects Spike-Responses of Neurons in Cat’s “Intermediate” Visual Cortex. Front. Neural Circuits 11:27. doi: 10.3389/fncir.2017.00027


Microcircuits of the Cerebral Cortex


From the 2012 article “The Cell-Type Specific Cortical Microcircuit: Relating Structure and Activity in a Full-Scale Spiking Network Model”.  (Figure caption: excitatory (black) and inhibitory (gray) connections with connection probabilities >0.04 are shown.)

Central to the idea of a canonical microcircuit is the notion that a cortical column contains the circuitry necessary to perform requisite computations and that these circuits can be replicated with minor variations throughout the cortex.

George and Hawkins have suggested that the canonical microcircuit implements a form of Bayesian processing (George, D., and Hawkins, J., 2009. Towards a mathematical theory of cortical micro-circuits. PLoS Comput. Biol. 5, e1000532.)

The most popular scheme for Bayesian filtering in neuronal circuits is predictive coding (Srinivasan et al., 1982; Buchsbaum and Gottschalk, 1983; Rao and Ballard, 1999).  In this context, surprise corresponds (roughly) to prediction error.  In predictive coding, top-down predictions are compared with bottom-up sensory information to form a prediction error.  This prediction error is used to update the higher-level representations upon which the top-down predictions are based.

To predict sensations, the brain must be equipped with a generative model of how its sensations are caused. Indeed, this led Geoffrey Hinton and colleagues to propose that the brain is an inference (Helmholtz) machine (Hinton and Zemel, 1994; Dayan et al., 1995).  A generative model describes how variables or causes in the environment conspire to produce sensory input. Generative models map from (hidden) causes to (sensory) consequences. Perception then corresponds to the inverse mapping from sensations to their causes.
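The cause-to-consequence mapping and its inversion can be made concrete with a toy discrete example.  All causes, sensations, and probabilities below are illustrative inventions of mine, just to show Bayes' rule doing the "inverse mapping" described above:

```python
# Toy generative model: hidden cause -> sensation, and its Bayesian inversion.
priors = {"face": 0.3, "tree": 0.7}                       # P(cause)
likelihood = {                                            # P(sensation | cause)
    "face": {"curved_edges": 0.8, "straight_edges": 0.2},
    "tree": {"curved_edges": 0.4, "straight_edges": 0.6},
}

def perceive(sensation):
    """Invert the generative model: compute P(cause | sensation) via Bayes' rule."""
    unnorm = {c: priors[c] * likelihood[c][sensation] for c in priors}
    z = sum(unnorm.values())                              # P(sensation)
    return {c: p / z for c, p in unnorm.items()}

posterior = perceive("curved_edges")
print(posterior)   # "face" becomes more probable than it was under the prior
```

The generative model runs causes-to-sensations; `perceive` runs the opposite direction, which is the "perceptual synthesis" the quote refers to.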

In predictive coding, representations (or conditional expectations) generate top-down predictions to produce prediction errors. These prediction errors are then passed up the hierarchy  in the reverse direction, to update conditional expectations.

The generative model therefore maps from causes (e.g., concepts) to consequences (e.g., sensations), while its inversion corresponds to mapping from sensations to concepts or representations. This inversion corresponds to perceptual synthesis, in which the generative model is used to generate predictions. Note that this inversion implicitly resolves the binding problem by explaining multisensory cues with a single cause.
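The prediction-error loop described in the last few paragraphs can be sketched as a toy linear model, in the spirit of Rao and Ballard (1999).  This is a minimal numerical sketch, not a biological model: `W` stands in for the generative mapping from causes to sensations, and the learning rate and iteration count are arbitrary choices of mine:

```python
import numpy as np

rng = np.random.default_rng(1)

# W maps a higher-level representation r to a predicted sensory input.
W = rng.normal(size=(10, 3))          # generative mapping: 3 causes -> 10 sensations
x = W @ np.array([1.0, -0.5, 2.0])    # sensory input generated by a known true cause

r = np.zeros(3)                       # higher-level representation (initial guess)
for _ in range(200):
    prediction = W @ r                # top-down prediction
    error = x - prediction            # bottom-up prediction error ("surprise")
    r = r + 0.05 * (W.T @ error)      # pass the error up; update the representation

print(np.round(r, 2))                 # r converges toward the true cause [1, -0.5, 2]
```

Each iteration compares a top-down prediction against the input, and the resulting error updates the conditional expectation `r`, exactly the two-way traffic the quoted text describes.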



Diagram source:  Canonical Microcircuits for Predictive Coding (2012)

Note:  intrinsic connectivity = within a cortical column ;
extrinsic connections = between columns in different cortical areas


The above is a simplified schematic of the key intrinsic connections among excitatory (E) and inhibitory (I) populations in granular (L4), supragranular (L1/2/3), and infragranular (L5/6) layers. The excitatory interlaminar connections are based largely on Gilbert and Wiesel (1983).

Forward connections denote feedforward extrinsic corticocortical or thalamocortical afferents that are reciprocated by backward or feedback connections. Anatomical and functional data suggest that afferent input enters primarily into L4 and is conveyed to superficial layers L2/3 that are rich in pyramidal cells, which project forward to the next cortical area, forming a disynaptic route between thalamus and secondary cortical areas (Callaway, 1998).  Information from L2/3 is then sent to L5 and L6, which send (intrinsic) feedback projections back to L4 (Usrey and Fitzpatrick, 1996).  L5 cells originate feedback connections to earlier cortical areas as well as to the pulvinar, superior colliculus, and brain stem.
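The laminar routing just described can be written down as a small directed graph and traced programmatically.  This is only a schematic paraphrase of the text above (layer names like `next_area_L4` are my own placeholders, and the connection list is far from a complete connectome):

```python
# Schematic laminar routing, paraphrasing the description in the text.
routing = {
    "thalamus": ["L4"],                               # afferent input enters L4
    "L4":       ["L2/3"],                             # conveyed to superficial layers
    "L2/3":     ["next_area_L4", "L5", "L6"],         # forward projection + intrinsic routes
    "L5":       ["earlier_area", "pulvinar", "superior_colliculus", "brain_stem"],
    "L6":       ["L4", "thalamus"],                   # intrinsic feedback + corticothalamic
}

def trace(start, goal, path=()):
    """Depth-first search for one route from `start` to `goal`."""
    path = path + (start,)
    if start == goal:
        return path
    for nxt in routing.get(start, []):
        if nxt not in path:                           # avoid cycles
            found = trace(nxt, goal, path)
            if found:
                return found
    return None

# The disynaptic route from thalamus to the next cortical area:
print(trace("thalamus", "next_area_L4"))
# ('thalamus', 'L4', 'L2/3', 'next_area_L4')
```

Tracing other pairs (e.g., `trace("L6", "thalamus")`) recovers the reciprocal cortico-thalamic loop mentioned earlier.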

In summary, forward input is segregated by intrinsic connections into a superficial forward stream and a deep backward stream. In this schematic, we have juxtaposed densely interconnected excitatory and inhibitory populations within each layer.
