[Image: active nerve cell in human neural system]

[Image: network]

Scientific materialism explains a lot about how the brain creates consciousness, but hasn’t yet fully accounted for subjective awareness. As a result, the “hard problem” of consciousness remains unsolved, and we’re alternately urged to either concede that the human brain just isn’t ready to figure itself out, or conclude that reality is ultimately determined subjectively.

Princeton psychology and neuroscience professor Michael S. A. Graziano isn’t ready to do either. He thinks the “hard problem” label is itself the problem, because it cuts off further inquiry:

“Many thinkers are pessimistic about ever finding an explanation of consciousness. The philosopher Chalmers in 1995, put it in a way that has become particularly popular. He suggested that the challenge of explaining consciousness can be divided into two problems. One, the easy problem, is to explain how the brain computes and stores information. Calling this problem easy is, of course, a euphemism. What it meant is something more like the technically possible problem given a lot of scientific work.

“In contrast, the hard problem is to explain how we become aware of all that stuff going on in the brain. Awareness itself, the essence of awareness, because it is presumed to be nonphysical, because it is by definition private, seems to be scientifically unapproachable. Again, calling it the hard problem is a euphemism, it is the impossible problem.

“The hard-problem view has a pinch of defeatism in it. I suspect that for some people it also has a pinch of religiosity. It is a keep-your-scientific-hands-off-my-mystery perspective. In the hard problem view, rather than try to explain consciousness, we should marvel at its insolubility. We have no choice but to accept it as a mystery.

“One conceptual difficulty with the hard-problem view is that it argues against any explanation of consciousness without knowing what explanations might arise. It is difficult to make a cogent argument against the unknown. Perhaps an explanation exists such that, once we see what it is, once we understand it, we will find that it makes sense and accounts for consciousness.”

— Consciousness and the Social Brain, by Michael S. A. Graziano (2013)

I.e., if science is going to explain consciousness, it needs to reframe its inquiry, so that what is now an “impossible,” “scientifically unapproachable” problem becomes a “technically possible problem” that can be solved “given a lot of scientific work.”

Technology and innovation writer Steven Johnson describes how he thinks the impossible becomes possible in Where Good Ideas Come From — available as a TED talk, a book, and an animated whiteboard piece on YouTube. In his TED talk, he contrasts popular subjective notions with what neuroscience has discovered about how the brain actually works:

“[We] have to do away with a lot of the way in which our conventional metaphors and language steer us towards certain concepts of idea-creation. We have this very rich vocabulary to describe moments of inspiration. We have … the flash of insight, the stroke of insight, we have epiphanies, we have ‘eureka!’ moments, we have the lightbulb moments… All of these concepts, as kind of rhetorically florid as they are, share this basic assumption, which is that an idea is a single thing, it’s something that happens often in a wonderful illuminating moment.

“But in fact, what I would argue is … that an idea is a network on the most elemental level. I mean, this is what is happening inside your brain. An idea — a new idea — is a new network of neurons firing in sync with each other inside your brain. It’s a new configuration that has never formed before. And the question is, how do you get your brain into environments where these new networks are going to be more likely to form?”

Johnson expands on the work of biologist and complex systems researcher Stuart Kauffman, who dubbed this idea the “adjacent possible.” The adjacent possible is where the brain’s neural networks (top picture above) meet data networks (the bottom picture): neither is a static, closed environment; both are dynamic, constantly shifting and reorganizing, with each node representing a new point from which the network can expand. Thus the shift from unknown to known is always a next step away:

“The adjacent possible is a kind of shadow future, hovering on the edges of the present state of things, a map of all the ways in which the present can reinvent itself.”

Vittorio Loreto and his colleagues at Sapienza University of Rome turned the adjacent possible into a mathematical model, which they then put to objective, empirical, real-world testing. As he said in his TED talk:

“Experiencing the new means exploring a very peculiar space, the space of what could be, the space of the possible, the space of possibilities.

“We conceived our mathematical formulation for the adjacent possible, 20 years after the original Kauffman proposals.

“We had to work out this theory, and we came up with a certain number of predictions to be tested in real life.”
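To give a flavor of the kind of mathematics involved: models in this family are often built as a Polya urn with innovation triggering, where each novelty, once drawn, opens up a batch of previously unreachable possibilities. The sketch below is an illustrative assumption, not Loreto's actual formulation; the function name and parameters (`rho` for reinforcement, `nu` for triggered novelties) are hypothetical choices for this toy version.

```python
import random

def urn_with_triggering(steps, rho=2, nu=3, seed=0):
    """Toy Polya urn with innovation triggering.

    Each draw reinforces the drawn color (rho extra copies go back in).
    The first time a color is ever drawn -- a novelty -- nu + 1 brand-new
    colors enter the urn: the adjacent possible expanding by one step.
    Returns the cumulative count of distinct colors seen after each draw.
    """
    rng = random.Random(seed)
    urn = [0]                 # start with a single color
    next_color = 1            # label for the next never-seen color
    seen = set()
    novelties = []
    for _ in range(steps):
        ball = rng.choice(urn)
        urn.extend([ball] * rho)          # reinforcement: the familiar gets likelier
        if ball not in seen:              # a novelty...
            seen.add(ball)
            # ...triggers new, previously unreachable colors
            urn.extend(range(next_color, next_color + nu + 1))
            next_color += nu + 1
        novelties.append(len(seen))
    return novelties

counts = urn_with_triggering(10_000)
print(counts[-1])
```

Running this, novelties keep arriving but at a slowing, sublinear rate: each discovery makes more discoveries reachable, yet the familiar is reinforced too, which is the qualitative behavior these models are designed to capture and test against real data.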

Their test results suggest that the adjacent possible is good science: the impossible doesn’t step out of the ether; it waits at the edge of expanding networks, ready to become possible.[1] As Steven Johnson said above, that’s a far cry from our popular romantic notions of revelations, big ideas, and flashes of brilliance. We’ll look more at those next time.

[1] For a nerdier version, see this Wired piece: The ‘Adjacent Possible’ of Big Data: What Evolution Teaches About Insights Generation.
