How Impossible Becomes Possible

[Image: active nerve cell in the human nervous system]

[Image: network]

Scientific materialism explains a lot about how the brain creates consciousness, but hasn’t yet fully accounted for subjective awareness. As a result, the “hard problem” of consciousness remains unsolved, and we’re alternately urged to either concede that the human brain just isn’t ready to figure itself out, or conclude that reality is ultimately determined subjectively.

Princeton psychology and neuroscience professor Michael S. A. Graziano isn’t ready to do either. He thinks the “hard problem” label is itself the problem, because it cuts off further inquiry:

“Many thinkers are pessimistic about ever finding an explanation of consciousness. The philosopher Chalmers, in 1995, put it in a way that has become particularly popular. He suggested that the challenge of explaining consciousness can be divided into two problems. One, the easy problem, is to explain how the brain computes and stores information. Calling this problem easy is, of course, a euphemism. What is meant is something more like the technically possible problem given a lot of scientific work.

“In contrast, the hard problem is to explain how we become aware of all that stuff going on in the brain. Awareness itself, the essence of awareness, because it is presumed to be nonphysical, because it is by definition private, seems to be scientifically unapproachable. Again, calling it the hard problem is a euphemism; it is the impossible problem.

“The hard-problem view has a pinch of defeatism in it. I suspect that for some people it also has a pinch of religiosity. It is a keep-your-scientific-hands-off-my-mystery perspective. In the hard-problem view, rather than try to explain consciousness, we should marvel at its insolubility. We have no choice but to accept it as a mystery.

“One conceptual difficulty with the hard-problem view is that it argues against any explanation of consciousness without knowing what explanations might arise. It is difficult to make a cogent argument against the unknown. Perhaps an explanation exists such that, once we see what it is, once we understand it, we will find that it makes sense and accounts for consciousness.”

Consciousness and the Social Brain, by Michael S. A. Graziano (2013).

In other words, if science is going to explain consciousness, it needs to reframe its inquiry so that what is now an “impossible,” “scientifically unapproachable” problem becomes a “technically possible problem” that can be solved “given a lot of scientific work.”

Technology and innovation writer Steven Johnson describes how he thinks the impossible becomes possible in Where Good Ideas Come From, which is available as a TED talk, a book, and an animated whiteboard video on YouTube. In his TED talk, he contrasts popular subjective notions with what neuroscience has discovered about how the brain actually works:

“[We] have to do away with a lot of the way in which our conventional metaphors and language steer us towards certain concepts of idea-creation. We have this very rich vocabulary to describe moments of inspiration. We have … the flash of insight, the stroke of insight, we have epiphanies, we have ‘eureka!’ moments, we have the lightbulb moments… All of these concepts, as kind of rhetorically florid as they are, share this basic assumption, which is that an idea is a single thing, it’s something that happens often in a wonderful illuminating moment.

“But in fact, what I would argue is … that an idea is a network on the most elemental level. I mean, this is what is happening inside your brain. An idea — a new idea — is a new network of neurons firing in sync with each other inside your brain. It’s a new configuration that has never formed before. And the question is, how do you get your brain into environments where these new networks are going to be more likely to form?”

Johnson expands on the work of biologist and complex systems researcher Stuart Kauffman, who dubbed this idea the “adjacent possible.” The adjacent possible is where the brain’s neural networks (the top picture above) meet data networks (the bottom picture): neither is a static, closed environment; both are dynamic, constantly shifting and re-organizing, with each node representing a new point from which the network can expand. Thus the shift from unknown to known is always a next step away:

“The adjacent possible is a kind of shadow future, hovering on the edges of the present state of things, a map of all the ways in which the present can reinvent itself.”
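
To make that picture concrete, here is a minimal sketch (my own illustration, not Kauffman’s or Loreto’s formulation) that treats the ideas we already hold as nodes in a network and computes the adjacent possible as everything exactly one step beyond the known frontier. The example network of ideas is hypothetical:

```python
# A toy illustration of the "adjacent possible": given a network of ideas
# and the subset we currently "know," the adjacent possible is the set of
# nodes one step beyond the known frontier. Exploring any of them shifts
# the frontier, which in turn exposes new possibilities.

def adjacent_possible(network, known):
    """Return every node exactly one step away from the known set."""
    frontier = set()
    for node in known:
        for neighbor in network.get(node, ()):
            if neighbor not in known:
                frontier.add(neighbor)
    return frontier

# A hypothetical network of ideas (adjacency list).
ideas = {
    "glass lens": ["telescope", "microscope"],
    "telescope": ["moons of Jupiter"],
    "microscope": ["cell theory"],
    "moons of Jupiter": ["heliocentrism"],
    "cell theory": ["germ theory"],
}

known = {"glass lens"}
print(adjacent_possible(ideas, known))   # telescope and microscope are now possible

known |= {"telescope"}                   # explore one possibility...
print(adjacent_possible(ideas, known))   # ...and the map redraws itself
```

Explore any node on that frontier and the frontier itself moves, which is all the “shadow future” metaphor is claiming.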

Vittorio Loreto and his colleagues at Sapienza University of Rome turned the adjacent possible into a mathematical model, which they then submitted to objective, empirical, real-world testing. As he said in his TED talk:

“Experiencing the new means exploring a very peculiar space, the space of what could be, the space of the possible, the space of possibilities.

“We conceived our mathematical formulation for the adjacent possible, 20 years after the original Kauffman proposals.

“We had to work out this theory, and we came up with a certain number of predictions to be tested in real life.”
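
Loreto’s actual formulation is beyond the scope of this post, but a toy “urn” simulation in the same general spirit (an assumption about the overall shape of such models, not his equations) shows how the adjacent possible becomes something you can test: drawing an idea that has never been drawn before unlocks a batch of further ideas that were not drawable until then, and the model then makes measurable predictions about how fast novelties should keep arriving:

```python
import random

# A toy urn model in the spirit of "adjacent possible" formulations:
# drawing a never-before-seen ball not only reinforces what was drawn,
# it also adds brand-new kinds of balls to the urn -- each novelty opens
# up further novelties. The parameters are arbitrary illustration values.

REINFORCE = 4    # copies added of whatever is drawn
TRIGGER = 3      # new, never-seen kinds added when a novelty is drawn

def simulate(steps, seed=0):
    random.seed(seed)
    urn = [0, 1]               # start with two kinds of "ideas"
    next_new_kind = 2
    seen = set()
    novelty_count = []
    for _ in range(steps):
        ball = random.choice(urn)
        urn.extend([ball] * REINFORCE)
        if ball not in seen:                   # a genuine novelty...
            seen.add(ball)
            urn.extend(range(next_new_kind,    # ...unlocks adjacent ones
                             next_new_kind + TRIGGER))
            next_new_kind += TRIGGER
        novelty_count.append(len(seen))
    return novelty_count

history = simulate(5000)
# One testable prediction of models like this: novelties keep arriving,
# but at a slowing (sublinear) pace rather than drying up or exploding.
print(history[100], history[1000], history[4999])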

Their test results suggest that the adjacent possible is good science: the impossible doesn’t step out of the ether; it waits at the edge of expanding neural networks, ready to become possible.[1] As Steven Johnson said above, that’s a far cry from our popular romantic notions of revelations, big ideas, and flashes of brilliance. We’ll look more at those next time.

[1] For a nerdier version, see this Wired piece: The ‘Adjacent Possible’ of Big Data: What Evolution Teaches About Insights Generation.

Emergence

[Image: murmuration]

One fine autumn afternoon I watched, transfixed, as a gigantic flock of migratory birds swarmed over the woods across the street. I was watching a “complex, self-organizing system” in action: specifically, a “murmuration” of birds, which is created by “swarm behavior,” which in turn falls into the category of emergence.

Emergence explains how the whole becomes greater than the sum of its parts. The term is widely used in systems theory, philosophy, psychology, chemistry, biology, neurobiology, and machine learning; for purposes of this blog, it also applies to cultural belief systems and the social institutions they generate.

Consider any culture you like: a team, club, company, profession, investor group, religious gathering, political party…. As we’ve seen previously in this series, the group’s cultural sense of reality is patterned into each individual member’s neural wiring and cellular makeup. But no one member can hold it all, and different members have varying affinities for different aspects of the culture. As a result, each member takes what the others bring “on faith”: the group believes in its communal beliefs. This faith facilitates the emergence of a cohesive, dynamic cultural body that takes on a life of its own, expressed through its institutions.

That’s emergence.

To get a further sense of how this works, see this TED talk, which uses complex systems theory to look at how the structure of the financial industry (a transnational cultural body) helped bring about the Great Recession of 2007-2008. Systems theorist James B. Glattfelder[1] lays out a couple of key features of self-organizing systems:

“It turns out that what looks like complex behavior from the outside is actually the result of a few simple rules of interaction. This means you can forget about the equations and just start to understand the system by looking at the interactions.

“And it gets even better, because most complex systems have this amazing property called emergence. This means that the system as a whole suddenly starts to show a behavior which cannot be understood or predicted by looking at the components. The whole is literally more than the sum of its parts.”
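
The murmuration in the photo above is the textbook illustration of that point. Here is a minimal sketch of swarm behavior, loosely based on the classic “boids” rules (separation, alignment, cohesion); the numbers are arbitrary, chosen only to show how little per-bird logic is needed:

```python
import random

# A stripped-down "boids" sketch: three local rules per bird, no leader,
# no global plan. Flock-level behavior (the murmuration) emerges from the
# interactions alone. The constants are arbitrary illustration values.

NEIGHBOR_RADIUS = 50.0   # how far a bird "sees"
MIN_DISTANCE = 10.0      # personal space

class Bird:
    def __init__(self):
        self.x, self.y = random.uniform(0, 500), random.uniform(0, 500)
        self.vx, self.vy = random.uniform(-1, 1), random.uniform(-1, 1)

def step(flock):
    for bird in flock:
        neighbors = [b for b in flock if b is not bird
                     and (b.x - bird.x) ** 2 + (b.y - bird.y) ** 2 < NEIGHBOR_RADIUS ** 2]
        if not neighbors:
            continue
        # Rule 1: cohesion -- drift toward the neighbors' center of mass.
        cx = sum(b.x for b in neighbors) / len(neighbors)
        cy = sum(b.y for b in neighbors) / len(neighbors)
        bird.vx += (cx - bird.x) * 0.005
        bird.vy += (cy - bird.y) * 0.005
        # Rule 2: separation -- steer away from birds that crowd too close.
        for b in neighbors:
            if (b.x - bird.x) ** 2 + (b.y - bird.y) ** 2 < MIN_DISTANCE ** 2:
                bird.vx -= (b.x - bird.x) * 0.05
                bird.vy -= (b.y - bird.y) * 0.05
        # Rule 3: alignment -- match the neighbors' average heading.
        bird.vx += (sum(b.vx for b in neighbors) / len(neighbors) - bird.vx) * 0.05
        bird.vy += (sum(b.vy for b in neighbors) / len(neighbors) - bird.vy) * 0.05
    for bird in flock:
        bird.x += bird.vx
        bird.y += bird.vy

flock = [Bird() for _ in range(100)]
for _ in range(500):
    step(flock)   # no rule mentions a flock, yet one appears
```

Nothing in those rules describes a flock; each bird reacts only to its nearest neighbors. Plot the positions over a few hundred steps, though, and flock-level shapes appear, which is Glattfelder’s point about forgetting the equations and looking at the interactions.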

In the end, he says, there’s an innate simplicity to it all: “an emergent property which depends on the rules of interaction in the system. We could easily reproduce [it] with a few simple rules.”[2] He contrasts that simplicity with the inevitable polarized logjams we get from clashing cultural ideologies:

“I really hope that this complexity perspective allows for some common ground to be found. It would be really great if it has the power to help end the gridlock created by conflicting ideas, which appears to be paralyzing our globalized world. Ideas relating to finance, economics, politics, society, are very often tainted by people’s personal ideologies. Reality is so complex, we need to move away from dogma.”

Trouble is, we seem to be predisposed toward ideological gridlock and dogma. Even if we’ve never heard of emergence, we have a kind of backdoor awareness of it, a sense that there are meta-influences affecting our lives, but we’re inclined to locate their source “out there” instead of in our bodily selves. “Out there” is where the Big Ideas live, formulated by transcendent realities and personalities (God, gods, Fate, Destiny, Natural Law, etc.) that sometimes enter our lesser existence to reveal their take on how things work. And since they have super-intelligence while we have only a lesser version, once we receive their revelations we codify them into vast bodies of collected wisdom and knowledge, which we then turn over to our sacred and secular cultural institutions to administer. We and our cultures aren’t perfect like they are, but we do our best to live up to their high standards.

We do all this because, as biocentrism champion Robert Lanza has said, most of us have trouble wrapping our heads around the notion that

“Everything we see and experience is a whirl of information occurring in our head. We are not just objects embedded in some external matrix ticking away ‘out there.’”[3]

In our defense, the kind of systems analysis that James Glattfelder uses in his TED talk requires a lot of machine super-intelligence and brute data-crunching power that the human brain lacks. We’re analog and organic, not digital, and we use our limited outlook to perpetuate more polarization, ideological gridlock, and dogma. Culture may be emergent, but when it emerges, it walks right into a never-ending committee meeting debating whether it has a place on the agenda.

Next time, we’ll look at what happens when emergent cultures clash.

[1] James B. Glattfelder holds a Ph.D. in complex systems from the Swiss Federal Institute of Technology. He began as a physicist, became a researcher at a Swiss hedge fund, and now does quantitative research at Olsen Ltd in Zurich, a foreign exchange investment manager.

[2] Here’s a YouTube explanation of the three simple rules that explain the murmuration I watched that day.

[3] From this article in Aeon Magazine.