Monday, March 21, 2011

Emergence in CCK11

Emergence is the concept in network theory that most attracts me to the conversation. In his article Emergent Biological Principles and the Computational Properties of the Universe, physicist Paul Davies defines emergence neatly as "the appearance of new properties that arise when a system exceeds a certain level of size or complexity, properties that are absent from the constituents of the system." Emergence, then, is an antidote to the reductionist idea that we can understand anything by reducing it to its basic parts and then thoroughly describing those parts and their interactions.

In Chapter 9 of his book Networks of the Brain, Sporns notes the strong impact of reductionist thinking in modern neuroscience and its ultimate shortcomings in accounting for mind:
There have been many false starts in the attempt to link brain and cognition. One such failure is neuroreductionism, a view that fully substitutes all mental phenomena by neural mechanisms, summarized in the catchphrase "You are nothing but a pack of neurons," or, put more eloquently, "'You', your joys and your sorrows, your memories and your ambitions, your sense of personal identity and free will, are in fact no more than the behavior of a vast assembly of nerve cells and their associated molecules" (Crick, 1994). The problematic nature of this statement lies not in the materialist stance that rightfully puts mental states on a physical basis but rather in the phrase "no more than," which implies that the elementary properties of cells and molecules can explain all there is to know about mind and cognition. Reductionism can be spectacularly successful when it traces complex phenomena to their root cause, and yet it consistently falls short as a theoretical framework for the operation of complex systems because it cannot explain their emergent and collective properties. (180)
Sporns' argument echoes arguments from Edgar Morin that I have noted in previous posts: reductionist science has been "spectacularly successful [in] tracing complex phenomena to their root cause," yet has consistently fallen "short as a theoretical framework for the operation of complex systems because it cannot explain their emergent and collective properties."

I think I detect a similar reductionism at work in most educational theories, which reduce knowledge and learning to the functions of a single mind. Even the social constructivists still limit knowledge created in a group to the knowing of a single mind. This concept of knowledge and learning can be spectacularly successful in helping us trace the complex phenomena of learning back to the behaviors of a single individual, but it fails to provide us with a theoretical framework to account for the ability of a network of strangers to so accurately guess the weight of an ox at a rural English fair, as James Surowiecki describes in his book The Wisdom of Crowds. How does the crowd know the weight of the ox when obviously so few of the individuals in the crowd had even the remotest notion of how much the animal weighed? What emergent properties were at work that produced knowledge that no one person had? For Sporns, this question might be phrased: what emergent properties are at work to produce mind when no one neuron has mind? Reductionism fails us just here. When we collect enough neurons in one place and interconnect them, then new structures and functions emerge—not out of nothing, but into something that wasn't there before. Likewise, when we collect enough students in one place and interconnect them, then do new structures and functions emerge?
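Surowiecki's ox story does have a simple statistical core: when many guesses are independent and scatter around the true value, their individual errors largely point in different directions and cancel in the average. A minimal simulation sketches the effect. The 1,198-pound figure is the dressed weight commonly reported for Galton's ox; the crowd size and the spread of the guesses here are invented for illustration:

```python
import random
import statistics

random.seed(2011)
TRUE_WEIGHT = 1198   # lbs: dressed weight commonly reported for Galton's ox
N_GUESSERS = 800     # invented crowd size, roughly Galton's ~800 entries

# Each fairgoer guesses badly but independently: the true weight plus noise.
guesses = [TRUE_WEIGHT + random.gauss(0, 150) for _ in range(N_GUESSERS)]

crowd_estimate = statistics.mean(guesses)
typical_error = statistics.mean(abs(g - TRUE_WEIGHT) for g in guesses)

print(f"crowd estimate: {crowd_estimate:.0f} lbs")
print(f"typical individual error: {typical_error:.0f} lbs")
```

No single guesser needs to be accurate; the aggregate "knows" more than any member. This accounts for the arithmetic of the ox, though not, of course, for the harder cases of emergence — minds from neurons, or learning from networks of students — where the whole is not a simple average of its parts.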

I think Carl Bereiter is questioning this specific reductionist tendency in education when he says in the Preface to his book Education and Mind in the Knowledge Age (2002): "What is being challenged is the basic conception of the mind as a container of objects—beliefs, desires, conjectures, remembered events, and the like—which the mind works on in cognition" (1). If Bereiter is correct, then knowledge and learning cannot be reduced to a single mind containing a complicated collection of chunks of knowledge and other cognitive objects and upon which the individual mind operates to create its mental picture of the universe.

I'm somewhat puzzled that the concept of emergence is not more prominent in our CCK11 conversations. I wonder why.


  1. I attended the Winter Chaos Conference at Southern Connecticut State University this past weekend, and my use of the term "emergence" prompted an hour-long discussion at dinner. Wikipedia includes the definition we agreed upon, the one by Professor Jeffrey Goldstein, a friend of many around the dinner table.
    The term "emergent" was coined by the pioneer psychologist G. H. Lewes, who wrote:
    "Every resultant is either a sum or a difference of the co-operant forces; their sum, when their directions are the same -- their difference, when their directions are contrary. Further, every resultant is clearly traceable in its components, because these are homogeneous and commensurable. It is otherwise with emergents, when, instead of adding measurable motion to measurable motion, or things of one kind to other individuals of their kind, there is a co-operation of things of unlike kinds. The emergent is unlike its components insofar as these are incommensurable, and it cannot be reduced to their sum or their difference." (Lewes 1875, p. 412)(Blitz 1992)
    Professor Jeffrey Goldstein in the School of Business at Adelphi University provides a current definition of emergence in the journal Emergence (Goldstein 1999). Goldstein initially defined emergence as: "the arising of novel and coherent structures, patterns and properties during the process of self-organization in complex systems" (Corning 2002).
    Goldstein's definition can be elaborated to describe its qualities in more detail:
    "The common characteristics are: (1) radical novelty (features not previously observed in systems); (2) coherence or correlation (meaning integrated wholes that maintain themselves over some period of time); (3) A global or macro "level" (i.e. there is some property of "wholeness"); (4) it is the product of a dynamical process (it evolves); and (5) it is "ostensive" (it can be perceived). For good measure, Goldstein throws in supervenience -- downward causation." (Corning 2002)

  2. Theodore, thanks for the wonderful elaboration on emergence. I've added your references to my notes.

    And what new structures do you see emerging from MOOC CCK11?

  3. Thank you for the link to this article, Keith. I share your interest in emergence, and I think it's clearly applicable to CCK11. I've often come back to Sugata Mitra's experiments as an example, and I also think there's something in the term "system" itself that captures emergence (contrary to our course facilitator). Complex systems are more than the sum of their parts, and I think that "more than" is worth examining. Maybe ideas like field theory, I don't know. But perhaps we're not addressing those questions because at a certain point you either have to admit there's some action/activity going on that's not quantifiable--explained by esoteric things like field theory--or just shrug and get back to examining pieces and parts. I know when I really try to think about it, my brain gets a little wonky (kinda like when I think about string theory). For me, that's a cool thing, BTW.


  4. Leah, examining pieces and parts is what we've been doing for two hundred years now, and while it has had some magnificent successes, it has also trapped all of us in silos of expertise, reducing all the world to our own little collections of facts. One of the things that I like about our current MOOC is the ability to move beyond my little silo (composition and rhetoric) to explore other parts of the world. It's a nice room. Big, too. :-)

  5. As I read this, Keith, many questions come to mind. If the Internet and all of its human nodes have the potential to give rise to some kind of collective mind, when would we expect to identify it? What kind of mind would it be? How would it become manifest? What would it want? Could it potentially become self-conscious?

  6. Good question, Bruce. My immediate answer is somewhat depressing to me: we may not ever know if or when a collective consciousness emerges from the Internet. Does a neuron know that it is part of a conscious mind, even if it somehow knows that it is part of a brain?

    But the most accurate answer I can give you is I don't know.