Characterizing Complexity

Complex dynamic systems inevitably become even more complex. But why? It turns out that's a deep question.

A fundamental similarity between the evolution of living systems and the evolution of computing systems is that both become ever more complex as they evolve. We know what that means at an intuitive level: these systems become increasingly difficult for us to understand and predict. But just what does it mean in a more formal sense?

There has been serious theoretical work by heavyweights such as Andrey Kolmogorov, Gregory Chaitin, and Stephen Wolfram to define complexity in a general, rigorous, and formal way. But no generally accepted definition or theory of complexity has emerged. Nor have any of those attempts given rise to a theory of complexity suitable for characterizing the way the human brain models the world.
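
To give a flavor of what a formal definition looks like: Kolmogorov defined the complexity of a string as the length of the shortest program that outputs it. That quantity is provably uncomputable, but an ordinary compressor gives a crude upper bound on it. Here is a minimal sketch of that proxy idea, using only Python's standard library; the particular strings are arbitrary illustration values:

```python
import zlib
import random

def compressed_size(data: bytes) -> int:
    """Length of the zlib-compressed form of data. Kolmogorov complexity
    itself is uncomputable; a compressor gives only a crude upper bound."""
    return len(zlib.compress(data, level=9))

structured = b"abc" * 1000  # 3000 bytes with an obvious short description
random.seed(0)
noisy = bytes(random.randrange(256) for _ in range(3000))  # 3000 patternless bytes

print(compressed_size(structured))  # tiny: the regularity compresses away
print(compressed_size(noisy))       # close to 3000: no short description found
```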

We tend to think of complexity as the difficulty of comprehending or describing the intricate structure of some system. That sort of complexity is often called detail, structural, or static complexity. Because it depends on the limits of our understanding, any definition of static complexity is inextricably confounded with definitions of human cognitive abilities such as perception, memory, and intelligence. And intelligence itself is notoriously slippery and difficult to define.

A different sort of complexity, typically called dynamic complexity, is inherent in the way systems themselves function, not in the limits of human comprehension of their parts. This second sort of complexity emerges naturally in systems that evolve over time via positive feedback, e.g., meteorological, cosmological, biological, ecological, geological, social, economic, or computing systems. Given that we cannot satisfactorily define complexity in general, it should come as no surprise that we cannot satisfactorily measure it in a general way either: without a well-behaved general definition, there is nothing well-defined to measure.

So, what could it mean to say that evolving complex dynamic systems have a habit of becoming more and more complex? Without stepping into the deep waters of trying to define complexity, let me say that such systems become less and less predictable without becoming random. [Note: we also lack satisfactory rigorous definitions and measures of randomness, so I use that term somewhat loosely too.]
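
One standard toy illustration of "less predictable without becoming random" is the logistic map: a one-line, fully deterministic update rule whose trajectories at r = 4 diverge from almost identical starting points, so detailed long-run prediction fails even though nothing random ever happens. A minimal sketch:

```python
def logistic_trajectory(x0, r=4.0, steps=40):
    """Iterate the deterministic logistic map x' = r * x * (1 - x)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.400000)
b = logistic_trajectory(0.400001)  # starts one millionth away from a
for t in (0, 10, 20, 30):
    # The tiny initial gap grows until the two paths are unrelated,
    # yet each path was produced by a purely deterministic rule.
    print(t, abs(a[t] - b[t]))
```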

Both static and dynamic complexity bedevil us in our computing systems. We struggle with the static complexity of program source code and database schemas that define structural relationships among networks of interacting elements. As complex systems grow, these structural descriptions tend to become so intricate that they exceed our cognitive ability to understand them.

Dynamic complexity is qualitatively different; it concerns what happens at runtime as the behavior of a system, e.g., a computer program, unfolds through interactions between elements. Dynamic complexity arises from feedback loops between interacting elements. Consider a flock of starlings, for example. Each bird tries to stay with the flock while avoiding collisions with other birds. The twists and turns each bird makes to satisfy those goals affect the paths of many nearby birds, which in turn affect the paths of still more birds. Thus the dynamics of the flock as a whole are complex and inherently unpredictable in detail. Yet anyone who has watched a flock of starlings can see clearly that its behavior is not random. Note that this sort of complexity has nothing to do with human cognitive limits. It arises from the distributed nature of the task the birds face and the different perspective each bird has.
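
The starlings are following, in effect, Craig Reynolds' classic "boids" rules: cohesion (stay with the flock), separation (avoid collisions), and alignment (match neighbors' headings). Here is a deliberately minimal 2-D sketch of that feedback loop; the weights, radius, and flock size are arbitrary illustration values rather than anything canonical:

```python
import random

def step(birds, radius=5.0, w_cohere=0.01, w_avoid=0.05, w_align=0.05):
    """One toy boids update: each bird is a tuple (x, y, vx, vy)."""
    updated = []
    for x, y, vx, vy in birds:
        near = [b for b in birds
                if b[:2] != (x, y) and (b[0] - x) ** 2 + (b[1] - y) ** 2 < radius ** 2]
        if near:
            n = len(near)
            cx, cy = sum(b[0] for b in near) / n, sum(b[1] for b in near) / n
            vx += w_cohere * (cx - x)          # cohesion: steer toward neighbors
            vy += w_cohere * (cy - y)
            for nx, ny, _, _ in near:          # separation: steer away from each one
                vx -= w_avoid * (nx - x)
                vy -= w_avoid * (ny - y)
            vx += w_align * (sum(b[2] for b in near) / n - vx)  # alignment
            vy += w_align * (sum(b[3] for b in near) / n - vy)
        updated.append((x + vx, y + vy, vx, vy))
    return updated

random.seed(1)
flock = [(random.uniform(0, 20), random.uniform(0, 20), 0.0, 0.0) for _ in range(30)]
for _ in range(100):
    flock = step(flock)  # every move feeds back into every later move
```

Nothing in the per-bird rules mentions the flock's overall shape; that shape, and its unpredictable-but-not-random evolution, exists only in the loop of mutual adjustments.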

Emergent systems such as flocks of birds, economic trading systems, groups of human dwellings, or communities of websites and blogs become new "entities": the flock, the stock exchange, the city, or a community in the blogosphere or twitterspace. These new entities then begin to interact with each other. Cities, for example, compete with each other for status, pride, educated workforces, new businesses, and even land. Eventually their interactions grow rich enough that yet another, higher-level complex system can emerge: cooperating cities form countries, and so forth. These new sorts of interactions add yet another level of complexity (unpredictability) without making the system any more random.

As we will see, the consequences of emergent multi-level complexity turn out to be central to understanding the evolution of computing as well as the evolution of biological systems.

Last revised 3/23/2015