A fundamental similarity between the evolution of living systems and the evolution of computing systems is the way they become ever more complex as they evolve. We know what that means at an intuitive level, but just what does it mean in a more formal sense?
There has been serious theoretical work by heavyweights such as Andrei Kolmogorov, Gregory Chaitin, Charles Bennett, and Stephen Wolfram to define complexity in a general, rigorous, and formal way. None of those attempts has given rise to a theory of complexity suitable for characterizing the way the human brain models the world.
We tend to think of complexity as the difficulty of comprehending or describing the intricacy of the structure of some system. That sort of complexity is often called detail, structural, or static complexity. Depending as it does on the limits of our understanding, the definition of static complexity is inextricably confounded with the definition of human cognitive abilities such as intelligence. But intelligence itself is notoriously slippery and difficult to define.
A different sort of complexity, typically called dynamic complexity, is inherent in systems themselves, not in the limits of human comprehension of their parts. This second sort of complexity emerges naturally in systems that evolve over time via positive feedback, whether meteorological, cosmological, biological, ecological, economic, or computing systems. (For more on these two sorts of complexity, see ….) Given that we can't satisfactorily define complexity, it should come as no surprise that we cannot satisfactorily measure complexity in a general way either. So, what could it mean to say that evolving complex dynamic systems have a habit of becoming more complex? Without stepping into the deep waters of trying to measure complexity, let me say that such systems become less and less predictable without becoming random. [Note: we also lack satisfactory measures of randomness, so I use that term somewhat loosely too.]
Both static and dynamic complexity bedevil us in our computing systems. We struggle with the static complexity of program code and database schemas that define structural relationships among large numbers of interacting elements. As complex systems grow, these descriptions tend to become so intricate that they exceed our ability to understand them.
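One way to see why such structural descriptions outgrow us is a simple back-of-the-envelope count (my own illustration, not a formal measure of static complexity): the number of potential pairwise relationships among n interacting elements grows quadratically, as n(n-1)/2.

```python
def pairwise_relationships(n: int) -> int:
    """Number of distinct pairs among n elements: n choose 2 = n(n-1)/2."""
    return n * (n - 1) // 2

# A tenfold growth in elements means roughly a hundredfold growth in
# potential relationships a reader of the schema must keep in mind.
for n in (10, 100, 1000, 10000):
    print(f"{n:6d} elements -> {pairwise_relationships(n):,} potential relationships")
```

Real schemas realize only a fraction of these pairs, but the quadratic ceiling suggests why even modest systems accumulate descriptions beyond comprehension.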
Dynamic complexity is qualitatively different; it is about what happens at runtime as the behavior of a system, e.g., a computer program, unfolds via interactions between its elements. That is, dynamic complexity arises from the runtime relationships between interacting elements. Consider a flock of starlings, for example. The birds attempt to stay together in the flock while avoiding collisions with other birds. The turns and twists of each bird to satisfy these two goals affect the paths of many nearby birds that, in turn, affect the paths of still more birds. Thus the dynamics of the flock as a whole are complex and inherently unpredictable. Yet anyone who has watched flocks of starlings can see clearly that the behavior of the flock is not random. Note that this sort of complexity has nothing to do with human cognitive limits. It arises from the distributed nature of the task the birds face and the different perspective each bird has.
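The starling example can be sketched as a toy flocking simulation in the spirit of Reynolds' boids model. The two local rules per bird below (cohesion toward the flock's center, separation from too-close neighbors), and the radii and step sizes, are illustrative assumptions, not a model of real birds:

```python
import random

def step(flock, sep_radius=1.0, speed=0.1):
    """Advance a toy 2-D flock one timestep: each bird steers toward the
    flock's center (cohesion) and away from too-close neighbors (separation)."""
    n = len(flock)
    cx = sum(x for x, _ in flock) / n
    cy = sum(y for _, y in flock) / n
    new_flock = []
    for x, y in flock:
        dx, dy = cx - x, cy - y                # cohesion: pull toward center
        for ox, oy in flock:
            d2 = (ox - x) ** 2 + (oy - y) ** 2
            if 0 < d2 < sep_radius ** 2:       # separation: push away, harder
                dx -= (ox - x) / d2            # the closer the neighbor is
                dy -= (oy - y) / d2
        norm = (dx * dx + dy * dy) ** 0.5
        if norm > 1.0:                         # clamp steering to a unit vector
            dx, dy = dx / norm, dy / norm
        new_flock.append((x + speed * dx, y + speed * dy))
    return new_flock

def spread(flock):
    """Average distance of birds from the flock's center of mass."""
    cx = sum(x for x, _ in flock) / len(flock)
    cy = sum(y for _, y in flock) / len(flock)
    return sum(((x - cx) ** 2 + (y - cy) ** 2) ** 0.5 for x, y in flock) / len(flock)

random.seed(0)
flock = [(random.uniform(-5.0, 5.0), random.uniform(-5.0, 5.0)) for _ in range(30)]
initial_spread = spread(flock)
for _ in range(200):
    flock = step(flock)
final_spread = spread(flock)
```

No bird follows a global plan, yet the flock pulls together without collapsing or colliding; the flock-level behavior emerges entirely from local interactions, which is exactly the distributed character of dynamic complexity described above.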
Emergent systems such as flocks of birds, economic trading systems, groups of human dwellings, or communities of websites and blogs become new "entities": the flock, the stock exchange, the city, or a community in the blogosphere or twitterspace. These new entities then begin to interact with each other. Cities, for example, compete with each other for status, pride, educated workforces, new businesses, and even land. Eventually their interactions grow rich enough that yet another, higher-level complex system can emerge: cooperating cities form countries, and so forth. These new sorts of interactions add yet another level of complexity (unpredictability) without becoming more random.
As we will see, the consequences of emergent multi-level complexity turn out to be central to understanding the evolution of computing as well as the evolution of biological systems.