Computing Evolves

No matter how carefully we engineer complex computing systems, they stubbornly insist on evolving, becoming ever more complex and difficult to understand.

Well-engineered systems reflect a number of accepted principles of good design. The parts of engineered systems are expected to have a known function, irrelevant parts should be removed, and redundancy should be explicit. Designers should maintain separation of concerns, i.e., each part participates in just one functional unit and is designed to do that one task well. And engineers do everything possible to prevent the emergence of unforeseen consequences (bugs). In contrast, parts in evolved systems may have mysterious interactions with several apparently separate functions, and they may not seem to be optimized for any of those roles.

Software systems, especially large or old software systems, owe much more to evolution than we sometimes wish to acknowledge. Complex computing systems may start with good engineering but all too soon the best intentions give way to expediency. Changes accumulate by accretion of small modifications, each one intended to fix some bug or add some small function or change an existing function. Inevitably, unintended consequences accumulate as well. As they age, computing systems begin to resemble biological systems.

Parallels with Biological Evolution

Three and a half billion years of biological evolution has given rise to living systems that are incredibly elaborate. Complex biochemistry begat complex cells that begat complex multicellular organisms [1].

Software professionals might well characterize biological systems as a triumph of layer after layer of “clever hacks.” Each new layer exploits the hacks that have come before. As with software, biological systems include vestiges of functional units that may be irrelevant to current circumstances -- or maybe not. These vestiges are left over from the ancestral history of the organism. Some may still be functional in unusual circumstances (e.g., rarely used metabolic functions). The closer scientists examine non-functional "junk" DNA, the more of it turns out to have a function after all. The term was coined as a reference to non-coding DNA at a time when coding for proteins was about all biochemists thought DNA did. But then "functions" were discovered in contexts other than simply coding for amino acids. Some of this noncoding DNA is used to produce noncoding RNA components such as transfer RNA, regulatory RNA, and ribosomal RNA. DNA sequences provide binding sites for proteins with various functions, they affect the probabilities and nature of alternative splicing, they affect the mutability of different sections of DNA, and so forth. Complex organisms eventually evolved group and social relationships which, in the fullness of time, evolved human societies that evolved technologies, which became digital.

So too with computing systems. To any IT manager, especially one who was around during the late '90s, the above should sound very familiar. The history of computing may be relatively short but, as we learned from the Y2K (Year 2000) experience, it is long enough for legacy computing systems to be full of obscure code that may or may not be relevant to current circumstances. Or, worse yet, code that is relevant only in exceedingly rare circumstances.

All complex evolved systems, be they biological, social, economic, or computing systems, change over time as a result of the interaction between various sources of novelty and various mechanisms for weeding out “undesirable” or “unfit” novelty. In biological evolution, novelty is presumed to occur by various random processes, and weeding out occurs when an organism does not survive long enough to produce offspring. Computing systems also evolve. Novelty in computing usually arises from human creativity. There are always new ways computers can be used, new ways for them to interact, and new architectures for their design and construction. Weeding out happens when the novel uses simply don’t work, or don’t scale. But most often novelty in computing fails simply because the marketplace rejects it.
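The novelty-plus-weeding-out dynamic described above can be sketched as a toy simulation. This is a minimal illustration, not a model of any real system: the trait values, mutation size, and population size are all invented for the example. Random mutation supplies novelty; keeping only the fittest half supplies the weeding out.

```python
import random

random.seed(42)

TARGET = 0.0  # hypothetical "ideal" trait value the environment favors

def fitness(trait):
    # Closer to the target means fitter; negative distance serves as a score.
    return -abs(trait - TARGET)

# Initial population: 50 individuals with random trait values.
population = [random.uniform(-10, 10) for _ in range(50)]

for generation in range(100):
    # Novelty: each offspring is a slightly mutated copy of a parent.
    offspring = [p + random.gauss(0, 0.5) for p in population]
    # Weeding out: keep only the fittest half of parents plus offspring.
    pool = sorted(population + offspring, key=fitness, reverse=True)
    population = pool[:50]

best = max(population, key=fitness)
```

After a hundred generations of this blind process, the surviving traits cluster near the value the environment rewards, even though no individual step was designed.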

Evolution of a given complex system, whether a biological or a computing system, occurs in the context of the other evolving systems in its ecology. That is, these systems co-evolve with other organisms or computing systems that compete, cooperate, or prey upon one another. Biological co-evolution in predator/prey or symbiotic relationships tends to drive evolution rapidly, something that should sound familiar as we cope with today's co-evolutionary arms race between computing virus/worm predators and their Windows prey. The interplay between email spammers and spam-filter developers is another obvious example of digital co-evolution.

One important lesson from biological co-evolution that is only now being absorbed by computing professionals is that monocultures, i.e., large populations of genetically identical organisms such as corn fields or rubber plantations, are big, stationary targets for diverse evolving predators. Once any virus, bacterium, fungus, or insect manages by mutation to escape the defenses of one plant in the monoculture, all plants are immediately at risk. The Irish potato famine of 1845-50 is an unfortunate example of what can happen. To our dismay, we are discovering that this same principle applies equally well to computing monocultures such as the Windows-Office-IE-Outlook monoculture or the Browser-Adobe Flash monoculture. Exploits that attack a weakness in such monocultures spread like wildfire.
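The monoculture argument can be made concrete with a back-of-the-envelope simulation. The configuration names and population sizes below are invented for illustration; the only point is the arithmetic: an exploit that compromises every machine sharing one configuration takes down the whole monoculture but only a fraction of a diverse population.

```python
import random

random.seed(0)

def compromised_fraction(configs):
    # A single exploit targets one configuration and compromises
    # every machine that runs it.
    exploit_target = random.choice(configs)
    hits = sum(1 for c in configs if c == exploit_target)
    return hits / len(configs)

# Monoculture: 1000 machines with an identical software stack.
monoculture = ["windows"] * 1000

# Diverse population: the same 1000 machines spread over four stacks.
diverse = [random.choice(["windows", "mac", "linux", "bsd"])
           for _ in range(1000)]

mono_loss = compromised_fraction(monoculture)   # always 1.0
diverse_loss = compromised_fraction(diverse)    # roughly a quarter
```

Diversity does not prevent the exploit; it merely caps the damage at the size of the largest susceptible subpopulation, which is the same reason crop rotation and mixed planting limit blight.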

For More on the Evolution of Computing

[1] D. W. McShea, “The minor transitions in hierarchical evolution and the question of a directional bias,” J. Evolutionary Biology, vol. 14, pp. 502-518, Blackwell Science, Ltd., 2001.

Last revised 7/26/2018