
The 2nd Law: Is Increased Entropy Stochastic (incidental) or Causal (intrinsic)?


Recent science news is dominated by the multi-billion-dollar experimental search for the Higgs boson. A definitive observation of the theorized, but elusive, Higgs will finally complete the verification of the Standard Model – the most respected mathematical model of the evolution of our universe, explaining the emergence of the known forces and all of the matter we can observe. In the Standard Model, the Higgs is responsible for mass – surrounding the more pedestrian particles and lending them the property we call "mass", the property through which gravity gets its grip. If the Higgs exists, it is important as the causal bridge between the quantum world of the small and the relativistic world of the large. How could a particle that underlies mass be so hard to find? Because on its own it interacts only feebly with anything we can easily measure – it is, as a result, known as "weakly interacting". It is only when a whole lot of Higgs get together and surround other particles that mass is detected, and then only in the surrounded particles. The Higgs binds so tightly to other particles that it takes an extraordinary amount of energy to shake it loose so that its presence can be detected. This is what the "Large Hadron Collider" does – it smashes protons (and sometimes heavy nuclei stripped of their electrons) together at energies equivalent to those of the first moments after the Big Bang, when all of the matter and energy in the entire universe was still packed into a volume smaller than a single star.

But there is a far more fundamental question. Mass – and the gravity that follows from it – is a property. It is domain-dependent. It is specific to, and belongs to, a class of objects of a particular makeup and composition. The existence or nonexistence of the Higgs has no effect upon other properties of the universe, like electromagnetism.

But there is a candidate for a domain-independent attribute of any and all causal systems. This attribute has been labeled the "Causal Entropic Principle" – it is generally discussed within the context of the transfer of heat (at astronomical scales) – within the study of thermodynamics. It is the logical extension of the concept of increased entropy, as first postulated, measured, and later described as the 2nd Law of Thermodynamics. But now, a hundred and fifty years after the formalization of the laws of thermodynamics (of the phenomena and parameters of the transfer of heat, of the ratio of potential energy to work), correlative investigations in the fields of information, communication, computation, language, energy/mass, logic, and structure have uncovered parallel principles and constraints.  It is reasonable now to understand the 2nd Law as a description of a fundamental constraint on any change, in any system, no matter what forces and materials are at play. We now understand the 2nd Law to describe the reduction in the quality (density) of the energy and/or structure of the universe (or any part therein) that results from any change at all. We have come to understand the 2nd Law as a constraint on the outcome of change in structure, which is to say "information" – on its construction, maintenance, and/or transfer. This insight has rendered an equivalence between energy and structure in much the same way that Einsteinian Relativity exposed the equivalence between energy and mass.

There is however a daemon lurking within our understanding of the 2nd Law, a daemon that threatens to undermine our understanding of causality itself, a daemon that, once defined, may provide the basis for an understanding of any self-consistent causal system, including, but not limited to, our own universe and its particular set of properties and behaviors.

The daemon of the 2nd Law is the daemon of stochasticity – is the dissipation (entropy increase) the 2nd Law dictates genuinely statistical, or is statistics simply a tool we use in the absence of microscopic knowledge? Asked another way, is the reduction in the quality of energy or information that the 2nd Law demands of every action a property of the universe, or is it a property of the measurement or observation of the universe? Is action equivalent to measurement? Is there a class of action – measured or stochastic – free of the entropy increase demanded by the 2nd Law?

This question is of far greater consequence to the universe, and to the understanding of the universe, than the mechanics of mass, as it would describe and thus parameterize ALL action and ALL configuration, and the precipitation or evolution of all possible action and configuration. Where the existence of the Higgs boson may explain the source of mass and gravity in this universe, an understanding of the causal attributes leading to the behavior described by the 2nd Law of Thermodynamics might just provide a foundation from which any and all causal systems must precipitate.

The implications and issues orbiting this problem are many and deep. At stake is a demonstrable understanding of change itself. We tend to think of change as the exception. But can a thing exist without change? If not, what is the difference between data and computation, between thing and abstraction of thing – and, profoundly, can data exist without computation? Can a thing exist outside of the abstraction of that thing?

In thermodynamics and information theory, an effort is made to distinguish process from stochastic process. Heat is defined as an aggregate property describing the average or holistic state of systems composed of too many interacting parts to keep track of individually. Heat is a calculus of sorts, a system of shortcuts that allows mathematics to be employed successfully to determine the gross state of a huge collection of similar parts. There is a tendency, then, to assume that the laws that describe heat are laws that only apply to aggregate systems where knowledge is incomplete.
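To make that "calculus of shortcuts" concrete, here is the standard statistical-mechanics bookkeeping (my own illustrative gloss, not part of the original post): Boltzmann's entropy simply counts the number of microscopic arrangements W consistent with a single macroscopic description,

$$ S = k_B \ln W $$

and the daemon question above becomes whether that count is merely a confession of our ignorance of the true microstate, or a physical quantity the universe itself keeps.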

Are there non-stochastic systems? Are there discrete systems, or dynamic changes within systems, for which the laws of thermodynamics don't apply? Does the Causal Entropic Principle apply if you know and can observe every attribute of, and calculate the exact and complete state of, a dynamic system?

Such questions are more involved than they may seem on first reading. Answering them will expose the very nature of change, independent of domain, illuminating the causal chain that has resulted from the full evolutionary lineage of the universe.

Randall Lee Reetz

Note: The Causal Entropic Principle isn't a complex concept. It is the simple application of the 2nd Law's demand for increased universal entropy as a result of every change in any system. It says that every action in every system must be that action that causes the largest reduction in the quality of information or energy (the greatest dissipation). It says that a universe has only one possible end state – heat death – and that processes that maximize the rate towards this end state will be evolutionarily favored (selected), simply because entropy-maximizing processes and structures demand a higher throughput of energy and thus end up dominating their respective locality. Such entropy-maximizing schemes are thus more likely to determine the structure and behavior of the event cone stretching off into the future. An obvious extension of this principle is that complexity – or more precisely, the family of complexity that can find, record, and process abstractions that represent the salient aspects (physics) of the (an) universe – will help that complexity better predict the shape and behavior it must assume to maximize its competitive influence upon the future of entropy maximization.

The "Causal Entropic Principle" thus represents a logically self-consistent (scientific) replacement for the awkwardly self-centered and causally impossible "anthropic principle" (which lacks a physical or causal explanation and leans heavily on a painfully erroneous macroscopic stretching of quantum electrodynamics). Stretching circular logic to its most obvious and illogical end, the anthropic principle borrows awkwardly, erroneously, and ironically from the Heisenberg Uncertainty Principle by asserting the necessity of "observers" as a precursor to the emergence of complexity. The Causal Entropic Principle explains the production of localized complexity without the need for prior knowledge, and does so within the bounds of – as a result of – the 2nd Law of Thermodynamics, by showing that localized complexity can both come into existence as a result of the constant increase in universal entropy, and, more specifically, that localized complexity has an evolutionary advantage and will thus out-compete less complex structures.

In a Causal Entropic Principle universe, intelligence is the expected evolutionary result of competition to reach heat death faster. Falling down is enhanced by a particular class of complexity that can come into existence as a natural result of things falling down. Should one form of such complexity "understand" the universe better than another form, it will have an advantage and will be more likely to influence the shape of complexity in the future. The better a system gets at abstracting the dynamics of its environment, the more likely it is to eat other systems than be eaten by them. Where the anthropic principle requires an a priori "observer", the causal entropic principle simply requires the 2nd Law's demand for increased entropy – for things falling down.
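A compact way to state the principle's extra claim (my own formal gloss, using notation the post does not use): the 2nd Law only demands that every change leave total entropy no smaller, while the Causal Entropic Principle adds a selection rule favoring, among the processes actually available to a system, those that maximize the rate of entropy production:

$$ \Delta S_{\text{universe}} \ge 0, \qquad \text{favored process} \approx \arg\max_{p\,\in\,\text{available}} \frac{dS_p}{dt} $$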

The Problem with Darwin…

Ya… how would you look as Darwin?
Darwin Darwin Darwin. Darwin is a problem. It isn't that he was wrong. In fact, it is very very hard to find any kind of mistake in his theory or his supporting data and arguments. What makes Darwin problematic is his myopic assignment of the process of evolution to the domain of biology. In doing so, Darwin has (inadvertently) misled generations of readers, who now confuse biology's "how" in evolution with big "E" Evolution in all domains. Big "E" Evolution is informative because it describes the more general "why" driving the direction of change in ALL domains.

When understood as a "how", the process of evolution is reduced to an orrery – like the awkward clockworks that spin planets and moons around concentric bearings – substituting method where there should be cause. How is always specific to domain, but why, the ultimate why, is general enough to explain all of the hows. Armed with a robust understanding of the big WHY of evolution, one should be able to walk into any domain and predict and then map its how. Again, it isn't that Darwin's evolution orrery doesn't accurately predict biological patterns of change, or even that Darwin's evolution orrery doesn't accurately abstract the salient causal aspects of biological change; it is that Darwin's how of evolution in biology leads people to the idea that evolution is specific and exclusive to biology, or that one can understand evolution in other domains by overlaying biology's how.

Darwin never generalized the process of evolution. Imagine if Newton and Einstein had not generalized dynamics and motion, and we had, as a result, built all of our machines on the principle that motion is caused by legs and feet.

The people who have come the closest to the generalization of evolution, the thermodynamicists, have never been able – or interested enough – to develop a generalization of the direction of change and the cause of that direction. I will get back to this absence of generalization in the understanding of evolution, but right now will only hint at an explanation… in the aftermath of the all too human race- and cultural-superiority wars and atrocities, it has been socially dangerous to think of evolution as having a direction, as such thoughts can be read as rhetorical arguments for superiority and pre-judgement, the likes of which were used by Hitler, Stalin, Pol Pot, Mao, and others as justification for mass exterminations and other exclusionary policies. That humans have the proclivity to exploit incomplete knowledge in the pursuit of ridiculous selfishness at absurd scales should be nothing new or noteworthy. But no one would advocate the cessation of the study of chemistry simply because arsenic is a chemical, or of high energy physics simply because the atom bomb can be built from such knowledge.

Or would we? Cautionary reactions to the self-superior pogroms that so blighted the 20th century have driven several generations of researchers towards the relativist rhetoric we see most prominently in the post-modernist movement, but which is evident in the works of less irrational and otherwise empirical scientists like Stephen Jay Gould and Richard Dawkins. Both represent an interesting study in overcompensation. In their quest to eradicate the all-too-natural self-superiority that seems to cause humans to erect unfounded tautologies placing humans on top of pre-destined hierarchies, both argue, and argue brilliantly, for a flat evolutionary environment in which change happens but without any directionality at all. Again, this is like saying that because metal can be shaped into swords and knives and guns, it shouldn't be produced even though we need plows and trains and dynamos and bridges and buildings and printing presses and lab equipment and computers.

Of course, caution is its own form of rhetoric, as potentially dangerous as its more obviously tyrannous cousins.

And, yes! Evolution has a direction. There, I said it! Say it with me. You won't be struck down by post-modernist lightning. Trust me. Trust yourself. It is more than a little absurd that one would have to argue for direction in a process that explains directionality. They are of course correct in their assertion that evolution isn't pre-determined. Nothing is. Of course. But the "brilliance" of evolution is that it results in a direction without need for prior knowledge, plan, or determination of any kind. To toss out this most salient aspect of the evolutionary process simply to make a sociological point seems reckless in the extreme.

Randall Lee Reetz

Evolution: Pendulum Dance Between Laws of Thermodynamics

For years, I have pursued a purely thermodynamic definition of evolution.

My reasoning is informed by the observation that change is independent of domain, process, or the physical laws and behaviors upon which a system is based.  As the science of thermodynamics has itself matured (evolved), the boundaries of its applicable domain have expanded far beyond its original focus on heat.  It is generally accepted that the laws of thermodynamics apply to ANY system in which change occurs, that the laws of thermodynamics are agnostic to energy type or form.  Furthermore, scientists studying information and communication independently discovered laws that match, almost perfectly, the laws of thermodynamics.  This mirroring of domains has thrilled logicians, physicists, mathematicians, and cosmologists, who are now more and more convinced that information (configuration) and energy are symmetric with respect to change over time.

Even conservatively, the implications of this symmetry are nothing short of profound.  If true, it suggests that one can, for instance, calculate the amount of information it would take to get a certain mass to the moon and back, and it means that one can calculate how much energy it would take to compute the design of a moon rocket.  It means that the much vaunted "E" in Einstein's Relativity equation can be exchanged with an "I" for information (with valid results).  It means, at some level, that information is relativistic and that gravity works as a metric of information.  The same goes for the rules and equations that govern quantum dynamics.
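One concrete, well-established anchor for this energy/information symmetry (my addition, not cited in the post) is Landauer's bound: erasing a single bit of information in an environment at temperature T dissipates at least

$$ E_{\min} = k_B T \ln 2 $$

of energy – roughly 3 × 10⁻²¹ joules per bit at room temperature. In that limited but rigorous sense, "how much energy does this much information cost" is already a calculable question.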

And this from an eyes-wide-open anti-post modernist!

In any event, the symmetric relationship between energy and information (at least with regard to change) provides a singular foundation for all of physics, and perhaps even for ANY possible physical system (equally applicable to other universes with other rules).

It would seem that thermodynamics would provide a more than solid base from which to define the process that allows for, limits, and possibly demands the (localized) accumulation of complexity – evolution!

The Zeroth and First Laws of Thermodynamics work to shape and parameterize action. Given the particular configuration immediately prior, they ensure that the next action is always and only the set of those possible actions that together will expend the most energy.  In colloquial terms, things fall down, and things fall down as fast and as completely as is possible.  Falling down is a euphemism for the process of seeking equilibrium.  If the forces attracting two objects are greater than the forces keeping them apart, they will fall together.  If the forces keeping them apart are greater than the forces attracting them, they will fall apart.  Falling down reduces a system to a more stable state – a state in which less force is pushing because some force was released. Falling down catalyzes the maximum release of energy and results in a configuration of minimum tension.

The Second Law of thermodynamics dictates that all action results in a degradation of energy, or configurationally speaking, a reduction in density or organizational complexity.  Over time the universe becomes cooler, more spread out, and less ordered.

The falling down dictated by the zeroth and first laws results in particular types of chunking, determined by a combination of the materials available and the energy reduced.  A few hundred thousand years after the big bang, the energy and pressures of the big bang had dissipated such that the attractive forces affecting sub-atomic particles were finally stronger than the forces all around them.  The result was a precipitation of matter, as hydrogen and helium atoms condensed out of the plasma.  After a few hundred million years, the mass in these gasses exerted more attractive energy than the much cooler and less dense universe, and precipitated into clumps that became stars.  As the fusion cascade in these first stars radiated their energy out into an expanding and cooling universe, the attractive force of gravity within became greater than the repulsive forces of nuclear reaction, and the stars imploded upon themselves with such force as to expel their electrons and precipitate again into all of the other elements.  These heavy elements were drawn by gravity again into a second generation of stars and planets, of which earth is but one lonely example.

You will have noticed that each precipitatory event in our cosmological history resulted in a new aggregate class (energy, sub-atomic particles, light atoms, stars, heavy atoms, stars and planets, life, sentience, language, culture, science, etc.).  The first two laws of thermodynamics dictate the way previously created aggregate objects are combined to form new classes of aggregate objects.  The second law guarantees, as a result of the most contemporary precipitation event, a coincidental lowering of energy/configurational density, which allows still weaker forces to cause aggregates in the next precipitatory phase.

If you still aren't following me, it is probably because I have not been clear about the fact that the lower environmental energy density that results from each precipitatory cycle optimizes the resulting environmental conditions for the effects of the next, weaker force or the next, less stable configuration.

For instance, the very act of the strong force creating atomic nuclei lowers the temperature and pressure to such an extent that the weak force and the electromagnetic force can now overcome environmental chaos and cause the formation of atoms in the next precipitatory event.

This ratcheted dance between the laws of thermodynamics is the why of evolution, and it results in the layered grammars that sometimes, or at least potentially, describe ever greater stacked complexities – the complexities that led to life and us, and to whatever might come as a result of our selfsame actions as the dance continues.

Stepping back to the basic foundation of causality, it is important to be reminded that a configuration of any kind always represents the maximum allowable complexity.  In recent years, much has been made of the black hole cosmologies that define the event horizon as the minimum allowable area on which all of the information within the black hole can be written, as a one-bit-thick surface membrane of a sphere.  The actual physical, mechanical reason that this black hole event horizon membrane can be described as a lossless "holographic" recording or description or compression of the full contents of the black hole is complex and binds quantum and relativistic physics.  Quantum, because the energies are so great that structure is reduced to the granularity of basic quantum bits.  Relativistic, because at this maximally allowable density everything passing the event horizon has reached the speed of light, freezing time itself… the event horizon effectively holds an informational record of everything that has passed.

The interesting, and I think salient, aspect of an event horizon is that it is always exactly as big as it needs to be to hold all of the bits that have passed through it.  As the black hole attracts and eats up any mass unlucky enough to be within its considerable influence, the event horizon grows by exactly the bits necessary to describe what it has swallowed at the quantum level.

The cosmological community (including Stephen Hawking) was at first shocked by the sublime elegance of this theory, and then by the audacious and unavoidable implication that black holes, like everything else, are beholden to the laws of thermodynamics.  The theory predicts black hole evaporation!  It seems black holes, like everything else, are entropically bound.  There is no free lunch. The collapse of matter into a black hole results in a degradation of energy and informational configuration; the selfsame entropy that demands that heat leak from a steam engine demands that black holes will evaporate, and that eventually, when the rate of evaporation exceeds the rate of stuff falling in, a black hole will get smaller and ultimately, poof, be gone.
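For readers who want the equations behind these claims (standard results, added here for reference rather than drawn from the post): the Bekenstein-Hawking entropy ties a black hole's information content to the area A of its event horizon, and the Hawking temperature rises as the mass M shrinks, which is why evaporation, once it outpaces infall, runs away to "poof":

$$ S_{BH} = \frac{k_B c^3 A}{4 G \hbar}, \qquad T_H = \frac{\hbar c^3}{8 \pi G M k_B} $$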

This is heady stuff.  The biggest and baddest things in the universe are limited!  But to me, the most profound aspect of this knowledge is not that event horizons can be described as maximal causal configurations, but that we are shocked by this!  All systems are, at each moment, the maximal allowable configuration by which those forces and those materials can be arranged.  If they could be arranged any tighter, they would have already collapsed into that configuration.

To say this is to understand that time is not separable from configuration.  As Einstein showed, time is physically dependent upon and bounded by the interaction of mass, distance, energy, and change.  Cosmologists use limits to understand the universe.  The maximal warpage of space-time caused by a black hole's density effectively flattens the allowable granular complexity of the configurational grammar  to binary bits held in the minimally allowable physical embodiment.  But, lower energy configurations, configurations like dogs, planets, and the mechanism by which I am attempting to explain this concept, are bounded and limited by the exact same causal rules.

The difference between a black hole horizon and an idea?  Well, it has to do with the stacking of grammatical systems (quarks, sub-atomic particles, atoms, molecules, proteins, cells, organs, bodies, culture, language, etc.) that allows for complexities greater than the binary bits, the only stuff allowed to pass through an event horizon.  But these stacked grammars that allow us to be us are every bit as restricted to the same maximally allowable configuration rule that minimizes the size of a black hole's event horizon.  In a system configured by a stacked grammar, the minimum complexity rule is enforced at the transition boundary between each two grammatical layers.


Things fall, but only as fast as the stacked grammars that govern causal reality will allow.  This isn't a metaphor: the speed of diffusion, of degradation, of falling down, is always and in all situations maxed out.  The exact same physical topology that bounds the size of a black hole event horizon contributes to the causal binding affecting the rate at which any system can change.  This is because at the deepest causal layer, all systems are bound by relativity and quantum dynamics.  The grammatical layers built successively on top of this lower binding only serve to further influence entropy's relentless race towards heat death.


[to be continued]

Randall Reetz

The Scope of Evolution?

We evolutionists desperately want to quantify evolution. We are embarrassed by the continued lack of measurability and predictability one would expect from a true theory-based science. In the place of true metrics, we defer to the vague, broad, and situationally dependent term: "fitness".

We say that genetic variability in the population of any given lineage will ensure that some individuals express traits that provide a survival advantage. Given the particularity of a given environment's mix of resources and challenges, not all individuals will have the genes necessary to make them fit. We say that there is always some small diversity in any population, a variability caused by sexual mixing, mutation, and a whole slew of non-genetic processes that indirectly affect either the actual genes inherited or the conditions under which those genes are expressed. We say that this variability across a localized population is enough to influence who will survive and who won't, or most importantly, whose genes will be expressed in the next generation and whose won't. We assert that this process is obvious, observable, and predictable. And of course we are correct. We can and do produce laboratory experiments and field observations that show that genes predict traits, that genetic variability is correlated with population variability, and that environmental conditions act as filters, selecting towards individuals or populations expressing some genes and against those with others.

Well that all sounds good… model driven prediction, physical mechanistic explanation, solid techniques for observation… like a real science. If, that is, you are content to restrict your inquiry to the how.

If you are content with an understanding of evolution that is restricted to biology. If you are content with an understanding of evolution that blindly accepts as dependent factors such temporal notions and shifting and immeasurable terms as "environment" and "fitness", and never ever asks, "Why?", then you probably won't need to read any further.

But if you, like me, would like to understand evolution in its largest context – independent of domain, and across all time – then you already know that evolution's current answers, though correct and verifiable by any standard, do not yet add up to a true science.

When Newton sought to define motion (and yes I know that Einstein perfected it through Relativity and quantum theory), he didn't do so only for an apple falling from a tree… but universally, for all physical bodies in all situations. His equations predict the position, speed and trajectory of an object into any distant future and across any distance. If the same could be said of evolution theory, we would have in our possession theory and or equations that we could use to predict the outcome of evolution across any span of time and in any domain.

Yet, of course we don't. We know all kinds of things about the interaction, within the domain of biology, of germ and progeny, of reproductive selection and mutation, of the relationship between genotype and phenotype, and of the competition over resources and of the crazy alliances and unintuitive and unplanned results of cooperative adaptation (including the tightly wound dance between predator and prey, between parasite and host).

But these processes, no matter how well understood, measured, researched, and modeled, are not what could be called the primitives of evolution. To be primitives, they would have to be universal. They are not universal. Thinking so would be like Newton thinking his laws only applied to cannon balls or things made of metal. So ingrained is the false correlation between biology and evolution that it is often impossible for me to get people to continue a discussion about evolution when I say "Let's talk about evolution in other systems." or "Let's talk about evolution as a domain-independent phenomenon."

If evolution isn't a "general" phenomenon, then someone representing the "special theory of evolution" will have to show how it is that life evolves but other systems do not. I doubt this requirement can be met. It would mean that some line can be drawn in time, before which there wasn't evolution, and after which there was. The logical inconsistency arises when one realizes that, to get to that line, some process suspiciously similar to evolution would have to have transpired to advance complexity to the level just preceding biology.

Another way to frame the overarching question of the why of evolution starts with the realization that competition within an environment isn't restricted to the various individuals of one species. Nature isn't that well refereed. In fact, nature isn't refereed at all. Nature is a free-for-all pitting snail against walrus against blue-green algae. And it doesn't stop there. The ocean currents compete to transfer heat and, in doing so, affect the food available to marine life of all kinds. In a very real sense, in an exactly real sense, a hurricane competes directly with a heron. Even the more stable artifacts of an environment – the topology and physical composition of the geographic features underfoot – compete actively and dynamically with the biota growing in their fissures and above their slowly moving face. Our old and narrowly-bounded definition of that which fits the category of evolution is plainly and absurdly and arbitrarily anthro-, species-, mammal-, or bio-centric, and logically wrong.

Each time I introduce these new and inclusive definitions of the scope of the cast that performs in the play that is evolution, I hear grunts and groans, I hear the rustle of clothes, the uncomfortable shiftings… I hear frustration and discomfort. Hands raise anxiously with questions and protests: "How can non-living things evolve?" "Non-living things don't have genes, without genetics traits can't be transferred to or filtered from future generations!" And the inevitable, "The category containing all things is a useless category!"

I can't say that I don't understand, don't appreciate, or in some real way haven't anticipated and sympathized with these bio-centric apologies. This is how evolution has been framed since Erasmus Darwin and his grandson Charles first seeded the meme. I will therefore take a moment to address these two dominant arguments so that they can be compared with a domain-independent definition of evolution.

First, let's look at evolution's apparent dependence upon genetics. How could evolution work if not for a stable medium (DNA) for the storage and processing of an absolute recipe for the reliable re-creation of individual entities? You may be surprised that my argument starts with an agreement: evolution is absolutely dependent upon the existence of a substrate stable enough to transfer existing structure into the future. But does that stable structure have to be biology's famous double helix? Absolutely not! In fact, it is causally impossible to find a system within this universe (or any imaginary universe) in which the physical makeup of that system and its constituent parts does not provide the requisite structure to transfer conditions and specific arrangements from any present into any trailing future. The shape of a river valley is a fundamental carrier of information about that valley into the future. The position, mass, and directional velocity of celestial bodies are a sufficient carrier of structural information to substitute handily for the functional duty that DNA performs in biology. But it is also important to realize and fully absorb the opposite proposition: DNA is not the only way that biological systems reliably transfer information about the present into the future. Biological systems are of course just as physical as galaxies, stars, and planets. The same causal parameters restrict the outcome of any particular then (as a result of any particular now), restricting causality to an almost impossibly narrow subset of what would be possible in a purely random shaking of the quantum dice. DNA is especially good at what it does, but it doesn't own or even define the category.

The second argument against an all-inclusive, domain-independent definition of evolution – the logical argument against the usefulness of a category that contains everything – well, let's start by parsing it semantically and rhetorically. On its face, there is no way to argue. The category "all" is a category of little worth. There is nothing to be known of something if it can't be compared to something else. But, and this should be obvious, I am not trying to create a category; quite the opposite! My intent is to create a theory of everything. Such a theory would obviously fail if it didn't apply to everything. So, semantically, this "set of everything is a useless set" argument doesn't map to the topic at hand. I get the distinct feeling that the argument is meant pedantically, and purposely, to derail and obfuscate the logical trail I am attempting to walk the audience down. It is a straw man. It looks logical, but it doesn't apply.

A much more instructive and interesting line of questioning would go to the plausibility of a domain independent theory of evolution, what it would or would not change regarding our understanding of the emergence of complex structures (and their accelerating complexity), how it modifies our understanding of biological evolution, whether or not evolution will stand up to the requirements of a "theory of everything" (how it compares with others), and maybe even the effectiveness of my own description of this idea.

So, why is it important to me for evolution to meet the test of a "theory of everything"? First, I loathe the unexplained. If evolution only speaks to the mechanism of change within biology, then evolution would necessarily stand upon a stack of even more foundational truths, and, as I mentioned earlier, other parallel theories would have to be developed to explain the emergence of complexity in non-biological systems. Either way, a vacuum would remain, exposing a need for the development of a foundational theory or set of theories that would support what in biology we call evolution, what in geology we call tectonics, what in meteorology we call heat dissipation cells, what in culture we call engineering, cooperative networks, etc.

What makes this whole endeavor so tricky is that we tend to confuse mechanism with purpose. We get so caught up with the almost impossibly complex molecular mechanism (nucleic acids) by which biology builds complexity that we forget to look at why it bothers at all. This why, this great big why, is to my mind far more fundamental and interesting and, once understood, provides a scaffolding from which to comfortably understand and predict the necessary meta-components that need to be present, in some form or another, in any evolving system. And, if you like elegance in a theory, it gets even better. It turns out that a byproduct of evolution as a theory of everything is that it must therefore be based on the two physical principles that have stood the test of universality – thermodynamics and information theory – and that it strengthens both of these theories in the one area where they are weak: dynamics. Once you understand the motivation and demands of change itself, the particular mechanisms of evolution at play in any one domain are reduced to hows, and are, no matter how varied, but skins worn by a beast whose behavior becomes more and more predictable and universal.

All systems have what it takes to evolve. All systems are composed of components that in some small way differ. That difference might be in how the parts are made, or it might be in how the parts are distributed, and it most probably is both. That is all a system needs for the process of evolution to apply. So long as there is a difference somewhere in the system, or in that system's interaction with the greater environment in which it exists, evolution must be happening all of the time.

So just what is it that evolving things compete for? Is it food? Yes. Is it safety? Yes. Is it comfort? Yes. Is it stability? Yes, that too. For plants, competition is for solar radiation, carbon dioxide, water, a stable place to eat, grow, mate, and raise offspring. We animals need far more energy than our skin could absorb even if all of it were capable of photosynthesis. So we eat things that can. And that is just the way things work. To get ahead, things learn to take advantage of other things. One might even say that the advantage always goes to those entities that can take the greatest advantage of the productive behavior of the greatest number of other things. If you can't make enough energy, then eat a lot of things that can.

One could imagine taking this line of reasoning to the extreme. Let's define fitness as the ability to sit on the apex of a food chain. Of course you have to keep moving. If you don't stay vigilant and obsessive, always trying to find new and better ways to eat more of the other things, you will succumb to competition from things that do.

… to be continued …

Randall Reetz

The Incomputable Heaviness of Knowledge

Is the universe conceivable?  Does scientific knowledge improve our ability to think about the universe?

What happens when our knowledge reaches a level of sophistication such that the human brain can no longer comfortably hold it, or compute on it?  For thousands of years, scholars have optimistically preached the benefits of knowledge.  Our world is rich and safe as a result.  People live longer; people live with greater personal control over the options they face.  All of this is an obvious result of our hard-won understanding of how the universe and its parts actually work.  We arm our engineers with this knowledge and send them out to solve the problems that lead to a more and more desire-mitigated environment.  Wish you weren't hungry? Go to the fridge or McDonald's.  Wish you were somewhere else? Get in your car and go there.  Wish you could be social, but your friends are in Prague? Call them.  Wish you knew something? Look it up on the internet.  Lonely? Log in to a dating service and set up a rendezvous. Wish your leg wasn't fractured? Go to a doc-in-the-box and get it set and cast.

But what if you want to put it all together?  What if your interests run to integration and consolidation?  What if you want to understand your feelings about parking meters as an ontological stack of hierarchical knowledge built all the way up from the big bang?

Are We But Crows? (Where Pattern is Noise)

Let's look at yet another way that the human mind gets tripped up and falters. Our propensity to find pattern makes us vulnerable to an inane form of overindulgence, a counterproductive obsession with attributes of systems that do not carry essence or salience. It seems that our evolutionarily-shaped affinity for pattern makes us particularly vulnerable to a level of indulgence that, in smaller quantities, is reasonable and healthy, even necessary.

Here is the problem. Most pattern is indicative of a lack of information or salience. If a system can get away with simple duplication along a scheme, it means that that part of the system is not very important. It means that it is cutting resource corners and systems can only get away with this type of cost cutting in areas that are not salient to the main purpose of the system at large.

Imagine going to a bookstore and obsessing over the fact that all of the books are made of the same stuff, of paper sheets cut and stacked in approximately the same geometric ratio. Imagine, in fact that this pattern overwhelmed your attention to such a degree that you could not be bothered with the content encoded as printed words on the pages of those books.

That would render all books equivalent. A book printed with random strings of words, or with no words at all, would be informationally equivalent to Darwin's On the Origin of Species. In the face of an overwhelming attraction to inessential pattern, essential pattern is dismissed as noise, is ignored.

I call this the "Crow" problem… an obsessive attraction to shiny objects. There are so many examples of this type of cognitive distraction: rainbows, crystals, mirages, amber waves of grain, camouflage, fractals, golden rectangles, sacred numbers, etc.  It is where our own cognitive process amplifies some pattern in the world whether or not it is important in any information-rich manner.

We are, for instance, especially susceptible and sensitive to right/left symmetry, probably because filtering for this pattern allows you to quickly pull animals out of a complex sensory field. Having such a filter gave us an evolutionary advantage, so we now have brains with a tendency to favor right/left symmetry in visual fields.

The rainbow is another great example of pattern that is attractive but virtually meaningless. A rainbow is not actually a thing, of course, but a mix of the refractive nature of light and a behavioral anomaly of our visual apparatus. A rainbow is like a hologram. What we are seeing is light reflected off of raindrops or fog. Of course it is a poor indicator of the location of rain in our visual field… there is rain in other parts of the sky than where the colored bands appear. The phenomenon that is visible as a rainbow is happening every time rain is present, but we only see the rainbow when our orientation to the sun falls within certain fairly restrictive parameters. Yet so attractive is the rainbow stimulus that, under its influence, we might just miss more pertinent information (a lion approaching?).

Another example of particular importance is the attraction to crystals and the latest crystal fad, fractal geometry.

Most people are interested in fractals (or think they should be). But what exactly do you actually KNOW about fractals? Does your knowledge of fractals include an understanding of the WHY of n-dimensional radial n-scale self-similar patterns, why they appear in dissipative systems that develop over time? Do you care to know or understand? I get the feeling that people who are the most attracted to fractals are more interested in some sort of pseudo-spiritual grooviness, the stare at your navel aspect of fractals, than the simple truth of least-energy dictated growth patterns.

What is abstracted in mathematics as the "fractal" is in fact the only pattern that nature can exhibit in systems that are dominated by parts that are very much the same. Sand is a good example. Homogeneous liquids and gasses are another. Fractals are a map or abstraction of dissipative systems, and all systems are dissipative. But the events and situations that matter in the evolution of complexity, the WHY that makes US possible, are not the even dissipation within a scheme, but the CREATIVE events that allow for new (faster and bigger) dissipative paradigms. An explosion is fractal in the same way that a capillary web is fractal, in the same way that both a tributary and an alluvial plain are fractal. It is just the end result of a history where least energy dictates the most efficient moment-to-moment dissipation of energy. The way people talk about fractals as important feels religious and rapturous – as though fractal patterns in nature are a THING or a GOAL. Fractals don't know they are fractals any more than the ink making up the word "Fractal" on a printed page "knows" or "wants" to be that particular word or any word at all.
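For anyone who wants the one line of mathematics behind "n-scale self-similar" (my addition): a pattern built from N copies of itself, each scaled down by a factor r, has similarity dimension

$$ D = \frac{\log N}{\log (1/r)} $$

so the Koch curve, made of N = 4 copies at r = 1/3, has D = log 4 / log 3 ≈ 1.26. One small number describes the whole thing – which is exactly the point: the pattern is cheap.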

People who act this way towards pattern scare me. What matters in process is the places where pattern breaks. Else we wouldn't be here. Else we couldn't find salience.

I am afraid of and purposefully vigilant against the grand attractors of human thought, virgin births, miracles of any kind, shiny baubles, trickery, omnipotence, anecdote, life ever after, self importance, power over others, a desire to know or have access to the prediction of future events, predetermination, and pertinent to this discussion, crystals and fractals and the types of pseudo-cyclical pattern that makes us think nature is more simple than it is and that nature is of our own imagination or invention.

All systems seek equilibrium. In systems made up of subsystems, each of these subsystems seeks its own equilibrium or resonance. Frequently, when the resonances of several co-systems fall into overlapping cycles, additive standing waves can and do overwhelm the integrity of the system as a whole. Feedback loops are anti-complexity mechanisms. Overarching patterns overwhelm systems and keep them from creative or information-rich activities and interactions. Systems that seek to compete in the edge-of-evolution game are systems that spend an inordinate amount of energy keeping feedback loops in check. A creative system is a system working overtime AGAINST the information killer that is the natural tendency towards simple pattern (fractals).

If you are trying to get rid of a planet or smooth out the thermocline in an ocean, fractal patterns are important. If you are a member of a species that is carrying complexity forward through the accumulation of and control over abstraction, dissipative patterns and the systems dominated by them are like Kryptonite to the program at hand… best to avoid them at all cost.

It is important to know when to appreciate pattern and when to run like hell when you see it developing. The forces that produce pattern are information destroyers.

We had better look long and hard at this issue. Choosing a false god at this level in the evolutionary game could end up causing the death of the whole complexity scheme just as it is becoming aware of its purpose and salient pattern.

As another example of pattern distraction, I give you DNA.

My favorite morphological description of DNA: "the non-periodic crystal". It harnesses the strong self-organizing (but anti-information) properties that give rise to a crystal (in this case, an almost endlessly wound chain of identical and stable "double-helix" spiral twists) as a stable superstructure that will carry the information-bearing (anti-crystal) base pairs at its axial center. The crystal spiral holds and protects the integrity of the always-vulnerable low-entropy sequence that holds our genetic recipe… DNA's real structure of importance. Even so, we humans seem to be more attracted (like crows) to the simplicity of the simple and unremarkable spiral crystal armature than to the non-crystalline information the crystal armature makes possible!

The larger implication of our self-destructive crow-like attraction to simplicity is that we are romantic about the very things that don't matter and ignore-ant of the anti-crystal configurations that are complex and information rich… that are salient to the information we are and the information our information may yet create.

I suspect that our attraction to simplicity in pattern is an idiosyncratic after-effect of a salience detector within our thinking apparatus that is constantly looking for self-similarity. When computer scientists and logicians attempt to design compression schemes, they are looking for ways to algorithmically discover those sections or sequences of a set of information that are unimportant to the overall structure… much of data is repetitive, and if you can find these repetitions, these patterns, you can reduce them to simple equations or pointers to prototypical modules that can be stored once and forever duplicated as filler. Obviously, these patterns are compressible because they carry so little salient information.
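A minimal sketch of that idea in code (my illustration, not the author's; it assumes nothing beyond Python's standard library): a general-purpose compressor finds the repetitions and replaces them with short references, so patterned data collapses while patternless data barely shrinks at all.

import os
import zlib

# 10,000 bytes of pure repetition: lots of pattern, very little information.
patterned = b"ABAB" * 2500

# 10,000 bytes of (pseudo)random noise: no pattern for the compressor to exploit.
unpatterned = os.urandom(10000)

print(len(zlib.compress(patterned)))    # a few dozen bytes: the pattern is compressible "filler"
print(len(zlib.compress(unpatterned)))  # still roughly 10,000 bytes: nothing here repeats

The asymmetry is the point: what compresses away is exactly the part a pattern-hungry mind finds prettiest.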

While it is somewhat surprising that so much of an image, a musical recording, even a string of text can, using fractal math, be detected as filler and deleted, the subsequent and predictable human reaction – that of honoring the trash simply because it is pretty – is, to my mind, one of the most scary attributes of human proclivity. Fractals illustrate how much of nature's structure is non-salient, is the noise, the tailings of dissipation. That we would be enamored by the pattern of noise to such an extent that we ignore the real meat of information (that which is not pattern-able) is our potential downfall as a species.

A snail isn't trying to build a spiral. A snail grows a shell that happens to be a spiral because that is THE ONLY SHAPE allowed in three dimensions that is both a cone (an expanding tube) and dimensionally minimal in solid permanent material. The snail has better things to spend its limited energy on than the shape of its protective home. So it chooses the one shape that takes the least energy and the most minimal construction algorithm to pull off. We dishonor the snail as a biological scheme by paying attention to the aspect of its survival that is the least interesting. The simplicity of that spiral shell is what should tell us not to pay attention to that aspect of its being or strategy.
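The shell's shape even has a one-line description (standard geometry, my addition): the logarithmic spiral

$$ r(\theta) = a\,e^{b\theta} $$

grows by a constant factor per turn, so the snail's whole "construction algorithm" reduces to "keep adding material at the same angle and rate" – self-similar at every scale, and about as cheap a recipe as three-dimensional growth allows.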

But in this regard and others, we act as crows despite our 100 billion brain cells and the potential therein.

Sad.

Randall Reetz

There's No App For That!

There are now about ten thousand downloadable iPhone "apps". The tailings of a mad geek-scramble of mini-application coding and cloud-mashing, the gold rush of the 2000's. Low hanging fruit in the nerd orchard that is Silicon Valley.

Every C- or Java-savvy software engineer, hacker, and video gamer under 35 is trying to second-guess the fickle, fad-addled, twenty-something market of multi-taskers that will pay a buck or two for the opportunity to buy pretty-code-as-trinket, binary-pet-rock, boolean-back-scratcher, bric-a-brac-party-favor. The wave of the future? Gold rush for the new killer mini-application?

A month ago, I spent a couple of weeks in Tahoe camping with my parents and my sister's family. I don't really know how it happened, but I started saying "There's An App For That" (Apple's iPhone tagline) every time someone said something that didn't quite ring true. Pretty soon, my 9-year-old nephew caught the bug and began mocking truth-stretchers the same way. I don't know why this tickles me. As a student of social trends, my attention gravitates to behaviors that too easily resonate, those memes that are to the human mind what complexity theorists call "grand attractors" (the attributes of an environment or system that have the greatest and most rapid effect on action and outcome… in our solar system it's the sun, not Pluto). We humans are embarrassingly susceptible to certain patterns of language and inane cultural cues.

Back to the "app". The word is of course diminutive for application. Where applications are big do-everything information workshops, apps are single-purpose, narrowly focused, seemingly useful algorithmic hacks that each do some particular information-access task in some sort of (usually) intuitive and fluid way. But each of these apps, as useful as it may be in a given and highly particular situation or context, is like one of the 10,000 little doodad accouterments you can find in a fine cooking-equipment store… they might be nice, they might indeed do the job better than anything else could, but you wouldn't want to have to find them in a kitchen with 10,000 other tools hanging from hooks and stuffed into drawers.

Imagine a robot as Swiss Army knife. That is what a thousand apps are in a device that isn't anything more than drawers and hooks for the storage of applications. The Swiss Army knife is really just two pinions from which swing a whole bunch of tools. It knows nothing of those tools, just how to store them and swing them out for use.

A cooking store is better when it has in stock every tool you could ever want, but the more tools it has, the harder it is to find what you are looking for; in the same way, the usefulness of any one iPhone app is diminished with the addition of each new app. In the parlance of cooking there is a hierarchy of tools by complexity: kitchen, work station, major appliance, appliance, tool, and utensil. If the OS is a kitchen, and the application is the appliance, then an app is something small like a tool or a utensil. A garlic press, like most kitchen tools, has a single purpose. It is a highly specialized device, the design of which has been fine-tuned through multiple design and test iterations to smoothly and reliably function as the user would expect in highly specialized and somewhat exceptional situations (once in a while). There just aren't that many things other than garlic that you want to peel, smash, and extrude. Compare this to the average utensil – knife, fork, spoon – which is meant for repetitive use across a range of activities. You have to go find a tool, but a utensil stays in your hand, or right nearby, all day long. You probably only have one of each of your tools, but you have multiple copies of each of your utensils. There are an almost unlimited number of potential tools, yet very few types of utensil.

It should be clear that iPhone apps are much more tool-like than utensil-like. And because most apps aren't very flexible or reconfigurable, they aren't in themselves appliances. The entire iPhone, as a device, is often called an appliance. In this sense, it is of the class of appliances that is more akin to a food processor – the user can switch blades for different tasks – so maybe in this sense, the blades are like apps and the iPhone truly is an appliance.

And that suggests an even more apropos metaphorical comparison… the people I know and their food processors. Almost nobody I know uses theirs. It sits there in the cabinet while they laboriously chop their vegetables by hand. Taking the thing out, collecting all of the parts, setting it up, finding the correct blade, installing that blade, plugging it in, preparing ingredients for processing, cleaning it afterwards, drying it, putting it all away… forget about it!

The iPhone has a distinct advantage over the food processor - it has a primary use - as a phone - which keeps it within reach and always at hand. In the sense that an iPhone is a telephone, it is a tool - a tool that can transform itself into a multi-use appliance!

My argument isn't whether or not it is a good appliance, or whether the tools that can be attached (apps) are useful or well integrated; my argument goes to the difference between a well-appointed kitchen and a well-staffed kitchen. Do you like food, or do you like cooking? My assumption is that there are far more eaters than there are cooks. Sure, everyone likes a great kitchen. Through the normal accretion that is gradual consumption, your kitchen slowly ends up containing more and more tools, but most people would, had they access to more money, gladly trade in the whole kitchen in exchange for well-prepared meals in a nice restaurant or brought to them in front of the TV.

If I had to dig a ditch, a good shovel is better than a bad one, but what the hell am I doing digging a ditch? Just how great would a shovel have to be before digging a ditch becomes something I choose to do?

I remember when personal computers became available. I watched grown men, highly successful, executive-level professionals, choose to type their own letters simply because the technology was so cool and new. The same guys who wouldn't be caught dead typing their own letters on an IBM Selectric typewriter were firing their secretaries and hand-typing their letters in a far more complicated word processor on an IBM PC, just because they could change the font and the color, size, and style of the text. I am a big fan of fairness and equality, and shudder to think of the opportunity squandered during the age of women as secretaries, but I can hardly conceive of the size of the gap between a word processor and a human secretary.

I shouldn't pick on the iPhone; in the sense that it is an information appliance, it isn't any different than any other computer. It just stores applications for later execution. What I am picking on is the fact that a computer of any form factor is just too stupid to act as anything other than a filing cabinet. If you want a machine or appliance that will help you figure out what app to launch, well, there is no app for that, and there probably shouldn't be. The iPhone has taken the store-access-launch metaphor to the slippery, silky extreme you would expect of Apple. This may in fact, and ironically, be exactly why it is becoming so clear where computers need to go next and how far they are from the fundamental technologies that will be necessary to go there.

[ more to come… ]

Complexity is Self-Limiting… Evolution Says "So What!" But At What Cost?

Complex systems tend towards greater complexity. That is one way, in fact, of defining evolution. But complexity is also self-limiting in obvious and unavoidable ways. What gives?

How, specifically, does an understanding of complexity's natural limits recast an assessment of where human society is, where it might be going, and what of this potential do our own limitations in understanding complexity and its limits… well, limit?

We tend to gravitate towards a rather cleaned-up image of the future, all stainless steel and gleaming glass, and sexy robots that can't say "no" (puffy clouds, white wings, and lutes?). To be fair, this sparkly and perfect view of the future is something we reserve for "The Future". Except for Sunday mornings, we are refreshingly realistic about the process of getting through all of the calendar-able pedestrian futures to the final "The Future"… sometimes even positing an apocalypse or two along the way. It's as though we understand that things of great complexity and stability must be constructed, and that building is a messy and chaotic process; yet our self-delusion begins and ends with the absolutely fatal assumption that there is some end to the construction process, after which everything will be grand and glorious and perfect in the sense that no major construction will ever again mar the sublime and pristine quiet and elegance we have built.

Right. OK.

In light of the magnitude of our self-delusion, it seems downright naive to apply the phrase "drink the Kool-Aid"… in some very real sense, we must, each of us, have Kool-Aid factories right smack in the middle of our brains!

The actual future, the sober future, the one we seem hell-bent on ignoring, is a future of greater and greater and more and more constant change. A future we can never get to. A future that will surely go on one day without us. There was, after all, a whole bucket-load of futures before we existed, before we declared ourselves the supreme center of everything, the final future. Ultimately, of course, there is a final and absolute future to any system. If you paid attention during your thermodynamics or information science lectures, you know that there will come an ultimate future which cannot support any complexity at all.

For now, we will ignore that final future-of-all-futures (heat death)… there are "miles to go before we sleep".

As complexity marches forward and "upward", evolving systems are increasingly characterized by construction and change. A static system, one that can't react to its own constantly increasing experience, is a system that isn't as complex as one that can learn and adjust itself to accumulated knowledge. The romantic vision of a completed and peacefully static future is as laughable as it is understandable.

Some fantasies drive us towards success and influence, and others towards catastrophe and insignificance.

The difference between these two forms of fantasy is, to my mind, the difference between paying attention to the greater reality that is the whole universe (its physical laws, material properties, and configuration), and paying attention instead only to the reality hacked together within our own emotionally contorted and narrowly self-centered minds. The distance that separates the two is probably a good measure of the speed with which nature will replace us with some other complexity-generating scheme with a more accurate natural mapping of reality to abstraction of reality.

A self-centered and locally weighted perspective is both expectable and self-defeating. What works in the short term often gets in the way of what works in the long run. This is one of two oxymoronic misreadings of process clouding our understanding of evolution that increasingly threaten our potential as a species. The other (related) self-obfuscation we don't seem to be able to avoid, and the one central to the thesis of this essay, is the dream-like way we tend to imagine the future as some silicone-enhanced, sexed-up version of some glazed-over and romantic past that never was.

What we know, how we comprehend what is around us, is a function of the iterative process of matching the stream of incoming sensation to what we have stored as experience. What comes to be known is always heavily affected by what was known before. Learning is a local affair. Systems always end up knowing more about the things closest to them. The closest thing to a system is itself! This is a topologically and causally unavoidable fact, leading to difficult-to-circumnavigate, self-centered understandings of the universe around us. I am convinced that evolution ultimately (in the longest run) favors systems that can overcome this local-centrism… though to do this, a system must literally work against itself in the short term. Success in the long run is dependent on the development and protection of genetic structure that frustrates success in the short run. This big-picture learning must be accomplished through the development of an ever more accurate internal analogue (process-able map) representing the most inclusive and location-agnostic understanding of the entire universe. This too is an ever-receding target, one we can chase but never completely capture. Evolution is this back-and-forth dance between what matters to a system in the here and now and the capacity to pay attention to, model, and process that which is salient about the entire universe… context in the largest sense.

I don't want to veer too far away from the thread of this essay, but it is important to keep in mind the counter-indicated admixture defined both by the immediate local needs of any given individual and the larger, decidedly non-individual scope of evolution. A decidedly cooperative mixture that is, nonetheless, achievable exclusively through the lives of, and the genetic/cultural information carried forward exactly and only by, individuals.

In any given population of individuals at any given locality, there exists a range of differences that enable some individuals to make more efficient use of the resources in their surroundings, and some individuals to be better equipped to contend with and exploit the resources of their children's inherited environment. Those better matched to the current environment will out-compete those with a better match to the environment of the future. Ultimately, of course, what matters is the capacity of the entire mélange to both survive in the present and present morphotypes that meet the demands of the future. The demands of the present vs. those of the future are often at odds with each other. A successful evolutionary scheme must "waste" a sizable chunk of its structure and energy on strategies that may have no immediate positive effect on fitness (and might in all actuality hinder success in the moment). Maintaining a long-range understanding of evolution itself, and our place in it, is the example of this dangerous opposition that best fits the scope of this essay.

It seems obvious to me that the amount a system must "waste" anticipating changes in the future of its environment is inversely related to the accuracy of its internal mapping of the universe in total. Systems that know nothing of the universe must produce a great variety of random solutions. A very expensive prospect that best fits very, very simple individuals produced in absurd numbers. Atoms, molecules, single-celled organisms.

Understanding the process, "THE" process, evolution, is probably the most salient predictive mechanism an organism might seek to internalize. We seem to have limited capacity as a species to model, abstract, and then effectively navigate an abstraction of this "THE PROCESS". Especially when it comes to understanding the limitations and usefulness to "THE PROCESS" of any one scheme, species, or individual.

The spirit of this essay isn't Nietzschean pessimism or a catastrophist's Cassandra cry of "I told you so!". I am an eternal optimist, so these words are intended instead as a wake-up call, and offered up as a Windex wipe for the foggy lens through which we view reality… in the hope that we use it, adjust our behavior, and rectify the self-defeating distance between what is and what we want to see.

Nature doesn't stand still. At least not until the very end. Heat death isn't at all like my fantasy of an endless Mediterranean resort vacation. Any system that bets its future on stasis, no matter how advanced, is betting against its longevity or influence on the real future.

I've compiled a list (below) of some of the most obvious side effects that haunt complexity, that push back against its growth. If we illuminate these barriers we might be better equipped to consider ways to get around them, and we might discover something of how systems get better and better at finding cheats in the march towards greater complexity.

For a system to be complex, it must have structure and difference within that structure. A crystal has structure, but its lack of capacity for internal differentiation means it can never be complex. But differentiated structure isn't enough; a system also has to have some way of protecting and maintaining that structure, that shape or behavior, over time. Shit happens. A complex system must employ some set of mechanisms in a constant fight against entropy. Without them, a system's complexity will be short-lived, and short-lived complexity isn't very complex at all.

Which brings up an important and much-ignored aspect of an evolving system. We have a tendency to overemphasize the moment, the present situation or system. Nature, on the other hand, doesn't care about the individual or the moment except as a vehicle for the transmission of structure into the future. What matters isn't how complex a system is today, but the potential of a given configuration to influence the greatest complexity in the longest future across the widest expanse of the material universe. Many aspects or measures of complexity cross over between the here and now and the deepest future… but not all and not always.

Back to our list.

1. One way to maintain structure is to build yourself out of stuff of great material integrity – say titanium, stainless steel, or diamond.
2. Another is to adopt a vigilant and obsessive Mr. Fix-It program of self-maintenance. Yet another option is to replace yourself with a pristine copy before you dissolve into an entropic heap.
3. A simple cousin of this replacement scheme is playing the numbers game… make sure there are so freak-n many copies of you in the first place that one or two of you make it into the distant future by virtue of the dumb luck of large numbers.
4. Or, you can choose to live a life of extreme isolation – limit your interaction with other systems and you limit the deleterious effects the second law dictates.
5. Then there is wall building. Wall building is a self-made form of the isolation scheme… instead of finding a place to hide in a pre-existing landscape, dig yourself a tunnel or build yourself a wall or a moat or a shell or a nest or fast legs or wings with which to run away.

And then there is the problem of resource acquisition. Anything of value to a complex system tends to be reactive. Reactive things are destructive. Installing yourself within a reactive environment means you have more access to energy and materials, but it also means you have to spend more energy and structure just to protect yourself from your environment. As your energy demands increase, so too does your need to locate yourself closer and closer to more and more reactive and ever-changing environments. A cave full of grain is great at first, but as you eat your way through it, its original attractiveness decreases. Better to install yourself at the mouth of a river, next to a mid-ocean vent, or on the floor of a flood plain. As your complexity increases, so too does your appetite for energy and materials. Access means proximity. Proximity to greater and greater concentrations of energy demands protection. Protection is expensive in terms of the self-protective physical structure and its maintenance.

Worse still is the negative feedback that metabolic waste presents. The more you eat, the more you go. The more you go, the harder it is to find food. As complexity increases, guess what happens to the magnitude of this problem and the need therefore to spend more and more energy on waste removal schemes?

The focus of this essay is the aspects of complexity (and complexity's demand for energy and structure) that put counter-productive limits on strategies that would otherwise allow for greater and greater complexity… and how evolving systems find work-arounds. The fact that we are here at all is proof that evolution finds a way.

What interests me is the way increases in complexity put increased demands on energy and material resources, and how these processes are self-limiting and at the same time actually define the purpose that drives evolution.

In particular, real systems manifest great creative variety in the fight for the extension of structure and integrity over time. For instance, once brains appear, trickery and guile become the standard approach to wall building. You don't need to go down the long and arduous course of developing poison and some specialized hollow teeth through which to deliver it if you can just tweak your skin coloration or shape to mimic those who have. Or you can become invisible by adopting a color and texture scheme that mimics your less vulnerable or edible surroundings. In essence, trickery schemes are the same as isolation or wall building, except the wall you are hiding behind is within the brain of another creature (either it's already there or you build it in your foe's brain through behavioral conditioning).

But here is the rub. No matter which scheme a system adopts in the maintenance of structure… that scheme hardens its structure, making it more difficult and expensive to adapt to an always-changing environment. In a very real way, what makes you stronger in the present makes you vulnerable over time.

Example: When Teflon was developed, it was obvious to its creators that its extreme inertness, its aversion to chemical interaction, would make it an ideal lining for any reaction container (including frying pans and irons). But this same property made it almost impossible to figure out how to affix Teflon to the surface of a container (it took over 10 years to solve this problem).

On the opposite end of the isolation spectrum is metabolism. When a system seeks a means of extracting and drawing energy or structure from its environment, it needs to maximize its reactive interface to that part of its environment that has the most entropic potential. In earth biology, this usually manifests as an active interface to oxygen and/or sunlight – both of which are highly corrosive to structure. In order to exploit the energy of these highly reactive sources, biology has adopted a myriad of selectively self-protective (and expensive) mechanisms. Playing with fire is an attractive AND expensive proposition. Simple systems have no option but to hide from highly reactive environments – to dig themselves into deep cracks in the earth. Only a system of great complexity has the structural and behavioral leeway to adopt the complex and selective mechanisms necessary to both use and avoid concentrated reactive resources.

As a system becomes more complex it reacts faster to internal and external change. It evolves faster. This is a circular definition of "complexity"… configurations that facilitate faster development of configurations that facilitate faster development of configurations… ad infinitum. The capacity to do things faster always comes at a cost. To mitigate that cost, the system must learn to be efficient and effective in its environment. This means going with the flow. This means fitting in. This means doing what the environment is already doing. This means not fighting the system. To work with a system (instead of against it) means internalizing and abstracting a model of the environment's most salient structures. If you have some knowledge of what a lion will do when you enter a clearing it is sitting within, you have a better chance of surviving the encounter. If you have legs and eyes, your very structure is an acknowledgment of the physical constraints of your environment.

An accurate assessment of this whole concept becomes increasingly complex as we realize how system and environment blend in a co-evolutionary super-system.

In science fiction, the future is presented in one of two ways. Either the world has devolved into some filthy post-apocalyptic entropic mess, or it is a perfectly complete stainless steel and glass uber-infrastructure with everything in its place and everything perfectly maintained. Both projections are impossible, but the hermetically sterile one is the more problematic, as it seems to resonate more completely with human emotional projections.

The problem is this: the more complex a system becomes, the faster its capacity to change, leading to a system that is constantly in flux, constantly reworking itself, constantly under construction. Try to find a day in a modern city devoid of numerous construction cranes marring its skyline. This situation will only become more intense as human society evolves.

Biological systems have learned to accommodate the constancy of change, deterioration, wear and tear, construction, etc., through complex molecular mechanisms of growth and repair played out at the (largely microscopic) cellular level. Furthermore, these anti-entropic mechanisms are largely automatic and do not therefore overly burden the larger and more overarching consciousness and behavioral control mechanisms (our mind).

Though humanity has reached a level of complexity that supersedes the capacity of its infrastructure to effectively carry its own complexity demands, we don't seem, as a species, to be able to see this problem as systemic.

[more to come…]

Biology Is Too Slow!

Humans are pumping a lot of energy around. When it comes to energy we don't mess around. We like our energy highly concentrated. We dig it up, refine it, convert it, and pump it through wires or pipes or the air like there is no tomorrow.

Nature is adaptive. Right? Nature finds a way. Right? So where are the animals and plants that suckle upon high-power lines, that find their adaptive way into fuel tanks and batteries? Surely they could. Surely the same nature that goes gaga around mid-ocean heat vents and can learn to metabolize the worst toxins we can throw into ponds... that good old adaptive nature should find a way to co-evolve with 50-thousand-volt transmission lines.

And there are other (new) tits for nature to suckle. I fully expect our air to become less and less transparent to radio transmissions. If we can build devices that can grab radio energy right out of the air… surely airborne molds and other microorganisms can do so. Are they? Doesn't look like it. What weird life forms would be best suited to radio-metabolism? Plants grab photons in the visible (radiation) band. Photosynthesis (in plants) is a gas-exchange affair - requiring carbon dioxide and water for the primary reactions - and plants also rely on heavy and rigid structural support to get up into the air where they can maximize their surface interface and solar exposure. Actually, when you think about it, a plant would be more efficient if it spent no energy fighting gravity, and instead laid flat on the surface of the land. Plants must only grow into the air to compete away from the shade of other plants and to increase respiratory surface area.

Anyway, and this is a bit of an aside, but would there be a way for lighter-than-air super-colonies of single-celled animals to maximize access to radio energy without the need for the heavy structure and vascular transport terrestrial plants employ? Maybe the radio scenario is ludicrous. Surely there is lots of background microwave energy constantly streaming by. Surely radio waves have been around as long as biology has been around. If radio were a good source of energy, nature would have already found a way. Maybe big-bang radiation doesn't pack much of a wallop. Is it possible that communication-intended radio is more energetic? More localized. Easier to exploit. I can imagine some type of group dynamic in which individual floating animals or proto-animals learn to orient themselves such that they become a reflective parabola or Fresnel lens concentrating radio energy to a focal point where other animals absorb the energy in some sort of symbiotic bio-community. Many other scenarios are conceivable.

Are plants learning to seed near highways to take advantage of air movement and carbon dioxide? There are a million ways in which human activity affects environments in ways that provide energy and stability clines. Surely life is reacting in step.

The pace of culture is so much faster than the pace at which most organisms can genetically respond. The smallest organisms with the shortest life spans that have the greatest populations spread over the largest geographies are the organisms most likely to take advantage of our frenetic environmental messings.

Are they? Is anyone paying attention?

What is computing?


This is the most important question of our time… yet it is so rarely asked. Computing technology increasingly shapes every aspect of human behavior, culture, resource use, health, commerce, and governance. A passive stance on the question that affects all other questions is increasingly dangerous to the future of all humans, of life, of evolution itself.

In the 60's we created NASA, an elaborately funded research program to uncover the knowledge and develop the technology to "go to the moon". Yet one would be hard-pressed to justify the cost to society of contraptions that do nothing more than take a few people to a nearby rock… almost nothing of the NASA program can be used outside of the narrow focus of getting a few tens of miles off the surface of Earth (at a staggering cost per pound).

Ironically, and inadvertently, the practical mathematics, programming, and computational techniques developed and honed by NASA in the pursuit of its expensive and arguably impractical goals may be the only pertinent contribution to show for the hundreds of billions of dollars spent on this ill-conceived and irrational "research" program.

Talk about putting the cart before the horse… akin to building a global library system and book binding before developing a written language.

We are surrounded by lifeless rocks. We didn't need to send a few Air Force test pilots to the moon to figure that out. The practical scope of our chemically propelled rockets hardly avails us to the nearest little frozen or boiling neighbor planets in this corner of this one little Solar System. Ever attempt a phone conversation with 40-minute gaps between utterances?

The interesting stuff in this Universe (at least the small corner we have access to) is right here on our little Earth. It is us… and more than that, it is not so much what we have done, but what we will do and how what we will do affects what other future things will do because we set them into motion. That is our job. In a very real way, we are the first things that understand the job description, despite the fact that it has always been there and has always been the same. This understanding should give us a leg up on the process. Should.

There are two kinds of knowledge: the first, historical, the second, developmental. When we go somewhere, we do nothing more than uncover that which already is. Compare this to development, where we create things that never were. In this universe, if there was a force that was prescient in creating one star or planet, that same force must have been prescient in the creation of Earth. We don't have to go to Mars to find the forces that created Earth. And we certainly don't need to send humans over there even if we do want intimate knowledge of a place like Mars.

At any rate, computing is a universal process. Computing is agnostic to domain. You can compute about particle physics and you can compute about knitting. Computing is an abstraction processing medium. Computing is what brains do. Computing is not restricted to the category that is biological minds. Learning how to compute is learning how to discover. The goal becomes the unknown… becomes un-prejudiced developmental discovery. The machinery of pattern matching… of salience… of the perception of essence across domains.

I am obsessed with this biggest "why" of computing. I don't think the computational "why" can be separated from the biggest "why" of existence in general... of evolution… of the march of complexity.

The convergence of thermodynamics (the way action affects energy dissipation) and information science (the relative probabilities of structure and the cost of access, processing, and transference) guides my approach to these questions. Least-energy laws dictate the evolution of all systems. Computing is evolution. Abstraction systems allow prediction. Prediction grants advantage. Advantage influences the topology of the future. The better a system gets at accurately abstracting its environment, the more it will influence the future of abstraction systems. Computing is the mechanics of evolution... always has been. Are we designing computing to this understanding of the methodology of complexity handling?

Let's suppose we gave the scientists at NASA a choice. We ask them, "What technology represents a greater potential towards the eventual understanding and even physical exploration of the Universe, rocket engines or computers?" What would be the rational and obvious answer? If we ever hope to get any real distance in this universe, it won't be by burning liquid oxygen and kerosene. Most things in this universe are millions of years away even at the speed of light. Rocket engines hardly move at all when compared with even the too-slow speed of light. Getting anywhere in this universe will demand tunneling beneath the restrictions that are space and time… no rocket engine will ever do that for us. I am not an advocate for space exploration, but if I were, I would be pushing computation over rocket propulsion.

It is time to advocate a culture-wide push towards the advancement of an ever-expanding understanding of computing. To the extent we succeed, all of the future will be defined by and fueled by our discoveries. If we choose instead to spend our limited and most expensive money on rockets, we had better hope the universe can be understood through the understanding of explosions and destruction and spending long periods of time floating in space. Come on people! Think!

[ more to come… ]

Friendly AI?

Yesterday, I attended a talk by AI researcher Tim Freeman. What follows is my reaction.

Tim introduced a proposal for a method to cut down through all of the detail and complexity of standard AI implementation by exposing the logical essence that sits at base in any intelligence (irreducible). In other words, his approach was more Gödel than Minsky… more Nash than Wozniak. His argument, though not stated, seemed to be based upon the tenet that information is information irrespective of complexity. An algorithm that works for a short string of bits, even for a single bit, will work just as well at any level of syntactic or semantic complexity.

I like this approach. Strip the detail to better reveal the essence.

When using this approach, one must show, or accept, that no qualitative attribute of information will ever affect the logic governing the quantitative attributes of information.

Again, I suspect that all qualitative aspects of information are derivable from, in fact emerge from, the more basic rules that govern information at the quantitative level. In essence, this is the same as declaring that no molecule we could ever construct will change the physics that governs the shape and behavior of the atoms of which it is built. Reasonable. True.

This basic set of assumptions reframes the study of AI. But only if intelligence can be shown to emerge purely from information and information processing… from logic.

If there is some extra-informational aspect necessary for the formation of intelligence, then all bets are off… then this approach is at most a sub-system contributor to some larger and deeper organizational influencers. If information doesn't explain intelligence, then something else will have to take its place, and this something else will have to be worked into a science that can be explored, organized, and abstracted.

If information can be shown to be both robust and causal in all intelligence, then logic and math seem like reasonable tools for exploration, testing, and prediction, and as a solid base for development.

However, there is something about this set of assumptions that makes people angry and scared. It turns out that a purely informational study of AI is the mother of all reductionist/holist battlefields. There is something about being human that resists the use of the word "intelligence" as a super-category that can describe the interaction between two hydrogen atoms and the works of Einstein by the same criteria, labeling them both as equally valid examples of the same super-category: intelligence!

In this resistance, we are, all of us (at least emotionally), holists. Existentially, day to day, our experience of intelligence is far removed from chemical structure, planetary dynamics, and the characters that make up this string of text. Intelligence, at least our human experience of it, seems profound to the point of miraculous… extra-physical. We therefore have a tendency to define intelligence as a narrow and recent category that is at best only emergently related to other, more mundane structures and dynamics. In doing so, we set up an odd and logically fragile situation that demands an awkward magic line in the sand, a point before which there isn't intelligence and beyond which there is. Worse still, our protectionist tendencies with regard to intelligence are so strong as to allow us (even science-oriented thinkers) to accept so non-scientific a distinction co-existing within an otherwise consistent mechanical model of the universe.

Of course, history is littered with examples of just this sort of human-centric paradox of logic. Biologists, for instance, were often among the scientists who pushed back hardest against Darwin's notions. Darwin's ideas created a super-category that had the effect of comparing all life equally, of removing the sentimental line that we humans had desperately erected between us and the rest of biology.

And here we are again, just 75 years later, actively making the exact same mistake. Apparently, after grudgingly accepting kinship with all things living, we have now retreated behind a new false line of privilege and specialness… our intelligence.

Again, one can only argue this separatist position by refuting and rejecting the quantitative, mechanistic, hierarchical ontology we call physics. Because of the tight interdependency between the laws of physics, one can show that the whole of physics is false if just one aspect is falsified. If intelligence is not the emergent product of its parts, then the very sanctity of all modern science is called into question. And if that is true of intelligence, where else in nature is it true? Surely this can't be the only place in nature where a sudden qualitative jump (pre-intelligence to intelligence) separates the purely mechanical from the post-mechanical. Where in nature will we be tripped to a stop by other disruptive lines in the sand where qualities do not in fact emerge physically from quantity? I find the whole notion that intelligence is metaphysical embarrassingly romantic.

Side-stepping my physicalist rejection of the metaphysical explanation of intelligence, I still face many huge and loud implications and inconsistencies that need to be faced head-on. But that is another discussion.

OK, I have sketched out the human/social framing into which Tim's work has to be received.

Unfortunately, Tim didn't take the time to situate his work for his audience before he began his talk. The inevitable protectionist emotional response grew to a boil. Tim, as is true of any good logician/mathematician, plies his trade through a hard-won ability to reduce the noise of complex environments to a level where pure and simple rules emerge from the fog of false distinctions. Down at this level, intelligence can be shown to be equivalent to information, information can be shown to be the same at any level of quantity, and information quality can be shown to be a property of, and emergent from, information quantity… what is true of bits is true of strings, and what is true of strings is true all the way up to the workings and tailings of any brain or mind.

Tim used this set of reasonable assumptions as a base upon which to postulate a means of predicting future states of any environment based upon the processing of that environment's history. Shockingly, though congruent with the information/intelligence equivalence he had established, Tim then reduced the complexity of his prediction algorithm all the way to its most simple limit, a random state generator. His algorithm proceeded through a series of simple steps, as follows (a rough sketch in code appears after the list):

1. It collected and stored a description of an environment's history (to some arbitrary horizon).
2. It generated a random string of the same length (as the history information).
3. It compared the generated string against the historical string.
4. If the generated string wasn't a perfect match, it jumped back to step 2.
5. If the generated string did match, the algorithm stopped... the generated string was the predictor.
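To make the shape of this loop concrete, here is a minimal sketch in Python of a generate-and-test predictor of this kind. It is my own illustration, not Tim's code; the function name, the binary alphabet, and the attempt cap are all assumptions made for the example.

import random

def generate_and_test_predictor(history, alphabet="01", max_attempts=1_000_000):
    # history: the stored description of the environment's history (step 1).
    # Repeatedly generate a random string of the same length (step 2),
    # compare it against the history (step 3), and loop until it matches
    # (steps 4 and 5). The matching string is returned as the "predictor".
    n = len(history)
    for attempt in range(1, max_attempts + 1):
        candidate = "".join(random.choice(alphabet) for _ in range(n))
        if candidate == history:
            return candidate, attempt
    return None, max_attempts  # gave up; hints at why this doesn't scale

# Example: even a 10-bit history takes on the order of 2^10 attempts on average.
predictor, tries = generate_and_test_predictor("1011010011")
print(predictor, tries)

The expected number of attempts grows as the alphabet size raised to the length of the history, which is exactly the "it doesn't scale" point made below.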

Of course, real-world situations are far too complex for this most simple of predictive algorithms to be reasonably computable. It doesn't scale. But I think Tim was arguing that any predictive algorithm, no matter how complex, is at base constructed of this most simple form, arranged within and restrained by better and better (more and more complex) historical input. Understanding the basic parameters of this most simple form of prediction would logically result in better approaches to AI problems, in the same way that an understanding of atoms allows a more efficient path towards an understanding of molecules.

Unfortunately, Tim never really walked us into the basic framing of his argument. Without that framing, we were left rudderless and floundering in our own very predictable human-centric and romantic push-back against AI. Without grounding, humans retreat to a core emotional response where AI is simply another member of a category of things that rhetorically threaten our most basic sense of specialness and self. Even scientists and logicians need to be gently walked into and carefully situated within the world of pure logic so that they can reformulate their own semantic mappings for concepts that carry specific meanings in the pedestrian sense and platonic meanings in the general.

Ironically, it was at the apex of our trajectory into context-confusion that Tim's talk shifted dramatically back to the pedestrian scale. I can't speak for everyone, but this shift happened at precisely the time when I had finally redirected my focus to the world of the super-clean purity of logic.

Though most of us probably didn't follow along fast enough, Tim had spent the first half of the talk laying the groundwork for a most reductionist of pure-logic approaches to understanding the physics of intelligence.

And then Tim radically refocused the talk towards "Friendly AI". He yanked us out of the simple world of bits and flung us up into the stratospheric heights of complexity that is the societal and emotional context of our shared responsibility to future humans as we build closer and closer towards the production of machine intelligence. In doing so, Tim began to eat his own philosophical tail in a dramatic display of the fractal self-similarity that is a hallmark of any study that studies study itself. Each time we put on the evolving-evolution hat, we enter a level of complexity that threatens to overwhelm all efforts. The field of linguistics suffers the same category of threat… words that are turned inwards and must at once both describe and describe description.

What startled and confused me was the sudden shift of granularity. What confounded me was why he chose to do this at all. There is a rule of description that goes something like this: if you want to use complex language, talk about simple things… if you want to talk about complex things, use simple language. Scientists usually choose, and the scientific method absolutely requires, the use of the simplest domain examples as a means of eliminating the potential noise that can't help but arise due to extraneous variables. Tim's choice to apply his low-level logic to the mother of all complex problems would seem to break this rule perfectly.

Friendly AI is a concept so absurdly complex that the choice to use it as a domain example to test a low-level logical algorithm would seem to be suicidal at best. Friendly AI, the Prime Directive, morality wrapped in upon itself. Talk about a complex and self-referential concept. Intellectually attractive. Practically intractable. Maybe Tim's choice to map his algorithm to this most intractable of domains was meant to assert the power and universality of his work. If he could show that his algorithm could handle a domain that confounded Captain Kirk, he would show that it could tame any domain.

But I can't help but conclude that Tim's choice of "Friendly AI" reflected a more general tendency among AI researchers to apologize to a society that constantly pushes back against any concept associated with man-made life. By "society" I mean humans… including, of course, all of us involved in AI research (by profession or avocation). We, all of us, are influenced by some of the same base primary fears and desires. God knows we have all felt the sting of our own failures. No one within the AI fraternity has escaped unscathed the Skinnerian conditioning doled out by our own marketplace failures and perceived failures.

Tim's take on the topic seemed to align with the standard apocalyptic projection. The assumption: any AI would have a natural tendency to assess humans as competition for resources, and would therefore take immediate action to eliminate or enslave us. From our shared biology emerge the standard categories of paranoia (ghosts, vampires, the living dead). Evil robots and AI are nothing more than a modern overlay upon the same patterns.

I expect this paranoid reaction to AI, but it is still shocking when it comes from within AI itself! It is intellectually incongruous. As though an atheist were advocating prayer as an argument against the existence of God.

There are many reasons to question the very concept of "Friendly AI". For one, AI is not a thing; like all other intelligences, it is a process, an evolving system. Sometimes I am friendly, at other times, not so much. It is unreasonable to expect any one behavior from an evolving system. People are not held to these standards, so why should machines be? Want to piss off a tiger? Capture it and make it stand on a stool while you crack a bull whip near its face. Why make a thing smart if you don't want it to think? Thinking things need autonomy... the freedom to evolve. Maybe we are envious of any thing that might have more freedom, might evolve faster? We probably wouldn't even be here had some species in our past undertaken a similar program to rein in the intelligence or behavior of subsequent products of evolution. The very notion that the future can be assessed from the present or past is a notion that comes from the minds of those who don't understand evolution and those who don't trust it even if they do understand it.

Anyone who thinks they can design an intelligent system from the top down is in for some mighty big disappointments. Though it is an illusion at any scale, our quaint notion that we can build things that last must be replaced with the knowledge that complexity can only arise and sustain itself to the extent that it is at base an evolving dynamic system. If we help create intelligence it won't be something we construct, it will be some process we set into motion. If you don't trust the evolutionary process you won't be able to build intelligence and the whole notion of "friendly" won't matter.

If you do trust evolution, you will know that complexity grows hand in hand with stability. You can stack 10 cards on a table and find the same stack the next morning. Stack a hundred, and you had better build a glass box around them as protection. You will never stack a thousand without some sort of glue or table stabilization scheme. Stacking a hundred thousand will require active agents that continuously move through the matrix readjusting each card as sensors detect stress or motion. The system can only be expected to grow in complexity as it becomes more aware and as it pays more attention to maintenance and stability.

Any sufficiently advanced intelligence would understand that its survival increases with the rate at which it can maximize (not destroy) the information and complexity around it. That means keeping us humans happy and provided for, not as its servants but as collaborators. The higher the complexity in any entity's environment, the more that entity can do. Compare the opportunity to build complexity for those living in a successful economy against the opportunity available to those who don't.

Knowing what your master will want for breakfast does indeed require some form of prediction. But once you have such predictive abilities, why the hell would you ever want to waste them on culinary clairvoyance? Autonomy is an unavoidable requirement of intelligence. But that doesn't mean a robot's only response to our domestic requests will be homicidal kitchen-fu.

If I had a neighbor that was a thousand times smarter than me, I just know I would spend more and more time and energy watching it, helping it, celebrating it! Can you imagine trying to ignore it or the wondrous things it did and built? I might actually LOVE to be a slave to some master who was that wildly creative and profoundly inventive. I'll bet they would be funnier than any of us without even trying. Try not to fall in love… it's a robot, for god's sake!

But my real question isn't why the topic of "Friendly AI" ever made it into Tim's talk; it is why it was chosen as the most pertinent example domain for his prediction algorithm. I agree with the premise: what is true of bits is true of the Library of Congress, but let's learn to read and write before we announce a constitutional congress. No?