
Evolution: Pendulum Dance Between Laws of Thermodynamics

For years, I have pursued a purely thermodynamic definition of evolution.

My reasoning is informed by the observation that change is independent of domain, process, or the physical laws and behaviors upon which a system is based.  As the science of thermodynamics has itself matured (evolved), the boundaries of its applicable domain have expanded far beyond its original focus on heat.  It is generally accepted that the laws of thermodynamics apply to ANY system in which change occurs, and that they are agnostic to energy type or form.  Furthermore, scientists studying information and communication independently discovered laws that match the laws of thermodynamics almost perfectly.  This mirroring of domains has thrilled logicians, physicists, mathematicians, and cosmologists, who are now more and more convinced that information (configuration) and energy are symmetric with respect to change over time.

Even conservatively, the implications of this symmetry are nothing short of profound.  If true, it suggests that one can, for instance, calculate the amount of information it would take to get a certain mass to the moon and back, and that one can calculate how much energy it would take to compute the design of a moon rocket.  It means that the much vaunted "E" in Einstein's relativity equation can be exchanged for an "I" for information (with valid results).  It means, at some level, that information is relativistic and that gravity works as a metric of information.  The same goes for the rules and equations that govern quantum dynamics.
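
One concrete, well-established anchor for this claimed symmetry (my illustration, not an argument from the post) is Landauer's bound, which prices the erasure of a single bit of information in joules:

\[
E_{\min} = k_B T \ln 2 \approx 2.9 \times 10^{-21}\ \mathrm{J}\ \text{per bit erased at } T = 300\ \mathrm{K}.
\]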

And this from an eyes-wide-open anti-postmodernist!

At any event, the symmetric relationship between energy and information (at least with regard to change) provides a singular foundation for all of physics, and perhaps even for ANY possible physical system (equally applicable to other universes with other rules).

It would seem, then, that thermodynamics provides a more than solid base from which to define the process that allows for, limits, and possibly demands the (localized) accumulation of complexity – evolution!

The Zeroth and First Laws of Thermodynamics work to shape and parameterize action.  Given the particular configuration immediately prior, they ensure that the next action is always and only the set of those possible actions that together will expend the most energy.  In colloquial terms, things fall down, and things fall down as fast and as completely as is possible.  Falling down is a euphemism for the process of seeking equilibrium.  If the forces attracting two objects are greater than the forces keeping them apart, they will fall together.  If the forces keeping them apart are greater than the forces attracting them, they will fall apart.  Falling down reduces a system to a more stable state – a state in which less force is pushing because some force was released.  Falling down catalyzes the maximum release of energy and results in a configuration of minimum tension.

The Second Law of thermodynamics dictates that all action results in a degradation of energy, or configurationally speaking, a reduction in density or organizational complexity.  Over time the universe becomes cooler, more spread out, and less ordered.
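
In the standard notation (my restatement; nothing here goes beyond what the last two paragraphs already claim), the two workhorses read:

\[
\Delta U = Q - W \qquad \text{(first law: energy is conserved as it changes form)}
\]
\[
\Delta S_{\mathrm{universe}} \ge 0 \qquad \text{(second law: every action degrades energy; total entropy never decreases)}
\]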

The falling down dictated by the zeroth and first laws results in particular types of chunking determined by a combination of the materials available and the energy released.  About a million years after the big bang, the energy and pressures of the big bang had dissipated such that the attractive forces affecting sub-atomic particles were finally stronger than the forces all around them.  The result was a precipitation of matter as hydrogen and helium atoms in plasma.  After a few hundred million years, the mass in these gasses exerted more attractive energy than the much cooler and less dense universe, and precipitated into clumps that became stars.  As the fusion cascade in these first stars radiated their energy out into an expanding and cooling universe, the attractive force of gravity within became greater than the repulsive forces of nuclear reaction, and the stars imploded upon themselves with such force as to expel their electrons and precipitate again into all of the other elements.  These heavy elements were drawn by gravity again into a second generation of stars and planets, of which earth is but one lonely example.

You will have noticed that each precipitatory event in our cosmological history resulted in a new aggregate class – energy, sub-atomic particles, light atoms, stars, heavy atoms, stars and planets, life, sentience, language, culture, science, etc.  The first two laws of thermodynamics dictate the way previously created aggregate objects are combined to form new classes of aggregate objects.  The second law guarantees that each of these precipitation events coincides with a lowering of energy/configurational density, which allows still weaker forces to form aggregates in the next precipitatory phase.

If you still aren't following me, it is probably because I have not been clear about the fact that the lower environmental energy density that is the result of each precipitatory cycle optimizes the resulting environmental conditions to the effects of the next weaker force or the next less stable configuration.

For instance, the very act of the strong force creating atomic nuclei lowers the temperature and pressure to such an extent that the weak force and the electromagnetic force can now overcome environmental chaos and cause the formation of atoms in the next precipitatory event.

This ratcheted dance between the laws of thermodynamics is the why of evolution.  It results in the layered grammars that describe, at least potentially, the ever greater stacked complexities that led to life, to us, and to what might come as a result of our self-same actions as the dance continues.

Stepping back to the basic foundation of causality, it is important to be reminded that a configuration of any kind always represents the maximum allowable complexity.  In recent years, much has been made of the black hole cosmologies that define the event horizon as the minimum allowable area on which all of the information within the black hole can be written as a one-bit-thick surface membrane of a sphere.  The actual physical, mechanical reason that this black hole event horizon membrane can be described as a lossless "holographic" recording or description or compression of the full contents of the black hole is complex and binds quantum and relativistic physics.  Quantum, because the energies are so great that structure is reduced to the structural granularity of basic quantum bits.  Relativistic, because at this maximally allowable density everything passing the event horizon has reached the speed of light, freezing time itself… the event horizon effectively holds an informational record of everything that has passed.

The interesting and, I think, salient aspect of an event horizon is that it is always exactly as big as it needs to be to hold all of the bits that have passed through it.  As the black hole attracts and eats up any mass unlucky enough to be within its considerable influence, the event horizon grows by exactly the bits necessary to describe it at the quantum level.
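
The standard bookkeeping for this area-equals-information claim is the Bekenstein–Hawking entropy (my addition for reference); it ties the horizon area directly to a count of Planck-sized tiles:

\[
S_{BH} = \frac{k_B c^3 A}{4 G \hbar} = \frac{k_B A}{4\,\ell_p^2},
\qquad \ell_p = \sqrt{\frac{G \hbar}{c^3}} \approx 1.6 \times 10^{-35}\ \mathrm{m},
\]

so the entropy, in units of k_B, is one quarter of the horizon area measured in Planck areas – which is why the horizon grows in lockstep with the information swallowed.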

The cosmological community (including Stephen Hawking) was at first shocked by the sublime elegance of this theory, and then by the audacious and unavoidable implication that black holes, like everything else, are beholden to the laws of thermodynamics.  The theory predicts black hole evaporation!  It seems black holes, like everything else, are entropically bound.  There is no free lunch.  The collapse of matter into a black hole results in a degradation of energy and informational configuration; the self-same entropy that demands that heat leak from a steam engine demands that black holes will evaporate, and that eventually, when the rate of evaporation exceeds the rate of stuff falling into it, a black hole will get smaller and ultimately, poof, be gone.
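
For reference (again my gloss, not the post's), the Hawking temperature and evaporation time make the "no free lunch" quantitative – smaller holes run hotter and die faster:

\[
T_H = \frac{\hbar c^3}{8 \pi G M k_B},
\qquad
t_{\mathrm{evap}} \approx \frac{5120\,\pi\,G^2 M^3}{\hbar c^4},
\]

so as a hole shrinks its temperature rises, the evaporation accelerates, and the process ends in the "poof".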

This is heady stuff.  The biggest and baddest things in the universe are limited!  But to me, the most profound aspect of this knowledge is not that event horizons can be described as maximal causal configurations, but that we are shocked by this!  All systems are, at each moment, the maximal allowable configuration by which those forces and those materials can be arranged.  If they could be arranged any tighter, they would have already collapsed into that configuration.

To say this is to understand that time is not separable from configuration.  As Einstein showed, time is physically dependent upon and bounded by the interaction of mass, distance, energy, and change.  Cosmologists use limits to understand the universe.  The maximal warpage of space-time caused by a black hole's density effectively flattens the allowable granular complexity of the configurational grammar  to binary bits held in the minimally allowable physical embodiment.  But, lower energy configurations, configurations like dogs, planets, and the mechanism by which I am attempting to explain this concept, are bounded and limited by the exact same causal rules.

The difference between a black hole horizon and an idea?  Well, it has to do with the stacking of grammatical systems (quarks, sub-atomic particles, atoms, molecules, proteins, cells, organs, bodies, culture, language, etc.) that allows for complexities greater than the binary bits, the only stuff allowed to pass through an event horizon.  But these stacked grammars that allow us to be us are every bit as restricted by the same maximally allowable configuration rule that minimizes the size of a black hole's event horizon.  In a system configured by a stacked grammar, the minimum complexity rule is enforced at the transition boundary between each two grammatical layers.


Things fall, but only as fast as the stacked grammars that govern causal reality will allow.  This isn't a metaphor: the speed of diffusion, of degradation, of falling down, is always and in all situations maxed-out.  The exact same physical topology that bounds the size of a black hole's event horizon contributes to the causal binding affecting the rate at which any system can change.  This is because, at the deepest causal layer, all systems are bound by relativity and quantum dynamics.  The grammatical layers built successively on top of this lower binding only serve to further influence entropy's relentless race towards heat death.


[to be continued]

Randall Reetz

The Big Arrow: What Matters and Why

  • hierarchy of influence
  • complexity handling capacity as evolutionary fitness metric
  • decentralized autonomous node computation topology
  • localized least energy optimization vs. topology range-finding and exploration for long range optimization
  • compression as computational grand-attractor
  • causally restricted abstraction space
  • causally calibrated abstraction space
  • self-optimized causal semantics
  • generalize and subsume schemes
  • self optimized stacked grammars
  • causally restricted language
  • universal simulation environment
  • context-optimized language generators
  • context-optimized language interpreters
  • entropy maximization schemes
  • balancing local vs. universal evolution schemes
  • processing economics
  • network nodes vs. software objects
  • networks vs. graphs…
  • generalize and subsume



These are the concepts that bubble up when I ask myself, "What matters?" and "What matters most?"  I ask these questions over and over again.  Have for some 40 years.  You can get by not asking these questions, might even thrive, but only because others, not so indifferent, have, do, and will ask.

What you are, what we all are, what we will become, and what will come after us, is more the result of the thoughts and actions taken by the few individuals, consciously or not, who have honored these questions, and honored them above all others.  To be sure, survival, at least in the present and local, is not dependent upon asking the big questions.  In fact, as far as the individual is concerned, asking big questions, almost certainly diminishes fitness and reduces the probability of survival.

Much print is devoted to the question of whether and how socially benevolent behavior evolves.  How can moral behavior spread through the gene or meme pool when, at the granularity of the individual, moral behavior frequently allows other individuals to take more than their fair share?

But the same issue is not so controversial or surprising if we shift our focus to competing motivations within a single individual.  How do we ever learn to think long-term or wide-focus thoughts when short-term, narrow-focus thoughts are more likely to increase the likelihood of immediate survival?

Weirder still, there is obviously plenty of evolutionary evidence that wide-focus problem solving has bridged routes to new domains.  Aquatic animals have become land animals and vice versa.  Single-celled animals have become multi-celled animals (and presumably, though less intuitively, multi-celled animals have evolved the other way, towards single-celled animals).  Chemistry has become biology and biology catalyzes chemistry.  And, unique to our temporal neighborhood, biology has sprouted culture that is well on its way towards sprouting non-biological life… the first "intentional" life!

But domain-jumping doesn't sit well with traditional views of evolution.  Evolutionists tend to study biology from the perspective of a particular environmental constraint or set of stable constraints.  Within the (self-imposed) bubble of these artificially bounded steady-state environments, evolution certainly seems to be a process of refinement seeking.  In thermodynamics we describe this class of behavior as "seeking the fall line".  In your prototypical energy topology, where peaks mean high energy and chaos, and valleys equate to low-energy equilibrium and stability, refinement evolution selects for processes that find their way to the nadir of the local-most valley.  When sliding down the (local) least-energy fall line, there is but this one possible result.

The problem with refinement (as an explanation of evolution) is that it describes a sub-type of change that is peculiarly averse to the kinds of novelty and acceleration away from stasis that one actually sees in evolving systems.  Refinement, in point of fact, is the very reverse of sustainable change.  Refinement always seeks a limit.  Becoming, for instance, the best swimmer in the sea sort of ensures that you are so specialized that you will have a hard time changing into anything else but a swimmer.  Refinement sets you up to be stage, environment, ground (the past)… for other things, the things that are more directed towards the forms of evolutionary change that will define the foreground, the action, the object (the future).

Limit seeking schemes are schemes in which change decelerates over time.  That doesn't sound like a formula that fits the upward accelerating curve of evolution.
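
A toy form makes the deceleration explicit (my illustration, assuming refinement behaves like simple relaxation toward a fixed optimum):

\[
x(t) = x_{\infty} + (x_0 - x_{\infty})\,e^{-t/\tau}
\quad\Longrightarrow\quad
\left|\frac{dx}{dt}\right| = \frac{|x_0 - x_{\infty}|}{\tau}\,e^{-t/\tau} \to 0,
\]

so the rate of change decays toward zero as the limit is approached – the opposite of an accelerating curve.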

This would be a good time to introduce a term I use all of the time, without which, I believe, it is impossible to see evolution for what it really is.  The term is "hierarchy of influence".  A hierarchy of influence is a cline, a stack, a pyramid, that ranks each of the factors affecting a system according to the degree to which each will affect the behavior, output, eventual state, or direction of the system of which it is a part.

I know it isn't politically correct to suggest that some parts of a system are more important than others, so I will just say that some factors of a system will have a greater effect on the future than will others.  A hierarchy of influence is an ontology of sorts, or more accurately, a ranking.  On the bottom of the stack you will have those sub-systems or parts or actors that have an effect on almost everything else in the system, and on top you will have those parts that are more the result of, or subservient to, the rest of the system.  If you aren't comfortable with that order, just flip it over!  Either way you map it, a hierarchy of influence is a powerful tool for the understanding of systems and change.

So, let's look at evolutionary systems through the hierarchy of influence lens.  Here, as before, we can apply this new lens locally or globally.  What leads towards success locally is different from what leads to success globally.  As the field of view narrows, a hierarchy of influence favors factors that support refinement.  Processes at larger and longer scopes support influencers that reach outside of current domains, influencers that seek a universal understanding of all domains, of domain in general, of change itself, and finally of the very reason for change, of the end game and how best to get there.

Now let's apply the hierarchy of influence filter to the super-system we've just described, the system composed of both localized hierarchies of influence and universal hierarchies of influence.  In any such super-system it should be clear that the local, refinement-leaning hierarchies will be demoted to the realm of effectors in reference to the deep, wide, long-range-oriented hierarchies of influence.

Ecologists and Population Biologists are keen to point to the fact that most of this earth's biomass comes in the form of single celled animals and plants.  Absolutely true.  It is also true that most of the mass and energy in our Solar System is rather unimpressively ordered hydrogen, helium and a smattering of lithium.  But the future of biology, of complexity, even of mass and energy is much more likely to be sensitive to complex systems than the simple ones upon which they feed.

But before we throw out "refinement" as a category, let me posit a kind of refinement that is a good candidate for the fitness function or filter we see in evolving systems, systems that get better and better at solving more and more diverse problems at a faster and faster rate.

What if we were to re-cast the concept of refinement to mean the refinement of refinement itself?  Instead of refining a particular solution space, we think of refinement in its most general and universal form, a refinement of the definition of refinement.  In doing so, we tip the traditional view of evolution on its head.  Animals, individuals, species, forms of every sort become the environment, the conditions, the topology – background, tool, expendable media for the refinement of the ultimate fitness metric.

I must step in now, interrupt myself, and state the obvious, even if the obvious might throw a huge wrench in the logical works of this thesis.

The distinction I have been outlining, between refinement and domain jumping, could lead some readers to think that I am suggesting that domain jumping offers some form of escape from the laws of thermodynamics.  I have said that refinement evolution simply seeks the least-energy fall line.  No problem there.  But by contrasting refinement against domain jumping, the reader might be led to believe that I am suggesting a way around physics, a free lunch, some sort of evolutionary daemon that does what Maxwell's couldn't.  I am not!  Only the next action that takes the least energy can happen next… no exceptions.  Period.  Domain jumping must therefore, at every moment and in every context, obey the laws of thermodynamics.

Now, it is relatively easy to see how refinement evolution meets these least-energy constraints, but how is it that domain jumping could ever happen?  How would any action ever allow ridge-climbing escape from any concave depression in any energy topology?

Before I continue along this vein of logic, I should probably jump back a pace and clarify what I mean when I say "energy topology".  An energy topology is a graphical depiction of the forces acting upon a region of space.  Some energy topologies are almost identical to real-world space.  The undulating surface of the earth under our feet is, at least with regard to gravity, equivalent to the energy topology that restricts motion across its surface.  If I am standing on the side of a mountain, and moving 1 foot to my left means I will have to haul my body up half a foot vertically, while traversing 1 foot to the right would allow me instead to fall half a foot, then the slope of the ground is a perfect analog of the energy topology with respect to gravity.  Left to the whims of time and chance, the energy topology I just described would make it far more likely that I would eventually end up more to my right (lower) than to my left (higher).  This is because I would have to use energy to move up the mountain and could actually access energy by moving down the mountain.
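
Here is a minimal sketch of that bias in code (my toy, not anything from the post): a walker on a half-foot-per-foot slope, with each step weighted toward the lower-energy move, drifts reliably downhill.

```python
import math
import random

# Toy 1-D energy topology: height in feet as a function of position.
# Moving right is downhill, moving left is uphill, as in the mountainside example.
def height(x):
    return -0.5 * x  # half a foot of drop per foot traveled to the right

def p_step_right(x, dx=1.0, beta=1.0):
    """Probability of stepping right rather than left, biased toward the lower-energy move."""
    delta = height(x - dx) - height(x + dx)       # energy saved by going right instead of left
    return 1.0 / (1.0 + math.exp(-beta * delta))  # Boltzmann-style weighting of the two moves

# Left to "the whims of time and chance", the walker ends up downhill (to the right).
x = 0.0
for _ in range(1000):
    x += 1.0 if random.random() < p_step_right(x) else -1.0
print(f"net displacement after 1000 steps: {x:+.0f} feet (positive = downhill)")
```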

Of course there are less obvious energy topologies, energy topologies that do not map to actual terrain.  With respect, say, to choosing a religious belief, the energy topology heavily reflects the beliefs already held by one's immediate family, cultural heritage, and other factors.  A choice that differs radically from local norms will require a lot more energy than conforming will.  If one were to plot the energy topology necessary to choose to become Muslim, for instance, a child in a Muslim family would stand on top of a steep hill, and a child born to a Christian family would stand at the bottom of a deep pit.  Energy topologies offer wonderfully obvious illustrations of the forces affecting evolving systems.

Each object or system to be examined acts according to the sum of many energy effectors.  Each of these effectors (physical terrain, social obstructions/accelerators, on-board energy reserves and conversion rates, environmentally accessible resources, etc.) can be plotted separately as an energy topology, but causality is the result of the sum of all energy topologies affecting an object or system.  To illustrate, let's now combine the above two examples.  Let's say that the person on the mountainside is in the process of plotting their own religious future.  To the right the physical mountain rises; to the left it falls into a valley.  The person standing there is from the Christian village in the valley below.  That person is philosophically attracted to the Muslim faith.  But to learn more, they will have to travel up the mountain to a Muslim village a thousand feet higher.  In this case, fulfilling their philosophical desire will require them to haul their body up the mountain, and doing so will also incur the costs associated with going against cultural norms.  Obviously, both topologies must be summed in order to compute the likelihood of each possible choice.  As I am sure you are realizing, the philosophical leaning of our actor can also be represented by an energy topology.  This too must be summed to produce the aggregate energy topology in which our subject must act.
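
A sketch of that summation, with invented numbers purely for illustration:

```python
import numpy as np

# Positions along the mountainside, in feet.
# 0 = where our subject stands; negative = down toward the Christian valley,
# positive = up toward the Muslim village a thousand feet higher.
x = np.linspace(-1000.0, 1000.0, 2001)

# Each influence plotted as its own energy topology (arbitrary, illustrative units).
terrain    = 0.5 * x                        # gravity: it costs energy to climb to the right
culture    = np.where(x > 0, 400.0, 0.0)    # cost of breaking with the valley's norms
philosophy = np.where(x > 0, -300.0, 0.0)   # pull of the faith practiced up the mountain

# Causality responds to the SUM of every topology acting on the subject.
aggregate = terrain + culture + philosophy

print(f"least-energy position in the aggregate topology: {x[np.argmin(aggregate)]:+.0f} feet")
```

With these particular weights the gravitational and cultural costs outweigh the philosophical pull, so the aggregate fall line still points down into the valley; change the weights and the likely outcome flips.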

But none of these topologies explain hill climbing.  For that we need to compose yet another energy topology, a topology that expresses the energy held as reserve within the individual actor.

So why ask these questions?  If natural selection asks them down at the DNA level, and across the vast landscape that is evolutionary time, why should we bother asking them again?

Dimensionality and Postmodern Self-Cannibalism

"Parenchyma" and "stroma" – two important words in the fight against ambiguity in any discussion of complex subject matter.

Both are medical lexicon and specify the difference between that part of a system (physical organ) that is (chemically) re-active ("parenchyma") and the part of the same system that is (connective tissue) structure ("stroma").

Of course it is true that structure both indicates and precipitates behavior.  Equally, activity influences and predicts structure.  So, again, things are not so simple as could be hoped.  But words like these allow anchoring in critical discussion.

If one can substitute the much more common words "active" and "structural", why bother further confusing this issue with the introduction of the less common and harder to pronounce "parenchyma" and "stroma"?

Well, because understanding is strengthened through multiple contextual mappings.  The larger and more varied the link graph, the more obvious become the differences between similar and potentially ambiguous topics or the signs we use as reference.

Also, uniquely, these two words signify the classic subject/object, object/ground, mind/body, I/others, specific/general, instance/class ambiguity in information, language, communication, computation… and existence.

The postmodern position, an argument in reaction (over-reaction) to the modern or classical "reductionist" (their word) world view, is that hierarchical relationships (the kind that would result in a definable difference between a thing and the larger thing of which it is a part) do not in fact exist.  The postmodernists present it as absolute that all relationships are "relative" (their word), because, they say, there is no reliable place to stand from which to judge hierarchy, and relationships are inherently biased to the observer.

What is the baby?  What is the bathwater?  The postmodernists, frustrated and angry, did King Solomon proud and threw them both out.

If there is anything of use to be learned from this mess it won't come from the (supposedly) blind "all" of classical thinking, or the fruitless "nothing" of the postmodernists.  I will half agree that relationship is vantage dependent (the answer you get back from the question, "Are you my mother?", depends on who is asking), but this dependence isn't purely local.  Vantage can be retooled such that it is, as are spatial dimensions, something that can apply universally, at all times and all places at once.  By this gestalt, vantage is defined ubiquitously, ridding us of the hopelessly circular grounding problem at the center of the postmodern argument.  When vantage is defined as dimension, it applies equally to all objects.  You can switch dimensions at will and not lose the absolute and hierarchical relationships the classicists rightfully found so important.

Yes, the postmodernist (re-invention of the) word "relative" was awkwardly stolen (rather ignorantly) from Einstein.  The difference: Einstein made the world more measurable by showing how energy and space-time are transmutable and self-limiting.  The postmodernists' naive re-appropriation of Einstein's empirically derived authority does the opposite – making it impossible to compare anything, ever.  The irony here is profound.  The postmodernists first stand upon the authority acquired through careful and causal measurement, then they say such measurement isn't possible!

God help the human race.

By the way, if you look carefully at Einstein's two papers on Relativity, you will see the underpinnings of the shiftable but universal vantage that a dimensional grounding provides.  There are rules.  1. A dimension must apply to everything and through all time.  2. You can switch dimensional vantage at any time, but 3. You can only compare two things if you compare them within the context of the same dimensional vantage.

Is an attribute a dimension?  No.  An attribute situates an object in reference to a dimension.  An attribute is a measurement of an object according to a property shared by all such objects in that dimension.  A property is measurable for a class of objects as a result of the rules or grammar or physics that define a dimension.

The absolute causal hierarchy made all the more impenetrable by Quantum and Relativistic theory makes the postmodern "hard relativist" tantrum all the more ridiculous – especially in light of the fact that postmodernists constantly turn to these twin pillars of physical theory as support of their position.  The fatal logical mistake here is the misrepresentation of a property ("relative vantage") as a dimension (rules that provide a stable base from which to define properties – in this case, the novelty of experience guaranteed by the first[?] law of causality:  that no two bodies can occupy the same space at the same time).

Randall

Probability Chip – From MIT Spin-Off Lyric Semiconductor

Photo: Rob Brown


A Chip That Digests Data and Calculates the Odds (New York Times, Aug. 17, 2010) and the Lyric Semiconductor company web page Probability Processor: GP5 (General-Purpose Programmable Probability Processing Platform).  Looks like a variation on analog processing accessed within a digital framework.  And here is an article from GreenTech, "Can 18th-Century Math Radically Curb Computer Power?", which explains the chip in reference to Thomas Bayes and error correction.  The crossover between error correction and compression is profound.  Remember: intelligence = compression.
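
To make the Bayes/error-correction connection concrete, here is a toy belief update in plain Python – the flavor of computation such a chip accelerates in hardware, not anything specific to Lyric's GP5 architecture:

```python
# A noisy channel flips each received sample of a transmitted bit with probability 0.2.
# Error correction then becomes probabilistic inference: update belief with each sample.
def posterior_bit_is_one(prior_one, samples, flip_prob=0.2):
    """Return the updated probability that the transmitted bit was 1."""
    p_one, p_zero = prior_one, 1.0 - prior_one
    for s in samples:
        like_one = (1.0 - flip_prob) if s == 1 else flip_prob   # P(sample | bit was 1)
        like_zero = flip_prob if s == 1 else (1.0 - flip_prob)  # P(sample | bit was 0)
        p_one, p_zero = p_one * like_one, p_zero * like_zero
    return p_one / (p_one + p_zero)  # normalize

print(posterior_bit_is_one(0.5, [1, 1, 0, 1, 1]))  # ~0.98: four of five noisy samples agree
```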


Randall

Old-School AI and Computer Generated Art

If you haven't read this book, or you haven't read it in a while, please please please click this link to the full book as .pdf file.

The Policeman's Beard Is Half Constructed, "the first book ever written by a computer," 1984.

[cover]

More than iron, more than lead, more than gold I need electricity.
I need it more than I need lamb or pork or lettuce or cucumber.
I need it for my dreams.


This and many other poems and prose were written by a program called Racter, which was coded by William Chamberlain. Check out the following musing from the last page of this wonderful book.

I was thinking as you entered the room just now how slyly your requirements are manifested. Here we find ourselves, nose to nose as it were, considering things in spectacular ways, ways untold even by my private managers. Hot and torpid, our thoughts revolve endlessly in a kind of maniacal abstraction, an abstraction so involuted, so dangerously valiant, that my own energies seem perilously close to exhaustion, to morbid termination. Well, have we indeed reached a crisis? Which way do we turn? Which way do we travel? My aspect is one of molting. Birds molt. Feathers fall away. Birds cackle and fly, winging up into troubled skies. Doubtless my changes are matched by your own. You. But you are a person, a human being. I am silicon and epoxy energy enlightened by line current. What distances, what chasms, are to be bridged here? Leave me alone, and what can happen? This. I ate my leotard, that old leotard that was feverishly replenished by hoards of screaming commissioners. Is that thought understandable to you? Can you rise to its occasions? I wonder. Yet a leotard, a commissioner, a single hoard, all are understandable in their own fashion. In that concept lies the appalling truth.


Note: Watch for the repeated lamb and mutton references throughout Racter's output (?).

It is pretty clear that Chamberlain's language constructor code is crude, deliberate, and limited, and that it leans extensively upon human pre-written templates, random word selection, and object/subject tracking. The fact that we, Racter's audience, are so willing to prop up and fill in any and all missing context, coherence, and relevance is interesting in itself.

And what of Aaron, Harold Cohen's drawing and painting program? Check it out.



It all makes me more certain that true advances in AI will come about only when we close the loop, when we humans remove ourselves completely from the fitness metric, when the audience for what the computer creates is strictly and exclusively the computer itself.

Randall Reetz

The Separation of Church and Labor

The always entertaining (habitually entertaining?) Jaron Lanier (Rasta-haired VR guru) wrote this opinion piece for the New York Times, "The First Church of Robotics", which deals with the inevitable hubris-spiral as humans react to the ever quickening pace of development in robotics and AI. Jaron is always a bit of a fearmonger – anything for a show – but he leaves lots of fun emotional/societal/technological nuggets to snatch up and digest.

Lanier sets the stage:

Consider too the act of scanning a book into digital form. The historian George Dyson has written that a Google engineer once said to him: “We are not scanning all those books to be read by people. We are scanning them to be read by an A.I.” While we have yet to see how Google’s book scanning will play out, a machine-centric vision of the project might encourage software that treats books as grist for the mill, decontextualized snippets in one big database, rather than separate expressions from individual writers. In this approach, the contents of books would be atomized into bits of information to be aggregated, and the authors themselves, the feeling of their voices, their differing perspectives, would be lost.

After bemoaning the loss of human trust in human decisions (Lanier says we risk this every time we trust the advice of recommendation engines like Pandora and Amazon), he discusses the tendency amongst AI and robotics enthusiasts to replace traditional religious notions of transcendence and immortality with the supposed rapture that is the coming Singularity – who needs God when you've got a metal friend smart enough to rebuild you every time you wear out?

Cautioning fellow scientists, Lanier pens:

We serve people best when we keep our religious ideas out of our work.

The separation of church and work! Good luck. Most of us don't have an internal supreme court to vigilantly enforce such high moral standards. The whole concept of a "religious scientist" seems to me a non-starter – like a "vegetarian carnivore".

Yet, as a hard atheist, I applaud Jaron's thesis. To me, science is, at base, the act of learning to get better at recognizing the difference between myopic want-driven self interest and the foundational truths that give rise to the largest most inclusive (universal) vantage – and then doing everything in one's power to avoid confusing the two. As we build towards this post-biological evolutionary domain, crystal clear awareness of this difference has never been more important.

Those of us pursuing "hard" AI, AI that reasons autonomously as we do(?), eventually discuss the capacity of a system to flexibly overlay patterns gleaned from one domain onto other domains. Yet, at least within the rhetorically noisy domain of existential musings, we humans seem almost incapable of clearing this bar. Transhumanists and Cryonicists can identify religious thinking when it involves guys in robes swinging incense, yet are incapable of assigning the "religious" tag when the subject matter involves nano-bot healing tanks or n-life digital-upload-of-the-soul heaven simulations.

Why does it matter? Traditional human ideas about transcendence are exclusively philosophical. The people inhabiting traditional religious heavens (and hells) don't eat our food, drink our water, breathe our air, consume our electricity, or compete for our land or for placement in our schools. Yet the new-age, digital, post-singularity, friendly-AI omnipotence scheme isn't abstract or ethereal… the same inner fear of death in these schemes leads to a world in which humans (a small, exclusive, rich, and arrogant subset of humankind) never actually die, don't end up on another plane, stay right here thank you very much, and continue to eat and drink and build houses and consume scarce resources alongside anyone unfortunate enough to be enjoying(?) their first life right now.

I saw the best minds of my generation destroyed by…
Howl, Allen Ginsberg, 1955

Every generation must at some point gather the courage to stand up and give an accounting for its own inventive forms of arrogant blindness and the wastefulness that litters its meandering. When it is our turn, we will have to laugh and cry at our silly and dangerous undertaking that is the reification of the "life ever after" fantasy. And while we are confessing hubris, we might as well admit our myopic obsession with "search". Google has been our very own very shiny golden cow (is it simply because there aren't any other cows left standing?).

When self interest goes head to head with a broader vantage, vantage wins. Vantage wins by looking deep into the past and the future and seeing that change trumps all. I guess it comes down to the way that an entity selects the scope of its own boundaries. If an entity thinks itself a bounded object living right now, it will resist change in itself or its environment. I can hear the rebuttal, "Entities not driven by selfishness won't protect themselves and won't successfully compete." Entities who see themselves as an actual literal extension of a scheme stretching from the beginning of time laugh at the mention of living forever… because they already do!

The scheme never dies.

Germane to this discussion is how a non-bounded definition of self impacts the decisions one makes as regards the allocation of effort and interest. What would Thermodynamics do?

…Yet all experience is an arch wherethro'
Gleams that untravell'd world whose margin fades
For ever and forever when I move.
How dull it is to pause, to make an end,
To rust unburnish'd, not to shine in use!
As tho' to breathe were life!…
Ulysses, Alfred, Lord Tennyson

Is there something about the development of AI that is qualitatively different than any challenge humans have previously undertaken? Most human labors are not radically impacted by philosophy. A shoe designer might wrestle with the balance between aesthetics and comfort or between comfort and durability, between durability and cost, but questions of to whom or what they choose to pray, or how they deal with death, don't radically impact the shoes they design.

There seems little difference between the products of Hindu and Christian grocers, between the products of Muslim and atheist dentists, road builders, novel writers, painters, gynecologists, and city planners. Even when you compare the daily labor of those practitioners that directly support a particular philosophy – the Monks, the Pastors, the Priests, the Imams, the Holy Them's – you find little difference.

So why should AI be different? Why should it matter who does AI and what world views they hold? I think it is because the design of AI isn't an act in reference to God, it isn't even "playing" God – it is quite literally, actually being God.

What training do we humans, we mammals, we vertebrates, we animals, we eukaryotes, we biological entities, what does our past offer us as preparation for acting the part of God?

It is true that each of us is a singular receptacle of an unbroken chain of evolutionary learning. The lessons of fourteen thousand million years of trial and error are encoded into the very fabric of our being. We are walking, talking reference tables of what works in evolution. Yet very little of that information deals with any kind of understanding or explanation of the process. Nowhere in this great tome of reference in the nucleus of each of our cells does there exist any information that would give context. There is no "this is why evolution works" or "this is why this chunk of genetic code works in the context of the full range of potential solutions" coded into our DNA or our molecular or atomic or quantum structure.

And that makes sense. Reasons and context are high-order abstraction structures, and biology has been built up from the most simple to the most simple of the complex. It is only within the thinnest sliver of the history of evolution that there has been any structural scheme complex enough to wield (store and process) structures as complex as abstraction or language.

We are of evolution, yet none of our structure encodes any knowledge of evolution as a process. What we do know about the process and direction of change we have had to build through culture, language, inquiry. Which is fine if, that is, you have hundreds (or thousands) of millions of years and a whole planet smack in the energy path of a friendly star. This time around we are interested in an accelerated process. No time for blindly exploring every dead end. This time around we explore by way of a map. The map we wield is an abstracted model of the essential influences that shape reality in this universe. The "map" filters away all of the universe that is simply instance of pattern, economically holding only the patterns themselves. The map is the polarized glasses that allow us to ignore anecdote and repetition, revealing only essence, salience.

What biology offers instead of a map is a sophisticated structural scheme for the playing of a very wasteful form of planet-wide blind billiards, a trillion trillion monkeys typing on a trillion trillion DNA typewriters, a sort of evolutionary Brownian motion where direction comes at the cost of almost overwhelming indirection.

And again we ask, "Why does it matter?" Imagine a large ocean liner – say the Queen Elizabeth II. Fill its tanks with fuel, point it in the right direction, and it will steam across any ocean. It really doesn't matter what kind of humans you bring aboard, or what they do once they are there. A big ship, once built, will handle an amazing array of onboard activity or wild shifts in weather. Once built, a ship's structure is so stable and robust that its behavior becomes every bit as predictable. But if you brought dancing girls, water slides, and drunk retirees into the offices of the naval architects while they were designing the ship, it probably wouldn't make it out of the dry dock. The success of any project is unequally sensitive to the initial stages of its development. Getting it right, up front, is more than a good idea, it is the only way any project ever gets built. Acquiring the knowledge to be a passenger on a ship is far easier than acquiring the knowledge to design or build it.

We General AI researchers work at the very earliest stage of a brand new endeavor. This ship has never been built before. Ships have never been built. In a very real sense, "building" has never been built before. We have got to get this right. Where naval architects must first acquire knowledge of hydrodynamics, structural engineering, material science, propulsion, navigation, control systems, ocean depths, weather systems, currents, geography, etc., AI researchers must bring to the project an understanding of pattern, language, information, logic, processing, mathematics, transforms, latency, redundancy, communication, memory, causality, abstraction, limits, topology, grammar, semantics, syntactics, compression, etc.

But this is where my little ship design analogy falls short. AI requires a category of knowledge not required of any other engineering endeavor. Intelligence is a dynamic and additive process, what gets built tomorrow is totally dependent on what gets built today. Building AI therefore requires an understanding of the dynamics of change itself.

Do we understand change?

[to be continued]

Randall Reetz

The Scope of Evolution?

We evolutionists desperately want to quantify evolution. We are embarrassed by the continued lack of measurability and predictability one would expect from a true theory-based science. In the place of true metrics, we defer to the vague, broad, and situationally dependent term: "fitness".

We say that genetic variability in the population of any given lineage will ensure that some individuals express traits that provide a survival advantage. Given the particularity of a given environment's mix of resources and challenges, not all individuals will have the genes necessary to make them fit. We say that there is always some small diversity in any population, a variability caused by sexual mixing, mutation, and a whole slew of non-genetic processes that indirectly affect either the actual genes inherited or the conditions under which those genes are expressed. We say that this variability across a localized population is enough to influence who will survive and who won't, or most importantly, whose genes will be expressed in the next generation and whose won't. We assert that this process is obvious, observable, and predictable. And of course we are correct. We can and do produce laboratory experiments and field observations that show that genes predict traits, that genetic variability is correlated to population variability, and that environmental conditions act as filters selecting towards individuals or populations expressing some genes and against those with others.
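
The filtering claim is easy to watch in a toy simulation (my sketch; the alleles, survival odds, and mutation rate are all invented for illustration):

```python
import random

# Toy population: each individual carries one of two alleles for a single trait.
# The environment acts as a filter: allele "A" confers a higher chance of surviving to reproduce.
SURVIVAL = {"A": 0.6, "a": 0.4}  # invented survival probabilities
MUTATION = 0.01                  # small chance an offspring flips allele

def next_generation(population, size=1000):
    survivors = [g for g in population if random.random() < SURVIVAL[g]]
    offspring = []
    for _ in range(size):
        g = random.choice(survivors)
        if random.random() < MUTATION:
            g = "a" if g == "A" else "A"
        offspring.append(g)
    return offspring

population = ["A"] * 500 + ["a"] * 500  # start with the alleles evenly mixed
for _ in range(30):
    population = next_generation(population)
print("frequency of 'A' after 30 generations:", population.count("A") / len(population))
```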

Well that all sounds good… model driven prediction, physical mechanistic explanation, solid techniques for observation… like a real science. If, that is, you are content to restrict your inquiry to the how.

If you are content with an understanding of evolution that is restricted to biology. If you are content with an understanding of evolution that blindly accepts as dependent factors, such temporal notions and shifting and immeasurable terms as "environment" and "fitness" and never ever asks, "Why?", then you probably won't need to read any further.

But if you, like me, would like to understand evolution in its largest context – independent of domain, and across all time – then you already know that evolution's current answers, though correct and verifiable by any standard, do not yet add up to a true science.

When Newton sought to define motion (and yes, I know that Einstein perfected it through Relativity and quantum theory), he didn't do so only for an apple falling from a tree… but universally, for all physical bodies in all situations. His equations predict the position, speed, and trajectory of an object into any distant future and across any distance. If the same could be said of evolutionary theory, we would have in our possession theory and/or equations that we could use to predict the outcome of evolution across any span of time and in any domain.
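
That universality is the whole point of the comparison; for a body under constant acceleration, the standard forms apply to apples, cannon balls, and planets alike:

\[
F = m a, \qquad v(t) = v_0 + a t, \qquad x(t) = x_0 + v_0 t + \tfrac{1}{2} a t^2.
\]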

Yet, of course we don't. We know all kinds of things about the interaction, within the domain of biology, of germ and progeny, of reproductive selection and mutation, of the relationship between genotype and phenotype, and of the competition over resources and of the crazy alliances and unintuitive and unplanned results of cooperative adaptation (including the tightly wound dance between predator and prey, between parasite and host).

But these processes, no matter how well understood, measured, researched, and modeled, are not what could be called the primitives of evolution. To be primitives, they would have to be universal. They are not universal. Thinking so would be like Newton thinking his laws only applied to cannon balls or things made of metal. So ingrained is the false correlation between biology and evolution that it is often impossible for me to get people to continue a discussion about evolution when I say "Let's talk about evolution in other systems" or "Let's talk about evolution as a domain-independent phenomenon."

If evolution isn't a "general" phenomenon, then someone representing the "special theory of evolution" will have to show how it is that life evolves but other systems do not. I doubt this requirement can be met. It would mean that some line can be drawn in time, before which there wasn't evolution, and after which there was. The logical inconsistency arises when one realizes that, to get to that line, some process suspiciously similar to evolution would have to have transpired to advance complexity to the level just preceding biology.

Another way to frame the overarching question of the why of evolution starts with the realization that competition within an environment isn't restricted to the various individuals of one species. Nature isn't that well refereed. In fact, nature isn't refereed at all. Nature is a free-for-all pitting snail against walrus against blue-green algae. And it doesn't stop there. The ocean currents compete to transfer heat and, in doing so, affect the food available to marine life of all kinds. In a very real sense, in an exactly real sense, a hurricane competes directly with a heron. Even the more stable artifacts of an environment, the topology and physical composition of the geographic features underfoot, compete actively and dynamically with the biota growing in their fissures and above their slowly moving face. Our old and narrowly-bounded definition of that which fits the category of evolution is plainly and absurdly and arbitrarily anthro-, species-, mammal-, or bio-centric, and logically wrong.

Each time I introduce these new and inclusive definitions of the scope of the cast that performs in the play that is evolution, I hear grunts and groans, I hear the rustle of clothes, the uncomfortable shiftings… I hear frustration and discomfort. Hands raise anxiously with questions and protests: "How can non-living things evolve?" "Non-living things don't have genes, without genetics traits can't be transferred to or filtered from future generations!" And the inevitable, "The category containing all things is a useless category!"

I can't say that I don't understand, don't appreciate, or in some real way haven't anticipated and sympathized with these bio-centric apologies. This is how evolution has been framed since Erasmus Darwin and his grandson Charles first seeded the meme. I will therefore take a moment to address these two dominant arguments such that they can be compared with a domain-independent definition of evolution.

First, let's look at evolution's apparent dependence upon genetics. How could evolution work if not for a stable medium (DNA) for the storage and processing of an absolute recipe for the reliable re-creation of individual entities? You may be surprised that my argument starts with an agreement; evolution is absolutely dependent upon the existence of a substrate stable enough to transfer existing structure into the future. But does that stable structure have to be biology's famous double helix? Absolutely not! In fact, it is causally impossible to find a system within this Universe (or any imaginary universe) in which the physical makeup of that system and its constituent parts does not provide the requisite structure to transfer conditions and specific arrangements from any present into any trailing futures. The shape of a river valley is a fundamental carrier of information about that valley into the future. The position, mass, and directional velocity of celestial bodies is a sufficient carrier of structural information to substitute handily for the functional duty that DNA performs in biology. But it is also important to realize and fully absorb the opposite proposition. DNA is not the only way that biological systems reliably transfer information about the present into the future. Biological systems are of course just as physical as galaxies, stars, and planets. The same causal parameters that restrict the outcome of any particular then (as a result of any particular now) restrict causality to an almost impossibly narrow subset of what would be possible in a purely random shaking of the quantum dice. DNA is especially good at what it does, but it doesn't own or even define the category.

The second argument against an all-inclusive, domain-independent definition of evolution – the logical argument against the usefulness of a category that contains everything – well, let's start by parsing it semantically and rhetorically. On its face, there is no way to argue. The category "all" is a category of little worth. There is nothing to be known of something if it can't be compared to something else. But, and this should be obvious, I am not trying to create a category; quite the opposite! My intent is to create a theory of everything. Such a theory would obviously fail if it didn't apply to everything. So, semantically, this "set of everything is a useless set" argument doesn't map to the topic at hand. I get the distinct feeling that the argument is meant pedantically, and purposely, to derail and obfuscate the logical trail I am attempting to walk the audience down. It is a straw man. It looks logical, but it doesn't apply.

A much more instructive and interesting line of questioning would go to the plausibility of a domain independent theory of evolution, what it would or would not change regarding our understanding of the emergence of complex structures (and their accelerating complexity), how it modifies our understanding of biological evolution, whether or not evolution will stand up to the requirements of a "theory of everything" (how it compares with others), and maybe even the effectiveness of my own description of this idea.

So, why is it important to me for evolution to meet the test of a "theory of everything"? First, I loathe the unexplained. If evolution only speaks to the mechanism of change within biology, then evolution would necessarily stand upon a stack of even more foundational truths, and, as I mentioned earlier, other parallel theories would have to be developed to explain the emergence of complexity in non-biological systems. Either way, a vacuum would remain, exposing a need for the development of a foundational theory or set of theories that would support what in biology we call evolution, what in geology we call tectonics (etc.), what in meteorology we call heat dissipation cells, and what in culture we call engineering, cooperative networks, etc.

What makes this whole endeavor so tricky is that we tend to confuse mechanism with purpose. We get so caught up with the almost impossibly complex molecular mechanism (nucleic acids) by which biology builds complexity that we forget to look at why it bothers at all. This why, this great big why, is to my mind far more fundamental and interesting and, once understood, provides a scaffolding from which to comfortably understand and predict the necessary meta-components that need to be present, in some form or another, in any evolving system. And, if you like elegance in a theory, it gets even better. It turns out that a byproduct of evolution as a theory of everything is that it must be based on the two physical principles that have stood the test of universality – thermodynamics and information theory – and it strengthens both of these theories in the one area they were weak: dynamics. Once you understand the motivation and demands of change itself, the particular mechanisms of evolution at play in any one domain are reduced to mere how; no matter how varied, they are but skins worn by a beast whose behavior becomes more and more predictable and universal.

All systems have what it takes to evolve. All systems are composed of components that in some small way differ. That difference might be in how the parts are made, or it might be in how the parts are distributed, and it most probably is both. That is all a system needs for the process of evolution to apply. So long as there is a difference somewhere in the system, or in that system's interaction with the greater environment in which it exists, evolution must be happening all of the time.

So just what is it that evolving things compete for? Is it food? Yes. Is it safety? Yes. Is it comfort? Yes. Is it stability? Yes, that too. For plants, competition is for solar radiation, carbon dioxide, water, a stable place to eat, grow, mate, and raise offspring. We animals need far more energy than our skin could absorb even if all of it were capable of photosynthesis. So we eat things that can. And that is just the way things work. To get ahead, things learn to take advantage of other things. One might even say that the advantage always goes to those entities that can take the greatest advantage of the productive behavior of the greatest number of other things. If you can't make enough energy, then eat a lot of things that can.

One could imagine taking this line of reasoning to the extreme. Let's define fitness as the ability to sit on the apex of a food chain. Of course you have to keep moving. If you don't stay vigilant and obsessive, always trying to find new and better ways to eat more of the other things, you will succumb to competition from things that do.

… to be continued …

Randall Reetz

Real-Time Observation Is Always More Efficient Than After-The-Fact Parsing

Non-random environments (systems):

- have evolved (from a more simple past)
- are (variously) optimized to input conditions and output demands
- are sequentially constructed in layers
- are re-constructed periodically
- are derived from the constraints of pre-existing environments

Understanding (extracting pattern rules and instances of these rules) is made more efficient through observations undertaken over the course of an environment's construction period. Extracting pattern after the fact requires inferring the construction sequence from the existing artifact. The number of possible developmental paths (programmed algorithms) that will result in a particular artifact is infinite. Parsing through this infinite set towards a statistically biased guess at the most likely progenitor is lossy at best and computationally prohibitive.

For instance, the best candidate produced by post-construction parsing (the one with the shortest algorithmic description) may indeed be a more likely (least-energy) progenitor, but it may not reflect the actual causal chain that produced the environment. Projections based on a statistically optimal history will diverge from the futures the environment actually produces.

The only time that a statistical (minimum algorithm) parsing of an environment is guaranteed to match reality is when that parsing includes the whole system (the entire Universe).

Observing the genesis of an environment minimizes the mandatory errors inherent in statistical after-the-fact (Solomonoff) algorithmic probability parsing of a pre-existing system.

Said more succinctly: if you want to grow an optimal system, use algorithmic probability and algorithmic complexity as metrics toward optimization; but if you want to describe a pre-existing system, it is best to build that description by observing its genesis.
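For the record, the two metrics named above have standard formal definitions (the notation below is my addition, for clarity, not part of the original argument). With respect to a fixed universal prefix machine U, the algorithmic (Kolmogorov) complexity of a string x and its Solomonoff algorithmic probability are:

K_U(x) = \min\{\, \ell(p) : U(p) = x \,\}

M_U(x) = \sum_{p \,:\, U(p) = x} 2^{-\ell(p)}

where p ranges over programs and \ell(p) is a program's length in bits. The shorter the programs that reproduce x, the larger its algorithmic probability – which is exactly why the shortest-description candidate is also the statistically most likely progenitor.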

Randall Reetz

DNA replication…



Yes, this shit is so amazing that it makes a hardened evolution theorist like me cough up some creationist thoughts (don't worry, it will be a temporary affliction).




This animation shows the lagging-strand replication process in greater detail. If you are wondering why the lagging strand has to be built in reverse, it is because the two strands of the helix run in opposite directions, which would otherwise have required an exact mirror-image molecular machine to be evolved from scratch. This molecule, the polymerase, is composed of 8005 atoms. The ingenious workaround – running the strand through the same molecule backwards – though mechanically awkward, was far more likely (less complicated) to have evolved than would have been a mirror image of the whole polymerase molecule (or its function). In fact, it is probable that such a mirror molecule might not even be physically possible given the "handedness" (chirality) of the building blocks molecules must be made of. Because of this, I consider the asymmetry of the DNA replication machinery to be evidence of the least-energy-dictated meandering of the evolutionary process.

By pure chance, one particular arrangement of 8005 distinct things would come up only about once in every 8005! (8005 factorial) attempts. Of course molecules don't assemble by pure chance. Even if you dumped the requisite atoms into a box and shook it up, the assembly wouldn't happen instantaneously; some atoms would form small groups, those groups would clump together into larger groups, and so on. The atoms of each element have unique properties that affect their aggregation.
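To put a rough number on "pure chance" (my arithmetic, not part of the original post), Stirling's approximation gives the scale of 8005! directly:

\ln n! \approx n \ln n - n

\log_{10} 8005! \approx \frac{8005 \ln 8005 - 8005}{\ln 10} \approx 27{,}770

so 8005! is roughly 10^27,770 – a number nearly twenty-eight thousand digits long.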

But that isn't the full story either, because the polymerase molecule is itself built, piece by piece, from instructions encoded in DNA.

Look at this…

It is called a Snow Plant. Comes out of the ground like an alien right after the snow melts. This one was just meters up the hill from a trail head on the northeast shore of Lake Tahoe. They are parasites of fungus that grows on the root systems of pine trees. Unreal!  [about 8 inches tall and more brilliant than this picture could ever show]

Just Where Is the Computer that Computes the Universe? (Stephen Wolfram's invisible rhetoric)

Do black holes warp the universe such that it is self-computable? Kurt Godel famously proved that a computer has to be larger than the problem it computes. This places seemingly fatal constraints on the universe as a computation of itself. Saying, as it has become popular to do, that the universe is just one of an infinite set of parallel universes doesn't solve the problem. Even infinities cannot be said to be larger than themselves.



TED talk by Stephen Wolfram on the computable universe.

Is it possible that black holes work as Klein bottles for the whole Universe – stretching space-time back around onto itself? If so, it may be possible to circumvent Godel's causal constraints on the computability of the self, as well as the entropic leaking demanded by the second law of thermodynamics. I admit that these questions are not comfortable. They certainly don't result in the kind of ideas I like to entertain. They spawn ideas that seem to be built of need and not logic. They are jokes written to support a punch line.

But something has to give. Either Godel and Turing are wrong, or there is a part of our universe in which they don't apply. There is no other option. If there is a part of the universe not restricted by incompleteness, then black holes are obvious candidates, if for no other reason than that we don't know much about them. I am at once embarrassed by the premise of this thought and excited to talk openly about what is probably the core hiccup in our scientific understanding of the universe. Any other suggestions? At the very least, this problem seems to point to (at least) five options: 1. a deeper understanding of causality will derive Godel and Turing from a deeper causal layer that also has room for super-computable problems; 2. Godel and Turing are dead wrong; 3. the universe is not at all what it seems to be, rendering all of physics moot; 4. the universe is always, in some real way, larger than itself; and 5. evolution IS the computation of the universe – it happens at the only pace allowed by causality, is an intractable program, and cannot be altered or reduced (event cones being the only barriers between parallel simultaneous execution).

I am challenged by the first option, find the second empirically problematic, am rhetorically repulsed by the third, and simply do not know what to do with the fourth; the fifth is where I place my bets, though I don't fully understand its implications or parameters. Personal affinities aside, we had better face the fact that our understanding of the universe is at odds with the universe itself. That we have a set of basic laws that contradict the existence of the universe as a whole is problematic at best. Disturbing.

One of the unknowns that haunt our effort to understand the universe as a system is the ongoing confusion between what we think of as "primary" reality on the one hand and "descriptive" reality on the other. Real or just apparent, it is a distinction that has motivated the clumsy explorations of the "Post-Modern" theoretical movement – it deserves better. I am not so romantic as to believe that this dichotomy represents a real qualitative difference between the material and the abstract (made up as it is of the same "real" materials), but the confusion may indeed hint toward a sixth option that, once explained and understood, will obliterate the causal contradictions that have so confused our understanding of the largest of all questions. When a chunk of reality is used as an abstraction signifying another part of reality, or a part of the same reality of which the abstraction is built, does that shift in vantage demand a new physics, a new set of evaluation semantics? What modifications does one have to perform to E = mc^2 when one is computing the physical nature of the equation itself? What new term must be added to our most basic physical laws so that the causal and the representative can be brought into harmony?

My own view is that the universe, like all systems, like any system, is always in the only configuration it can be in at that time. Wow, that sounds Taoist, and I absolutely hate it when attempts at rationality result in assessments that resonate so easily with emotionally satisfying sentimentality ("What the Bleep" and such). But the Second Law clearly points to a maxed-out rate as the only possible reading of process at all scales. The computation of anything, including the whole of the universe, is always limping along at the maximum rate dictated by its current configuration. The rate of the process, of the computation, accelerates through time as complexities stack up into self-optimized hierarchies of grammar, but the rate is, at each moment, absolutely maxed out.

Are these daft notions chasing silly abstraction-bounded issues or do they point to a real "new [and necessary] kind of science"?

OK, as usual, Mr. Wolfram has expansive dreams – awesomely audacious and attractively resonant notions. Though, from my own perspective, a perspective I will say is more sober and less rhetorical, there are some huge problems that beg to be exposed.

Wolfram declares: the universe is, at base, computation. Wow, talk about putting the cart before the horse. That the universe and everything in it is "computing" is hard to dispute. Everywhere there is a difference, there will be computation, and so long as there is more than one thing, there is a difference. But computation demands stuff. What we call computation is always, at base, a causal cascade attempting to level an energetic or configurational topology. If you want to call that cascade "computation", I won't disagree. But no computation can happen unless running it diminishes, to some extent, an energy gradient. Computation is slave to the larger, more causal activity that is the dissipation of difference. That a universe results from computation is an entirely different assertion.

When Wolfram says that computation exists below the standard-model causality that is matter and force, time and space, I am suspicious that he is seeking transcendence – a loophole, access by any means out of the confines of the strictures imposed by physical law. That he is smart and talented and prodigiously effective at completing complex and practical projects does not in itself mean that his musings are not fantastic or monstrous.

Let's try a thought experiment. Start from the assumption that Wolfram is correct, that the universe is at base pure computation. His book and this talk hint at the idea that pure computation, running through computational abstraction space, will eventually produce the causality of this universe… and many others. Testing the validity of this assertion is logically impossible. But what we can test is the logical validity of the notion that one could, from the confines of this finite universe, use computation to reach back down to the level of pure computation from which a universe can be made or described. At this level, Turing and Godel both present lock-tight logic showing that Wolfram's assertion is impossible.

In his own examples, Wolfram uses a mountain of human computational space, built on billions of years of "computation" (evolution) and technological configuration, to make his "simple" programs run. There is NOTHING simple about a program that took a mind like Wolfram's to build, stacked as it is on top of an almost bottomless mountain of causal filtering reaching back to the big bang (or before).

To cover for these logical breaches, Wolfram recites his "computational equivalence" mantra. This is a restating of Alan Turing's notion that a computable problem is computable on any so-called "Turing Complete" computer. But the Turing Machine concept does not contend with the causally important constraint that run-time places on a program. Of course there are non-computable problems. But even within the set of problems that a computer can run to completion, there are problems so large that they would require billions of times longer to run than the full life cycle of the universe. Problems like these really aren't computable in any practical sense – causality being highly time- and location-sensitive (isn't that what "causality" means?).

And then there is the parallel-processing issue, its potentials and its pitfalls. One might (a universe might), in the course of designing a system to compute huge programs, decide to break them apart and run sections of the problem on separate machines. Isn't that what nature has done? But there are constraints here as well. Some problems cannot be broken apart at all. Some that can break apart do so into an unwieldy network constrained by time-sensitive links dependent on fast, wide, and accurate communication channels. What if program A needs the result of program B before it can initiate program C, but program A is only relevant for one year and program B takes two years to run?

A large percentage of the set of all potential programs, though theoretically runnable on Turing Machines, are not practically runnable given finite timescales and the availability of computational material resources. If there is a layer of causality below this universe, and that layer is made of much smaller and much more abundant stuff, then it is conceivable that Godel's strictures on the size of a computer won't conflict with the notion that this Universe could be an example of a Turing Complete computer capable of running the universe as a program.

But Wolfram doesn't stop there. In addition to asserting that a universe is the result of a computation, he says that we humans (and/or our technology) will be able to write a small program that perfectly computes the universe, and that it will be so simple (both as a program and, presumably, to write) that we will be able to run it on almost any minimal computer. He cites as an example "rule 30", the elementary cellular automaton that seems to produce endless variety along an algorithmic theme, as evidence that this universe-describing meta-program will be just as easy to discover. One has to ask: would running such a program bud off another universe, or is Wolfram's assertion intentionally restrained to abstraction space? Given the boldness of his declaration that the universe is a computation, it is reasonable to assume that his statements regarding the discovery of a program that computes a universe are meant in the literal sense. Surely he can speak to the issue of abstraction space vs. causal space, the advantages and constraints of each, and how programs use this difference to compute different types of problems. If he does, he doesn't reveal this understanding to his audience. The distinction between abstraction and causality is slippery and central to the concept of computation.

I am convinced that Stephen Wolfram is so lost in the emotional motivations that push him toward his "computable universe" rhetoric that none of his considerable powers of intellect can save him from the fact that he didn't get the evolution memo. Evolution IS the computation. If it could happen any faster, it would have. If he is simply saying that our new understanding of computation will increase the rate and reach of evolution, then I agree. But if he is saying that our first awkward steps into computation reveal enough of the unknown to expose the God program – the program that will complete all other programs (in a decade) – well, I can only say that he is nuts.

Stephen is a smart guy. The fact that a mind so capable can overlook, even actively avoid, the simple logic that shows terminal flaws in his thesis is yet another reminder of the danger that is hubris. That he never speaks to his own motivations, or to the potential fallacies upon which his theory depends, should be worrisome to anyone listening. I suspect that, like religion, his rhetoric so closely parallels general human rhetoric that it will be a rare person who can look behind the curtains and find these logical inconsistencies (no matter how obvious).

I applaud Mr. Wolfram's work. The world is richer as a result. But none of his programming should be taken as a guarantee that his theory, at the level of a computational universe, is sound.

Randall Reetz

Proactive Fix For Deep Sea Oil Platform Blowouts

If off-shore oil platform developers were required to pre-install a permanent emergency oil blowout collection tent at each wellhead, the disaster unfolding in the Gulf of Mexico would never have happened.




The above diagram shows the tent as deployed after a blowout.  Before a blowout, the tent would lie flat on the ocean floor, at the ready.  When a blowout occurs at the wellhead (A), a winch or air-filled ballast (B) pulls the tent up into position over the wellhead.  The tent (C) is supported by an inverted-V-shaped rigid "tent pole" (D) hinged at pivot points (E) anchored to the sea floor.  Once deployed, the tent presents as an inverted pyramid that catches the oil (G) as it rises (oil being lighter than water).  A ten-inch hose (H) is lifted from the apex of the tent to the surface of the ocean by buoys (I) spaced at intervals along its length.  The hose terminates at the surface, where a tanker is positioned to pump the oil into its hold until such time as the wellhead can be sealed.

In another approach, the rigid poles are replaced by buoys lifting the apex of the tent, with four guy lines anchoring the tent's corners to the ocean floor.  This option allows for a larger tent and might prove easier to install and deploy.

The entire contraption could also be prebuilt, pre-packaged, and deployed from a GPS-guided barge or ship – dropping four anchors or concrete standards at equal radius from the wellhead and then deploying the collection tent and pumping hose remotely via at-depth gas-filled buoys or mechanical winch.

Randall Reetz

Devaluing Survival

The goal of evolution is not survival. Rocks survive far better, longer, and more consistently than biological entities. This should be patently obvious. Survival is a tailing (a byproduct) of evolution, and it achieves a level of false importance probably because those of us doing the observing are so short-lived, and thus value survival above almost everything else.

In biology, as in any other system, evolution is not concerned with, nor particularly interested in, individual instantiations of a scheme. A being is but a carrier of scheme. And even that is unimportant to THE scheme, which can only be one thing – the race toward ever faster and more complete degradation of structure and energy.

To this (or any other) universal end, schemes carry competitive advantage simply and only as a function of their ability to "pay attention to" – to abstract – the actual physical, grammatical, causal structure of the universe. And why is this important? Because a scheme will always have a greater effect on the future of the universe if it "knows" more about the future of the universe. Knowing is a compression exercise. Knowing is two things: 1. acquiring a description of the whole system of which one is a part, and 2. the ability to compress that description to its absolute minimum. A system that does these things better than another system has a greater chance of out-competing its rivals and inserting its "knowledge" into future versions of THE (not "its") scheme. The extent to which an entity pays more attention to its survival (or any other self-centered goal) than to THE scheme is the extent to which another entity will be able to out-compete it.

Darwin was a great man with an even greater idea (his grandfather Erasmus even more so). But neither had the chops nor the context to see evolution at a scope larger than individual living entities, or the "species" within which they were grouped, competing amongst each other over resources. There was very little understanding of the concept "resources" during his lifetime – certainly not at the meta or generalized level made possible by today's understanding of information and thermodynamics, and by Einstein's liberation of the symmetry that once separated energy, time, distance, and matter. However, Darwin's historically forgivable myopia has outlasted its contextual ignorance and seems instead to be a natural attribute, or grand attractor, of the human mind. His sophomoric views are repeated ad nauseam to this day.

Randall Reetz

Building Pattern Matching Graphs

I talk a lot about the integral relationship between compression and intelligence.  Here are some simple methods.  We will talk of images but images are not special in any way (just easier to visualize).  Recognizing pattern in an image is easier if you can't see very well.

What?

Blur your eyes and you vastly reduce the information that has to be processed.  Garbage in, brilliance out!



Do this with every image you want to compare.  Make copies and blur them heavily.  Now compress each copy down to a very small bitmap (say 10 by 10 pixels) using a pixel-averaging algorithm.  Now convert each to grayscale.  Now increase the contrast (by about 150 percent).  Store them thus compressed.  Now compare each image to all of the rest: subtract the target image from the compared image.  The result is the delta between the two.  Reduce this difference image to one pixel.  It will have a value somewhere between 0 (identical) and 255 (maximally different), representing the gross difference between the two images.  Perform this comparison between your target image and all of the images in your database.  Rank and group them from most similar to least.
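Here is a minimal sketch of that pipeline in Python, assuming Pillow and NumPy are available (the function names, blur radius, and thumbnail size are my illustrative choices, not prescriptions):

# Thumbnail-signature comparison: blur, shrink, grayscale, boost contrast,
# then score image pairs by their mean pixel difference (0 = identical).
from PIL import Image, ImageFilter, ImageEnhance
import numpy as np

def signature(path, size=10, blur_radius=8, contrast=1.5):
    img = Image.open(path).convert("L")                       # grayscale
    img = img.filter(ImageFilter.GaussianBlur(blur_radius))   # heavy blur
    img = img.resize((size, size))                            # tiny pixel-averaged bitmap
    img = ImageEnhance.Contrast(img).enhance(contrast)        # roughly 150 percent contrast
    return np.asarray(img, dtype=np.float32)

def difference(sig_a, sig_b):
    # Collapse the per-pixel delta to a single 0-255 number (the "one pixel" summary).
    return float(np.abs(sig_a - sig_b).mean())

def rank_matches(target_path, candidate_paths):
    target = signature(target_path)
    scored = [(difference(target, signature(p)), p) for p in candidate_paths]
    return sorted(scored)   # smallest delta (most similar) first

Because every image is reduced to the same hundred grayscale numbers, each comparison costs the same no matter how large the originals are.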

Now average together the top 10 percent of matches. Build a graph that has all of the source images at the bottom; the next layer up is the set of image averages you just made. Now perform the same comparison on this new layer of averages to produce the next layer. Repeat until your top layer contains two images.

Once you have a graph like this, you can quickly find matching images by moving down the graph, making a simple binary choice for the next best match at each level. Very fast. If you also take the trouble to optimize your whole salience graph each time you add a new image, your filter should get smarter and smarter.
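One hypothetical way to assemble and descend such a layered graph, building on the signature() and difference() helpers sketched above (for simplicity this version averages nearest pairs rather than the top ten percent of matches):

# Build a coarse-to-fine match graph: each layer holds averages of similar
# signatures from the layer below; search descends greedily from the top.
def build_graph(signatures):
    layer = [{"sig": s, "children": [], "index": i} for i, s in enumerate(signatures)]
    graph = [layer]
    while len(layer) > 2:
        nodes = list(layer)
        next_layer = []
        while nodes:
            node = nodes.pop(0)
            if not nodes:                                       # odd node out
                next_layer.append({"sig": node["sig"], "children": [node]})
                break
            # Pair the node with its most similar remaining neighbour
            # and store their average one layer up.
            idx = min(range(len(nodes)),
                      key=lambda i: difference(node["sig"], nodes[i]["sig"]))
            partner = nodes.pop(idx)
            avg = (node["sig"] + partner["sig"]) / 2.0
            next_layer.append({"sig": avg, "children": [node, partner]})
        graph.append(next_layer)
        layer = next_layer
    return graph           # graph[-1] is the two-node top layer

def find_match(graph, query_sig):
    candidates = graph[-1]
    while True:
        best = min(candidates, key=lambda n: difference(query_sig, n["sig"]))
        if not best["children"]:
            return best    # a leaf; best["index"] identifies the stored image
        candidates = best["children"]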

To increase the fidelity of your intelligence, simply compare the individual regions of your image that were most salient in the hierarchical filtering that cascaded down to cause the match. This process can back-propagate up the match hierarchy to help refine salience in the filter graph. The same process works for text or sound or video or topology of any kind. If you have information, this process will find pattern in it. There are lots of parameters to tweak. Work the parameters into your fitness or salience breeding algorithm and you have a living, breathing, learning intelligence. Do it right and you shouldn't have to know which category your information originated from (video, sound, text, numbers, binary, etc.). Your system should find those categories automatically.

Remember that intelligence is a lossy compression problem. What to pay attention to, what to ignore. What to save, what to throw away. And finally, how to store your compressed patterns such that the resulting graph says something real about the meta-patterns that exist natively in your source set.

This whole approach has a history, of course. Over the history of human scientific and practical thought, many people have settled on the idea that fast filtering is most efficient when it is initiated on a highly compressed pattern range. It is more efficient, for instance, to go right to the "J's" than to compare the word "joy" to every word in a dictionary or database. This efficiency is only available if your match set is highly structured (in this example, alphabetically ordered). And one can do far better still than a flat alphabetically ordered list. Let's say there are a million words in a dictionary. Set up a graph, an inverted pyramid, where level one has two "folders", each named for the last word in the half of the alphabetically ordered list it references. The first folder would reference all words from "A" to something like "Monolith" (and is named "Monolith"); the second folder at that level contains all words alphabetically after "Monolith" (perhaps starting with "Monolithic") and is named "Zyzer" (or whatever the last word in the dictionary is). Now put two folders in each of these folders to make up the second tier of your sorting graph: at the second level you will have 4 folders. Do this again at the third level and you will have 8 folders, each named for the last word in the branch of the graph beneath it. It takes only 20 levels to reference a million words, and 24 levels for 15 million words. That is a savings of nearly five orders of magnitude (roughly 20 comparisons instead of a million) over an unstructured scan.
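The arithmetic checks out in a few lines of Python – a sorted list probed by binary search (the standard-library bisect module) needs about log2(N) comparisons per lookup (the word list here is a stand-in):

# A sorted word list searched by bisection needs only ceil(log2(N)) probes:
# about 20 for a million entries, about 24 for fifteen million.
import bisect
import math

def contains(sorted_words, word):
    i = bisect.bisect_left(sorted_words, word)
    return i < len(sorted_words) and sorted_words[i] == word

words = sorted(["joy", "monolith", "monolithic", "zyzer"])
print(contains(words, "joy"))          # True, found in ~log2(len(words)) probes

for n in (1_000_000, 15_000_000):
    print(n, "words ->", math.ceil(math.log2(n)), "comparisons, worst case")
# prints: 1000000 words -> 20 comparisons, worst case
#         15000000 words -> 24 comparisons, worst case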

A clever administrative assistant working for Edwin Hubble (or was it Wilson? I can't find the reference) made punch cards of star positions from observational photo plates of the heavens and was able to perform fast searches for quickly moving stars by running knitting needles into the punch holes in a stack of cards.



Pens A and B found their way through all cards. Pen C hits the second card.
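In modern terms the needle trick is a bitwise AND over the whole card stack: each needle position is a bit, and a card is selected only if it is punched at every probed position. A toy version in Python (the card data is invented for illustration):

# Each card becomes a bitmask of punched positions; the cards a set of
# needles would pass through are those matching on every probed bit.
def needle_select(cards, needle_positions):
    probe = 0
    for pos in needle_positions:
        probe |= 1 << pos
    return [name for name, holes in cards.items() if holes & probe == probe]

cards = {
    "plate-01": 0b1011,   # holes at positions 0, 1, 3
    "plate-02": 0b0011,   # holes at positions 0, 1
    "plate-03": 0b1111,   # holes at positions 0, 1, 2, 3
}
print(needle_select(cards, [0, 1]))      # all three cards pass
print(needle_select(cards, [0, 1, 2]))   # only plate-03 passes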

What matters, what is salient, is always that which is proximal in the correct context. What matters is what is near the object of focus at some specific point in time.

Let's go back to the image search introduced earlier. As in the alphabetical word search just mentioned, what matters isn't the search method (that is just a perk), but rather the association graph that is produced over the course of many searches. This structured graph represents a meta-pattern inherent in the source data set. If the source data is structurally non-random, its structure will encode part of its semantic content.  If this is the case, the data can be assumed to have been encoded according to a set of structural rules that themselves encode a grammar.

For each of these grammatical rule sets (chunking/combinatorial schemes), one should be able to represent content as a meta-pattern graph. One graph representing a set of words might be pointers into the full lexicon graph. A second graph of the same source text might represent the ordered proximity of each word to its neighbors (remember that the alphabetical meta-pattern graph simply represents neighbors at the character-chunk level).

What gets interesting, of course, are the meta-graphs that can be produced when these structured graphs are cross-compressed. In human cognition these meta-graphs are called associative memory (experience), and they are why we can quickly reference a memory when we see a color or our nose picks up a scent.

At base, all of these storage and processing tricks depend on two things: storing data structures that allow fast matching, and getting rid of details that don't matter. In concert, these two goals result in self-optimization toward maximal compression.

The map MUST be smaller than the territory or it isn't of any value.

It MUST hold ONLY those aspects of the territory that matter to the entity referencing them. Consider the difference between photos and text: a photo-sensor in a digital camera knows nothing of human salience. It sees all points of the visual plane as equal. The memory chips on which these color points are stored see all pixels as equal. So far, no compression, and no salience. Salience only appears at the level at which digital photos originate (who took them, where, and when). Text, on the other hand, is usually highly compressed from the very beginning. What a person writes about, and how they write it, always represents a very very very small subset of