Showing posts with label process. Show all posts

Building Pattern Matching Graphs

I talk a lot about the integral relationship between compression and intelligence.  Here are some simple methods.  We will talk of images but images are not special in any way (just easier to visualize).  Recognizing pattern in an image is easier if you can't see very well.

What?

Blur your eyes and you vastly reduce the information that has to be processed.  Garbage in, brilliance out!



Do this with every image you want to compare. Make a copy of each and blur it heavily. Now compress it down to a very small bitmap (say 10 by 10 pixels) using a pixel-averaging algorithm. Convert each to grayscale. Now increase the contrast (by about 150 percent). Store them thus compressed. Now compare each image to all of the rest: subtract the target image from the compared image. The result will be the delta between the two. Reduce this difference image to one pixel. It will have a value somewhere between pure white (0) and pure black (255), representing the gross difference between the two images. Perform this comparison between your target image and all of the images in your database. Rank and group them from most similar to least.
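The pipeline above can be sketched in plain Python with no imaging library, treating an image as a 2D list of grayscale values (0 = white, 255 = black, matching the convention used here). The function names and parameter choices are my own, not from any library; a real system would use proper Gaussian blur and tuned thresholds.

```python
def box_blur(img, radius=1):
    """Heavy blur via simple box averaging -- throws away fine detail."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[j][i]
                    for j in range(max(0, y - radius), min(h, y + radius + 1))
                    for i in range(max(0, x - radius), min(w, x + radius + 1))]
            out[y][x] = sum(vals) // len(vals)
    return out

def downsample(img, size=10):
    """Shrink to a size x size bitmap by averaging pixel blocks."""
    h, w = len(img), len(img[0])
    out = [[0] * size for _ in range(size)]
    for y in range(size):
        for x in range(size):
            ys = range(y * h // size, max(y * h // size + 1, (y + 1) * h // size))
            xs = range(x * w // size, max(x * w // size + 1, (x + 1) * w // size))
            vals = [img[j][i] for j in ys for i in xs]
            out[y][x] = sum(vals) // len(vals)
    return out

def boost_contrast(img, factor=1.5):
    """Stretch values away from mid-grey by ~150 percent, clamped to 0..255."""
    return [[min(255, max(0, int(128 + (v - 128) * factor))) for v in row]
            for row in img]

def fingerprint(img, size=10):
    """Blur, compress, and contrast-boost: the stored comparison form."""
    return boost_contrast(downsample(box_blur(img), size))

def difference(fp_a, fp_b):
    """Mean absolute pixel delta -- the 'reduce to one pixel' value, 0..255."""
    flat = [abs(a - b) for ra, rb in zip(fp_a, fp_b) for a, b in zip(ra, rb)]
    return sum(flat) // len(flat)
```

To rank a database, compute `difference(fingerprint(target), fingerprint(other))` for every stored image and sort ascending.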

Now average together the top 10 percent of matches. Build a graph with all of the source images at the bottom; the next layer up is the image averages you just made. Perform the same comparison-and-averaging on this new layer of averages to produce the next layer. Repeat until your top layer contains two images.

Once you have a graph like this, you can quickly find matching images by moving down the graph and making simple binary choices for the next best match. Very fast. If you also take the trouble to optimize your whole salience graph each time you add a new image, your filter should get smarter and smarter.
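A toy version of that layered-averages graph, with each fingerprint reduced to a 1-D feature vector for brevity. The structure and names are my own; I pair neighbors after a simple sort as a crude stand-in for the similarity grouping described above, and the binary descent is only a heuristic, so it can miss the globally best match.

```python
def average(a, b):
    return tuple((x + y) / 2 for x, y in zip(a, b))

def dist(a, b):
    """Gross difference between two feature vectors."""
    return sum(abs(x - y) for x, y in zip(a, b))

class Node:
    def __init__(self, vec, children=()):
        self.vec, self.children = vec, list(children)

def build_graph(vectors):
    """Bottom layer: source fingerprints. Each layer above: pairwise averages."""
    layer = [Node(tuple(v)) for v in sorted(vectors)]  # crude similarity grouping
    while len(layer) > 2:
        nxt = []
        for i in range(0, len(layer) - 1, 2):
            a, b = layer[i], layer[i + 1]
            nxt.append(Node(average(a.vec, b.vec), [a, b]))
        if len(layer) % 2:          # carry an odd leftover node upward
            nxt.append(layer[-1])
        layer = nxt
    return layer                    # top layer: at most two nodes

def search(top, query):
    """Descend the graph making a simple binary choice at each level."""
    node = min(top, key=lambda n: dist(n.vec, query))
    while node.children:
        node = min(node.children, key=lambda n: dist(n.vec, query))
    return node.vec
```

Each query touches only one node per layer, so lookup cost grows with the graph's depth (roughly log of the collection size) rather than with the number of stored images.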

To increase the fidelity of your intelligence, simply compare the individual regions of your image that were most salient in the hierarchical filtering that cascaded down to cause the match. This process can back-propagate up the match hierarchy to help refine salience in the filter graph. The same process works for text or sound or video or topology of any kind. If you have information, this process will find pattern in it. There are lots of parameters to tweak. Work the parameters into your fitness or salience breeding algorithm and you have a living, breathing, learning intelligence. Do it right and you shouldn't have to know which category your information originated from (video, sound, text, numbers, binary, etc.). Your system should find those categories automatically.

Remember that intelligence is a lossy compression problem. What to pay attention to, what to ignore. What to save, what to throw away. And finally, how to store your compressed patterns such that the resulting graph says something real about the meta-patterns that exist natively in your source set.

This whole approach has a history of course. Over the history of human scientific and practical thought, many people have settled on the idea that fast filtering is most efficient when it is initiated on a highly compressed pattern range. It is more efficient, for instance, to go right to the "J's" than to compare the word "joy" to every word in a dictionary or database. This efficiency is only available if your match set is highly structured (in this example, alphabetically ordered). And one can do way, way better than a flat alphabetically ordered list. Let's say there are a million words in a dictionary. Set up a graph, an inverted pyramid, where the first level has two "folders", each named for the last word in its half of the alphabetized set. The first folder would reference all words from "A" to something like "Monolith" (and is named "Monolith"). The second folder at that level contains all words alphabetically greater than "Monolith" (maybe starting with "Monolithic") and is named "Zyzer" (or whatever the last word is in the dictionary). Now put two folders in each of these folders to make up the second tier of your sorting graph; at the second level you will have 4 folders. Do this again at the third level and you will have 8 folders, each named for the last word referenced in the tier above it. It will take only 20 levels to reference a million words, 24 levels for 15 million words. That represents nearly five orders of magnitude fewer comparisons than an unstructured scan.
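The folder tree described above is just binary search over a sorted list: each level halves the candidate set, so a million words need about log2(1,000,000), or 20, levels. A minimal sketch (the function name and step counter are my own):

```python
import math

def lookup(sorted_words, target):
    """Binary search with an explicit step counter.

    Returns (found, comparisons): each loop iteration is one 'folder'
    decision, halving the remaining candidate set.
    """
    lo, hi, steps = 0, len(sorted_words), 0
    while lo < hi:
        mid = (lo + hi) // 2
        steps += 1
        if sorted_words[mid] < target:
            lo = mid + 1
        elif sorted_words[mid] > target:
            hi = mid
        else:
            return True, steps
    return False, steps

print(math.ceil(math.log2(1_000_000)))   # 20 levels for a million words
print(math.ceil(math.log2(15_000_000)))  # 24 levels for fifteen million
```

Python's standard `bisect` module does the same halving without the step counter.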

A clever administrative assistant working for Edwin Hubble (or was it Wilson, I can't find the reference?) made punch cards of star positions from observational photo plates of the heavens and was able to perform fast searches for quickly moving stars by running knitting needles through the punch holes in a stack of cards.



Pens A and B found their way through all cards. Pen C hits the second card.

What matters, what is salient, is always that which is proximal in the correct context. What matters is what is near the object of focus at some specific point in time.

Let's go back to the image search I introduced earlier. As in the alphabetical word search just mentioned, what should matter isn't the search method (that is just a perk), but rather the association graph that is produced over the course of many searches. This structured graph represents a meta-pattern inherent in the source data set. If the source data is structurally non-random, its structure will encode part of its semantic content. If this is the case, the data can be assumed to have been encoded according to a set of structural rules that themselves encode a grammar.

For each of these grammatical rule sets (chunking/combinatorial schemes) one should be able to represent content as a meta-pattern graph. One of the graphs representing a set of words might be pointers to the full lexicon graph. A second graph of the same source text might represent the ordered proximity of each word to its neighbors (remember the alphabetical meta-pattern graph simply represents the neighbors at the character chunk level).

What gets interesting of course are the meta-graphs that can be produced when these structured graphs are cross compressed. In human cognition these meta-graphs are called associative memory (experience) and are why we can quickly reference a memory when we see a color or our nose picks up a scent.

At base, all of these storage and processing tricks depend on two things, storing data structures that allow fast matching, and getting rid of details that don't matter. In concert these two goals result in a self optimization towards maximal compression.

The map MUST be smaller than the territory or it isn't of any value.

It MUST hold ONLY those aspects of the territory that matter to the entity referencing them. Consider the difference between photos and text: a photo-sensor in a digital camera knows nothing of human salience. It sees all points of the visual plane as equal. The memory chips upon which these color points are stored see all pixels as equal. So far, no compression, and no salience. Salience only appears at the level of a digital photo's origin (who took it, where, and when). Text, on the other hand, is usually highly compressed from the very beginning. What a person writes about, and how they write it, always represents a very, very small subset of everything that could have been written.

Evolution: Optimizing a Definition of Fitness

We think of evolution as a process that optimizes organisms (things) through the filter of fitness. Fitness as means, the species as end. I have long suspected that this interpretation is wrong-headed, and results in conceptual mistakes that ripple through all of science, blinding us to much that could be understood about the Universe, process, and the basic shape and behavior of reality.

So let's flip it. We'll instead, re-frame evolution as a process that uses things (organisms, species, systems, ideas, etc.) as a means (channel, resource, armature, vehicle) for the optimization of fitness. From this inverted vantage, optimizing the criteria of fitness is the goal – species, nothing more than a convenient means.

It always feels wrong to talk of evolution's "goals". Certainly a universe doesn't start out with a plan or agenda. Things like plans and agendas are only possible within advanced abstraction apparatus like a brain or computer. Universes start out simple and chaotic. Only chance causal interactions, played out amongst a universe-sized accumulation of matter and force over ridiculous amounts of time, will lead to the types of rare and energy-demanding structures that can "think" up things like plans and agendas. So when I talk here of "a process that optimizes", I make use of concepts and terms that are more generally associated with self, ideation, and will – with the products of advanced abstraction machinery found in humans and maybe eventually in thinking machines. But what I mean to convey is the direction of a process. That processes have direction, and that direction is (or can be) independent of the types of advanced computation necessary for things like planning and intent, is in fact the exact conceptual jump that the idea or discovery of "evolution" demands. Evolution = direction without intent.

The directionality we see in evolving systems (all systems) is blatantly and obviously non-random. Our job then is to understand, explain, and ultimately, exploit this understanding. Because we humans have trouble imagining non-random direction coming from systems without a brain, a soul, an agenda, we are left with a slim set of emotionally acceptable options: anthropomorphizing the universe and evolution, inserting a deity, or simply rejecting evolution (or reality) out of hand. The non-emotional option, the science option, evolution, recovers from this dissonance through the application of inductive logic, physical evidence, and frankly, by simply offering an emotionally dissonant option.

The thesis of this essay is the suggestion that evolution might be agnostic to optimization of species and is instead simply using species as a conduit for the optimization of this thing called fitness.  That fitness might in fact be more real, and species, ethereal.

This entire domain is so fraught with potential misinterpretation. I feel a constant urge to over-explain, to be extra careful, to make sure the reader isn't thinking one thing when I mean something else. For instance, I feel a need to define the term "species", especially because I am using it in a more general way than is usually required within the boundaries of its original domain, biology. This is because I am convinced that evolution is a universal process, that it has nothing in particular to do with biology or life, that it happens in all systems, all of the time, an unavoidable aspect of any reality.

So when I write "species"  I mean the thing or system that is "evolving" – the animal, the planet, the culture, the idea, the group attitude in line at the post office this morning.  And in the context of this essay, I use "species" to mean the thing upon which "fitness" acts (as judge, jury, pimp, or executioner).  Species is the thing, fitness the criteria that molds the thing.

But by this definition, species is corporeal and measurable, suggesting that fitness is… is what? If we are talking about something, shouldn't we have some way of examining it, measuring it, comparing it, holding it in our hands, flipping it over, squeezing it, splitting it open and looking at its parts? That seems a more reasonable proposition for species than for fitness.

We like to think we can man-handle a thing like species, take it to the lab and do lab things.  But maybe that is more illusion than truth.  We can dissect a frog, but that particular frog isn't really the species "frog".  The species "frog" is an average, a canonical concept, a Platonic solid, a moving target, an arbitrarily bounded collection, a gelatinous arrow through foggy potentialities.

I was en route to showing that "fitness" is a real thing, but all I accomplished was a picking away at the real-ness of "species". Maybe that will end up being more helpful anyway. The colloquial image of species, even amongst evolution theorists, has always seemed more visceral, more thing-like than fitness. We point to a single nervous animal on the savanna and declare, "that is gazelle". Worse, we often fail to make a semantic distinction between that declaration and the categorical "gazelle is that". That fitness is a much harder thing to point to really doesn't mean it is less real, or, as I have shown, that real-ness applies to either.

Now that I've reduced both species and fitness to the realm of concept, it should be easier to argue my thesis.

Even at the concept level, "species" is a thorny concept fraught with pedagogy and hubris.  It is hard to look at a penguin, a porpoise, or a planet and imagine something more amazing, more evolved than its current form.  Which probably goes a long way to explain why we have a natural tendency to overlay onto the concept "species" notions of perfect form, of an apex, a pre-determined goal.  But this certainly has less to do with species and more to do with the limits of our cognitive facility.  It would be absurd to assume that this particular now is in some way special, that forms are complete and that we just happen to inhabit the planet just at the point when evolution has finally and completely finished its big 14 billion year project.

OK, the apologies have been meted out, the slippery territory marked, the standard arguments abutted, the inconsistencies delineated, the usual misinterpretations admitted. These are standard precursors to any serious discussion in the study of evolution and bear witness to both the complexity of the subject and the apparent inability of the brain to readily make sense of its many dimensions.

So why should I want to reorder the relative hierarchy of fitness and species?  For one, I have always felt the standard Darwinian definition of evolution to be a bit circular.  Wow, before that comment ruins my standing, I had better get to work defending Darwin.  I am a "standard model" realist.  Darwin got most or all of evolution correct.  Especially if you restrict your focus to biology.  Darwin is the dude!  The positions I detail here are meant as additions, as icing on the cake Darwin baked.  But Darwin built his theory around life and his bio-centrist focus on evolution restricts and warps the applicable idea-space it scopes.  I always say that Darwin explained the how of evolution with regard to biology, and that I am interested in the why of evolution with regard to all systems.

To restrict the scope of evolution to biology, is to somehow draw a line in the sand between life and not-life, a special sauce within life that categorically separates it from all other systems.  I can't find that line.  So I am left with the responsibility of understanding and defining evolution as a domain independent attribute of any system or system of systems.

Structurally, all systems are ordered as hierarchical stacks. Each level receives aggregate structures from lower (previously constructed) levels and produces from these new super-aggregates, which it in turn passes to the next higher level. That this process of aggregate layering is historically dependent is obvious. The non-obvious mapping is to energy. The lowest levels of the hierarchy, the earliest levels, represent high-energy processes, energy levels that would rip apart aggregates at higher levels. In this universe, all systems are built upon the aggregation processes laid down in the earliest moments, aggregations that occur at the upper limits of heat and pressure – strings, quarks, sub-atomic particles, atoms, molecules. Each corresponds to a matching environmental energy level, an energy level that is cooler and less pressurized than the ones that came before it. The universe gets cooler and more dispersed. Always. The growth of complexity, evolution, is dependent upon this predictable and unavoidable dissipation of energy over time.

Those who would argue that life is special, that evolution is exclusive to it, well they are obligated to draw a definitive boundary between life and everything else, and because life, like everything else, is dependent upon the historical layering of aggregate systems, will have to draw that line historically.  They will have to show a moment in time before which there was not life or evolution and after which there was life and evolution.

There are many ways to define life such that a line could be drawn. If you say, as most do, that life is that set of systems that incorporate and utilize both ribonucleic and deoxyribonucleic acids, well, there is surely some moment in the past which would accurately delineate those earlier systems which didn't have both RNA and DNA from the later systems that did. Such a definition is somewhat arbitrary, but all categorical definitions are. But if you seek instead to hinge your definition of life on the process of evolution, then you are faced with a tautologically intractable problem. Either you must accept the nonsensical proposition that the universe started with RNA/DNA preformed, or the more rational causal proposition that evolution is independent of and preceded biology, preparing, over vast periods of time, the aggregate ingredients necessary for the super-aggregate we call life. If you insist despite this logic that evolution is a property exclusive to biology, then you are left with the thorny problem of explaining aggregation processes happening simultaneous to and independent of biology. Processes that continue to produce atoms, molecules, stars, planets, galaxies, cultures, ideas, sand dunes, ocean currents, etc. And you must also show how these continuous and omnipresent processes are qualitatively different when they happen outside of systems that use RNA and DNA from when they happen within them. But that isn't enough: you must also show either that no system after biology will ever evolve, or that the entire future of evolution will happen within the confines of biological systems.

The evidence and logic weighs overwhelmingly on the side of life being an arbitrarily bounded category, and evolution defining a process unbounded by domain, history, or complexity. Both of which are difficult concepts for humans to accept.  We like to think we belong to a category made exclusive by some secret sauce, some magic that applies in some measure only to life, and which has reached its zenith in the human form or spirit.  We like to imagine evolution to be that process that shaped the shapeless gasses of primal soup into the perfect form that we now enjoy.  Wow.  The ego and hubris drips and pools.

If I may, back to fitness. The above arguments are crafted to shake we humans free of our innate bio/human/self centrism, and to show how such hubris works to emphasize contemporary corporeal form over timeless ephemeral process, placing a sort of artificial spotlight on species and downgrading, by contrast, fitness. It's only natural. And it is wrong.

The tendency to focus on species is easy to understand. If you are looking at an animal and asking questions about evolution and process, it is only natural that the scope of your thinking would be restricted to that animal, that species, that family of life and its struggle to survive. Even when you back yourself out to a vantage wide enough to include all of life, the full fan of Linnaean taxonomy over the full 4.5-billion-year crawl, the focus is still thing, still survival, still some sort of cosmic engineering project. It is only when you back all the way out, when you look at all that is, the entire Universe, every moment since the big bang, life and the stuff between, in, and of it, that you might be forced to ask questions big enough to frame the why of evolution.

The why of evolution has to be big enough to comfortably hold all change, all systems, any aggregate and any aggregate chain, not just those that succeed, not just those that are fit, not just things that can be called things… everything!  Any process that explains the existence of one system should also be able to explain every other system.  Universality, at this depth of scope demands a bigger reason than can be explained by the concept "species".  Darwin's big how in biology then becomes a local mapping to a specific domain.  It isn't wrong, it just isn't universal.  You can know everything about pianos, but won't really understand music until you know enough about enough instruments that you begin to see the formative patterns that unite, from which all instruments are informed.

Species, if it is a valid concept at all, must be but a subset, an example, a non-special representative, a member of a perfectly inclusive and domain-independent set. Sets that include everything are not informative as sets. So we look elsewhere. That species, as a label pointing to the subject of evolution, can equally be applied to any thing forces us to look elsewhere for that which explains the big why of change. Change must not reside in thing, product, tailings, result, or even detritus. If the big why isn't thing, but has to explain thing, any thing, all things, then the big why must be a process or action or modifier or pressure. Some common attribute of any change regardless of domain. What process is agnostic to domain?

In a word, entropy. In an attempt to determine the maximum work that could be extracted from any source of energy, steam era engineers teased apart the relationship between source and output and found an intriguing and strangely universal leakage.  Energy, when used, degrades, diffuses, is no longer as useful or available to the original process.  When scientists discovered the same leak, this time with structure, a strange universality began to appear.  Energy and information, force and structure, an unexpected symmetry.  Then Einstein revealed the exact relationship between energy, time, space, and mass, allowing thermodynamic transforms on all physical terms.  Despite initial objections by Stephen Hawking (and others attracted to the notion that nooks and crannies of the universe might provide respite from the second law's rigid causal prescriptions), Leonard Susskind and others have brought both the quantum world of the impossibly small and the black hole world of the impossibly big, together under a shared entropic umbrella.  What we are left with, like it or not, is a universal.  A universal that is universal to all physical domains and dimensions, regardless of scale.  Wow. That doesn't happen very often in nature.  That hasn't happened in science.  Ever.  Significant?

In his 1928 book, The Nature of the Physical World, Sir Arthur Eddington put it this way:

"The law that entropy always increases, holds, I think, the supreme position among the laws of Nature. If someone points out to you that your pet theory of the universe is in disagreement with Maxwell's equations - then so much the worse for Maxwell's equations. If it is found to be contradicted by observation - well, these experimentalists do bungle things sometimes. But if your theory is found to be against the second law of thermodynamics I can give you no hope; there is nothing for it but to collapse in deepest humiliation."

Any theory or assessment of evolution that is not written in response to thermodynamics, information theory, and entropy would seem to be a theory not particularly interested in validity.  That the laws of thermodynamics and evolution both direct their unblinking stares upon the domain of change would seem to me an invitation to at least begin to consider the possibility of a concerted union between the two.

[more to come…]

Randall Reetz

Cognition Is (and isn't):

What is really going on in cognition, thinking, intelligence, processing?

At base cognition is two things:

1. Physical storage of an abstraction
2. Processing across that abstraction

Key to an understanding of cognition of any kind is persistence. An abstraction must be physical and it must be stable. In this case, stability means, at minimum, the structural resistance necessary to allow processing without that processing unduly changing the data's original order or structural layout.

The causal constraints and limits of both systems, abstraction and processing, must work such that neither prohibits or destroys the other.

Riding on top of this abstraction storage/processing dance is the necessity of a cognition system to be energy agnostic with regard to syntactic mapping. This means that it shouldn't take more energy to store and process the string "I ate my lunch" than it takes to store and process the string, "I ate my house".

Syntactic mapping (abstraction storage) and walking those maps (abstraction processing) must be energy agnostic. The abstraction space must be topologically flat with respect to the energy necessary to both store and process.

Thermodynamically, such a system allows maximum variability and novelty at minimum cost.

What if's… playing out, at a safe distance, simulations, virtualizations of events and situations which would, in actuality, result in huge and direct consequences, is the great advantage of any abstraction system. A powerful cognition system is one that can propagate endless variations on a theme, and do so at low energy cost.

And yet. And yet… syntactical topological flatness carries its own obvious disadvantages. If it takes no more energy to write and read "I ate my house" than it does to write or process the statement "I ate my lunch", how does one go about measuring validity in an abstraction? How does one store and process the very necessary topological inequality that leads to semantic landscapes… to causal distinction?

The flexibility necessary in an optimal syntactic system, topological flatness, works against the validity mapping that makes semantics topologically rugged, that gives an abstraction syntactic fidelity.

This problem is solved by biology, by mind, through learning. Learning is a physical process. As such it is sensitive to the direction of time. Learning is growth. Growth is directional. Growth is additive. Learning takes aggregate structures from any present and builds super-aggregate structures that can be further aggregated in the next moment.

I will go so far as to suggest that definitions of both evolution and complexity hinge on some metric of a system's capacity to physically abstract salient aspects of the environment in which it is situated. This abstraction might be as complex as experience stored as memory in mind, and it may be as simple as a shape that maximizes (or minimizes) surface area.

A growth system is a system that cannot help but be organized ontologically. A system that is laid up through time is a system that reflects the hierarchy of influence by which its environment is organized. Think of it this way: the strongest forces affecting an environment will overwhelm and wipe out structures based on less energetic forces. Cosmological evolution provides an easy-to-understand example. The heat and pressure right after the big bang only allow aggregates based on the most powerful forces. Quarks form first; this lowers the temperature and pressure enough for sub-atomic particles, then atoms. Once the heat and pressure are low enough, once the environmental energy is less than the relatively weak electrical bonds of chemistry, molecules can precipitate from the atomic soup. The point is that evolved systems (all systems) are morphological ontologies that accurately abstract the energy histories of the environments from which they evolved. The layered grammars that define the shape and structure (and behavior) of any molecule reflect the energy epochs from which they were formed. This is learning. It is exactly the same phenomenon that produces any abstraction and processing system. Mind and molecule, at least with regard to structure (data) and processing (environment), are the result of identical process, and as a result will (statistically) represent the energy ontology that is the environment from which they were formed.

It is for this reason that the ontological structure of any growth system is always and necessarily organized semantically. Regardless of domain, if a system grew into existence, an observer can assume overwhelming semantic relevance that differentiates those things that appeared earlier (causally more energetic) from those things that appeared later (causally less energetic).

This is true of all systems. All systems exhibit semantic contingency as a result of growth. Cognition systems included (but not special). The mind (a mind, any mind) is an evolving system. Intelligence evolves over the life span of an individual in the same way that the proclivity towards intelligence evolves over the life span of the species (or deeper). Evolving systems cannot be expressed as equation. If they could, evolution wouldn't be necessary, wouldn't happen. Math-obsessed people have a tendency to confuse the feeling of the concept of pure abstraction with the causal reality of processing (that allows them to experience this confusion).

Just as important, data is only intelligible (process-able, representative, model, abstraction) if it is made of parts in a specific and stable arrangement to one another. The zeroth law of computation is that information or data or abstraction must be made of physical parts. The crazies who advocate a "pure math" form of mind or information simply sidestep this most important aspect of information. This is why quantum computing is in reality something completely different than the information-as-ether inclination of the dualists and metaphysics nuts. Where it may indeed be true that the universe (any universe) has to, by principle, be describable, abstract-able by a self-consistent system of logic, that is not the same whatsoever as the claim that the universe IS (purely and only) math.

Logic is an abstraction. As such it needs a physical realm in which to hold its concepts as parts in steady and constant and particular relation to each other.

My guess is that we confuse the FEELING of math as ethereal and non-corporeal pure-concept with the reality, which of course necessitates both a physical REPRESENTATION (in neural memory or on paper or chip or disc) and a set of physical PROCESSING MACHINERY to crawl it and perform transforms on it.

What feels like "pure math" only FEELS like anything because of the physicality that is our brains as corporeal machinery as they represent and process a very physical entity that IS logic.

We make this mistake all day long. When the only access to reality we have is through our abstraction mechanism, we begin to confuse the theater that is processing with that which is being processed, and ultimately with that which the thing being processed represents.

Some of the things the mind (any mind) processes are abstractions, stand-ins for other external objects and processes. Other things the mind processes only and ever exist in the mind. But that doesn't make them any less physical. Alfred Korzybski is famous for declaring truthfully, "The map is not the territory!" But this statement is not logically similar to the false declaration, "The map is not territory!". Abstractions are always and only physical things. The physics of a map, an abstraction system, a language, a grammar, is rarely the same as the physics of the things that map is meant to represent, but the map always obeys and is consistent with some set of physical causal forces and structures built of them.

What one can say is that abstraction systems are either lossy or they aren't useful as abstraction systems. The point of an abstraction is flexibility and processing efficiency. A map of a mountain range could be built out of rocks and made larger than the original it represents. But that would very much defeat the purpose. On the other hand, one is advised to understand that the tradeoff of the flexibility of an effective map is that a great deal of detail has been excluded.

Yet, again and again, we ourselves, as abstraction machines, confuse the all too important difference between representation and what is represented.

Until we get clear on this, any and all attempts at even squaring up against the problem of machine intelligence will fail.

[more later…]

Randall Reetz

Complexity is Self-Limiting… Evolution Says "So What!" But At What Cost?

Complex systems tend towards greater complexity. That is one way, in fact, of defining evolution. But complexity is also self-limiting in obvious and unavoidable ways. What gives?

How, specifically, does an understanding of complexity's natural limits recast an assessment of where human society is, where it might be going, and what of this potential do our own limitations in understanding complexity and its limits… well, limit?

We tend to gravitate towards a rather cleaned-up image of the future, all stainless steel and gleaming glass, and sexy robots that can't say "no" (puffy clouds, white wings, and lutes?). To be fair, this sparkly and perfect view of the future is something we reserve for "The Future". Except for Sunday mornings, we are refreshingly realistic about the process of getting through all of the calendar-able pedestrian futures to the final "The Future"… sometimes even positing an apocalypse or two along the way. It's as though we understand that things of great complexity and stability must be constructed, and that building is a messy and chaotic process; yet our self-delusion begins and ends with the absolutely fatal assumption that there is some end to the construction process, after which everything will be grand and glorious and perfect in the sense that no major construction will ever again mar the sublime and pristine quiet and elegance we have built.

Right. OK.

In light of the magnitude of our self-delusion, it seems downright naive to apply the phrase "drink the Kool-Aid"… in some very real sense, we must, each of us, have Kool-Aid factories right smack in the middle of our brains!

The actual future, the sober future, the one we seem hell-bent on ignoring, is a future of greater and greater and more and more constant change. A future we can never get to. A future that will surely go on one day without us. There were, after all, a whole bucket-load of futures before we existed, before we declared ourselves the supreme center of everything, the final future. Ultimately, of course, there is a final and absolute future to any system. If you paid attention during your thermodynamics or information science lectures, you know that there will come an ultimate future which cannot support any complexity at all.

For now, we will ignore that final future-of-all-futures (heat death)… there are "miles to go before we sleep".

As complexity marches forward and "upward", evolving systems are increasingly characterized by construction and change. A static system, one that can't react to its own constantly increasing experience, is a system that isn't as complex as one that can learn and adjust itself to accumulated knowledge. The romantic vision of a completed and peacefully static future is as laughable as it is understandable.

Some fantasies drive us towards success and influence, and others towards catastrophe and insignificance.

The difference between these two forms of fantasy is, to my mind, the difference between paying attention to the greater reality that is the whole universe (its physical laws, material properties, and configuration), and paying attention instead only to the reality hacked together within our own emotionally contorted and narrowly self-centered minds. The distance that separates the two is probably a good measure of the speed with which nature will replace us with some other complexity-generating scheme with a more accurate mapping of reality to abstraction of reality.

A self-centered and locally weighted perspective is both expectable and self-defeating. What works in the short term often gets in the way of what works in the long run. This is one of two oxymoronic misreadings of process clouding our understanding of evolution that increasingly threaten our potential as a species. The other (related) self-obfuscation we don't seem to be able to avoid, and central to the thesis of this essay, is the dream-like way we tend to imagine the future as some silicone-enhanced sexed-up version of some glazed-over and romantic version of a past that never was.

What we know, how we comprehend what is around us, is a function of the iterative process of matching the stream of incoming sensation to what we have stored as experience. What comes to be known is always heavily affected by what was known before. Learning is a local affair. Systems always end up knowing more about the things closest to them. The closest thing to a system is itself! This is a topologically and causally unavoidable fact, leading to difficult-to-circumnavigate self-centered understandings of the universe around us. I am convinced that evolution ultimately (in the longest run) favors systems that can overcome this local-centrism… though to do this, a system must literally work against itself in the short term. Success in the long run is dependent on the development and protection of genetic structure that frustrates success in the short run. This big-picture learning must be accomplished through the development of an ever more accurate internal analogue (process-able map) representing the most inclusive and location-agnostic understanding of the entire universe. This too is an ever-receding target that we can chase but never completely capture. Evolution is this back-and-forth dance between what matters to a system in the here and now and the capacity to pay attention to, model, and process that which is salient about the entire universe… context in the largest sense.

I don't want to veer too far away from the thread of this essay, but it is important to keep in mind the counter-indicated admixture defined both by the immediate local needs of any given individual and the larger, decidedly non-individual scope of evolution. A decidedly cooperative mixture that is, nonetheless, achievable exclusively through the lives of, and the genetic/cultural information carried forward exactly and only by, individuals.

In any given population of individuals at any given locality, there exists a range of differences that enable some individuals to make more efficient use of the resources in their surroundings, and some individuals to be better equipped to contend with and exploit the resources of their children's inherited environment. Those better matched to the current environment will out-compete those with a better match to the environment of the future. Ultimately, of course, what matters is the capacity of the entire mélange to both survive in the present and present morphotypes that meet the demands of the future. The demands of the present vs. those of the future are often at odds with each other. A successful evolutionary scheme must "waste" a sizable chunk of its structure and energy on strategies that may have no immediate positive effect on fitness (and might in all actuality hinder success in the moment). Maintaining a long-range understanding of evolution itself, and our place in it, is the example of this dangerous opposition that best fits the scope of this essay.

It seems obvious to me that the amount a system must "waste" anticipating changes in the future of its environment is inversely related to the accuracy of its internal mapping of the universe in total. Systems that know nothing of the universe must produce a great variety of random solutions. A very expensive prospect that best fits very, very simple individuals produced in absurd numbers. Atoms, molecules, single-celled organisms.

Understanding the process, "THE" process, evolution, is probably the most salient predictive mechanism an organism might seek to internalize. We seem to have limited capacity as a species to model and abstract and then effectively navigate an abstraction of this "THE PROCESS". Especially when it comes to understanding the limitations and usefulness to "THE PROCESS" of any one scheme, species or individual.

The spirit of this essay isn't Nietzschean pessimism or a catastrophist Cassandra's "I told you so!". I am an eternal optimist, so these words are intended instead as a wake-up call, and offered up as a Windex wipe for the foggy lens through which we view reality… in the hope that we use it, adjust our behavior, and rectify the self-defeating distance between what is and what we want to see.

Nature doesn't stand still. Not at least until the very end. Heat death isn't at all like my fantasy of an endless Mediterranean resort vacation. Any system that bets its future on stasis, no matter how advanced, is betting against its longevity or influence on the real future.

I've compiled a list (below) of some of the most obvious side effects that haunt complexity, that push back against its growth. If we illuminate these barriers we might be better equipped to consider ways to get around them, and we might discover something of how systems get better and better at finding cheats in the march towards greater complexity.

For a system to be complex, it must have structure and difference within that structure. A crystal has structure, but its lack of capacity for internal differentiation means it can never be complex. But differentiated structure isn't enough; a system must also have some way of protecting and maintaining that structure, that shape or behavior, over time. Shit happens. A complex system must employ some set of mechanisms in a constant fight against entropy. Without them, a system's complexity will be short-lived, and short-lived complexity isn't very complex at all.

Which brings up an important and much ignored aspect of an evolving system. We have a tendency to overemphasize the moment, the present situation or system. Nature on the other hand doesn't care about the individual or the moment except as a vehicle for the transmission of structure into the future. What matters isn't how complex a system is today, but the potential of a given configuration to influence the greatest complexity in the longest future across the widest expanse of the material universe. Many aspects or measures of complexity cross over between the here and now and the deepest future… but not all and not always.

Back to our list.

1. One way to maintain structure is to build yourself out of stuff of great material integrity – say titanium, stainless steel, or diamond.
2. Another is to adopt a vigilant and obsessive Mr. Fix-It program of self-maintenance.
3. Yet another option is to replace yourself with a pristine copy before you dissolve into an entropic heap.
4. A simple cousin of this replacement scheme is playing the numbers game… make sure there are so freak-n many copies of you in the first place that one or two of you make it into the distant future by virtue of the dumb luck of large numbers.
5. Or, you can choose to live a life of extreme isolation – limit your interaction with other systems and you limit the deleterious effects the second law dictates.
6. Then there is wall building. Wall building is a self-made form of the isolation scheme… instead of finding a place to hide in a pre-existing landscape, dig yourself a tunnel or build yourself a wall or a moat or a shell or a nest, or grow fast legs or wings with which to run away.

And then there is the problem of resource acquisition. Anything of value to a complex system tends to be reactive. Reactive things are destructive. Installing yourself within a reactive environment means you have more access to energy and materials, but it also means you have to spend more energy and structure just to protect yourself from your environment. As your energy demands increase, so too does your need to locate yourself closer and closer to more and more reactive and ever-changing environments. A cave full of grain is great at first, but as you eat your way through it, its original attractiveness decreases. Better to install yourself at the mouth of a river, next to a mid-ocean vent, or on the floor of a flood plain. As your complexity increases, so too does your appetite for energy and materials. Access means proximity. Proximity to greater and greater concentrations of energy demands protection. Protection is expensive in terms of the self-protective physical structure and its maintenance.

Worse still is the negative feedback that metabolic waste presents. The more you eat, the more you go. The more you go, the harder it is to find food. As complexity increases, guess what happens to the magnitude of this problem and the need therefore to spend more and more energy on waste removal schemes?

The focus of this essay is the aspects of complexity (and complexity's demand for energy and structure) that put counter-productive limits on strategies that would otherwise allow for greater and greater complexity… and how evolving systems find work-arounds. The fact that we are here at all is proof that evolution finds a way.

What interests me is the way increases in complexity put increased demand on energy and material resources, and how these processes are self-limiting and at the same time actually define the purpose that drives evolution.

In the particular, real systems manifest great creative variety in the fight for the extension of structure and integrity over time. For instance, once brains appear, trickery and guile become the standard approach to wall building. You don't need to go the long and arduous course of developing poison and some specialized hollow teeth through which to deliver it, if you can just tweak your skin coloration or shape to mimic those who have. Or you can become invisible by adopting a color and texture scheme that mimics your less vulnerable or edible surroundings. In essence, trickery schemes are the same as isolation or wall building except the wall you are hiding behind is within the brain of another creature (either it's already there or you build it in your foe's brain through behavioral conditioning).

But here is the rub. No matter which scheme a system adopts in the maintenance of structure… that scheme hardens its structure, making it more difficult and expensive to adapt to an always-changing environment. In a very real way, what makes you stronger in the present makes you vulnerable over time.

Example: When Teflon was developed, it was obvious to its creators that its extreme inertness, its aversion to chemical interaction, would make it an ideal lining for any reaction container (including frying pans and irons). But this same property made it almost impossible to figure out how to affix Teflon to the surface of a container (it took over 10 years to solve this problem).

On the opposite end of the isolation spectrum is metabolism. When a system seeks a means of extracting and drawing energy or structure from its environment, it needs to maximize its reactive interface to that part of its environment that has the most entropic potential. In earth biology, this usually manifests as an active interface to oxygen and/or sunlight – both of which are highly corrosive to structure. In order to exploit the energy of these highly reactive sources, biology has adopted a myriad of selectively self-protective (and expensive) mechanisms. Playing with fire is an attractive AND expensive proposition. Simple systems have no option but to hide from highly reactive environments – to dig themselves into deep cracks in the earth. Only a system of great complexity has the structural and behavioral leeway to adopt the complex and selective mechanisms necessary to both use and avoid concentrated reactive resources.

As a system becomes more complex it reacts faster to internal and external change. It evolves faster. This is a circular definition of "complexity"… configurations that facilitate faster development of configurations that facilitate faster development of configurations… ad infinitum. The capacity to do things faster always comes at a cost. To mitigate that cost, the system must learn to be efficient and effective in its environment. This means going with the flow. This means fitting in. This means doing what the environment is already doing. This means not fighting the system. To work with a system (instead of against it) means internalizing and abstracting a model of the environment's most salient structures. If you have some knowledge of what a lion will do when you enter a clearing it is sitting within, you have a better chance of surviving the encounter. If you have legs and eyes, your very structure is an acknowledgment of the physical constraints of your environment.

An accurate assessment of this whole concept becomes increasingly complex as we realize how system and environment blend in a co-evolutionary super-system.

In science fiction, the future is presented in one of two ways. Either the world has devolved into some filthy post-apocalyptic entropic mess, or it is a perfectly complete stainless steel and glass uber-infrastructure with everything in its place and everything perfectly maintained. Both projections are impossible, but the hermetically sterile one is the most problematic, as it seems to resonate more completely with human emotional projections.

The problem is this: the more complex a system becomes, the faster its capacity to change, leading to a system that is constantly in flux, constantly reworking itself, constantly under construction. Try to find a day in a modern city devoid of numerous construction cranes marring its skyline. This situation will only become more intense as human society evolves.

Biological systems have learned to accommodate the constancy of change, deterioration, wear and tear, construction, etc., through complex molecular mechanisms of growth and repair played out at the (largely microscopic) cellular level. Furthermore, these anti-entropic mechanisms are largely automatic and do not therefore overly burden the larger and more overarching consciousness and behavioral control mechanisms (our mind).

Though humanity has reached a level of complexity that supersedes the capacity of its infrastructure to effectively carry its own complexity demands, we don't seem, as a species, to be able to see this problem as systemic.

[more to come…]

Biology Is Too Slow!

Humans are pumping a lot of energy around. When it comes to energy we don't mess around. We like our energy highly concentrated. We dig it up, refine it, convert it, and pump it through wires or pipes or the air like there is no tomorrow.

Nature is adaptive. Right? Nature finds a way. Right? So where are the animals and plants that suckle upon high-power lines, that find their adaptive way into fuel tanks and batteries? Surely they could. Surely the same nature that goes gaga around mid-ocean heat vents and can learn to metabolize the worst toxins we can throw into ponds... that good old adaptive nature should find a way to co-evolve with 50-thousand-volt transmission lines.

And there are other (new) tits for nature to suckle. I fully expect our air to become less and less transparent to radio transmissions. If we can build devices that can grab radio energy right out of the air… surely airborne molds and other microorganisms can do so. Are they? Doesn't look like it. What weird life forms would be best suited to radio-metabolism? Plants grab photons in the visible (radiation) band. Photosynthesis (in plants) requires carbon dioxide and water for the primary reactions, but plants also rely on heavy and rigid structural support to get up into the air where they can maximize their surface interface and solar exposure. Actually, when you think about it, a plant would be more efficient if it spent no energy fighting gravity, and instead laid flat on the surface of the land. Plants must only grow into the air to compete away from the shade of other plants and to increase respiration surface area.

Anyway, and this is a bit of an aside, but would there be a way for lighter-than-air super-colonies of single-celled animals to maximize access to radio energy without the need for the heavy structure and vascular transport terrestrial plants employ? Maybe the radio scenario is ludicrous. Surely there is lots of background microwave energy constantly streaming by. Surely radio waves have been around as long as biology has been around. If radio was a good source of energy, nature would have already found a way. Maybe big bang radiation doesn't pack much of a wallop. Is it possible that communication-intended radio is more energetic? More localized. Easier to exploit. I can imagine some type of group dynamic in which individual floating animals or proto-animals learn to orient themselves such that they become a reflective parabola or Fresnel lens concentrating radio energy to a focal point where other animals absorb the energy in some sort of symbiotic bio-community. Many other scenarios are conceivable.
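One rough back-of-envelope check (my arithmetic, not something from this post) suggests why radio-metabolism never evolved: chemistry runs on photons of roughly an electron-volt or more, and a radio photon falls short by six or seven orders of magnitude.

```python
# Energy per photon, E = h*f, versus the ~1 eV scale of chemical bonds.
h = 6.626e-34   # Planck's constant, J*s
c = 3.0e8       # speed of light, m/s
eV = 1.602e-19  # joules per electron-volt

visible = h * c / 550e-9 / eV   # green light, ~550 nm: about 2.3 eV
fm_radio = h * 100e6 / eV       # FM broadcast, ~100 MHz: about 4e-7 eV

print(f"visible photon: {visible:.2f} eV")
print(f"radio photon:   {fm_radio:.2e} eV")
```

A single radio photon cannot drive a bond-forming reaction, so any biological harvest of radio energy would have to be antenna-like – coherent, collective absorption over structures comparable to the wavelength – which is essentially the parabola-colony scenario imagined above.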

Are plants learning to seed near highways to take advantage of air movement and carbon dioxide? There are a million ways in which human activity affects environments in ways that provide energy and stability clines. Surely life is reacting in step.

The pace of culture is so much faster than most organisms can genetically respond. The smallest organisms with the shortest life spans that have the greatest populations spread over the largest geographies are the organisms most likely to take advantage of our frenetic environmental messings.

Are they? Is anyone paying attention?

What is computing?


This is the most important question of our time… yet so rarely asked. Computing technology increasingly shapes every aspect of human behavior, culture, resource use, health, commerce, and governance. A passive stance on the question that affects all other questions is increasingly dangerous to the future of all humans, of life, of evolution itself.

In the '60s we created NASA, an elaborately funded research program to uncover the knowledge and develop the technology to "go to the moon". Yet one would be hard-pressed to justify the cost to society of contraptions that do nothing more than take a few people to a nearby rock… almost nothing of the NASA program can be used outside of the narrow focus of getting a few tens of miles off the surface of Earth (at tens of millions of dollars per pound).

Ironically, and inadvertently, the practical mathematics, programming, and computational techniques developed and honed by NASA in the pursuit of its expensive and arguably impractical goals may be the only pertinent contribution to show for the tens of trillions of dollars spent on this ill-conceived and irrational "research" program.

Talk about putting the cart before the horse… akin to building a global library system and book binding before developing a written language.

We are surrounded by lifeless rocks. We didn't need to send a few Air Force test pilots to the moon to figure that out. The practical scope of our chemically propelled rockets hardly avails us of the nearest little frozen or boiling neighbor planets in this corner of this one little Solar System. Ever attempt a phone conversation with 40-minute gaps between utterances?

The interesting stuff in this Universe (at least the small corner we have access to) is right here on our little Earth. It is us… and more than that, it is not so much what we have done, but what we will do, and how what we will do affects what other future things will do because we set them into motion. That is our job. In a very real way, we are the first things that understand the job description despite the fact that it has always been there and has always been the same. This understanding should give us a leg up on the process. Should.

There are two kinds of knowledge: the first, historical; the second, developmental. When we go somewhere, we do nothing more than uncover that which already is. Compare this to development, where we create things that never were. In this universe, if there was a force that was prescient in creating one star or planet, that same force must have been prescient in the creation of Earth. We don't have to go to Mars to find the forces that created Earth. And we certainly don't need to send humans over there even if we do want intimate knowledge of a place like Mars.

At any rate, computing is a universal process. Computing is agnostic to domain. You can compute about particle physics and you can compute about knitting. Computing is an abstraction processing medium. Computing is what brains do. Computing is not restricted to the category that is biological minds. Learning how to compute is learning how to discover. The goal becomes the unknown… becomes un-prejudiced developmental discovery. The machinery of pattern matching… of salience… of the perception of essence across domains.
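This domain-agnostic pattern matching can be made concrete with a small sketch (my illustration, not something proposed in this post): normalized compression distance (NCD), a standard information-theoretic trick in which two byte streams are similar to the degree that compressing them together saves space over compressing them apart. It works identically on text, sound, images, or raw binaries – no knowledge of category required.

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: near 0 for near-identical inputs,
    approaching 1 for unrelated ones. Operates on raw bytes, so it is
    agnostic to domain (text, sound, images, binaries)."""
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

english = b"the quick brown fox jumps over the lazy dog " * 20
similar = b"the quick brown fox leaps over the lazy dog " * 20
other = bytes((i * 7919 + 13) % 256 for i in range(880))  # unrelated bytes

print(ncd(english, similar))  # small: shared structure compresses away
print(ncd(english, other))    # large: nothing shared to exploit
```

Using zlib as the compressor is crude (its small window limits what redundancy it can see), but the scheme itself is compressor-agnostic, which is the point: better compressors yield better notions of similarity.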

I am obsessed with this biggest "why" of computing. I don't think the computational "why" can be separated from the biggest "why" of existence in general... of evolution… of the march of complexity.

The convergence of thermodynamics (the way action affects energy dissipation) and information science (the relative probabilities of structure and the cost of access, processing, and transference) guides my approach to these questions. Least-energy laws dictate the evolution of all systems. Computing is evolution. Abstraction systems allow prediction. Prediction grants advantage. Advantage influences the topology of the future. The better a system gets at accurately abstracting its environment, the more it will influence the future of abstraction systems. Computing is the mechanics of evolution... always has been. Are we designing computing to this understanding of the methodology of complexity handling?

Let's suppose we gave the scientists at NASA a choice. We ask them, "What technology represents a greater potential towards the eventual understanding and even physical exploration of the Universe, rocket engines or computers?" What would be the rational and obvious answer? If we ever hope to get any real distance in this universe it won't be by burning liquid oxygen and kerosene. Most things in this universe are millions of years away even at the speed of light. Rocket engines hardly move at all when compared with even the too-slow speed of light. Getting anywhere in this universe will demand tunneling beneath the restrictions that are space and time… no rocket engine will ever do that for us. I am not an advocate for space exploration, but if I were, I would be pushing computation over rocket propulsion.

It is time to advocate a culture-wide push towards the advancement of an ever-expanding understanding of computing. To the extent we succeed, all of the future will be defined by and fueled by our discoveries. If we choose instead to spend our limited and most expensive money on rockets, we had better hope the universe can be understood through the understanding of explosions and destruction and spending long periods of time floating in space. Come on people! Think!

[ more to come… ]

Solar energy conversion… can it hurt the Earth?

Note: before anyone accuses me of being anti-green, let me explain my general motivation and then the specific intent of this post. I don't think there is a more potent problem facing humanity (and all life) than the current man-caused spike in global temperature. If we do not act appropriately and quickly and at unprecedented scale, biology faces near-total destruction. The scale of this problem demands that our solutions be equally large. Large solutions of any kind will have both intended and unintended consequences. We must strip emotion and sentimentality from our assessment and design process. We must dump our pre-conceptions and deal with the physical dynamics as they are (not as we would like them to be).

Global heat delta as solar/wind is converted?
Almost every time solar energy is harnessed by human-built converters (to electricity or work), this energy is transmuted down the thermodynamic ladder faster, and in a more localized fashion, than would "naturally" occur.

And its degradation towards heat is localized (a thermodynamic oxymoron, I am aware). At the very least, the global atmospheric energy-distribution budget is disturbed. Energy that used to go towards other dynamic dissipative systems (ocean and air currents, the fresh-water cycle, etc.) is now siphoned off and downgraded to heat at a faster rate. This is especially true of systems like solar-to-electricity cells, which convert some sunlight that would otherwise have bounced out into space.

Even wind and water current converters (turbines) pull kinetic energy from a large system, and localize (time and location) the thermodynamic degradation in non-natural ways.

In both cases, energy that would have dissipated downstream over a long period of time is removed instantly (much of it immediately lost to heat in the conversion process) and sent to highly localized dissipative devices (lights, heaters, stoves, computers, washing machines, TVs, and industrial equipment). The placement of these end-of-the-line dissipative devices is determined by human desire and not the simple thermodynamic least-energy topology represented in natural systems.

As we get better and better at exploiting solar energy to our own energy needs, more and more of the solar energy that drives large scale atmospheric phenomena will be removed from the standard atmospheric causality chain. What impact will this have on weather patterns? On ocean currents? On global temperature and temperature distribution? On annual seasons? On precipitation patterns?

Our planet's heat budget is to some extent regulated by the off-planet radiation of heat through infrared (and other) wavelengths. How do our current human uses of electricity affect this radiated/mechanical heat fraction?
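The scale of that radiative budget can be sketched with the standard zero-dimensional energy-balance model (my illustration; the figures are textbook approximations, not numbers from this post):

```python
# Zero-dimensional energy balance: in steady state, Earth radiates
# away what it absorbs. Figures are standard approximations.
sigma = 5.67e-8          # Stefan-Boltzmann constant, W/m^2/K^4
solar_constant = 1361.0  # W/m^2 at top of atmosphere
albedo = 0.30            # fraction of sunlight reflected back to space

absorbed = solar_constant * (1 - albedo) / 4   # ~238 W/m^2, sphere-averaged
T_effective = (absorbed / sigma) ** 0.25       # ~255 K effective radiating temp

# Direct human heat release (~20 TW, a rough round number) spread
# over Earth's whole surface:
earth_area = 4 * 3.14159 * (6.371e6) ** 2      # ~5.1e14 m^2
human_flux = 2e13 / earth_area                 # ~0.04 W/m^2

print(f"{absorbed:.0f} W/m^2 absorbed, T_eff = {T_effective:.0f} K")
print(f"human waste-heat flux: {human_flux:.3f} W/m^2")
```

Direct waste heat is tiny next to the ~238 W/m^2 radiative budget; if the essay's worry has teeth, it is in *where and when* that heat appears, not in its global total.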

As compared to hydrocarbon oxidation?
To be sure, the oxidation of hydrocarbons (burning oil and gas) has a more radical effect on heat balance. But this has more to do with the fact that undisturbed oil and gas are only "potential" energy until we bring them to the surface and burn them. Solar energy conversion is not typically considered in light of thermodynamic process because it is assumed that this is energy that is used naturally anyway. But natural uses of solar energy drive planet-wide dissipative engines upon which all life is distributed and timed.

To what extent will drastic increases in solar energy conversion affect these essential processes? Especially as humans continue to use more and more energy?

Is this a tipping-point-affected system?
I know that current solar conversion is probably such a small slice of the total Earth-solar energy budget that these questions must seem daft. However, as we have seen in many natural systems, small changes can catalyze huge and unexpected fallout effects. Disregarding "tipping point" sensitivity, how will ever-increasing capture of solar energy for human use affect Earth-scale dissipative systems that support biology as it is currently represented?
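The "small slice" intuition checks out on the back of an envelope (my figures: the solar constant and Earth's radius are standard values; the human power demand is a rough round number):

```python
import math

solar_constant = 1361.0   # W/m^2 at top of atmosphere
earth_radius = 6.371e6    # m

# Sunlight intercepted by Earth's cross-section:
intercepted = solar_constant * math.pi * earth_radius ** 2  # ~1.7e17 W

human_power = 2e13        # W: ~20 TW total human primary power use (approx.)
share = human_power / intercepted

print(f"sunlight intercepted by Earth: {intercepted:.2e} W")
print(f"human share if all demand went solar: {share:.1e}")
```

Even supplying all of today's human demand from solar capture would divert on the order of 0.01 percent of the sunlight Earth intercepts, so any near-term effect would show up through local redistribution of dissipation, not the global total.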

Engineering done well
Here is what I suspect. We put solar conversion panels up where solar real estate is cheap... on roofs or in deserts where other (agricultural) uses of that energy are not reasonable. These tend to be locations where surfaces are otherwise highly reflective. A well-designed solar converter reflects as little energy as possible. Either way, I suspect that solar panels have different reflective behavior than other surfaces. Plants appear green because they absorb red and blue light and differentially reflect green. Solar panels are usually placed where plants aren't. But even if they replaced plants, their reflection/absorption properties would be different than plants'. A plant converts solar to chemical energy in a photosynthetic process that takes carbon from the carbon dioxide in the air, fixes that carbon, and releases oxygen.

Photovoltaic panels are not respiratory systems. This fact alone changes the environmental atmospheric equation.

But let us instead concentrate on panels that replace only other non-biological surfaces of various reflective and heat-storage indices. The whole point of a well-designed solar panel is to convert solar photonic energy to heat or electricity (or hydrogen) which can be transported or transmitted to other locations for immediate use (conversion back to heat through a chemical or mechanical process that results in work). This process differs from natural processes in important ways. It is usually a faster degradation to heat. It is often localized differently than natural dissipative processes. And (if well designed and engineered) it is more absorptive than natural surfaces.
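A toy per-square-metre energy budget (illustrative numbers of my own choosing, not measurements) makes the difference concrete:

```python
# Compare absorbed solar power per m^2 for bright desert sand versus a
# dark PV panel. All figures are assumed, round, illustrative values.
insolation = 1000.0   # W/m^2, bright midday sun at the surface
albedo_sand = 0.40    # typical bright desert sand (approx.)
albedo_panel = 0.05   # dark, anti-reflective panel (approx.)
efficiency = 0.20     # fraction of incident light exported as electricity

absorbed_sand = insolation * (1 - albedo_sand)    # ~600 W/m^2
absorbed_panel = insolation * (1 - albedo_panel)  # ~950 W/m^2
exported = insolation * efficiency                # ~200 W/m^2, dissipated elsewhere
local_heat_panel = absorbed_panel - exported      # ~750 W/m^2 heats the site

print(absorbed_sand, absorbed_panel, local_heat_panel)
```

On these numbers the panel site runs hotter than the sand it replaced (~750 vs. ~600 W/m^2) *and* relocates another ~200 W/m^2 of eventual heat to wherever the electricity is used – both absorbing more in total and moving part of the dissipation, which is exactly the redistribution this essay is worried about.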

Randall