The Incomputable Heaviness of Knowledge

Is the universe conceivable?  Does scientific knowledge improve our ability to think about the universe?

What happens when our knowledge reaches a level of sophistication such that the human brain can no longer comfortably hold it, or compute on it?  For thousands of years, scholars have optimistically preached the benefits of knowledge.  Our world is rich and safe as a result.  People live longer, and they live with greater personal control over the options they face.  All of this is an obvious result of our hard-won understanding of how the universe and its parts actually work.  We arm our engineers with this knowledge and send them out to solve the problems that lead to a more and more desire-mitigated environment.  Wish you weren't hungry?  Go to the fridge or McDonald's.  Wish you were somewhere else?  Get in your car and go there.  Wish you could be social, but your friends are in Prague?  Call them.  Wish you knew something?  Look it up on the internet.  Lonely?  Log in to a dating service and set up a rendezvous.  Wish your leg wasn't fractured?  Go to a doc-in-the-box and get it set and cast.

But what if you want to put it all together?  What if your interests run to integration and consolidation?  What if you want to understand your feelings about parking meters as an ontological stack of hierarchical knowledge built all the way up from the big bang?

Evolution: Optimizing a Definition of Fitness

We think of evolution as a process that optimizes organisms (things) through the filter of fitness.  Fitness as the means, the species as the end.  I have long suspected that this interpretation is wrong-headed, and that it results in conceptual mistakes that ripple through all of science, blinding us to much that could be understood about the Universe, process, and the basic shape and behavior of reality.

So let's flip it. We'll instead, re-frame evolution as a process that uses things (organisms, species, systems, ideas, etc.) as a means (channel, resource, armature, vehicle) for the optimization of fitness. From this inverted vantage, optimizing the criteria of fitness is the goal – species, nothing more than a convenient means.

It always feels wrong to talk of evolution's "goals".  Certainly a universe doesn't start out with a plan or agenda.  Things like plans and agendas are only possible within advanced abstraction apparatus like a brain or computer.  Universes start out simple and chaotic.  Only chance causal interactions, played out amongst a universe-sized accumulation of matter and force over ridiculous amounts of time, will lead to the types of rare and energy-demanding structures that can "think" up things like plans and agendas.  So when I talk here of "a process that optimizes", I make use of concepts and terms that are more generally associated with self, ideation, and will – with the products of advanced abstraction machinery found in humans and maybe eventually in thinking machines.  But what I mean to convey is the direction of a process.  That processes have direction, and that direction is (or can be) independent of the types of advanced computation necessary for things like planning and intent, is in fact the exact conceptual jump that the idea or discovery of "evolution" demands.  Evolution = direction without intent.

The directionality we see in evolving systems (all systems) is blatantly and obviously non-random.  Our job then is to understand, explain, and ultimately, exploit this understanding.  Because we humans have trouble imagining non-random direction coming from systems without a brain, a soul, an agenda, we are left with a slim set of emotionally acceptable options: anthropomorphizing the universe and evolution, inserting a deity, or simply rejecting evolution (or reality) out of hand.  The non-emotional option, the science option, evolution, recovers from this dissonance through the application of inductive logic, physical evidence, and frankly, by simply offering an emotionally dissonant option.

The thesis of this essay is the suggestion that evolution might be agnostic to optimization of species and is instead simply using species as a conduit for the optimization of this thing called fitness.  That fitness might in fact be more real, and species, ethereal.

This entire domain is so fraught with potential misinterpretation.  I feel a constant urge to over-explain, to be extra careful, to make sure the reader isn't thinking one thing when I mean something else.  For instance, I feel a need to define the term "species", especially because I am using it in a more general way than is usually required within the boundaries of its original domain, biology.  This is because I am convinced that evolution is a universal process, that it has nothing in particular to do with biology or life, that it happens in all systems, all of the time, an unavoidable aspect of any reality.

So when I write "species"  I mean the thing or system that is "evolving" – the animal, the planet, the culture, the idea, the group attitude in line at the post office this morning.  And in the context of this essay, I use "species" to mean the thing upon which "fitness" acts (as judge, jury, pimp, or executioner).  Species is the thing, fitness the criteria that molds the thing.

But by this definition, species is corporeal and measurable, suggesting that fitness is… is what?  If we are talking about something, shouldn't we have some way of examining it, measuring it, comparing it, holding it in our hands, flipping it over, squeezing it, splitting it open and looking at its parts?  That seems a more reasonable proposition for species than for fitness.

We like to think we can man-handle a thing like species, take it to the lab and do lab things.  But maybe that is more illusion than truth.  We can dissect a frog, but that particular frog isn't really the species "frog".  The species "frog" is an average, a canonical concept, a Platonic solid, a moving target, an arbitrarily bounded collection, a gelatinous arrow through foggy potentialities.

I was en route to show that "fitness" is a real thing, but all I accomplished was picking away at the real-ness of "species".  Maybe that will end up being more helpful anyway.  The colloquial image of species, even amongst evolution theorists, has always seemed more visceral, more thing-like than fitness.  We point to a single nervous animal on the savanna and declare, "that is gazelle".  Worse, we often fail to make a semantic distinction between that declaration and the categorical; "gazelle is that".  That fitness is a much harder thing to point to really doesn't mean it is less real, or, as I have shown, that real-ness fully applies to either.

Now that I've reduced both species and fitness to the realm of concept, it should be easier to argue my thesis.

Even at the concept level, "species" is a thorny concept fraught with pedagogy and hubris.  It is hard to look at a penguin, a porpoise, or a planet and imagine something more amazing, more evolved than its current form.  Which probably goes a long way to explain why we have a natural tendency to overlay onto the concept "species" notions of perfect form, of an apex, a pre-determined goal.  But this certainly has less to do with species and more to do with the limits of our cognitive facility.  It would be absurd to assume that this particular now is in some way special, that forms are complete, and that we just happen to inhabit the planet at the point when evolution has finally and completely finished its big 14 billion year project.

OK, the apologies have been meted out, the slippery territory marked, the standard arguments abutted, the inconsistencies delineated, the usual misinterpretations admitted.  These are standard precursors to any serious discussion in the study of evolution and bear witness to both the complexity of the subject and the apparent inability of the brain to readily make sense of its many dimensions.

So why should I want to reorder the relative hierarchy of fitness and species?  For one, I have always felt the standard Darwinian definition of evolution to be a bit circular.  Whoa, before that comment ruins my standing, I had better get to work defending Darwin.  I am a "standard model" realist.  Darwin got most or all of evolution correct, especially if you restrict your focus to biology.  Darwin is the dude!  The positions I detail here are meant as additions, as icing on the cake Darwin baked.  But Darwin built his theory around life, and his bio-centrist focus on evolution restricts and warps the applicable idea-space it scopes.  I always say that Darwin explained the how of evolution with regard to biology, and that I am interested in the why of evolution with regard to all systems.

To restrict the scope of evolution to biology is to somehow draw a line in the sand between life and not-life, to posit a special sauce within life that categorically separates it from all other systems.  I can't find that line.  So I am left with the responsibility of understanding and defining evolution as a domain-independent attribute of any system or system of systems.

Structurally, all systems are ordered as hierarchical stacks.  Each level receives aggregate structures from lower (previously constructed) levels and produces from these new super-aggregates, which it in turn passes to the next higher level.  That this process of aggregate layering is historically dependent is obvious.  The non-obvious mapping is to energy.  The lowest levels of the hierarchy, the earliest levels, represent high-energy processes, energy levels that would rip apart aggregates at higher levels.  In this universe, all systems are built upon the aggregation processes laid down in the earliest moments, aggregations that occur at the upper limits of heat and pressure – strings, quarks, sub-atomic particles, atoms, molecules.  Each corresponds to a matching environmental energy level, an energy level that is cooler and less pressurized than the ones that came before it.  The universe gets cooler and more dispersed.  Always.  The growth of complexity, evolution, is dependent upon this predictable and unavoidable dissipation of energy over time.
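
The mapping can be sketched in a few lines of code.  This is a toy illustration, not a physical model: the temperatures are rough, order-of-magnitude textbook values, included only to make the layering-to-energy idea concrete.

```python
# A toy sketch of the stack described above: each aggregate layer forms
# only once the ambient energy falls below the binding energy that holds
# it together. Temperatures are rough, order-of-magnitude values.

layers = [
    # (aggregation step, approximate formation temperature in kelvin)
    ("quarks -> hadrons",        1e12),  # quark-gluon plasma condenses
    ("hadrons -> nuclei",        1e9),   # big-bang nucleosynthesis
    ("nuclei -> neutral atoms",  3e3),   # recombination era
    ("atoms -> molecules",       1e3),   # chemistry becomes possible
    ("molecules -> life, ideas", 3e2),   # planetary-surface temperatures
]

def stable_layers(ambient_temp_k):
    """Return every aggregation step whose products survive this heat."""
    return [name for name, t_form in layers if ambient_temp_k < t_form]

# As the universe cools and disperses, more of the stack becomes available:
for temp in (1e13, 1e10, 1e5, 1e2):
    print(f"{temp:.0e} K: {stable_layers(temp)}")
```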

Those who would argue that life is special, that evolution is exclusive to it – well, they are obligated to draw a definitive boundary between life and everything else, and because life, like everything else, is dependent upon the historical layering of aggregate systems, they will have to draw that line historically.  They will have to show a moment in time before which there was not life or evolution and after which there was life and evolution.

There are many ways to define life such that a line could be drawn.  If you say, as most do, that life is that set of systems that incorporate and utilize both RNA and DNA, well, there is surely some moment in the past which would accurately delineate those earlier systems which didn't have both RNA and DNA from the later systems that did.  Such a definition is somewhat arbitrary, but all categorical definitions are.  But if you seek instead to hinge your definition of life to the process of evolution, then you are faced with a tautologically intractable problem.  Either you must accept the nonsensical proposition that the universe started with RNA/DNA preformed, or the more rational causal proposition that evolution is independent of and preceded biology, preparing over vast periods of time the aggregate ingredients necessary for the super-aggregate we call life.  If you insist, despite this logic, that evolution is a property exclusive to biology, then you are left with the thorny problem of defining aggregation processes happening simultaneously with and independently of biology – processes that continue to produce atoms, molecules, stars, planets, galaxies, cultures, ideas, sand dunes, ocean currents, etc.  And you must also show how these continuous and omnipresent processes are qualitatively different when they happen outside of systems that use RNA and DNA from when they happen within them.  But that isn't enough; you must also show either that no system after biology will ever evolve, or that the entire future of evolution will happen within the confines of biological systems.

The evidence and logic weigh overwhelmingly on the side of life being an arbitrarily bounded category, and evolution defining a process unbounded by domain, history, or complexity.  Both are difficult concepts for humans to accept.  We like to think we belong to a category made exclusive by some secret sauce, some magic that applies in some measure only to life, and which has reached its zenith in the human form or spirit.  We like to imagine evolution to be that process that shaped the shapeless gasses of primal soup into the perfect form that we now enjoy.  Wow.  The ego and hubris drip and pool.

If I may, back to fitness.  The above arguments are crafted to shake us humans free of our innate bio/human/self centrism and to show how such hubris works to emphasize contemporary corporeal form over timeless ephemeral process, placing a sort of artificial spotlight on species and downgrading, by contrast, fitness.  It's only natural.  And it is wrong.

The tendency to focus on species is easy to understand.  If you are looking at an animal and asking questions about evolution and process, it is only natural that the scope of your thinking would be restricted to that animal, that species, that family of life and its struggle to survive.  Even when you back yourself out to a vantage wide enough to include all of life, the full fan of Linnaean taxonomy over the full 4.5 billion year crawl, the focus is still thing, still survival, still some sort of cosmic engineering project.  It is only when you back all the way out, when you look at all that is, the entire Universe, every moment since the big bang, life and the stuff between, in, and of it, that you might be forced to ask questions big enough to frame the why of evolution.

The why of evolution has to be big enough to comfortably hold all change, all systems, any aggregate and any aggregate chain, not just those that succeed, not just those that are fit, not just things that can be called things… everything!  Any process that explains the existence of one system should also be able to explain every other system.  Universality, at this depth of scope, demands a bigger reason than can be explained by the concept "species".  Darwin's big how in biology then becomes a local mapping to a specific domain.  It isn't wrong, it just isn't universal.  You can know everything about pianos, but you won't really understand music until you know enough about enough instruments that you begin to see the formative patterns that unite them, from which all instruments are informed.

Species, if it is a valid concept at all, must be but a subset, an example, a non-special representative, a member of a perfectly inclusive and domain-independent set.  Sets that include everything are not informative as sets.  So we look elsewhere.  That species, as a label pointing to the subject of evolution, can equally be applied to any thing forces us to look elsewhere for that which explains the big why of change.  Change must not reside in thing, product, tailings, result, or even detritus.  If the big why isn't thing, but has to explain thing, any thing, all things, then the big why must be a process or action or modifier or pressure.  Some common attribute of any change regardless of domain.  What process is agnostic to domain?

In a word, entropy.  In an attempt to determine the maximum work that could be extracted from any source of energy, steam-era engineers teased apart the relationship between source and output and found an intriguing and strangely universal leakage.  Energy, when used, degrades, diffuses, is no longer as useful or available to the original process.  When scientists discovered the same leak, this time with structure, a strange universality began to appear.  Energy and information, force and structure, an unexpected symmetry.  Then Einstein revealed the exact relationship between energy, time, space, and mass, allowing thermodynamic transforms on all physical terms.  Despite initial objections by Stephen Hawking (and others attracted to the notion that nooks and crannies of the universe might provide respite from the second law's rigid causal prescriptions), Leonard Susskind and others have brought both the quantum world of the impossibly small and the black hole world of the impossibly big together under a shared entropic umbrella.  What we are left with, like it or not, is a universal.  A universal that holds across all physical domains and dimensions, regardless of scale.  Wow.  That doesn't happen very often in nature.  That hasn't happened in science.  Ever.  Significant?
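
The symmetry between the two "leaks" can be stated compactly.  Below is the standard textbook pairing of Boltzmann's thermodynamic entropy with Shannon's information entropy, included here only to make the claimed symmetry concrete:

```latex
% Boltzmann: entropy of a macrostate realizable by W microstates
S = k_B \ln W

% Shannon: entropy of a source emitting symbols with probabilities p_i
H = -\sum_i p_i \log_2 p_i

% For W equally likely microstates (p_i = 1/W), H = \log_2 W, so the
% two measures agree up to a constant factor:
S = (k_B \ln 2) \, H
```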

In his 1927 book, The Nature of the Physical World, Sir Arthur Eddington put it this way:

"The law that entropy always increases, holds, I think, the supreme position among the laws of Nature. If someone points out to you that your pet theory of the universe is in disagreement with Maxwell's equations - then so much the worse for Maxwell's equations. If it is found to be contradicted by observation - well, these experimentalists do bungle things sometimes. But if your theory is found to be against the second law of thermodynamics I can give you no hope; there is nothing for it but to collapse in deepest humiliation."

Any theory or assessment of evolution that is not written in response to thermodynamics, information theory, and entropy would seem to be a theory not particularly interested in validity.  That the laws of thermodynamics and evolution both direct their unblinking stares upon the domain of change would seem to me an invitation to at least begin to consider the possibility of a concerted union between the two.

[more to come…]

Randall Reetz

The Life And Times Of Your Average Paradigm

Systems are in a constant state of flux.  They change all of the time, over time, and even when they don't or can't, the environment around them changes in response to their behavior or simple presence.

Systems evolve. The super-systems in which they live, evolve. It's what happens, it is the only thing that can happen. Stuff constantly adjusts its behavior in response to the stuff around it. And things can not help but mess with the things near them. Change is inevitable. But more than that, change has pattern that can be teased out, measured and described.

These patterns are generalizable and can be found in all systems regardless of domain. All systems evolve. All evolution is similar. What Darwin described in biology, once generalized, can just as accurately describe the interaction of gases or the layered persistent structure of ocean currents, or the way I came to these thoughts and decided to write them down.

An interesting aspect of systems is the way they are made up of layers of subsystems, each bound by unique structural and behavioral rules, and all of this can exist simultaneously across many dimensions. These 'layered grammars' are perhaps easiest to see in language, where symbols are assembled into ever more complex aggregates (phonemes, words, phrases, sentences, paragraphs, themes, sections, volumes, collections, etc.), each governed by its own rules of construction.  Of course an utterance can be parsed by the layered rules of symbolic grammar (as above) or any other set of layered grammars… take for instance its semantics, or meaning.
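
A toy sketch of the idea, assuming nothing fancier than delimiters standing in for each layer's (much richer) rules of construction:

```python
# Layered grammars in miniature: each level aggregates the units below
# it under its own rule of construction. Simple delimiters stand in for
# the real rules of each layer.

def parse_layers(text):
    """Decompose a text top-down: paragraphs -> sentences -> words."""
    layers = {"paragraphs": [p for p in text.split("\n\n") if p]}
    layers["sentences"] = [
        s.strip() for p in layers["paragraphs"] for s in p.split(".") if s.strip()
    ]
    layers["words"] = [w for s in layers["sentences"] for w in s.split()]
    return layers

doc = "Systems evolve. Super-systems evolve.\n\nChange has pattern."
for level, units in parse_layers(doc).items():
    print(f"{level}: {units}")
```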

But what interests me today is the life span of a system. Though it is problematic to do so, it is often useful to define, at least loosely, the beginning, middle, and end of a system's life span, the arc of its development through time. Individual humans have life spans of course, and from a more distant vantage, so too does a culture, and, though the arc of these classifications hasn't run its course, the human species. From ever wider vantages, one can talk of the stacked life spans of hominids, great apes, primates, mammals, chordates, multi-celled animals, eukaryotes, and biota itself.

What interests me here are the patterns that can be teased from any life span – more to the point, the patterns that are universal across all life spans. What, for example, can be accurately and predictively said of the difference between the first half and the second half of any life span? What is it about the beginning of an individual human's life that is similar to the beginning of the life span of the human species or the beginning of the life span of life itself?

A reasonably robust set of these life span meta-patterns might work well as a way to better define the boundaries that give meaning to the most general concept: "system" ("category", or "thing").

But what I find most valuable about this strategy is the possibility of predicting the relative age of a system without ever having witnessed the full arc of a life span as an example. Is the system of focus in its infancy, is it a teenager, or is it middle aged, old, or nearly dead? Are there reliable parameters that can be mapped over a system to help us determine such things? I am convinced there are. My confidence in this guess stems from the dramatic symmetries that have been exposed over the past century and a half in the fields of information theory, thermodynamics, classical physics, quantum dynamics, linguistics, and logic. What this work has exposed are equivalence transforms that show causal connections between energy, mass, time, distance, and, importantly, information. This overarching symmetry hints at symmetries in systems themselves and in stacks of systems, and in the way systems change through time.
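
Here is one hedged way such a parameter might work.  Assume (and it is only an assumption) that a system's life span follows a logistic, S-shaped arc; then its relative age can be read off its growth rate alone, without witnessing the whole arc:

```python
# A hedged sketch: IF a life span follows a logistic (S-shaped) arc,
# relative age falls out of growth data alone. The model and every
# number below are illustrative assumptions, not measurements.

def relative_maturity(size, growth_rate, r_max):
    """
    Logistic growth: growth_rate = r_max * size * (1 - size / capacity).
    Solving for size / capacity gives a 0..1 maturity score:
    near 0 = infancy, near 1 = old age.
    """
    per_capita = growth_rate / size     # observed per-unit growth
    return 1.0 - per_capita / r_max     # fraction of the ceiling reached

# A young system grows near its intrinsic rate; an old one barely grows.
print(relative_maturity(size=10, growth_rate=0.95, r_max=0.10))   # ~0.05
print(relative_maturity(size=900, growth_rate=9.0, r_max=0.10))   # ~0.90
```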

It is this knowledge, these profound symmetries uniting such apparently separate systems, that best describes the most important contributions of the last century of scientific exploration. Wielding this knowledge, we can use the same language and logical tools to examine any system, be it physical, behavioral, descriptive, or cognitive.

The slippery and ghostly similarities we have noticed across domains, the ones we previously chalked up to metaphor, have been shown in fact to be causal and real (and we have the math to prove it!).

It is frustrating that the topics I am most interested in require the assembly of so much preliminary conceptual scaffolding. All these words, and I haven't even gotten to my main point. Here goes.

I talk often of what I call "productivity paradigms". They are ethereal and mercurial economic entities defined by some factor that gives rise to previously unachievable levels of the value of an average hour of labor.

As systems, productivity paradigms should lend themselves to the kinds of 'life span' parsing we would apply to any system. So, we can ask things like: can we determine the relative age of a given productivity paradigm?
And is it possible to know this from the rising or falling rate of growth resulting from that paradigm?

Are these questions, as I have posed them, addressed to a subset of systems, or are all systems productivity paradigms, making my questions universally applicable? Is there such a thing as a non-productivity paradigm? Can a system ever become a system if it doesn't follow some sort of life-span arc? Is productivity, as I suspect it is, a prerequisite for the existence and persistence of a system?

Let's assume it is. Now what? How can we extend this assumption in order to acquire something salient to say about a system?

Productivity Stalemate: Post Industrial World Caught With Its Pants Down

If a hundred trillion dollars* falls down in a forest, and nobody has anything to spend it on, does it make a sound?

In what might be the most embarrassing moment ever completely ignored, the post-industrial west is missing the greatest opportunity… ever.

It snuck up on us, yes. But that doesn't make it any less embarrassing. It still hasn't been acknowledged, sure. But that doesn't mean it hasn't (isn't) happening. What happened? Well, while we weren't looking, the so called second and third world made some money, a lot of money, and they came running to the west to spend it. Why wouldn't they? We're the experts. We know how to put big money to work… right?

Unfortunately, and at the same time, we, the west, the so-called 'first world', well, we slammed into what I call the productivity wall. Sure, we were (are) way ahead of the rest of the world. But we aren't moving forward, and haven't for about 15 years. The lack of upward motion has caused the unthinkable to happen – we ran out of things to productively spend money on. We don't have reasonable investments in which to sink new capital. We ran out of ways to engage new forms of production at the pace demanded by all of the new capital finding its way into our securities and investment markets. It's as if we never in our wildest dreams anticipated the second and third world adding significantly to the global capital pool. We outsourced. We ran up unheard of trade imbalances. We sold bonds by the trillions to China. The world changed. It was right in front of our noses, yet we never saw it, never bothered to model its implications for our increasingly outmoded economic models.

Pants well down. The whole world watching. Just standing here. Clueless.

Money flowing in. Unprecedented quantities of new capital. Accelerating. And here we sit – without any 'shovel ready' places to invest it.

What? you ask. How can this be? Surely there are people who would love to use someone else's money to build something and sell it. Right? And yes, the demand for capital is always there. There is always someone with some big plan and a need to go out and buy some parts, a factory, and the right people to run it. But what if nobody on the other end of the assembly line has any money to buy that new thing you just spent capital to produce? Or more to the point, what if there is nothing about the resultant goods or services that catalyzes the production of other goods and services? What if the only remaining places to spend capital don't make it easier and cheaper and faster to make other things? What if, in effect, the existence of your factory and the widgets it produces doesn't result in any real growth in consumption or wealth? What if you just spent good money to make things that the economy simply can not afford to purchase?

Unthinkable. Or is it? The value of currency, the purchasing power of the people who earn it – these are determined by productivity. Productivity is just the value of an average hour of labor. Productivity changes over time. Usually it increases. Sometimes, as when a new and powerful infrastructure is established – mechanized farming, electricity, telephonic communication, global integrated transportation – it increases in grand leaps. In between these technology-driven epochs of growth, there are periods in which the full landscape of opportunity within that domain is exploited. Growth during these in-paradigm epochs slows as the possibilities made possible by that advancement approach their limits.

We in the west have not jumped productivity paradigms in a while. We are running out of productive ways of exploiting the current set of infrastructural advantages. As a result, an odd thing happens, an unintuitive thing: we find that throwing more fuel on the economic fire no longer buys greater productivity.

Sure, we can continue to spew out ever greater quantities of goods and services, but if the things we spew do not increase the value of that all-important average hour of labor, there will not be more money available to consume at our new levels of production.

Production does not equal productivity. Consumption does not equal productivity. Production plus consumption does not equal productivity. Productivity, the value of that average hour of labor, is determined by the effectiveness of the infrastructure.
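
To make the distinction concrete, here is a minimal sketch with invented numbers – nothing below is data, only arithmetic on the definition above:

```python
# Production vs. productivity, per the distinction above.
# Every number is invented; the shape of the comparison is the point.

def productivity(total_value, total_labor_hours):
    """Productivity = value of the average hour of labor."""
    return total_value / total_labor_hours

base = productivity(1_000_000, 50_000)                # 20.0 per hour

# More goods from proportionally more hours:
# production up, productivity flat.
more_production = productivity(2_000_000, 100_000)    # still 20.0 per hour

# A paradigm jump: the same hours now yield twice the value per hour.
paradigm_jump = productivity(2_000_000, 50_000)       # 40.0 per hour

print(base, more_production, paradigm_jump)
```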

The classic example of a productivity jump is the introduction of mechanized farming. Mechanized farming has allowed such yield (per hour of labor) that we can afford to feed all of the people with less than 4 percent of us spending our labor on food production. This single fact freed up the remaining 96 percent of us to do other things.

Unfortunately, just as the rest of the world got to the point where it could contribute capital fuel to our economic engine, we ran out of ways to put that capital to productive use.

Holy crap! China and Brazil and India and the Philippines and, and, and, they all have some extra money to pour into global capital markets, and just when they do, we run out of productive ways to spend it!

Has any greater opportunity ever presented itself to a people unprepared to take advantage of it?

So what did the west do instead? We faked it. We gladly accepted the money and spent it on crazy layered real estate loan stocks and layered stock schemes that were nothing but new plays on the same age-old shyster speculation scheme that has substituted for capital investment every time real productivity is elusive. And the real estate boondoggle isn't the first of our fakes. We started with energy deregulation, and when the inevitable crash happened there, we moved quickly on to the dot-com boondoggle, and when that crashed, we faked it again with real estate and "mortgage backed securities". And now this crash has caused a global economic shock wave, leaving double-digit unemployment in the strongest economies and a fallout into the third world that we are only now beginning to acknowledge.

And how did we respond to the mortgage securities crash? By faking it again of course. We said, "Sure we acted like shysters, but that was the past, we have changed, give us your money and we will spend it wisely this time!" And what wise thing did we do with that money? Did we go looking for the root of the problem? Did we attempt a wholesale rethinking and scientific understanding of the economy and what makes it work? Hell no! We sent quants back to their million dollar basements and told them to be better, bigger, and faster shysters! And what they came back with was a thing called "micro-second trading".

Micro-second trading is stock trading like any other stock trading, except that it is done, and can only be done, by high-speed computers with exclusive access to the main trading computers operated by the various exchanges (NYSE, NASDAQ, Nikkei, etc.) under the auspices of the Securities and Exchange Commission and other international policy bodies. Micro-second trading exploits a loophole in trading regulation that allows certain exclusive trading organizations access to trading data before the trades are made. Yes, you read that correctly. Insider trading on steroids. The loophole allows this peek into the future before it happens, but only for a fraction of a second, and only, of course, to those few companies who have somehow gained physical access to space in the computer server farms that operate the stock exchanges. Imagine what kinds of profits on futures trading you could make if you knew exactly what the future was.

When you read that the trading firms that lost the most money at the end of the real estate fiasco are currently giving out hundreds of millions of dollars in bonuses to their employees, and you wonder how they are doing it, where they are getting the money, well now you know.

And all of it, each of these fake investment schemes, one after the other, happened exactly and only because we, the most powerful, most educated, most economically potent people who have ever lived… well, we ran out of ideas. We came to a wall. We hit the end of the current paradigm and either couldn't figure out how to jump to the next one or weren't as a culture prepared to think in paradigm jumping terms.

I suspect that humans are just not very smart, that en masse we are limited in the exact ways that the current circumstance and our reaction (or lack thereof) so dramatically illuminates. That so many of us exist, that all of us are so well educated, that we live in and by so protected and fecund an infrastructure, and that not one of the billions of us was in a position to see this happening and plan around it – it tears at credulity.

Or maybe we are all of us so greedy and shortsighted that misfortunes like this happen despite the obvious cognitive capacity of our species? Either way, whether we are more in it or of it, we have to react to it. We have to do something to get productivity up and running again. And to do this we have got to build an understanding of economic systems big enough to include and predict the foibles of which we have recently been a part. Until we have such an understanding, we will not be able to avoid future missteps as destructive and anti-productive as the bubble/bust cycles that have so plagued the last 15 years.

And what would it have looked like if we had been prepared to productively spend this windfall of new investment money rushing towards us? What capitalization opportunities, had they been in place, would have allowed us to avoid the malevolently inventive 'creative financial products' that led directly to the string of bubble/bust cycles that have so devastated the global economy? What, one might reasonably ask (though few have), would or should the next (and long overdue) productivity paradigm look like?

I have invented a word, "productclivity" to describe a general framework of a scaffolding of a shadow of a description of an outline of the criteria that might be used to judge a new productivity paradigm. Productclivity is the propensity of a system to produce productivity. It is to economics what fitness is to evolution. In the longest run of time, it doesn't so much matter if you have a good set of legs as it matters that you have a good set of leg building algorithms… a good set of adaptation optimization algorithms, even better. Same goes for economic systems. Any successful scheme (mechanized farming, electric power grid, broadcast television, general purpose personal computing, cell phones, etc.) pales in productclivity comparison to things like public education and national or international research and development programs. Such systems tend to produce a continuity across productivity epochs.

Pop-economists and economy pundits are fond of terms like "the multiplier effect", which is meant to describe investment and business activity that spawns an outgrowth of other investment and business activity. Investments in transportation and education and communication infrastructures are said to have this special something that produces productive systems that could not have been anticipated and would not have existed in their absence.

So let's look at the current paradigm. We already have mechanized farming. We have a decent communication system. We have a world class materials market and the transportation system to get those materials anywhere in the world in less than a week (a day if you are willing to pay for air freight). We have a decent data network (the internet) and the gateways, switchers, data storage and servers to make it all zing. We have programmable computers built into the fabric of every part of our daily lives and the things that make our lives long and rich. Through trickier and trickier programming we have automated our manufacturing; factories have become robots in themselves. The result has been a level of productivity unheard of in human history. It has given us time to screw around and the products and services to upgrade our "down time" to the level of "entertainment" and "leisure". What is left? What more could there be? The simple fact that the question seems so reasonable is evidence that we are in this particular problem for the full count.

But there are hints and there is hope that more might come, that there is still something of worth, something "Not unbecoming men that strove with Gods" (Tennyson). We have the machines, the communication channels, the materials, the transportation, the manufacturing, the educated labor, the social infrastructure (or, um, a bill of rights anyway), and the standard plumbing, mini-malls, and big box stores… everything one might need if one were packing for a trip to next-paradigm land. Which is another way of saying that we know and can list all of the things that make up the current infrastructure, all of the things that work, all of the things that brought us this far, all of the things the next productivity paradigm will need, and won't be.

And this is where I am obligated to insert a cautionary explanation. New paradigms do not, though they are frequently accused of it, replace that which came before them. But wait, you might say, everyone can come up with a counter-example or two. Horse drawn carriages were replaced by automobiles… right? Paper mail has been replaced by email (or email is doing a good job trying). The computer has replaced ledger sheets. Yes, yes, and yes. But none of these qualify, in my estimation, as paradigm shifts. The fact that they look like paradigm shifts only results in confusion. In all three cases, the introduction of a new tool, process, or technology expedites, or in some other significant way improves, a solution that already existed before. The car, like the buggy, accepted human passengers and transported them from one site to another. The same transform can be applied to most so-called "paradigm shifts". Better ain't different. Paradigm shifts, then, are caused by solutions to problems more fundamental than the problems that structure current solutions. If, for instance, an affordance were invented that gave people all of the benefits of being somewhere else, without actually being transported there, well, that technology would qualify as a true paradigm shift.

Does double digit growth in second and third world economies mean they have jumped paradigms? Scooped the first world? Beat us at our own game? No. Not even close. Let's not lose perspective. No need for hysteria or blame. Non-western economies don't have to jump paradigms; they have only to adopt the paradigms the west has already defined and refined. Their economies are growing fast, true, but this is the simple result of the efficiencies and advantages of moving their production and infrastructure up to modern industrial levels and then amortizing these advantages across the huge number of people (hours of labor) at their disposal. Nothing magic, just productivity gains multiplied across huge populations.

In the west, we face a task far more complex. We can not play the same catch-up game being played in the 'rest of the world' (ROW). We already played that game… played it to the limits possible within this paradigm. No, now we have to invent the next one, the next (productivity exploding) paradigm. Nothing else will work. This should be obvious by now. Even the notion that we could tap into old-paradigm gains in that part of the world with rapidly growing populations of consumers is wrong-headed and won't work. Yes, the global consumer base is expanding; yes, this is happening in the second and third world; yes, the numbers are astonishing. But in order to sell to the rest of the world, we have to produce at efficiencies only allowable through the use of discounted ROW labor and materials rates. The only way we can get back to growing the rate of growth in the first world, our world, is by playing the paradigm-shift game.

And what does that mean? Given the current state of the first world, what are the parameters that would define a true paradigm shift? We start with the obvious statement: the next paradigm can't be anything we already have or do. More than that, it can't be an extension of or improvement on what we already have or do. It has to solve problems that are more fundamental to, that actually cause, the problems that provide the market and demand upon which the current economic paradigm feeds.

But before we go there, let's add another few items to the "how to tell a real paradigm shift in productivity from simple extension of the current paradigm" list.

The first way to tell that what you are doing isn't a true solution is when you notice that the more you put in, the less you get out… this is the law of diminishing returns: gains in productivity become more expensive even as the energy or investment required by each new scheme increases. Shale oil, right-now manufacturing, team building exercises, on-line shopping, hybrid drive technology, cloud computing, web 3.0, ultra-sized wind turbines, commuter freeway lanes, customer relations management software, etc. These are all great examples of what I would call skimming; finding ingenious ways to eke out the last few percentage-point gains available within the current paradigm. Where a new paradigm should result in fold increases in productivity, skimming only yields percentage gains. These drop off to nothing as the full potential of that paradigm reaches its natural zenith.
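
The skimming-versus-jumping distinction can be sketched with an assumed logarithmic returns curve – a standard stand-in for diminishing returns. The numbers are invented; the shape is the point:

```python
import math

# 'Skimming' vs. a paradigm jump, under an assumed logarithmic
# returns curve. Illustrative only.

def in_paradigm_output(investment, ceiling=100.0):
    """Returns flatten as investment climbs toward the paradigm's ceiling."""
    return min(ceiling, 20.0 * math.log1p(investment))

# Each doubling of investment buys a smaller gain (skimming):
for spend in (10, 20, 40, 80, 160):
    print(f"invest {spend:>3} -> output {in_paradigm_output(spend):5.1f}")

# A paradigm jump doesn't climb this curve; it swaps in a new curve
# with a fold-higher ceiling (an illustrative factor of five):
def next_paradigm_output(investment):
    return min(500.0, 100.0 * math.log1p(investment))
```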

And then there is the law of scales. This happens as a paradigm matures, when all the easy problems have been solved, and the only remaining solutions end up being solutions that work in ever smaller and ever more isolated domains. Come to think of it, scale is a good criterion upon which to compare true paradigm shifts with the range of lesser influencers. A true paradigm shift will expose entirely new markets, new sources, new methods, new materials, new activities and uses… solutions that scale both vertically and horizontally without apparent limit (at least at first). Stove-piping, the tendency of solutions to only work vertically and for these vertical markets to become narrower and more isolated over time… these are good indicators that the current paradigm is greying, reaching maturity… that all of the easy-to-reach fruit has been picked, that the investment in taller ladders can not be amortized across any other markets (fruit picking ladders have become so specialized that they are useless to firemen and painters).

Conversely, true paradigms tend to be expansive, to open things up, and as I have suggested earlier, they tend to result in what many call a "multiplier" effect; from their seed, other opportunities bud and grow and multiply simply because the paradigm provides such a rich foundation for effective novelty.

OK, back to guessing. What will the next paradigm be? If it has to be rooted at a deeper foundational level than current solutions, how much deeper? What does deeper mean?

In science, the trend has been towards theories (understandings) of the laws of nature, and of the ways these laws are inherited from yet deeper and more general layers of meta-laws. Work in pure information theory is providing what might eventually serve as the foundation of all, indeed any, natural systems. Understanding the deepest layer(s) of nature allows one to predict, indeed derive, all of the laws above it (and show why they are the only possible laws). In business, this deeper understanding is phrased "knowing your market". Same thing. The deeper you tap into the causal strata that support a system, the more control you have over, and knowledge of, the whole system. The deeper you tap into the causal strata, the more of that system's complexity becomes salient and controllable. In science this phenomenon is called "elegance". A theory is elegant to the extent that it can comfortably inform and predict the behavior of a large number of other theories or theories of theories. Elegance, it is argued, is a basic attribute of any evolving system. The universe has evolved, therefore it is fundamentally elegant. Any physics, indeed any descriptive abstraction of the universe, will therefore be accurate to the extent that it is an accurate analog of the elegance of the system it describes, the universe itself, and by extension, the evolutionarily stacked layers that describe its history.

But getting to elegance isn't easy. The deeper one digs into any historical or causal strata, the less like today do things appear. The promise, though, is that an understanding gets more useful and profound the more deeply it is rooted. It might be easier to study legs, but you can get more information faster if you study the DNA snippets that propagate appendages across all of biology. Not only that, but an understanding of DNA in general will allow you to know things about legs that allow you to know things about ears and hearts and intestines and the production of enzymes and metabolism and growth and reproduction and evolution and information and energy and ultimately the topology of causality itself, the arc of possibility.

But digging deeper comes at a cost. Armed only with a good ruler and notebook, you can learn lots of stuff about legs; if you want to go the next bit and understand genetics, you had better build yourself a bunch of awesomely fast computers connected to some awesomely big data storage devices. And you had better find more and more robust ways of handling greater and greater amounts of complexity. The end result might be elegant descriptions, but that elegance comes at great cost. The cultural and physical armature humans have had to build in order to reach back into the causal structure of elegance is anything but elegant. Look for instance at the Large Hadron Collider. This machine is so complex that some theorists have questioned the likelihood that it will ever actually work, or work long enough to acquire any reliable or demonstrable and supportable data. A whole branch of logic is likewise concerned with the limits of knowledge and thus the limits of abstraction. The data stream flowing from the fully operational LHC will exceed 300 gigabytes per second. Just mastering the technology and logic to store a stream that fast and wide is a challenge no previous generation could have met. Sifting through the resulting mountain of data requires a logical armature of unprecedented complexity.

All of the low hanging fruit has been picked. We have measured all of the easy-to-measure stuff. We have stored and organized all of the easy to measure, store, and organize measurements. We have built all of the equipment that is easy to build. Even most of the stuff that is hard to build. We are increasingly, as a species, up against a complexity wall that keeps us from progress in almost every field of human endeavor.

The next productivity paradigm will have to have something to do with breaking through this wall, something to do with finding some understanding of the very shape of complexity and using that understanding to build a stable tunnel under, ladder over, or door through the complexity that vexes progress today.

Among the myriad discoveries made over the past century, three of the most profound are the standard physical model, information theory, and evolution. Together these describe the relationship of matter to energy and to space and time, the limits of information, the equivalency of energy and information, the way both degrade over time, and how this guarantees the direction of time. All of which puts bounds around, and relates, reality and our abstractions of reality. Physics and cognition. The territory and the maps we can make of it. And, importantly, the physicality of maps themselves. But the complexities involved in the manipulation of this level of understanding push humans to the limits of their capacity (and beyond). Fewer and fewer of us have the cognitive wherewithal to make effective use of what is known.

What is needed, what is desperately needed, is an automation of discovery and cognition. We need machines that do more than just help us acquire and organize the measurements we take. We need machines that work alongside us, extending cognition into realms beyond the easy reach of human minds. We've automated everything else. Cognition is all that is left to automate.

This is the next productivity paradigm. Like it or not, and from personal experience introducing these topics, most people decidedly don't, we have to push forward towards the automation of cognition.

Until then, and in the event we just can't stomach the idea of machine cognition, our only viable choice is to redirect the capital we can't viably spend – and spend it on infrastructure projects in the second and third world, where in-paradigm solutions still result in productivity gains. Not to make money, mind you, but to accelerate the flattening of the world's markets and ready them for the eventual global jump to the next paradigm (when we can stomach it). This is especially true when that money is mostly from the emerging world in the first place. We can't fake it any more; the fake we've been doing is killing the world's economy, the fake is destroying our hard-won reputation as innovators and capitalists, the fake is defacing the very idea that is capital.

Randall Reetz

* The International Monetary Fund estimates current Gross International Product (the annual sum of all human labor) at about 71 trillion dollars and rising rapidly. The greatest growth in relative wealth comes of course from the third world, where wealth building is more rapid. Because the vast majority of the world's population hails from the second and third world, small increases in individual wealth result in huge changes in gross wealth. As a result, the new money in the global investment system is coming from the third and second world. The question, the problem, the crisis is the first world's inability to make productive use of this new money.

Ford Doing Better Than Expected (Thanks GM)



I have been watching and reading the news about Ford's better than expected financials. Each of these reports compares Ford's success to the demise of GM, but none of them link the two. Look, people still need to drive around, and cars still wear out. Which means new cars need to be built and sold. Cars not being sold by GM are cars being sold by other manufacturers. It is that simple. Hydraulic, really.

In the 1940s, an economist named A.W.H. (Bill) Phillips built a hydraulic computer that pumped colored water through a system of tubes and reservoirs, controlled by metered valves and cams, to simulate national and international economic monetary flow. It was called the "Moniac".

Fewer than twenty Moniacs were built, but they were purchased and put into active service by some influential organizations, including the British and U.S. governments, international banks, and research institutions like Harvard and the London School of Economics (which subsequently hired Phillips as a professor). It was built as a teaching tool, but found to be an accurate enough predictor of actual economic activity that it looks as if it was employed by governments to play "what if" simulations and inform monetary, fiscal, and tax policy.

The tie-in to Ford should be obvious. Fluids are incompressible – no matter how much pressure you apply, they always take up the same volume. Push a little over here and the fluid has to compensate by leaking out somewhere the system offers the least resistance. When people hear that a car company is struggling to such an extent that it is shuttering entire divisions (Saturn), they wonder who will honor their new car's warranty, and they go elsewhere to do their car buying. That elsewhere is foreign manufacturers, and if it isn't, it is the other domestic manufacturer, Ford.
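
The hydraulic intuition fits in a few lines of code. Total demand behaves like an incompressible fluid: a failed maker's volume doesn't vanish, it flows to the survivors. The market shares below are invented for illustration:

```python
# Conserved demand redistributing among car makers when one fails,
# pro rata to the survivors' remaining shares. Shares are invented.

def redistribute(shares, failed):
    """Remove a failed maker; push its volume to the survivors pro rata."""
    freed = shares.pop(failed)
    total = sum(shares.values())
    return {maker: s + freed * (s / total) for maker, s in shares.items()}

market = {"GM": 0.25, "Ford": 0.20, "Toyota": 0.30, "Others": 0.25}
print(redistribute(dict(market), "GM"))
# Ford's share rises without Ford doing anything differently.
```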

GM consumers are probably more "buy American" than other auto buyers. Additionally, US reporters are not as likely to track GM exodus to Toyota, Nissan, or Volkswagen.

The comparison of Ford to the other big US manufacturer is too seductive a sound bite, especially considering the government bailout of GM.

But wouldn't it be nice to at least ask the question… Is there a connection, perhaps, between GM's troubles and Ford's (comparative) success?

Come to think of it, wouldn't the monetary-flow-as-hydrodynamics model also work just as well as an explanatory model when applied to the last three boom/bust bubbles (energy speculation, dot-com valuation, and the securitized real estate boondoggle)? All three came in rapid and linked succession as the investment pressure flowed from the failure of the previous to the "success" of the next. All three can be traced to brand new second and third world money being invested for the first time in first world markets… markets without the requisite productive machinery to support the value of the investment being heaped upon them.

And that, my friends, is as salient an explanation as you will ever find for why booms happen and why they go bust. New money coming unannounced into an old market that can not effectively scale its productivity to the demands of the new capitalization levels.

A boom/bust event is an indication of the need for a new production paradigm. A boom/bust cycle says: the present system can not be scaled beyond its current level… adding capital beyond this level will not yield commensurate growth.

At such times, the whole system is vulnerable, and this vulnerability will not go away until a new technology or infrastructure ushers in a bridge to a new paradigm in which new capital will effectively yield new productivity.

Randall Reetz



Some background info on the Moniac.

http://www.youtube.com/watch?v=k_-uGHWz_k0

http://www.rbnz.govt.nz/research/bulletin/2007_2011/2007dec70_4ngwright.pdf

http://technology.open.ac.uk/t...bissell/Phillips.pdf

http://en.wikipedia.org/wiki/MONIAC_Computer

http://www.nzier.org.nz/Site/about/NZIER_Moniac.aspx#H51750-1

How Engineers Get Thermodynamics And Information Theory All Wrong

There is probably no other area of higher education where what is taught is so out of step with what is in fact valid. Engineering programs the world over, in the interest of simplicity, teach thermodynamics and information theory towards practicality and real-world solutions. What could be wrong with that? What is the negative side of practicality?

Well, usually, nothing. In most cases, cutting corners doesn't invert the causal bedrock upon which engineering is based. The field equations used to abstract relativity do not usurp or demand a reformulation of E=mc^2. Neither do Feynman diagrams mess with or disrupt an accurate understanding of quantum electrodynamics. But in thermodynamics and information theory, the practical methods taught and used by engineers are based on assumptions that have resulted in an almost universal and wholesale misunderstanding of the base meaning and the causality that animates the bedrock of energy and information dynamics.

In thermodynamics, the problem is probably best described by the idea of "the perfect wall". To cut corners, engineers are taught arithmetic tricks that work in the usual atmospherically-dense and energy-conductive environments in which humans live. Unfortunately, these computational short-cuts do far more than introduce the usual errors of computational fidelity; they actually reverse the meaning of thermodynamics as a science. Thermodynamics as a science is about the way systems interact with the systems they are embedded within. But more than that, thermodynamics asserts the absolute necessity and inevitability of the interaction and transference of energy that will result from ANY change within or without a system.
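
The "perfect wall" shortcut can be made concrete with Fourier's conduction law: heat flow through a slab is Q = kAΔT/d, and it only vanishes if the conductivity k is zero. No real material gets there. A sketch, with handbook-ish conductivities used purely for illustration:

```python
# Fourier's law of conduction: Q = k * A * dT / d. The engineering
# shortcut sets k = 0 (a 'perfect wall') to decouple a system from its
# surroundings. Real materials only approach that limit.

def heat_flow_watts(k, area_m2, delta_t_k, thickness_m):
    """Steady-state conductive heat flow through a slab."""
    return k * area_m2 * delta_t_k / thickness_m

wall = dict(area_m2=10.0, delta_t_k=20.0, thickness_m=0.1)

print(heat_flow_watts(k=0.04, **wall))  # fiberglass batting: ~80 W leaks through
print(heat_flow_watts(k=0.02, **wall))  # silica aerogel, near best-in-class: ~40 W
print(heat_flow_watts(k=0.0, **wall))   # the 'perfect wall': exists only on paper
```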

It should therefore be obvious that the teaching and use of practical methods that sidestep the central tenet of a field of science will have an unusually strong and adverse effect on the understanding of that science. Whole generations of engineers are being unleashed into the world with an absolutely backwards understanding of the very dynamic that universally informs all other dynamics. This is more than unfortunate. The growing population of scientists and engineers that march forward from universities with a backwards understanding of thermodynamics interferes with progress in all fields of science.

The same can be said of thermodynamics' sister, information theory. Because everything we do is increasingly keyed to progress in computation, the mis-map between the causal truths that inform information theory and the practical methods taught in their stead may have a much larger and more deleterious impact on our potential as a species.

Where thermodynamics dictates the way energy leaks across the spatial dimensions, information theory dictates how information leaks across time. Purists will say that energy and information are equivalent. Ultimately, this is true. So when energy is measured in its more general form, as information, as bits, then information theory also dictates the lossy transfer of energy across time.

Because the two disciplines show how no system exists independent of other systems, we must concern ourselves with how systems are related through this leaking of energy and information. What can be said absolutely about the way information and energy set up directional relationships between systems with regard to space and time?

The Butterfly Effect; Isn't
In the non-academic world, causality suffers a different abuse altogether. It is tempting for people to take notions of system interconnectedness to ridiculous and self-defeating extremes. We lose ground when the perfectly valid logic showing why a system can never act in isolation is illogically extrapolated to, "All systems affect all other systems equally". Making exceptions for speed-of-light (event cone) isolation, it can indeed be shown that all gravitational systems affect all other gravitational systems… the movement of a butterfly in South America will indeed affect (however infinitesimally) a dam in Montana. But if one were to rank, by degree of effect, all of the systems affecting the gravity fields surrounding a dam in Montana, a butterfly in Argentina would be very, very low on the list. Even if one is butterfly obsessed and wants to ignore the one dog on the corner who has more mass than all of the butterflies in the Rocky Mountains, there are tens of millions of butterflies closer, each of whose infinitesimal gravitational pull would nonetheless have a larger causal effect on our poor dam's future.
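To put rough numbers behind that ranking, here is a toy comparison (the masses and distances are invented for illustration) using nothing but Newton's a = Gm/r^2:

```python
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gravitational_acceleration(mass_kg, distance_m):
    """Newtonian acceleration a = G*m/r^2 contributed by a mass at a distance."""
    return G * mass_kg / distance_m ** 2

# Invented figures: a half-gram butterfly ~9,000 km away vs. a 20 kg dog 10 m away.
butterfly = gravitational_acceleration(5e-4, 9e6)
dog = gravitational_acceleration(20.0, 10.0)

print(butterfly)        # ~4.1e-28 m/s^2
print(dog)              # ~1.3e-11 m/s^2
print(dog / butterfly)  # the dog outranks the butterfly by ~16 orders of magnitude
```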

This particularly populist breed of cause-and-effect mis-mappings is not the focus of my essay. As wacky as pedestrian notions become, they probably can't derail scientific progress to any great degree. But when entire generations of science students are raised on incorrect understandings of basic science, we are all in trouble. This is especially devastating when the topic of delusion is as fundamental to the causal stack as are thermodynamics, energy, and information.

"The law that entropy always increases, holds, I think, the supreme position among the laws of Nature. If someone points out to you that your pet theory of the universe is in disagreement with Maxwell's equations - then so much the worse for Maxwell's equations. If it is found to be contradicted by observation - well, these experimentalists do bungle things sometimes. But if your theory is found to be against the second law of thermodynamics I can give you no hope; there is nothing for it but to collapse in deepest humiliation."

Sir Arthur Stanley Eddington, The Nature of the Physical World (1927)

What determines the causal morphology and behavior of the hierarchy of influence (dictated by thermodynamics and information theory)? If we define the shape of causality we define process itself, and by extension, the shape of reality.

Information theory specifies ways to measure the capacity of a storage matrix and the reliability of a communication channel. But all of its metrics are agnostic to the meaning encoded and transmitted. Each bit and each bit pattern are treated as equal. Only frequency and order count; not meaning, not saliency, not fidelity of representation.
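That agnosticism is visible right in Shannon's formula, H = -Σ p·log2(p), which consumes symbol frequencies and nothing else. A quick sketch (first-order entropy doesn't even see order; what it never sees, at any order, is meaning):

```python
from collections import Counter
from math import log2

def shannon_entropy(text):
    """Bits per symbol, computed from symbol frequencies alone."""
    n = len(text)
    return -sum((c / n) * log2(c / n) for c in Counter(text).values())

sense = "the cat sat on the mat"
nonsense = "".join(sorted(sense))  # same symbols, all meaning destroyed

print(round(shannon_entropy(sense), 6))     # ~3.0 bits/symbol
print(round(shannon_entropy(nonsense), 6))  # identical -- meaning never entered in
```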

What would you have to fold into or add to information theory and thermodynamics in order to measure meaning and saliency? Is it there already? Are we missing something in our approach to and use of an already semantically robust set of laws and equations?

Several years ago, the mathematician Stephen Wolfram (founder of the maths software "Mathematica") wrote a book called "A New Kind Of Science". It is a dense and repetitive work over twelve hundred pages long. I tried to get through it and gave up. It feels like a giant fractal, built of some obscure philosophy based on fractals. Not feeling OK with my initial critique, I forced myself to come up with a theory, any theory, stated or not, that I could attribute to his work. The best I could do was to suppose that Wolfram was trying to say that science had historically used equations to understand the components of nature that could be accurately described by equations, but the really interesting things about nature were iterative and open-ended; they required logical descriptions that demanded continuous computation. Too bad he couldn't have just said that.

At about the same time, the sociobiologist Edward O. Wilson wrote a book called "Consilience". He argued for a cross-discipline coming-together of the various branches of scientific exploration, a holism, for the advantages of looking at nature (and those who study it) as the one large and interdependent super-system it is.

Of course dynamic, ever-changing, "evolving" systems are systems that simple equations (calculated once) will never accurately represent. Traditional thermodynamics and information theory engineering maths and methods work best on simple systems that are, or can be thought of as, repetitive and isolated. The conditions (input energy, output work) might change, but the conditions of the conditions never do. At any sufficiently salient level, real systems are never that well behaved or that removed from their environments or situations.

Real systems are direction-of-time dependent. It is more than ironic that the one scientific law that defines exactly why causal systems are non-reversible is used primarily by engineers who choose to use it in ways that ignore the direction of time it demands. I can forgive Newtonian or relativistic or quantum physicists for ignoring the asymmetry of time… their maths don't require it. But thermodynamicists? Information theorists?

[more to come…]

Randall Reetz

Cognition Is (and isn't):

What is really going on in cognition, thinking, intelligence, processing?

At base cognition is two things:

1. Physical storage of an abstraction
2. Processing across that abstraction

Key to an understanding of cognition of any kind is persistence. An abstraction must be physical and it must be stable. In this case, stability means, at minimum, the structural resistance necessary to allow processing without that processing unduly changing the data's original order or structural layout.

The causal constraints and limits of both systems, abstraction and processing, must work such that neither prohibits or destroys the other.

Riding on top of this abstraction storage/processing dance is the necessity that a cognition system be energy-agnostic with regard to syntactic mapping. This means that it shouldn't take more energy to store and process the string "I ate my lunch" than it takes to store and process the string "I ate my house".

Syntactic mapping (abstraction storage) and walking those maps (abstraction processing) must be energy agnostic. The abstraction space must be topologically flat with respect to the energy necessary to both store and process.
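A trivial demonstration of that flatness (a sketch about encodings, not a claim about any particular memory hardware): in a fixed-width encoding, the true sentence and the absurd one cost exactly the same to store and the same to traverse.

```python
lunch = "I ate my lunch"
house = "I ate my house"

# Storage cost in a fixed-width encoding depends on length alone,
# not on whether the statement is plausible, true, or absurd.
print(len(lunch.encode("utf-8")))  # 14 bytes
print(len(house.encode("utf-8")))  # 14 bytes

# Processing is equally indifferent: comparing either string
# touches the same number of bytes.
print(lunch == house)  # False -- but deciding that cost the same energy
```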

Thermodynamically, such a system allows maximum variability and novelty at minimum cost.

"What ifs"… playing out, at a safe distance, simulations and virtualizations of events and situations which would, in actuality, result in huge and direct consequences, is the great advantage of any abstraction system. A powerful cognition system is one that can propagate endless variations on a theme, and do so at low energy cost.

And yet. And yet… syntactical topological flatness carries its own obvious disadvantages. If it takes no more energy to write and read "I ate my house" than it does to write or process the statement "I ate my lunch", how does one go about measuring validity in an abstraction? How does one store and process the very necessary topological inequality that leads to semantic landscapes… to causal distinction?

The flexibility necessary in an optimal syntactic system, topological flatness, works against the validity mapping that makes semantics topologically rugged, that gives an abstraction semantic fidelity.

This problem is solved by biology, by mind, through learning. Learning is a physical process. As such it is sensitive to the direction of time. Learning is growth. Growth is directional. Growth is additive. Learning takes aggregate structures from any present and builds super-aggregate structures that can be further aggregated in the next moment.

I will go so far as to suggest that definitions of both evolution and complexity hinge on some metric of a system's ability to physically abstract salient aspects of the environment in which it is situated. This abstraction might be as complex as experience stored as memory in mind, and it may be as simple as a shape that maximizes (or minimizes) surface area.

A growth system is a system that cannot help but be organized ontologically. A system that is laid up through time is a system that reflects the hierarchy of influence by which its environment is organized. Think of it this way: the strongest forces affecting an environment will overwhelm and wipe out structures based on less energetic forces. Cosmological evolution provides an easy-to-understand example. The heat and pressure right after the big bang only allow aggregates based on the most powerful forces. Quarks form first; this lowers the temperature and pressure enough for subatomic particles, then atoms. Once the heat and pressure are low enough, once the environmental energy is less than the relatively weak electrical bonds of chemistry, molecules can precipitate from the atomic soup. The point is that evolved systems (all systems) are morphological ontologies that accurately abstract the energy histories of the environments from which they evolved. The layered grammars that define the shape and structure (and behavior) of any molecule reflect the energy epochs from which they were formed. This is learning. It is exactly the same phenomenon that produces any abstraction and processing system. Mind and molecule, at least with regard to structure (data) and processing (environment), are the result of identical process, and as a result, will (statistically) represent the energy ontology that is the environment from which they were formed.
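Here is a toy model of that precipitation sequence (the binding energies are rough order-of-magnitude textbook values, and the cooling schedule is invented): a layer of structure can persist once the ambient thermal energy kT drops below the energy of the bonds that hold it together.

```python
K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in electron-volts per kelvin

# Rough per-bond binding energies, order of magnitude only:
STRUCTURES = [
    ("hadrons (quark binding)",    2e8),  # ~200 MeV, the QCD scale
    ("nuclei (nuclear binding)",   8e6),  # ~8 MeV per nucleon
    ("atoms (electron binding)",   1e1),  # ~13.6 eV for hydrogen
    ("molecules (chemical bonds)", 1e0),  # ~1 eV
]

def survivors(temperature_kelvin):
    """Structures whose binding energy exceeds the ambient thermal energy kT."""
    kT = K_BOLTZMANN_EV * temperature_kelvin
    return [name for name, e_bind in STRUCTURES if e_bind > kT]

# Each cooling epoch admits one more layer of aggregation:
for temp in (1e12, 1e10, 1e5, 1e3):
    print(f"T = {temp:.0e} K: {survivors(temp)}")
```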

It is for this reason that the ontological structure of any growth system is always and necessarily organized semantically. Regardless of domain, if a system grew into existence, an observer can assume an overwhelming semantic relevance that differentiates those things that appeared earlier (causally more energetic) from those that appeared later (causally less energetic).

This is true of all systems. All systems exhibit semantic contingency as a result of growth. Cognition systems included (but not special). The mind (a mind, any mind) is an evolving system. Intelligence evolves over the life span of an individual in the same way that the proclivity towards intelligence evolves over the life span of the species (or deeper). Evolving systems cannot be expressed as equations. If they could, evolution wouldn't be necessary, wouldn't happen. Math-obsessed people have a tendency to confuse the feeling of the concept of pure abstraction with the causal reality of processing (that allows them to experience this confusion).

Just as important, data is only intelligible (process-able, representative, model, abstraction) if it is made of parts in a specific and stable arrangement to one another. The zeroth law of computation is that information or data or abstraction must be made of physical parts. The crazies who advocate a "pure math" form of mind or information simply sidestep this most important aspect of information. This is why quantum computing is in reality something completely different than the information-as-ether inclination of the dualists and metaphysics nuts. Where it may indeed be true that the universe (any universe) has to, in principle, be describable, abstractable, by a self-consistent system of logic, that is not the same whatsoever as the claim that the universe IS (purely and only) math.

Logic is an abstraction. As such it needs a physical realm in which to hold its concepts as parts in steady and constant and particular relation to each other.

My guess is that we confuse the FEELING of math as ethereal, non-corporeal pure-concept with the reality, which of course necessitates both a physical REPRESENTATION (in neural memory or on paper or chip or disc) and a set of physical PROCESSING MACHINERY to crawl it and perform transforms on it.

What feels like "pure math" only FEELS like anything because of the physicality that is our brains as corporeal machinery as they represent and process a very physical entity that IS logic.

We make this mistake all day long. When the only access to reality we have is through our abstraction mechanism, we begin to confuse the theater that is processing with that which is being processed, and ultimately with that which the processed thing represents.

Some of the things the mind (any mind) processes are abstractions, stand-ins for other external objects and processes. Other things the mind processes only and ever exist in the mind. But that doesn't make them any less physical. Alfred Korzybski is famous for declaring truthfully, "The map is not the territory!" But this statement is not logically similar to the false declaration, "The map is not territory!". Abstractions are always and only physical things. The physics of a map, an abstraction system, a language, a grammar, is rarely the same as the physics of the things that map is meant to represent, but the map always obeys and is consistent with some set of physical causal forces and structures built of them.

What one can say is that abstraction systems are either lossy or they aren't useful as abstraction systems. The point of an abstraction is flexibility and processing efficiency. A map of a mountain range could be built out of rocks and made larger than the original it represents. But that would very much defeat the purpose. On the other hand, one is advised to understand that the tradeoff of the flexibility of an effective map is that a great deal of detail has been excluded.
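A toy version of that tradeoff (grid size and downsampling factor invented for illustration): coarsening an elevation map makes it a hundred times cheaper to store and process, at the cost of detail that is knowingly thrown away.

```python
def downsample(grid, factor):
    """Keep every `factor`-th sample in each direction -- a deliberately lossy map."""
    return [row[::factor] for row in grid[::factor]]

# A fake 1000x1000 elevation grid standing in for a mountain range:
full = [[(x * x + y * y) % 997 for x in range(1000)] for y in range(1000)]
coarse = downsample(full, 10)

print(len(full) * len(full[0]))      # 1,000,000 samples
print(len(coarse) * len(coarse[0]))  # 10,000 samples -- 100x cheaper, detail gone
```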

Yet, again and again, we ourselves, as abstraction machines, confuse the all too important difference between representation and what is represented.

Until we get clear on this, any and all attempts to even square up against the problem of machine intelligence will fail.

[more later…]

Randall Reetz

If It Doesn't Scale, It Isn't A Solution

[note: this post is a work in progress]

You've met the private school freaks. They can't believe anyone would put their kids in a public school. What with all of the riffraff, the minority students, the dropout rates, the low performance on standardized tests, the poor state of school grounds and facilities, the struggle for funding for special programs like music, art, and sports, the lack of emphasis on college preparation, etc. Man, look at that list! Those are some strong and obvious arguments against public school education. Or are they?

What people who advocate private or limited solutions fail to calculate is what I call the "boutique effect". If, for instance, there isn't enough money in a culture to give everyone a rich and safe childhood and education, it is entirely predictable that those who are given those resources will excel (at least when compared to the scores of the less fortunate masses).

However, there is a real societal cost to exceptionality, and this goes to my "solutions that don't scale" thesis. With regard to exclusive schools for the rich, that cost has to do with the fact that the parents of rich kids just happen also to be the people with the most political and economic influence. When their kids are not part of the public education system, neither is their protective passion, experience, knowledge… or influence. They don't care about public education. Why should they? Their kids are not dependent upon it.

From another angle, when money goes to elite institutions, it is not being put where it is needed most. Rich kids already live in informationally rich environments. Rich kids already live in safe and calm environments where learning works. Rich kids are much more likely to have direct access to positive role models… their environment is chock full of success stories. Rich kids are not as likely to live in families broken apart by drugs and prison terms and gang deaths and violence and people struggling with a second language or a culture that is not in step with the larger population. Rich kids have educated parents around them who can help them with their homework! Rich kids can afford to think about learning and succeeding; their world is devoid of the concentration-destroying stresses that poor kids deal with all day long.

And the whole reason people can get richer in this country than they can anywhere else is that our population is more competitive (or used to be) than every other country's. Why? Because we did this obscenely radical thing 150 years ago: we decided that everyone had a right to a publicly funded education! Educated people build factories and high-tech energy delivery systems, they build transportation systems, and they are more likely to participate in advancements and the types of change that increase productivity. Productivity is the key. If you can get more value, more product, out of the average hour worked by your population, you can produce more wealth. Education is the radical difference that gave America its high per-hour, per-person productivity; it is why the rich are rich and why they can afford to borrow from the equity that is American productive wealth, even though that borrowing actually destroys wealth on a national scale in the process.

But these are not factors that have anything to do with a valid comparison of private education vs. public education. These are strictly socio-economic factors. The problem is when desperate parents look at the performance divide that exists between public and private schools and conclude that public schools should do exactly what private schools do. Private schools could be significantly worse than public schools and still produce higher test scores, more college acceptances, fewer dropouts, and lower crime rates. All of the intangibles work in their favor. In fact, it is often true that private schools employ teachers with less education and training than their public school counterparts.

Private school programs tend to spend less time on basic subjects (math, reading, etc.) and get better results! That is why they can spend a higher percentage of their students' time on extra-curricular activities (music, art, sports, theater, community projects, etc.). Obviously, a public school would fail if it concluded that it should therefore shorten the instructional time devoted to math and reading. Private exclusive schooling is not a solution that scales. It is obvious that it is a solution that only works for a very small percentage of the population of a society, and that it works only at the expense of the country as a whole. There are lots of examples out there of countries with very good exclusive education systems and very, very poor economies embedded within extremely unstable social chaos dominated by poverty. Most central African nations play this game, as do Myanmar, the United Arab Emirates, South Africa (before apartheid was upended), Iran, North Korea, Brunei, etc., etc. Exclusivity is the norm in the poorest nations on Earth. If exclusivity worked, these would be the most productive nations.

Let's look at China. Until recently, from a strictly financial perspective, China has done all the wrong things: it has restricted entrepreneurship, it has restricted credit, it has promoted an exclusively top-down decision and influence structure… everything that works against the kind of fluid business environment that is attributable to a growing and healthy post-industrial economy. And yet… and yet, despite all of these huge shortcomings and mistakes, China succeeds like no other country. Why? Because it has spent an inordinate percentage of its wealth on its public education system. China's people are well educated by any global standard. Not some of its people. Not the exclusive elite. Everyone. And given China's huge population, that is a lot of educated people to compete en masse with the rest of the world. When they finally turn the last capitalist stone, when they finally create a legal structure to support personal liberty, property ownership, and unfettered personal expression, watch out! Even without these capitalist standards, China reaps enormous benefits from its non-exclusive infrastructural investments. Meanwhile, those of us in the west have all but forgotten which factors really matter, and which, in comparison, are just fluff.

But, if you happen to live in a nation that has paid attention to productivity, has produced wealth as a result of a fair and solid infrastructure (transportation, energy, safety, medicine, credit, agriculture, justice, and education… for everyone!), you can play the exclusive game in limited numbers, even though it destroys wealth on a national scale.

Now let's switch gears. Inoculation. How did inoculation get to be such a hotbed of superstitious thought? Inoculations have been blamed for hyperactivity, for cancer, for Asperger's and MS, for attention deficit disorder, even for AIDS, etc. What inoculations are never blamed for is the one thing they are most definitely and unequivocally guilty of… preventing the pandemic spread of disease! But inoculation programs only work when a certain minimum threshold of the population participates. Each vaccine/disease combination has its own special number… corresponding to a specific minimum percentage of the population that must be inoculated. If a smaller slice of the population is given the vaccine, an outbreak is certain. When a parent makes the decision not to get their kid vaccinated, what they are really doing is passing the responsibility and risk (if there is any) to the kids who do get vaccinated. They reap the rewards but pay none of the price!
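That "special number" has a standard textbook approximation: for a pathogen with basic reproduction number R0, the herd-immunity threshold is 1 - 1/R0 (a simplification that ignores vaccine efficacy, waning immunity, and population structure). A sketch with commonly cited R0 values:

```python
def herd_immunity_threshold(r0):
    """Minimum immune fraction needed to prevent sustained spread: 1 - 1/R0."""
    return 1.0 - 1.0 / r0

# Rough, commonly cited basic reproduction numbers:
for disease, r0 in (("measles", 15.0), ("polio", 6.0), ("seasonal flu", 1.5)):
    print(f"{disease}: at least {herd_immunity_threshold(r0):.0%} must be immune")
# measles ~93%, polio ~83%, flu ~33% -- every opt-out eats into that margin
```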

This is a great example of a solution that doesn't scale. While the vast majority of parents accept the shared responsibility, a few parents can get by without inoculating their kids. It is a solution. But it isn't a solution that scales. Obviously, it doesn't work if everyone chooses this solution. It doesn't even work if more than a few choose not to inoculate. This type of solution only works because others are not choosing to shirk their share of the responsibility (and risk). In fact, if more and more people choose the non-solution option, there will come a point where everyone suffers, even those who did take the vaccine. Most of these vaccines only work when exposure to the pathogen is very low-level. If a pandemic took hold, and if many people got really sick, high concentrations of the pathogen could overwhelm the immune resistance afforded through inoculation.

Any of this remind you of the "libertarian" platform? It should. The libertarian program is the very definition of a solution that doesn't scale.

[more to come…]

Randall Reetz

Social Media… Amplifying The Inner Sheep

In the early days of computing, those involved were pioneers, innovators, original thinkers, the super-passionate, driven by curiosity and the purity of adventure. These were a strong people. Proud. And though it sounds precious to say so, the early computing pioneers were not unlike other gritty pioneers in that they enjoyed being out well beyond the known, striding forward where there was no trail, no strength in numbers, no peer support, no mentors, no accepted pattern.

Now? Well now that computers have become ubiquitous, now that everyone has one, uses one like they use a knife and fork, the pioneers have long since been replaced by Plumber Joe, by the middle of the bell curve, by everyman and everywoman and their kids.

The computer industry is market driven! I hate that it has taken me this long to recognize the importance and implications of this now overwhelmingly obvious and simple fact. The diffusion of computers into the daily routine of the entire population has resulted in a dramatic shift in the demand landscape that informs what computing becomes. The market for computing is its users. The user today, the average user, is a whole different animal than the user/creator that defined the industry's early history. Those of us who jumped in early probably resist the idea that what we care about really doesn't matter anymore. Though it might be true that knowledge, and a deeply theoretical basis for that knowledge, still matters, from a consumer market demand perspective, we grey hairs are simply invisible. The fact that we nerds ever defined a market for anything at all is the more surprising historical footnote. It is a bittersweet realization that the success of our industry would mean the marginalization of its founders.

In every grade school class I attended there were a few kids (one or two) who were driven by a passion to know, to understand, to create. The rest, well, the rest excelled at a completely different set of skills: getting through the day unnoticed, blending in. The two groups couldn't be more different. The inquisitive few were internally driven. The rest were driven by the outward demands of success as defined by the curve. The inventive minds competed against their own ability to invent. The rest competed amongst themselves over the coveted 60th percentile that would define passing the class.

The computing market is now dominated by that larger part of the human population that defines success as climbing (whichever way possible) on top of the 60 percent of the population (of other social climbers) that makes up the bulk and center of any bell curve. As kids, these were the people who spent most of their time comparing themselves to the kids next to them. Looking over their shoulder at the other kid's test answers. Studying together so that they knew the baseline before they got to the actual test. I say "climb to the top", but the word "top", when describing a bell curve, does a disservice to the real meaning of averages. What we call the top of a bell curve is really the center of a frequency distribution. Climbing to the top is really a struggle to get into the center. Like fish trying to avoid a shark, there is a natural human tendency away from being alone, away from the vulnerability that is the open water of original ideas and behavior. As a result, we constantly seek the protection of others. Each of us, as humans, spends a good deal of our energy trying to determine, and then contort our behavior to match, that which best describes the center of normative behavior and attitude.

The similarities between schooling fish and human socialization pressures are profound. But there is one important difference. Where fish seek the center to avoid the teeth and gut of another species, the predator we humans work so hard to avoid is us: public ridicule, being seen as different, standing out! We are in a very real sense both the sheep and the sheep dogs nipping at the sheep's legs. It is obvious that evolutionary pressures have conspired to build into our brains at least two modes of processing and that they are, at least at times, antagonistic. One is a great big "what if?" simulator, a novelty machine… the other, a social-manners restriction system that cautions, at the point of pain, against behavior the least bit novel or different.

I have traversed the usual nature/nurture, cultural/evolutionary minefields. What I come to is this: traits that exist universally across most cultures and are experienced many times within each individual's life are most probably behaviors that have a significant genetic/physical component... common regardless of our developmental environment and experience. Humans are obviously capable of profound novelty and abstract pattern analysis. But there is also a pervasive behavioral overlay of social control of which we are simultaneously willing participant and victim. What is confounding is the extent to which each system interferes with the function and success of the other… and that they are so diametrically opposed.

With regard to schooling (and herding) behavior, that which we share in common with fish (and sheep) is an indifference to where the school is, in which direction it is moving, and how fast. Under the social threat that triggers schooling, all that matters is that each of us as individuals finds our way as close as possible to the center. Humans will go along with almost any plan so long as social grouping allows us to avoid being seen as different. Obvious examples: Nazi Germany, slavery in the southern U.S., the Nanking sex trade, etc.

As computers have been made "user friendly" and as the cost of ownership has dropped, this center of the bell curve, this mad fight for self-similarity that defines who we are as a species, this reflection and homogenization of the greater us, has become the market for computing. Which makes sense. Diffusion and all. But the whole history of modern computing is so short, just 40 years now, that it is surprising and a bit of a shock to finally realize that from a market perspective, computing is a mature industry. How could an industry just a generation old have transitioned already from go-it-alone pioneer to "I'm Lovin' It" average?

The implications are huge. In particular, this insight brings social media into sharp ironic focus. Social media brings to computing the same access to community monitoring and control that gossip and storytelling brought to the camp fire. It is to computing what cheating off of your neighbor's test is to being a kid. As a person who likes to think of himself as a pioneer, I have reacted in horror, disbelief, and frustration to what has looked like fifteen years of computer industry regression.

If you accept that computers, like minds, are abstraction (language) machines, then it makes sense to wonder to what extent the human brain has suffered the same evolutionary pressure towards social assimilation and at least plausibly, away from innovation and novelty. To what extent is the rarity of the product of profound human creativity a reflection of actual physical and logical limits on and causal costs to creativity itself, and to what extent is the same rarity a product of evolutionary molded physical properties of the brain that conspire to restrict the production of novelty as a result of even greater survival pressure to promote behaviors that honor social cohesion?

If the current overwhelming trend that sees the computer as more of a social monitoring mechanism and less a creative tool is a trend that reflects market demand, then the same questions I am asking of the market pressures that shape the machinery of social media must be asked of the cultural pressures that have, through evolutionary time, shaped the mind. So long as computation is primarily consumed by human beings, both computer and mind will be shaped by the same evolutionary pressures. As technical barriers are overcome, the industry can and does react more fluidly and with higher fidelity to the demands of its consumers.

At which point, the question becomes: which heaps greater selective pressure on the evolution of computing, the need for tools that stand in for skills the human brain lacks, or the need for tools that amplify our most attention-demanding desires? Can the two co-exist and co-evolve productively? Again, the question is asked practically of computation and, at the same time, philosophically or anthropologically of the human brain and the cultural stew in which each is both product and survival pressure.

Where computing used to take its shape from the fecund imagination of computational luminaries, it has of late been led instead in pursuit of the lizard brain in all of us, the sub-conscious urges and fears that inform social mediation behavior. The result is all of this social media drivel, the likes of "Twitter", "13 Seconds", "myface and spacebook" [sic], and numerology-based pizza-ordering "iPhone Apps." What advantages do such time wasters render? Some argue that social media was the logical communication-oriented extension of email and personal web sites, that social media greases mechanisms deeply human and "natural". I remain dubious of these claims. I tend to group the brand of communication that social media seems to breed with more negative forms of group behavior like cults, mass hysteria, fundamentalism, and other behaviors unique to group-think.

And what of pop-culture notions like "collective intelligence", "global awakening", and "cultural consciousness", which seem to be born of transcendent utopian notions (not dissimilar to those that feed religion and spirituality)? The adherents of these optimisms appear to be blissfully unhindered by the need for causally logical argument or empirical evidence. If our computers have become social monitoring devices (at the expense of facilities that enable creativity), is there a danger that they will further distort our already distorted sense of truth? If a computer "wants" to agree with us more than it wants to accurately calculate a value, then we might already have crossed the threshold into a world where 2 plus 2 really does equal 5 (if the computer says so, it must be true!).

It would be irresponsible for me not to pause at this point and remind myself to question my own rhetoric on topics this close to me.

These questions and trends have profound implications for populist concepts we tend to romanticize but rarely examine in detail: democracy, pluralism, consensus, society, culture, community, equal rights, individuality, etc. As the computer industry becomes more and more sensitive to consumer demand, its product WILL become a device that does a better and better job at the automation and magnification of human idiosyncratic behavior, of superstition, mythos, hubris, rhetoric, ego, of any and all of the emotional side effects of evolutionary pressures. Forget about the cold indifference of causal truth that has motivated so many sci-fi stories.

The real villain to be feared in any inevitable future is the computer as hubris amplifier.

Computing's new "social media" face might disturb my pioneer sensibilities, but it reflects the satisfaction of common demand. As any market matures, it learns to overcome the physical and conceptual obstacles that so plagued it in its earlier years. Unburdened by things like processor speed and storage density, the computer industry was able to pursue directions more in line with human consumptive desire than with the technical or theoretical goals of computer "scientists". Marketeers trump scientists when the saturation of a product becomes universal.

It all makes sense. I am still depressed by the anti-innovation implications of the mass-market dumbing down of computing, but at least I understand why it happened, what it means. Knowledge, even depressing knowledge, should open doors, should allow more efficient planning and prediction. But what exactly are the implications when a creative tool is hijacked by a larger urge to avoid, at all costs, change and novelty? What happens when the same mass-market demand pressures that cause fads and trends focus their hysterical drive towards homogeneity onto the evolution of a tool originally intended and idealized for creative exploration? What exactly do you get when you neuter the rebellion right out from underneath Picasso's brush, when you force Darwin to teach Sunday school?

Just what does it mean when our creative medium becomes sensitive to social interaction? Pen and paper never knew anything about the person wielding them, certainly didn't know how the greater society was reacting to what was being written or drawn.

If the average human feels more comfortable doing exactly what everyone else is doing, seeking the center, and would much rather copy the answers off of their desk-mate's test than understand the course content, well then it only makes sense that we, the royal "we", would use this computing tool in the same way that we use the rest of the stuff in our lives: to help us find the social center and stay there.

It's not just the marketplace that has shifted towards the demographic center. The schooling mentality has crept into and now dominates computing as an industry. Personnel and management, which in the early days of computing were awkwardly staffed by engineers and scientists and groupie hobbyists, are now as diverse (homogenous?) a mix of humans as you could find in any industry. Even the scientists are cut from a different cloth. It takes a special and rare (crazy) human being to invent an industry from nothing. When avocations become well-funded departments at major universities, the graduates are not likely to be as intellectually adventurous (understatement). As any MBA knows, the success of an industry is most sensitive to its ability to understand and predict the demand of its market. Who better to know the center of the consumer bell curve, the average Joe and Jane, than that same Joe and Jane? Joe and Jane Average now dominate the rank-and-file workers that make up the computer industry. This means administration; it also means sales and marketing, both of which make sense. Less intuitive, but equally understandable: Joe and Jane Average have taken over the research and design and long-range planning arms of the computer industry. Even where it isn't the actual Joe and Jane, it is people who do a kick-ass job of channeling them.

Does market saturation mean the evolution of computing has reached its zenith? I don't think it does. But once an industry has reached this level of common adoption, the appearance of maturity and stability is hard to shake. Momentum and lock-in take hold. I have tried repeatedly to sell paradigm-busting and architectural re-imaginings of the entire computing paradigm to valley angels and big-ticket VC firms, only to realize that I wasn't selling to the current market. Try opening a VC pitch with "The problem with the computer industry is…" to a group of thirty-five-year-old billionaires who each drove to the meeting in a custom-ordered European super car. Needless to say, their own rarefied experience makes it hard for them to connect with anything that follows. This is a classic conundrum in the study of evolution. I presume it is a classic conundrum facing evolution itself. Why should a system that is successful in the current environment ever spend any energy on alternative schemes? How could it? It is hard to even come to the question "What could we be doing better?" when surrounded by the luxury of success.

At the same time, it is unlikely (impossible?) that any system, no matter how successful in the present, will long define success in the future. It might even be true that the more successful a scheme, the more likely that scheme will hasten an end to the environment that supported it (through faster and more complete exploitation of available resources). The irony of success!

But we are a smart species. We are capable of having this discussion, aren't we? So we might be prepared as well to discuss the inevitability of the end of the current computational scheme. No? To prepare, as a result, for the next most likely scheme (or schemes)? Especially those of us who study language, information, computation, complex systems, evolution. Especially an industry that has so advanced the tools and infrastructure of complexity handling. No? Surely we in the computer industry are ideally situated to have a rational vantage from which to see beyond the success of the current scheme? Yet, for the reasons I have already postulated (market homogeneity and success blindness), and others, we seem to be directionless in the larger sense, incapable of the long-range and big-picture planning that might help us climb out of our little Eden and into the larger future that is inevitable and unavoidable. Innovations springing forth from the current industry pale in comparison to those offered up 20, 30, even 40 years ago.

Speaking of which: I just found the resource list for Pattie Maes' "New Paradigms In Human Computer Interaction" class at MIT's Media Lab. These are video clips of speeches and demos of early and not-so-early computing pioneers showing off their prescient work. Mind blowing. The future these folks (from places like MIT, Brown, Stanford Research, the Rand Corporation, Xerox PARC, and other institutions) envisioned, well, it is sooooo much more forward looking than what has become of computing (or how the average computer is used). Everyone would do well to sit down and view or re-view these seminal projects in the context of their surprisingly early introduction.

I have written quite a few essays lambasting what I see as the computing industry's general losing of its collective way… at the very least, a slowing down of the deep innovation that drove computing's early doers. Even when potentially powerful concepts ("Semantic Computing", "Geo-Tagging", "Augmented Reality", "User-Centered Cloud Computing") are (re-)introduced, their modern implementations are often so flawed and tainted by an obsession to kowtow to market pressures (or just plain lie or fake it) that the result is an insult, a blasphemy of the original concept being hijacked.

Over ten years ago, I gave a talk at Apple titled: "Apple Got Everything It Has For Free, And Sat On Its Ass For Eight Years While The Rest Of The World Caught Up".

Which is true, at least with regard to the Lisa/Macintosh, which Xerox PARC handed them (check out the Star system), and the way they just kind of sat on the WYSIWYG, modeless, graphical interaction scheme while other computer companies (Borland and then, reluctantly, Microsoft) eventually aped the same. At the time of my presentation, Apple had wasted its eight-year lead extrapolating along the same scheme… a bigger, badder WIMP interface, when they could have been introducing paradigm-vaulting computational systems (that would put Seattle on another eight-year chase).

But from a marketing perspective I couldn't have been more wrong. I have got to keep reminding myself that I no longer represent the market for computers! I wish I did, but I don't. I am an outlier, a small dot on a very long tail. I am Pluto, or maybe even just some wayward ice-and-dust comet, to the big ordinary inner planets that trace out wonderfully near-circular orbits around the sun. In later presentations, I explained that Apple's critically acclaimed "Think Different" campaign, and the elitist mindset from which it was derived, was the reason they had never garnered more than 2 or 3 percent of the computer market. I explained that Bill Gates' "genius" lay not in his profound insight, but in his ability to understand the motivations that drive the average person… namely, to never be caught doing something that someone else could question. That means acting the same as everyone else. That means knowing how everyone else is acting. That means social media!

Nobody (other than wacky outliers like me) wants to be compared to iconoclasts like Einstein or Picasso or Gandhi or Gershwin. Very few people really want to "think different". Most people wouldn't be caught dead risking that type of public audacity. You have got to be pretty confident that you have an answer to the dreaded question "Why are you doing that?" to ever DO THAT (individually creative thing) in the first place.

Pioneers know exactly why they do what they do. They are driven by knowing more than others and by the excitement of being somewhere others haven't been… by being very much outside of the ball of fish that others seek as protection.

But if you want to sell a billion computers instead of just a few thousand, then you want to pay attention to the fish in all of us and not to the smiling and sockless Einsteins on bikes.

But the larger and longer implications of mass-market sensitivity are profound. While it is entirely true that paying attention to the center of the cultural bell curve will allow any industry to exploit more of the total available consumption potential, such behavior does not necessarily produce the paradigm-jumping disruption upon which long-term progress depends. If your Twinkies are selling really well, you might not notice that your customers are all reaching morbid levels of obesity and malnutrition, or that the world is crumbling around the climate-changing policies upon which your fast food empire is based.

The satisfaction of the human center-of-the-fish-school urge is not necessarily the best recipe for success, if success means more than short-term market exploitation. In the long run, potential and satisfaction are decidedly not the same thing; they are, as a matter of fact, very often mutually antagonistic. Rome comes to mind. As does whatever comes to mind when I mention the phrases "dot com" or "mortgage backed securities" or "energy deregulation".

The mass-market topology that has driven computing and communication towards a better and better fit with the demands of the largest and most homogenized consumer base the earth has ever witnessed could very likely work against the types of creative motivations that might be necessary to rescue us from the bland and the average, from the inward-facing spiral of self-sameness that avarice alone yields. I am increasingly worried that the computer's seemingly endless potential to be warped and molded, chameleon-like, to perfectly satisfy our most basic evolutionary proclivities, to amplify, unhindered, urges made strong against real scarcity in the natural environment, has already so distracted us within our egos and desires that we might not be able to pull our heads out before we get sucked completely down into our very own John Malkovich-ian portals of endless identity-amplification.

And why does it matter? Because, like it or not, progress in every single field of human endeavor is now predicated on advancements in computation. More profound than anything else science has discovered about nature is that information is information. Information is domain agnostic. Which means that advances in information processing machinery benefit all information-dependent fields of inquiry. Which is every field of inquiry! Which also implies that all fields of human inquiry are increasingly sensitive to the subtle ways that any one computational scheme affects information and the scope of information process-able within that scheme.

At any rate, it is alarming (if understandable) to me that the trend in this industry is towards a re-molding of the computer into ego amplifier, pleasure delivery system, truth avoidance device, distraction machine, Tribbles (as in "The Trouble With…"). The more insightful among us may want to place a side bet or two (if only as evolutionary insurance) on more expansive futures. Some of us are not so distracted by the shininess of these machines, or by how they are getting better and better at focusing our attention on our own navels, that we cannot see futures for computing more expansive than the perfect amplification of the very human traits least likely to result in innovation or progress. There is still time (I hope).

Randall Reetz