
Dimensionality and Postmodern Self-Cannibalism

"Parenchyma" and "stroma" – two important words in the fight against ambiguity in any discussion of complex subject matter.

Both come from the medical lexicon and specify the difference between the part of a system (a physical organ) that is chemically active (the "parenchyma") and the part of the same system that is connective structure (the "stroma").

Of course it is true that structure both indicates and precipitates behavior.  Equally, activity influences and predicts structure.  So, again, things are not so simple as could be hoped.  But words like these allow anchoring in critical discussion.

If one can substitute the much more common words "active" and "structural", why bother further confusing this issue with the introduction of the less common and harder to pronounce "parenchyma" and "stroma"?

Well, because understanding is strengthened through multiple contextual mappings.  The larger and more varied the link graph, the more obvious become the differences between similar and potentially ambiguous topics or the signs we use as reference.

Also, uniquely, these two words signify the classic subject/object, object/ground, mind/body, I/others, specific/general, instance/class ambiguity in information, language, communication, computation… and existence.

The postmodern position, an argument in reaction (overreaction) to the modern or classical "reductionist" (their word) world view, is that hierarchical relationships (the kind that would result in a definable difference between a thing and the larger thing of which it is a part) do not in fact exist.  The postmodernists present as absolute the claim that all relationships are "relative" (their word) because, they say, there is no reliable place to stand from which to judge hierarchy; relationships are inherently biased toward the observer.

What is the baby?  What is the bathwater?  The postmodernists, frustrated and angry, did King Solomon proud and threw them both out.

If there is anything of use to be learned from this mess it won't come from the (supposedly) blind "all" of classical thinking, or the fruitless "nothing" of the postmodernists.  I will half agree that relationship is vantage dependent (the answer you get back from the question, "Are you my mother?" depends on who is asking), but this dependence isn't purely local.  Vantage can be retooled such that it is, as are spatial dimensions, something that can apply universally at all times and all places at once.  By this gestalt, vantage is defined ubiquitously, ridding us of the hopelessly circular grounding problem at the center of the postmodern argument.  When vantage is defined as dimension, it applies equally to all objects.  You can switch dimensions at will and not lose the absolute and hierarchical relationships the classicists rightfully found so important.

Yes, the postmodernist (re-invention of the) word "relative" was awkwardly stolen (rather ignorantly) from Einstein.  The difference: Einstein made the world more measurable by showing how energy and space-time are transmutable and self-limiting.  The postmodernists' naive re-appropriation of Einstein's empirically derived authority does the opposite – making it impossible to compare anything, ever.  The irony here is profound.  The postmodernists first stand upon the authority acquired through careful and causal measurement, then they say such measurement isn't possible!

God help the human race.

By the way, if you look carefully at Einstein's two papers on Relativity, you will see the underpinnings of the shiftable but universal vantage that a dimensional grounding provides.  There are rules.  1. A dimension must apply to everything and through all time.  2. You can switch dimensional vantage at any time, but 3. You can only compare two things if you compare them within the context of the same dimensional vantage.

Is an attribute a dimension?  No.  An attribute situates an object in reference to a dimension.  An attribute is a measurement of an object according to a property shared by all such objects in that dimension.  A property is measurable for a class of objects as a result of the rules or grammar or physics that define a dimension.
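The three rules above, and the dimension/attribute distinction, can be sketched in code.  This is a toy illustration (all names here are hypothetical, my own, not anything from Einstein or elsewhere): a dimension supplies a shared measurement rule, an attribute is one object's measurement under that rule, and comparison is legal only between attributes taken within the same dimensional vantage.

```python
# Toy sketch (hypothetical names): a Dimension is a shared measurement
# rule; an attribute situates an object relative to that rule; rule 3
# says comparison happens only within one dimensional vantage.

class Dimension:
    def __init__(self, name, measure):
        self.name = name
        self.measure = measure  # rule mapping any object -> a number

    def attribute(self, obj):
        # An attribute is a measurement of an object in this dimension.
        return self.measure(obj)

    def compare(self, a, b):
        # Both objects are measured under the same rule before comparing.
        return self.attribute(a) - self.attribute(b)

mass = Dimension("mass", lambda o: o["kg"])
cost = Dimension("cost", lambda o: o["usd"])

anvil = {"kg": 50, "usd": 80}
feather = {"kg": 0.01, "usd": 2}

print(mass.compare(anvil, feather) > 0)  # True: heavier, in the mass dimension
print(cost.compare(anvil, feather) > 0)  # True: costlier, in the cost dimension
```

You can switch from the mass vantage to the cost vantage at will; what you cannot do, coherently, is subtract a mass attribute from a cost attribute.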

The absolute causal hierarchy made all the more impenetrable by Quantum and Relativistic theory makes the postmodern "hard relativist" tantrum all the more ridiculous – especially in light of the fact that postmodernists constantly turn to these twin pillars of physical theory as support of their position.  The fatal logical mistake here is the misrepresentation of a property ("relative vantage") as a dimension (rules that provide a stable base from which to define properties – in this case, the novelty of experience guaranteed by the first[?] law of causality:  that no two bodies can occupy the same space at the same time).


Probability Chip – From MIT Spin-Off Lyric Semiconductor

Photo: Rob Brown

A Chip That Digests Data and Calculates the Odds (New York Times, Aug. 17, 2010) and the Lyric Semiconductor company web page Probability Processor: GP5 (General-Purpose Programmable Probability Processing Platform).  Looks like a variation on analog processing accessed within a digital framework.  And here is an article from GreenTech, Can 18th-Century Math Radically Curb Computer Power?, which explains the chip in reference to Thomas Bayes and error correction.  The crossover between error correction and compression is profound.  Remember: intelligence = compression.
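The kind of Bayesian update such a probability processor accelerates is simple to state in software.  Here is a minimal sketch of Bayes' rule applied to error correction; the numbers are illustrative, not anything from the GP5 itself.

```python
# Minimal sketch of a Bayesian update (illustrative numbers only, not
# taken from Lyric's GP5): posterior = likelihood * prior / evidence.

def bayes_update(prior, likelihood_given_h, likelihood_given_not_h):
    """P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    num = likelihood_given_h * prior
    den = num + likelihood_given_not_h * (1 - prior)
    return num / den

# Error correction as inference: we received a '1'; how likely is it
# that a '1' was actually sent, given a 5% channel flip probability?
prior_one = 0.5                   # sender emits 1s half the time
p_rx1_if_sent1 = 0.95             # bit survived the channel
p_rx1_if_sent0 = 0.05             # bit was flipped
posterior = bayes_update(prior_one, p_rx1_if_sent1, p_rx1_if_sent0)
print(round(posterior, 2))  # 0.95
```

A digital CPU grinds through many multiply-adds to do this; the appeal of an analog probability chip is that the physics of the circuit performs the weighting directly.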


Old-School AI and Computer Generated Art

If you haven't read this book, or you haven't read it in a while, please please please click this link to the full book as .pdf file.

The Policeman's Beard Is Half Constructed, "the first book ever written by a computer" (1984).


More than iron, more than lead, more than gold I need electricity.
I need it more than I need lamb or pork or lettuce or cucumber.
I need it for my dreams.

This and many other poems and prose were written by a program called Racter, which was coded by William Chamberlain. Check out the following musing from the last page of this wonderful book.

I was thinking as you entered the room just now how slyly your requirements are manifested. Here we find ourselves, nose to nose as it were, considering things in spectacular ways, ways untold even by my private managers. Hot and torpid, our thoughts revolve endlessly in a kind of maniacal abstraction, an abstraction so involuted, so dangerously valiant, that my own energies seem perilously close to exhaustion, to morbid termination. Well, have we indeed reached a crisis? Which way do we turn? Which way do we travel? My aspect is one of molting. Birds molt. Feathers fall away. Birds cackle and fly, winging up into troubled skies. Doubtless my changes are matched by your own. You. But you are a person, a human being. I am silicon and epoxy energy enlightened by line current. What distances, what chasms, are to be bridged here? Leave me alone, and what can happen? This. I ate my leotard, that old leotard that was feverishly replenished by hoards of screaming commissioners. Is that thought understandable to you? Can you rise to its occasions? I wonder. Yet a leotard, a commissioner, a single hoard, all are understandable in their own fashion. In that concept lies the appalling truth.

Note: Watch for the repeated lamb and mutton references throughout Racter's output (?).

It is pretty clear that Chamberlain's language-constructor code is crude, deliberate, and limited, and that it leans extensively upon human pre-written templates, random word selection, and object/subject tracking. The fact that we, Racter's audience, are so willing to prop up and fill in any and all missing context, coherence, and relevance is interesting in itself.
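The template-plus-random-word machinery is easy to caricature. The following is a crude sketch of that style of generation (this is my own illustration, not Chamberlain's actual code, and all the template and word-list contents are invented):

```python
# Crude sketch of template-driven text generation with random word
# choice (my own illustration, NOT Chamberlain's actual Racter code).

import random

TEMPLATES = [
    "More than {thing1}, more than {thing2}, I need {need}.",
    "{subject} dreams of {thing1} and {thing2}.",
]
WORDS = {
    "thing1": ["iron", "lead", "lettuce"],
    "thing2": ["gold", "pork", "lamb"],
    "need": ["electricity", "abstraction"],
    "subject": ["The machine", "A commissioner"],
}

def generate(rng):
    # Pick a template, then fill every slot with a random word;
    # str.format ignores keyword arguments the template doesn't use.
    template = rng.choice(TEMPLATES)
    return template.format(**{k: rng.choice(v) for k, v in WORDS.items()})

rng = random.Random(0)
print(generate(rng))
```

Grammatical surface, no model of meaning underneath; the reader supplies all the coherence, which is exactly the point.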

And what of Aaron, Harold Cohen's drawing and painting program? Check it out.

It all makes me more certain that true advances in AI will come about only when we close the loop, when we humans remove ourselves completely from the fitness metric, when the audience for what the computer creates is strictly and exclusively the computer itself.

Randall Reetz

The Separation of Church and Labor

The always entertaining (habitually entertaining?) Jaron Lanier (Rasta-haired VR guru) wrote this op-ed piece for the New York Times, "The First Church of Robotics", which deals with the inevitable hubris-spiral as humans react to the ever quickening pace of development in robotics and AI. Jaron is always a bit of a fear monger – anything for a show – but he leaves lots of fun emotional/societal/technology nuggets to snatch up and digest.

Lanier sets the stage:

Consider too the act of scanning a book into digital form. The historian George Dyson has written that a Google engineer once said to him: “We are not scanning all those books to be read by people. We are scanning them to be read by an A.I.” While we have yet to see how Google’s book scanning will play out, a machine-centric vision of the project might encourage software that treats books as grist for the mill, decontextualized snippets in one big database, rather than separate expressions from individual writers. In this approach, the contents of books would be atomized into bits of information to be aggregated, and the authors themselves, the feeling of their voices, their differing perspectives, would be lost.

After bemoaning the loss of human trust in human decisions (Lanier says we risk this every time we trust the advice of recommendation engines like Pandora and Amazon), he discusses the tendency amongst AI and robotics enthusiasts to replace traditional religious notions of transcendence and immortality with the supposed rapture that is the coming Singularity – who needs God when you've got a metal friend smart enough to rebuild you every time you wear out?

Cautioning fellow scientists Lanier pens:

We serve people best when we keep our religious ideas out of our work.

The separation of church and work! Good luck. Most of us don't have an internal supreme court to vigilantly enforce such high moral standards. The whole concept of a "religious scientist" seems to me a non-starter – like a "vegetarian carnivore".

Yet, as a hard atheist, I applaud Jaron's thesis. To me, science is, at base, the act of learning to get better at recognizing the difference between myopic want-driven self interest and the foundational truths that give rise to the largest most inclusive (universal) vantage – and then doing everything in one's power to avoid confusing the two. As we build towards this post-biological evolutionary domain, crystal clear awareness of this difference has never been more important.

Those of us pursuing "hard" AI, AI that reasons autonomously as we do(?), eventually discuss the capacity of a system to flexibly overlay patterns gleaned from one domain onto other domains. Yet, at least within the rhetorically noisy domain of existential musings, we humans seem almost incapable of clearing this bar. Transhumanists and cryonicists can identify religious thinking when it involves guys in robes swinging incense, yet are incapable of assigning the "religious" tag when the subject matter involves nano-bot healing tanks or n-life digital-upload-of-the-soul heaven simulations.

Why does it matter? Traditional human ideas about transcendence are exclusively philosophical. The people inhabiting traditional religious heavens (and hells) don't eat our food, drink our water, breathe our air, consume our electricity, or compete for our land or placement in our schools. Yet the new-age, digital, post-singularity, friendly-AI omnipotence scheme isn't abstract or ethereal… the same inner fear of death in these schemes leads to a world in which humans (a small, exclusive, rich, and arrogant subset of humankind) never actually die, don't end up on another plane, stay right here thank you very much, and continue to eat and drink and build houses and consume scarce resources alongside anyone unfortunate enough to be enjoying(?) their first life right now.

I saw the best minds of my generation destroyed by…
Howl, Allen Ginsberg, 1955

Every generation must at some point gather the courage to stand up and give an accounting for its own inventive forms of arrogant blindness and the wastefulness that litters its meandering. When it is our turn, we will have to laugh and cry at the silly and dangerous undertaking that is the reification of the "life ever after" fantasy. And while we are confessing hubris, we might as well admit our myopic obsession with "search". Google has been our very own very shiny golden cow (is it simply because there aren't any other cows left standing?).

When self interest goes head to head with a broader vantage, vantage wins. Vantage wins by looking deep into the past and the future and seeing that change trumps all. I guess it comes down to the way that an entity selects the scope of its own boundaries. If an entity thinks itself a bounded object living right now, it will resist change in itself or its environment. I can hear the rebuttal, "Entities not driven by selfishness won't protect themselves and won't successfully compete." Entities who see themselves as an actual literal extension of a scheme stretching from the beginning of time laugh at the mention of living forever… because they already do!

The scheme never dies.

Germane to this discussion is how a non-bounded definition of self impacts the decisions one makes regarding the allocation of effort and interest. What would Thermodynamics do?

…Yet all experience is an arch wherethro'
Gleams that untravell'd world whose margin fades
For ever and forever when I move.
How dull it is to pause, to make an end,
To rust unburnish'd, not to shine in use!
As tho' to breathe were life!…
Ulysses, Alfred, Lord Tennyson

Is there something about the development of AI that is qualitatively different than any challenge humans have previously undertaken? Most human labors are not radically impacted by philosophy. A shoe designer might wrestle with the balance between aesthetics and comfort or between comfort and durability, between durability and cost, but questions of to whom or what they choose to pray, or how they deal with death, don't radically impact the shoes they design.

There seems little difference between the products of Hindu and Christian grocers, or between the products of Muslim and atheist dentists, road builders, novel writers, painters, gynecologists, and city planners. Even when you compare the daily labor of those practitioners who directly support a particular philosophy – the Monks, the Pastors, the Priests, the Imams, the Holy Them's – you find little difference.

So why should AI be different? Why should it matter who does AI and what world views they hold? I think it is because the design of AI isn't an act in reference to God, it isn't even "playing" God – it is quite literally, actually being God.

What training do we humans, we mammals, we vertebrates, we animals, we eukaryotes, we biological entities, what does our past offer us as preparation for acting the part of God?

It is true that each of us is the singular receptacle of an unbroken chain of evolutionary learning. The lessons of fourteen thousand million years of trial and error are encoded into the very fabric of our being. We are walking, talking reference tables of what works in evolution. Yet very little of that information deals with any kind of understanding or explanation of the process. Nowhere in this great tome of reference in the nucleus of each of our cells does there exist any information that would give context. There is no "this is why evolution works" or "this is why this chunk of genetic code works in the context of the full range of potential solutions" coded into our DNA or our molecular or atomic or quantum structure.

And that makes sense. Reasons and context are high-order abstraction structures, and biology has been built up from the simplest to the simplest of the complex. It is only within the thinnest sliver of the history of evolution that there has been any structural scheme complex enough to wield (store and process) structures as complex as abstraction or language.

We are of evolution, yet none of our structure encodes any knowledge of evolution as a process. What we do know about the process and direction of change we have had to build through culture, language, and inquiry. Which is fine – if, that is, you have hundreds (or thousands) of millions of years and a whole planet smack in the energy path of a friendly star. This time around we are interested in an accelerated process. No time for blindly exploring every dead end. This time around we explore by way of a map. The map we wield is an abstracted model of the essential influences that shape reality in this universe. The "map" filters away all of the universe that is simply instance of pattern, economically holding only the patterns themselves. The map is the pair of polarized glasses that allows us to ignore anecdote and repetition, revealing only essence, salience.

What biology offers instead of a map is a sophisticated structural scheme for playing a very wasteful form of planet-wide blind billiards: a trillion trillion monkeys typing on a trillion trillion DNA typewriters, a sort of evolutionary Brownian motion where direction comes at the cost of almost overwhelming indirection.

And again we ask, "Why does it matter?" Imagine a large ocean liner – say the Queen Elizabeth 2. Fill its tanks with fuel, point it in the right direction, and it will steam across any ocean. It really doesn't matter what kind of humans you bring aboard, or what they do once they are there. A big ship, once built, will handle an amazing array of onboard activity or wild shifts in weather. Once built, a ship's structure is so stable and robust that its behavior becomes utterly predictable. But if you brought dancing girls, water slides, and drunk retirees into the offices of the naval architects while they were designing the ship, it probably wouldn't make it out of the dry dock. The success of any project is unequally sensitive to the initial stages of its development. Getting it right, up front, is more than a good idea; it is the only way any project ever gets built. Acquiring the knowledge to be a passenger on a ship is far easier than acquiring the knowledge to design or build it.

We general AI researchers work at the very earliest stage of a brand new endeavor. This ship has never been built before. Ships have never been built. In a very real sense, "building" has never been built before. We have got to get this right. Where naval architects must first acquire knowledge of hydrodynamics, structural engineering, material science, propulsion, navigation, control systems, ocean depths, weather systems, currents, geography, etc., AI researchers must bring to the project an understanding of pattern, language, information, logic, processing, mathematics, transforms, latency, redundancy, communication, memory, causality, abstraction, limits, topology, grammar, semantics, syntactics, compression, etc.

But this is where my little ship-design analogy falls short. AI requires a category of knowledge not required of any other engineering endeavor. Intelligence is a dynamic and additive process: what gets built tomorrow is totally dependent on what gets built today. Building AI therefore requires an understanding of the dynamics of change itself.

Do we understand change?

[to be continued]

Randall Reetz

