Lanier sets the stage:
Consider too the act of scanning a book into digital form. The historian George Dyson has written that a Google engineer once said to him: “We are not scanning all those books to be read by people. We are scanning them to be read by an A.I.” While we have yet to see how Google’s book scanning will play out, a machine-centric vision of the project might encourage software that treats books as grist for the mill, decontextualized snippets in one big database, rather than separate expressions from individual writers. In this approach, the contents of books would be atomized into bits of information to be aggregated, and the authors themselves, the feeling of their voices, their differing perspectives, would be lost.
After bemoaning the loss of human trust in human decisions (Lanier says we risk this every time we trust the advice of recommendation engines like Pandora's and Amazon's), he discusses the tendency amongst AI and robotics enthusiasts to replace traditional religious notions of transcendence and immortality with the supposed rapture that is the coming Singularity – who needs God when you've got a metal friend smart enough to rebuild you every time you wear out?
Cautioning fellow scientists, Lanier pens:
We serve people best when we keep our religious ideas out of our work.
The separation of church and work! Good luck. Most of us don't have an internal supreme court to vigilantly enforce such high moral standards. The whole concept of a "religious scientist" seems to me a non-starter – like a "vegetarian carnivore".
Yet, as a hard atheist, I applaud Jaron's thesis. To me, science is, at base, the act of learning to get better at recognizing the difference between myopic, want-driven self-interest and the foundational truths that give rise to the largest, most inclusive (universal) vantage – and then doing everything in one's power to avoid confusing the two. As we build towards this post-biological evolutionary domain, crystal-clear awareness of this difference has never been more important.
Those of us pursuing "hard" AI, AI that reasons autonomously as we do(?), eventually discuss the capacity of a system to flexibly overlay patterns gleaned from one domain onto other domains. Yet, at least within the rhetorically noisy domain of existential musings, we humans seem almost incapable of clearing this bar. Transhumanists and cryonicists can identify religious thinking when it involves guys in robes swinging incense, yet are incapable of assigning the "religious" tag when the subject matter involves nano-bot healing tanks or n-life digital-upload-of-the-soul heaven simulations.
Why does it matter? Traditional human ideas about transcendence are exclusively philosophical. The people inhabiting traditional religious heavens (and hells) don't eat our food, drink our water, breathe our air, consume our electricity, or compete for our land or for placement in our schools. Yet the new-age, digital, post-singularity, friendly-AI omnipotence scheme isn't abstract or ethereal… the same inner fear of death in these schemes leads to a world in which humans (a small, exclusive, rich, and arrogant subset of humankind) never actually die, don't end up on another plane, stay right here thank you very much, and continue to eat and drink and build houses and consume scarce resources alongside anyone unfortunate enough to be enjoying(?) their first life right now.
I saw the best minds of my generation destroyed by…
– Howl, Allen Ginsberg, 1955
Every generation must at some point gather the courage to stand up and give an accounting of its own inventive forms of arrogant blindness and the wastefulness that litters its meandering. When it is our turn, we will have to laugh and cry at the silly and dangerous undertaking that is the reification of the "life ever after" fantasy. And while we are confessing hubris, we might as well admit our myopic obsession with "search". Google has been our very own, very shiny golden cow (is it simply because there aren't any other cows left standing?).
When self-interest goes head to head with a broader vantage, vantage wins. Vantage wins by looking deep into the past and the future and seeing that change trumps all. I guess it comes down to the way an entity selects the scope of its own boundaries. If an entity thinks itself a bounded object living right now, it will resist change in itself and its environment. I can hear the rebuttal: "Entities not driven by selfishness won't protect themselves and won't successfully compete." Entities who see themselves as an actual, literal extension of a scheme stretching from the beginning of time laugh at the mention of living forever… because they already do!
The scheme never dies.
Germane to this discussion is how a non-bounded definition of self impacts the decisions one makes regarding the allocation of effort and interest. What would Thermodynamics do?
…Yet all experience is an arch wherethro'
Gleams that untravell'd world whose margin fades
For ever and forever when I move.
How dull it is to pause, to make an end,
To rust unburnish'd, not to shine in use!
As tho' to breathe were life!…
– Ulysses, Alfred, Lord Tennyson
Is there something about the development of AI that is qualitatively different from any challenge humans have previously undertaken? Most human labors are not radically impacted by philosophy. A shoe designer might wrestle with the balance between aesthetics and comfort, between comfort and durability, between durability and cost, but questions of to whom or what they choose to pray, or how they deal with death, don't radically impact the shoes they design.
There seems little difference between the products of Hindu and Christian grocers, or between the products of Muslim and atheist dentists, road builders, novelists, painters, gynecologists, and city planners. Even when you compare the daily labor of practitioners who directly support a particular philosophy – the monks, the pastors, the priests, the imams, the Holy Thems – you find little difference.
So why should AI be different? Why should it matter who does AI and what worldviews they hold? I think it is because the design of AI isn't an act in reference to God, it isn't even "playing" God – it is, quite literally, actually being God.
What training do we humans – we mammals, we vertebrates, we animals, we eukaryotes, we biological entities – have for this? What does our past offer us as preparation for acting the part of God?
It is true that each of us is the singular receptacle of an unbroken chain of evolutionary learning. The lessons of fourteen thousand million years of trial and error are encoded into the very fabric of our being. We are walking, talking reference tables of what works in evolution. Yet very little of that information deals with any kind of understanding or explanation of the process. Nowhere in this great tome of reference in the nucleus of each of our cells does there exist any information that would give context. There is no "this is why evolution works" or "this is why this chunk of genetic code works in the context of the full range of potential solutions" coded into our DNA or our molecular or atomic or quantum structure.
And that makes sense. Reasons and context are high-order abstraction structures, and biology has been built up from the most simple toward the most simple of the complex. It is only within the thinnest sliver of the history of evolution that there has been any structural scheme complex enough to wield (store and process) structures as complex as abstraction or language.
We are of evolution, yet none of our structure encodes any knowledge of evolution as a process. What we do know about the process and direction of change we have had to build through culture, language, and inquiry. Which is fine – if, that is, you have hundreds (or thousands) of millions of years and a whole planet smack in the energy path of a friendly star. This time around we are interested in an accelerated process. No time for blindly exploring every dead end. This time around we explore by way of a map. The map we wield is an abstracted model of the essential influences that shape reality in this universe. The "map" filters away all of the universe that is simply an instance of pattern, economically holding only the patterns themselves. The map is the pair of polarized glasses that allows us to ignore anecdote and repetition, revealing only essence, only salience.
What biology offers instead of a map is a sophisticated structural scheme for playing a very wasteful form of planet-wide blind billiards: a trillion trillion monkeys typing on a trillion trillion DNA typewriters, a sort of evolutionary Brownian motion where direction comes at the cost of almost overwhelming indirection.
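To make the contrast concrete, here is a toy sketch of my own (not from Lanier or any particular AI system) comparing blind mutation-and-selection search against search guided by a "map" – here, a model of the landscape's slope. The fitness function, step sizes, and convergence threshold are all arbitrary assumptions chosen purely for illustration.

```python
# Toy illustration: blind evolutionary search vs. map-guided search.
# Both climb the same hill; the guided searcher wastes almost no
# evaluations on dead ends, while the blind one pays for its
# direction with "almost overwhelming indirection".
import random

def fitness(x: float) -> float:
    # A simple hill whose peak sits at x = 3.0.
    return -(x - 3.0) ** 2

def blind_search(max_steps: int = 10_000) -> tuple[float, int]:
    """Evolution-style search: undirected mutations, keep what works."""
    x, evals = 0.0, 0
    for _ in range(max_steps):
        candidate = x + random.uniform(-1.0, 1.0)  # blind mutation
        evals += 1
        if fitness(candidate) > fitness(x):        # selection
            x = candidate
        if abs(x - 3.0) < 0.01:                    # close enough to the peak
            break
    return x, evals

def mapped_search() -> tuple[float, int]:
    """Map-guided search: a model of the slope makes every step directed."""
    x, evals = 0.0, 0
    while abs(x - 3.0) >= 0.01:
        slope = -2.0 * (x - 3.0)   # the "map": an abstracted model of the landscape
        x += 0.1 * slope           # step uphill, no wasted evaluations
        evals += 1
    return x, evals

if __name__ == "__main__":
    random.seed(42)
    print("blind:  x=%.3f after %d evaluations" % blind_search())
    print("mapped: x=%.3f after %d evaluations" % mapped_search())
```

Run it and the mapped searcher converges in a few dozen directed steps, while the blind searcher typically burns hundreds of evaluations rejecting mutations that led nowhere.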
And again we ask, "Why does it matter?" Imagine a large ocean liner – say the Queen Elizabeth 2. Fill its tanks with fuel, point it in the right direction, and it will steam across any ocean. It really doesn't matter what kind of humans you bring aboard, or what they do once they are there. A big ship, once built, will handle an amazing array of onboard activity or wild shifts in weather. Once built, a ship's structure is so stable and robust that its behavior is every bit as predictable as its structure. But if you brought dancing girls, water slides, and drunk retirees into the offices of the naval architects while they were designing the ship, it probably wouldn't make it out of the dry dock. The success of any project is disproportionately sensitive to the initial stages of its development. Getting it right, up front, is more than a good idea; it is the only way any project ever gets built. Acquiring the knowledge to be a passenger on a ship is far easier than acquiring the knowledge to design or build one.
We general AI researchers work at the very earliest stage of a brand new endeavor. This ship has never been built before. Ships have never been built. In a very real sense, "building" has never been built before. We have got to get this right. Where naval architects must first acquire knowledge of hydrodynamics, structural engineering, material science, propulsion, navigation, control systems, ocean depths, weather systems, currents, geography, etc., AI researchers must bring to the project an understanding of pattern, language, information, logic, processing, mathematics, transforms, latency, redundancy, communication, memory, causality, abstraction, limits, topology, grammar, semantics, syntactics, compression, etc.
But this is where my little ship-design analogy falls short. AI requires a category of knowledge not required of any other engineering endeavor. Intelligence is a dynamic and additive process: what gets built tomorrow is totally dependent on what gets built today. Building AI therefore requires an understanding of the dynamics of change itself.
Do we understand change?
[to be continued]
Randall Reetz