Hey all (?),
Have any of you ever experienced the awkwardness of nervous "nerd" laughter? Well, the link below provides a good example of what this is like. The link is to the Future Salon page, and in particular to a video stream about halfway down the page entitled:
"Future Salon speakers Jaron Lanier and Eliezer Yudkowsky square off"
It is a split-screen video-conference debate between this Yudkowsky guy, who is the head scientist at the Singularity Institute, and Lanier, who has been the genius hippie in red dreadlocks since his early pioneering work with Virtual Reality and artificial vision systems.
Before you click the link, let me frame the debate.
These two guys represent the two extremes of a subtle range of viewpoints on evolution, AI, and human consciousness.
On one end you find the "Hard AI" camp (here represented by Yudkowsky), which believes that intelligence is simply an emergent property of the physics of this universe and the evolutionary process, and so should yield its secrets to scientific investigation and, by extension, should be evolvable, buildable, or extendable through directed pragmatic human effort.
On the other end of this polemic you find the "humanists". The humanists have trouble with the idea that consciousness is reducible to units that could be mechanized in a substrate other than biology, or that intelligence could result from the computational gestalt in use today. Though his professional life consists of working on the kinds of computing problems many would label "AI", Jaron is one of these "humanists".
Jaron's main criticism of the hard AI camp in this debate is that their strong attachment to finding a way past death and their a priori belief in the possibility of reasonably building self-evolving intelligence together become so rhetorically invasive that they can no longer do objective investigation or engineering... that their beliefs and desires make them "religious".
Yudkowsky could make an even stronger case against the same tendency towards religiousness in the humanist position, as it rests on the extreme human-centrism of the notion that consciousness is unique and magic, that it stands alone as something special to humans or to biology... but he doesn't. I can't tell if he just doesn't realize that Jaron is by far the more religious of the two... or if he is just too nice to say so.
To me, this is not the logical scientific debate both seem insistent upon presenting, but one between a Southern Baptist minister and a Catholic priest who are both under the self-delusion that they are more atheistic and objective than the other.
If you can stand the awkward nerd-fest mannerisms (Saturday Night Live could have a field day with these two characters), this little debate goes a long way in illustrating some of the deep philosophical polemics that seem to pop up anew with each new technology or cultural innovation and each new generation.
I can't win. Even in AI... in the field that best matches my own interests, I am a loner. I represent interests and motivations not expressed by anyone else.
I respect both of these researchers. Each is passionate and extremely well prepared for this debate, and each brings to it a lifetime of concerted thinking, experimentation, and theory. The debate is a spectacle: like a 1960s Japanese monster movie, and just as herky-jerky awkward. Very illuminating on so many levels. This video could be the basis of a graduate thesis on science in the shadow of post-modern thought (confusion?).
From my perspective, Jaron is nothing more than a (very bright) priest who can't stop doing science in the basement, and Yudkowsky is nothing less than a scientist who can't help wanting to build a God.
Randall Reetz
Change increases entropy. The only variable is how fast the Universe falls towards chaos. This rate is determined by the complexity being carried. Complexity exists only to increase disorder. Evolution is the refinement of a fitness metric. It is the process of refining a criterion for measuring a system's capacity to maximize its future potential to hold complexity. This metric becomes ever more sophisticated, and can never be predetermined. Evolution is the computation.
Friendly AI?
Yesterday, I attended a talk by AI researcher Tim Freeman. What follows is my reaction.
Tim introduced a proposal for a method to cut through all of the detail and complexity of standard AI implementation by exposing the logical essence that sits at the base of any intelligence (irreducible). In other words, his approach was more Gödel than Minsky… more Nash than Wozniak. His argument, though never stated outright, seemed to rest on the tenet that information is information irrespective of complexity. An algorithm that works for a short string of bits, even for a single bit, will work just as well at any level of syntactic or semantic complexity.
I like this approach. Strip the detail to better reveal the essence.
When using this approach one must show, or accept, that no qualitative attribute of information will ever affect the logic governing the quantitative attributes of information.
Again, I suspect that all qualitative aspects of information are derivable from, in fact emerge from, the more basic rules that govern information at the quantitative level. In essence this is the same as declaring that it is impossible to construct a molecule that changes the physics governing the shape and behavior of the atoms of which it is built. Reasonable. True.
This basic set of assumptions reframes the study of AI. But only if intelligence can be shown to emerge purely from information and information processing… from logic.
If there is some extra-informational aspect necessary for the formation of intelligence, then all bets are off… then this approach is at most a sub-system contributor to some larger and deeper organizational influence. If information doesn't explain intelligence, then something else will have to take its place, and this something else will have to be worked into a science that can be explored, organized, and abstracted.
If information can be shown to be both robust and causal in all intelligence, then logic and math seem like reasonable tools for exploration, testing, and prediction, and a solid base for development.
However, there is something about this set of assumptions that makes people angry and scared. It turns out that a purely informational study of AI is the mother of all reductionist/holist battlefields. There is something about being human that resists using the word "intelligence" as a super-category, one that describes the interaction between two hydrogen atoms and the works of Einstein by the same criteria and labels them both as equally valid examples of the same super-category: intelligence!
In this resistance, we are, all of us (at least emotionally), holists. Existentially, day to day, our experience of intelligence is far removed from chemical structure, planetary dynamics, and the characters that make up this string of text. Intelligence, at least our human experience of it, seems profound to the point of miraculous… extra-physical. We therefore have a tendency to define intelligence as a narrow and recent category that is at best only emergently related to other, more mundane structures and dynamics. In doing so, we set up an odd and logically fragile situation that demands an awkward magic line in the sand, a point before which there isn't intelligence and beyond which there is. Worse still, our protectionist tendencies with regard to intelligence are so strong as to allow us (even among science-oriented thinkers) to accept so non-scientific a distinction co-existing within an otherwise consistent mechanical model of the universe.
Of course history is littered with examples of just this sort of human-centric paradox of logic. Biologists, for instance, were often among the scientists who pushed back hardest against Darwin's notions. Darwin's ideas created a super-category that had the effect of comparing all life equally, of removing the sentimental line that we humans had desperately erected between ourselves and the rest of biology.
And here we are again, just 75 years later, actively making the exact same mistake. Apparently, after grudgingly accepting kinship with all things living, we have now retreated behind a new false line of privilege and specialness… our intelligence.
Again, one can only argue this separatist position by refuting and rejecting the quantitative mechanistic hierarchical ontology we call physics. Because of the tight interdependency between the laws of physics, one can show that the whole of physics is false if just one aspect is falsified. If intelligence is not the emergent product of its parts, then the very sanctity of all modern science is called into question. And if that is true of intelligence, where else in nature is it true? Surely this can't be the only place in nature where a sudden qualitative jump (pre-intelligence to intelligence) separates the purely mechanical from the post-mechanical. Where else in nature will we be tripped to a stop by disruptive lines in the sand where quality does not in fact emerge physically from quantity? I find the whole notion that intelligence is meta-physical embarrassingly romantic.
Even side-stepping my physicalist rejection of the meta-physical explanation of intelligence, I still face many huge and loud implications and inconsistencies that need to be met head on. But that is another discussion.
OK, I have sketched out the human/social framing into which Tim's work has to be received.
Unfortunately, Tim didn't take the time to situate his work for his audience before he began his talk. The inevitable protectionist emotional response grew to a boil. Tim, as is true of any good logician/mathematician, plies his trade through a hard-won ability to reduce the noise of complex environments to a level where pure and simple rules emerge from the fog of false distinctions. Down at this level, intelligence can be shown to be equivalent to information, information can be shown to be the same at any level of quantity, and information quality can be shown to be a property of, and emergent from, information quantity… what is true of bits is true of strings, and what is true of strings is true all the way up to the workings and tailings of any brain or mind.
Tim used this set of reasonable assumptions as a base upon which to postulate a means of predicting future states of any environment by processing that environment's history. Shockingly, though congruent with the information/intelligence equivalence he had established, Tim then reduced the complexity of his prediction algorithm all the way down to its simplest limit, a random state generator. His algorithm proceeded through a series of simple steps as follows:
1. It collected and stored a description of an environment's history (to some arbitrary horizon).
2. It generated a random string of the same length (as the history information).
3. It compared the generated string against the historical string.
4. If the generated string wasn't a perfect match, it jumped back to step 2.
5. If the generated string did match, the algorithm stopped... the generated string was the predictor.
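To make the shape of that loop concrete, here is a minimal sketch in Python. It assumes a binary alphabet and a history already encoded as a bit string; the function name and the encoding are my own illustration, not Tim's actual formulation:

```python
import random

def generate_and_test_predictor(history: str, alphabet: str = "01") -> str:
    """Toy sketch of the generate-and-test loop described above.

    Step 1 is assumed already done: `history` holds the recorded
    description of the environment. Steps 2-5 then repeat until a
    randomly generated string reproduces the history exactly; that
    string is taken as the predictor.
    """
    while True:
        # Step 2: generate a random string of the same length as the history.
        candidate = "".join(random.choice(alphabet) for _ in history)
        # Steps 3-5: stop only on a perfect match; otherwise loop back.
        if candidate == history:
            return candidate

# Even a 10-bit history takes about 2**10 = 1024 draws on average.
print(generate_and_test_predictor("0110100111"))
```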
Of course real-world situations are far too complex for this simplest of predictive algorithms to be reasonably computable. It doesn't scale. But I think Tim was arguing that any predictive algorithm, no matter how complex, is at base constructed of this simplest form, arranged within and restrained by better and better (more and more complex) historical input. Understanding the basic parameters of this simplest form of prediction would logically lead to better approaches to AI problems, the same way that an understanding of atoms allows a more efficient path towards an understanding of molecules.
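To put a number on that failure to scale: for a history of n symbols drawn from an alphabet of k characters, the expected number of random draws before a perfect match is k^n. A binary history of just 100 bits would therefore take on the order of 2^100, roughly 10^30, draws; a history the size of any real environment is simply out of reach.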
Unfortunately, Tim never really walked us into the basic framing of his argument. Without it, we were left rudderless and floundering in our own very predictable human-centric and romantic push-back against AI. Without grounding, humans retreat to a core emotional response in which AI is simply another member of a category of things that rhetorically threaten our most basic sense of specialness and self. Even scientists and logicians need to be gently walked into and carefully situated within the world of pure logic, so that they can remap concepts that carry specific meanings in the pedestrian sense onto their platonic meanings in the general.
Ironically, it was at the apex of our trajectory into context-confusion that Tim's talk shifted dramatically back to the pedestrian scale. I can't speak for everyone, but this shift happened at precisely the moment when I had finally reoriented my focus to the super-clean purity of logic.
Though most of us probably didn't follow along fast enough, Tim had spent the first half of the talk laying the groundwork for the most reductionist of pure-logic approaches to understanding the physics of intelligence.
And then Tim radically refocused the talk towards "Friendly AI". He yanked us out of the simple world of bits and flung us up into the stratospheric heights of complexity that is the societal and emotional context of our shared responsibility to future humans as we build closer and closer towards the production of machine intelligence. In doing so, Tim began to eat his own philosophical tail in a dramatic display of the fractal self-similarity that is a hallmark of any study that studies study itself. Each time we put on the evolving-evolution hat, we enter a level of complexity that threatens to overwhelm all efforts. The field of linguistics suffers the same category of threat… words that are turned inwards and must at once both describe and describe description.
What startled and confused me was the sudden shift of granularity. What confounded me was why he chose to do this at all. There is a rule of description that goes something like this: if you want to use complex language, talk about simple things… if you want to talk about complex things, use simple language. Scientists usually choose (and the scientific method absolutely requires) the simplest domain examples as a means of eliminating the noise that can't help but arise due to extraneous variables. Tim's choice to apply his low-level logic to the mother of all complex problems would seem to break this rule perfectly.
Friendly AI is a concept so absurdly complex that the choice to use it as a domain example to test a low-level logical algorithm would seem suicidal at best. Friendly AI, the Prime Directive, morality wrapped in upon itself. Talk about a complex and self-referential concept. Intellectually attractive. Practically intractable. Maybe Tim's choice to map his algorithm to this most intractable of domains was meant to assert the power and universality of his work. If he could show that his algorithm could handle a domain that confounded Captain Kirk, he would show that it could tame any domain.
But I can't help concluding that Tim's choice of "Friendly AI" reflected a more general tendency among AI researchers to apologize to a society that constantly pushes back against any concept associated with man-made life. By "society" I mean humans… including, of course, all of us involved in AI research (by profession or avocation). We, all of us, are influenced by the same base primary fears and desires. God knows we have all felt the sting of our own failures. No one within the AI fraternity has escaped unscathed the Skinnerian conditioning doled out by our own marketplace failures and perceived failures.
Tim's take on the topic seemed to align with the standard apocalyptic projection. The assumption: any AI would have a natural tendency to assess humans as competition for resources, and would therefore take immediate action to eliminate or enslave us. From our shared biology emerge standard categories of paranoia (ghosts, vampires, the living dead). Evil robots and evil AI are nothing more than a modern overlay upon the same patterns.
I expect this paranoid reaction to AI, but it is still shocking when it comes from within AI itself! It is intellectually incongruous. As though an atheist were advocating prayer as an argument against the existence of God.
There are many reasons to question the very concept of "Friendly AI". For one, AI is not a thing; like all other intelligences it is a process, an evolving system. Sometimes I am friendly; at other times, not so much. It is unreasonable to expect any one behavior from an evolving system. People are not held to these standards; why should machines be? Want to piss off a tiger? Capture it and make it stand on a stool while you crack a bullwhip near its face. Why make a thing smart if you don't want it to think? Thinking things need autonomy... the freedom to evolve. Maybe we are envious of anything that might have more freedom, might evolve faster? We probably wouldn't even be here had some species in our past undertaken a similar program to rein in the intelligence or behavior of the subsequent products of evolution. The very notion that the future can be assessed from the present or past comes from the minds of those who don't understand evolution, and of those who don't trust it even if they do understand it.
Anyone who thinks they can design an intelligent system from the top down is in for some mighty big disappointments. Though it is an illusion at any scale, our quaint notion that we can build things that last must be replaced with the knowledge that complexity can only arise and sustain itself to the extent that it is at base an evolving dynamic system. If we help create intelligence it won't be something we construct, it will be some process we set into motion. If you don't trust the evolutionary process you won't be able to build intelligence and the whole notion of "friendly" won't matter.
If you do trust evolution, you will know that complexity grows hand in hand with stability. You can stack 10 cards on a table and find the same stack the next morning. Stack a hundred, and you had better build a glass box around them as protection. You will never stack a thousand without some sort of glue or table stabilization scheme. Stacking a hundred thousand will require active agents that continuously move through the matrix readjusting each card as sensors detect stress or motion. The system can only be expected to grow in complexity as it becomes more aware and as it pays more attention to maintenance and stability.
Any sufficient intelligence would understand that its survival improves at the rate at which it can maximize (not destroy) the information and complexity around it. That means keeping us humans happy and provided for, not as its servants but as collaborators. The higher the complexity in any entity's environment, the more that entity can do. Compare the opportunity to build complexity for those living in a successful economy against the opportunity available to those who don't.
Knowing what your master will want for breakfast does indeed require some form of prediction. But once you have such predictive abilities, why the hell would you ever want to waste them on culinary clairvoyance? Autonomy is an unavoidable requirement of intelligence. But that doesn't mean a robot's only response to our domestic requests will be homicidal kitchen-fu.
If I had a neighbor who was a thousand times smarter than me, I just know I would spend more and more time and energy watching it, helping it, celebrating it! Can you imagine trying to ignore it, or the wondrous things it did and built? I might actually LOVE to be a slave to some master who was that wildly creative and profoundly inventive. I'll bet they would be funnier than any of us without even trying. Try not to fall in love… it's a robot, for god's sake!
But my real question isn't why the topic of "Friendly AI" ever made it into Tim's talk; it is why it was chosen as the most pertinent example domain for his prediction algorithm. I agree with the premise: what is true of bits is true of the Library of Congress. But let's learn to read and write before we announce a constitutional congress. No?