
The Incomputable Heaviness of Knowledge

Is the universe conceivable?  Does scientific knowledge improve our ability to think about the universe?

What happens when our knowledge reaches a level of sophistication such that the human brain can no longer comfortably hold it, or compute on it?  For thousands of years, scholars have optimistically preached the benefits of knowledge.  Our world is rich and safe as a result.  People live longer, and people exercise greater personal control over the options they face.  All of this is an obvious result of our hard-won understanding of how the universe and its parts actually work.  We arm our engineers with this knowledge and send them out to solve the problems that lead to a more and more desire-mitigated environment.  Wish you weren't hungry?  Go to the fridge or McDonald's.  Wish you were somewhere else?  Get in your car and go there.  Wish you could be social, but your friends are in Prague?  Call them.  Wish you knew something?  Look it up on the internet.  Lonely?  Log in to a dating service and set up a rendezvous.  Wish your leg wasn't fractured?  Go to a doc-in-the-box and get it set and cast.

But what if you want to put it all together?  What if your interests run to integration and consolidation?  What if you want to understand your feelings about parking meters as an ontological stack of hierarchical knowledge built all the way up from the big bang?



Thanks to the tireless obsession of three centuries of scientists, most of the necessary information is there.  What's missing is some convenient way to bring it all together.  In the past, in the absence of knowledge, we inserted some sort of conveniently complete and simple placeholder to fill the space between the beginning of all time and, say, that part about parking meters.  Cosmology = genesis moment - god - parking meter.

The complete historical/causal cone that is physical forces and environmental conditions through time… well, we call it the Universe.  Scientifically, this word "Universe" now supports a cognitive mass not unbecoming its meaning.  But does this knowledge actually influence thinking?  When thinking about the parking meter, is even the most knowledgeable scientist holding the standard model at the cognitive ready?  I doubt it.  Instead, we simply posit a revised cosmology: genesis - standard model - parking meter, substituting "standard model" for the out-of-fashion "god".

Even so, I frequently find myself wanting to present ideas that depend upon the whole chain of scientific causality.  Should I have to re-compose the whole standard-model cosmology every time?  I frequently do.  This leaves very little room for the specific or topical arguments I am trying to convey.  And it is tiring.

Should I assume the reader's mind is standard-model primed?  Even people well schooled in the full contemporary empirical ontology cannot keep the whole stack in the cognitive forefront.  But when I decide to be careful, to once again lay it all out as scaffolding to properly support my topic, can I really be sure that I am providing a service?  What if there are only x things a person can hold in their immediate conscious or rational mind?  What if what I want to talk about requires the pre-loading of x+1 concepts?

Let's suppose my suspicions are correct.  Let's suppose knowledge-based explanations have already reached this limit.  Now what?  Do we simply specialize, draw a bite-sized boundary around a concept, and avoid the whole causal-completeness problem altogether?  Or do we construct cognitive prostheses to shoulder part of the load?  Is pen and paper one such crutch?  How about language itself?  The computer?

Should I start my essays with "Please pre-load standard model, then proceed …"?

This will mean nothing to non-scientists.  Hell, what it means to scientists will vary wildly.  Worse still, the stuff I like to think about requires tweaks to the average interpretation of the standard model.

Over time, our understanding of the universe becomes more and more accurate.  If this understanding comes in the form of a catalog, it grows linearly with the pace of discovery.  If we are clever, and if the universe we study isn't randomly ordered, then our knowledge might also optimize towards salience.  Descriptions often shrink as knowledge of a system becomes more sophisticated: as patterns are teased from the raw data, much of that data can be tossed as redundancy.  As an abstraction matures, it becomes hierarchical, a layered dependency stack, each layer dictated by its own set of rules for the aggregation of the aggregates it inherits from the layer just beneath.  Layered grammatical hierarchies evolve towards a minimal optimal size and complexity, but their complexity will always reflect the complexity of the domain they abstract.  Any abstraction will be more complex than the absence of an abstraction.  Ignorance might not be bliss, but it does require less processing.
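
That shrinkage is easy to demonstrate.  Here is a minimal sketch in Python (the text and the off-the-shelf zlib compressor are just stand-ins for real scientific description-building): patterned data compresses to a fraction of its size, while a randomly ordered "universe" refuses to shrink.

    import os
    import zlib

    # Patterned data: a "universe" with regularities to tease out.
    ordered = b"parking meter " * 1000

    # Randomly ordered data of the same size: no salience to find.
    noise = os.urandom(len(ordered))

    # The patterned input collapses to a few dozen bytes; the random
    # input comes back roughly as large as it went in.
    print(len(ordered), len(zlib.compress(ordered)))
    print(len(noise), len(zlib.compress(noise)))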

The question then becomes, is even the tightest and most elegant abstraction of the universe within the comfortable limits of the human brain?

So, what is it that might limit the human brain?  Any brain?  At some basic level, thinking must involve the assembly or isolation of the set of data upon which processing in that domain depends.  In contemporary computing, that involves sequencing data into a processing queue that can be sent chunk by chunk into the logic substrate of the processing chip.  In the biological brain, well, we are just now guessing at how that might happen, and it seems that it must be a radically different proposition altogether.  In the brain, data is stored within a web of connected neurons.  It does not appear that data is shuttled from one place to another so much as it propagates through networks according to some sort of simple filtering that happens at each branching.  In the brain, data seems to be stored as pathway, something akin to an address scheme like a postal code or IP address.

Either way, knowledge is a physical thing, taking up, in whatever form, some finite and measurable amount of storage space and requiring some finite quantity of processing energy.  By these simple metrics, computing (thinking) is as corporeal, physical, and bounded as any system.  We recoil from such mechanistic assessments of thinking, but there is simply no way around it: at some level, whether it is electro-chemical or quantum photonic, thinking has to be a physical process, and the stuff being thought has to exist physically.  More to the point of this essay, the size and complexity of the stuff of thoughts must in some non-arbitrary way map to the size and complexity of the systems they abstract.  A complex thought must take more space than a simple thought in the same way that a complex thing must take more space than a simple thing.

At a certain level of complexity, any brain or computer will no longer be able to compute on the representative data set.

Turing postulated, as a thought experiment, the simplest computational machine.  This "Turing machine" takes as input a string of binary data (ones and zeros) on a tape, reading and writing one symbol at a time and changing state according to a finite table of rules.
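
A few lines of Python are enough to sketch the idea.  The rule table below, which simply inverts a bit string, is my own illustration, not anything from Turing's paper:

    # A minimal one-tape Turing machine simulator.
    def run(tape, rules, state="start", pos=0):
        tape = list(tape)
        while state != "halt":
            # Read the symbol under the head; "_" stands for blank.
            symbol = tape[pos] if pos < len(tape) else "_"
            write, move, state = rules[(state, symbol)]
            if pos < len(tape):
                tape[pos] = write
            else:
                tape.append(write)
            pos += 1 if move == "R" else -1
        return "".join(tape)

    # Illustrative rule table: read each cell, write its complement,
    # move right; halt upon reaching a blank.
    invert = {
        ("start", "0"): ("1", "R", "start"),
        ("start", "1"): ("0", "R", "start"),
        ("start", "_"): ("_", "R", "halt"),
    }

    print(run("101100", invert))   # -> "010011_"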

If we likewise consider knowledge as a graph that associates experience along a range from the most general to the most specific, and intelligence as an efficient means of traversing this graph and accumulating salient reductions, then we can see that any mind is limited, and measurable, by the number of these graph connections and the effectiveness of its graph-crawling agents.  Assuming the perfect graph and the perfect graph crawlers, one can calculate the best possible intelligence for any set of linked nodes.
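
To make the picture concrete, here is a toy version in Python.  The knowledge graph and its node names are invented for illustration; the "crawling agent" is an ordinary breadth-first search that accumulates the shortest chain of associations from the most general node down to the most specific:

    from collections import deque

    # A toy knowledge graph, general at the top, specific at the bottom.
    graph = {
        "big bang": ["physics"],
        "physics": ["chemistry", "engineering"],
        "chemistry": ["biology"],
        "biology": ["humans"],
        "humans": ["cities"],
        "engineering": ["cities"],
        "cities": ["parking meters"],
        "parking meters": [],
    }

    def crawl(graph, start, goal):
        # Breadth-first search: returns the shortest chain of associations.
        queue = deque([[start]])
        seen = {start}
        while queue:
            path = queue.popleft()
            if path[-1] == goal:
                return path
            for nxt in graph[path[-1]]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(path + [nxt])

    print(crawl(graph, "big bang", "parking meters"))
    # -> ['big bang', 'physics', 'engineering', 'cities', 'parking meters']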

To the extent that part of the graph must be used to store the graph-crawling agent's behavior, the capacity of the system is reduced.  If, on the other hand, the graph is constructed such that crawling it is the natural (least-energy) result of any environmental interaction, then all of the graph can be assumed to contain, or have the potential to contain, knowledge.

Now, still assuming the perfect storage system, let's look at what a graph crawler must do, and what its attributes must be, in order that salience results from simple traversal.  There are only two options.  In the first, the agent must keep track of the path it takes through the graph.  That history of its path through the graph becomes the answer it accumulates as it crawls.  In the second, the graph itself must be traversed at least two times.  As the agent traverses the graph on its first trip, instead of accumulating a copy of the path it takes, the agent modifies the graph in some way (leaves bread crumbs).  The second trip through the graph, this time following the crumb trail, is itself the solution.  The job of the second agent is both to sing out the answer as it traverses the marked graph, and to wipe the path clean as it goes.
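
The breadth-first crawler sketched earlier is an instance of the first option: it carries its path with it.  A sketch of the second option might look like this, again in Python, against the same toy graph (the function names are mine, and the code assumes a small acyclic graph):

    def mark_trail(graph, start, goal, crumbs):
        # First pass: depth-first search that marks each node on a
        # successful route instead of carrying the path along.
        crumbs[start] = True
        if start == goal:
            return True
        for nxt in graph[start]:
            if nxt not in crumbs and mark_trail(graph, nxt, goal, crumbs):
                return True
        del crumbs[start]          # dead end: take the crumb back
        return False

    def follow_and_wipe(graph, start, crumbs):
        # Second pass: walk the marked nodes, announce each one,
        # and erase the marks as it goes.
        node = start
        while node is not None:
            print(node)            # "sing out" the answer
            del crumbs[node]       # wipe the path clean
            node = next((n for n in graph[node] if n in crumbs), None)

    crumbs = {}
    if mark_trail(graph, "big bang", "parking meters", crumbs):
        follow_and_wipe(graph, "big bang", crumbs)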

In both of these examples, it is easy to see that knowledge is limited by how long a path an agent can hold, or how quickly it can sing out its answer.  In computation, both the time it takes to accumulate an answer and its sheer size are the critical limiters.  An answer is no good if it is as large as the problem.  Abstraction must provide computational advantage over the abstracted.

A loose analogy can be drawn between intelligence and travel.  If, for instance, it takes 10 days to cross an ocean in a ship, you might consider the option of building a much bigger ship.  If you build it to a length of half the width of the ocean, then the same trip would take just half the time.  A ship the full length of the ocean would take zero time to make the transit.  However, as any map maker can tell you, a map the size of a mountain doesn't provide much advantage.

Now we are getting close to what looks like a good general description of computation: the building of abstractions that are easier to navigate than the objects they abstract.  If there are elements within an object, environment, or situation that are important to an observer, and there are elements in that same domain that are unimportant to the observer, then it is reasonable to assume that an abstraction, a reduction, a selective compression can be built such that navigation through the abstraction is more efficient than navigation through the original domain.  Colloquially, we can say that an abstraction must therefore be smaller than the domain it abstracts.
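
The humble index is probably the plainest example of such a selective compression.  In the Python sketch below (the parking-meter records are, of course, invented), the domain carries many attributes, the observer cares about only one, and the resulting abstraction is both smaller than the domain and far cheaper to navigate:

    # The raw domain: many records, many attributes per record.
    domain = [{"name": f"meter-{i:05d}", "street": i * 3.7,
               "color": "grey", "coins": i % 47} for i in range(100_000)]

    # The abstraction: keep only the one attribute the observer cares
    # about (where each meter is), discard the rest.
    index = {m["name"]: m["street"] for m in domain}

    # Navigating the native domain: a linear scan over every record.
    slow = next(m["street"] for m in domain if m["name"] == "meter-09999")

    # Navigating the abstraction: a single hash lookup.
    fast = index["meter-09999"]

    assert slow == fast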

However, we frequently fantasize about perfect understanding.  As we build better and better abstractions, we work towards this goal of complete and absolute knowledge.  All the while, the abstraction grows.  More and more of its size represents the administrative structure of the abstraction itself (spacers and associators).  As an abstraction matures, another factor increasingly contributes to its size: a model of the observer in reference to the target domain.  In fact, any realistic measurement of the size of an abstraction must include the computer or brain both constructing and navigating the abstraction.  Because we humans come with computers pre-installed, it is common to see abstractions as separate from the machinery that computes on them.

For these (and other) reasons, it is quite possible for a map to grow quickly to many times the size of the domain being mapped.  So we must re-state our definition of computation.  A simple size comparison between abstraction and abstracted no longer seems appropriate.  What matters more, it seems, is the relative expense of building and executing upon the abstraction as set against the expense of executing the same measurements on the original native domain.
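
Stated as back-of-envelope arithmetic (all of the costs below are invented purely for illustration), the revised definition amounts to a break-even calculation:

    build_cost   = 1_000_000  # cost of constructing the abstraction (the map)
    query_abs    = 1          # cost per question asked of the abstraction
    query_native = 500        # cost per question asked of the raw domain

    # The abstraction pays off once:
    #   build_cost + q * query_abs  <  q * query_native
    break_even = build_cost / (query_native - query_abs)
    print(f"abstraction pays for itself after {break_even:.0f} queries")
    # -> roughly 2004 queries; below that, measure the native domain directly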

There is another complication that must be considered, this being the value of the knowledge that is extractable from a domain.  Abstraction construction, navigation, and processing costs need only be less than the same effort applied to the extraction of that knowledge from the native domain.

Hopefully, the reader is beginning to see the many parallels connecting what we call computation and evolution.  Both are processes.  Both utilize finite temporary structures that feed off of themselves to build other finite structures through time.  Our understanding of both computation and evolution seems limited by our natural focus on the temporary, the here and now, on species and executables, while the larger phenomena, general complexity handling and abstraction maximization, are largely ignored.

[more to come]

Randall Reetz
