Cognition Is (and isn't):

What is really going on in cognition, thinking, intelligence, processing?

At base cognition is two things:

1. Physical storage of an abstraction
2. Processing across that abstraction

Key to an understanding of cognition of any kind is persistence. An abstraction must be physical and it must be stable. In this case, stability means, at minimum, the structural resistance necessary to allow processing without that processing unduly changing the data's original order or structural layout.

The causal constraints and limits of both systems, abstraction and processing, must work such that neither prohibits nor destroys the other.

Riding on top of this abstraction storage/processing dance is the necessity for a cognition system to be energy agnostic with regard to syntactic mapping. This means that it shouldn't take more energy to store and process the string "I ate my lunch" than it takes to store and process the string "I ate my house".

Syntactic mapping (abstraction storage) and walking those maps (abstraction processing) must be energy agnostic. The abstraction space must be topologically flat with respect to the energy necessary to both store and process.
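
To make that flatness concrete, here is a minimal sketch (my own illustration in Python; nothing in it comes from the argument above beyond the two example strings). In any conventional storage substrate, the cost of writing a string depends on its length and encoding, never on its truth or plausibility:

```python
# A minimal sketch of syntactic energy-flatness: the cost of storing a
# string is a function of its encoded length alone, never of its
# meaning or plausibility.

def storage_cost_bytes(sentence: str) -> int:
    """Physical cost to store a string, measured in encoded bytes."""
    return len(sentence.encode("utf-8"))

plausible = "I ate my lunch"
absurd = "I ate my house"

# Both sentences cost exactly the same to write and to read back:
assert storage_cost_bytes(plausible) == storage_cost_bytes(absurd)
```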

Thermodynamically, such a system allows maximum variability and novelty at minimum cost.

What-ifs… playing out, at a safe distance, simulations and virtualizations of events and situations which would, in actuality, result in huge and direct consequences: this is the great advantage of any abstraction system. A powerful cognition system is one that can propagate endless variations on a theme, and do so at low energy cost.
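
A toy version of that cheap what-if propagation (a hypothetical sketch of mine, not the author's; the vocabulary lists are invented): a handful of string operations spins out every variation on a theme at essentially no cost and no risk.

```python
# A hypothetical sketch of cheap "what if" propagation: an abstraction
# system can enumerate endless variations on a theme for the price of
# a few string operations, with none of the consequences of acting.
from itertools import product

subjects = ["I", "you", "the dog"]
verbs = ["ate", "buried", "sold"]
objects = ["my lunch", "my house", "the car"]

# Every combination is simulated at near-zero energy cost and zero risk:
for s, v, o in product(subjects, verbs, objects):
    print(f"{s} {v} {o}")  # 27 virtual events, no actual consequences
```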

And yet. And yet… syntactic topological flatness carries its own obvious disadvantages. If it takes no more energy to write and read "I ate my house" than it does to write or process the statement "I ate my lunch", how does one go about measuring validity in an abstraction? How does one store and process the very necessary topological inequality that leads to semantic landscapes… to causal distinction?

The flexibility necessary in an optimal syntactic system, topological flatness, works against the validity mapping that makes semantics topologically rugged, that gives an abstraction semantic fidelity.
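
One way to picture the resolution (a hypothetical sketch, assuming validity is tracked in a structure separate from the flat syntactic store): keep storage flat, and let accumulated experience carve a frequency landscape on top of it, so that some paths through the abstraction become better worn than others.

```python
# A hypothetical sketch of a semantic landscape layered over a flat
# syntactic store: storage treats every string identically, while
# learned co-occurrence counts make well-confirmed transitions "rugged".
from collections import defaultdict

transition_counts = defaultdict(int)  # the learned validity landscape

def learn(sentence: str) -> None:
    """Additively accumulate word-pair counts from experience."""
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        transition_counts[(a, b)] += 1

def ruggedness(sentence: str) -> int:
    """Higher score = a path well-worn by experience = more 'valid'."""
    words = sentence.split()
    return sum(transition_counts[(a, b)] for a, b in zip(words, words[1:]))

for _ in range(100):
    learn("I ate my lunch")  # experience repeatedly confirms this path
learn("I ate my house")      # encountered once, perhaps in a dream

print(ruggedness("I ate my lunch"))  # 302: topologically rugged
print(ruggedness("I ate my house"))  # 203: its last step is untrodden
```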

This problem is solved by biology, by mind, through learning. Learning is a physical process. As such, it is sensitive to the direction of time. Learning is growth. Growth is directional. Growth is additive. Learning takes aggregate structures from any present and builds super-aggregate structures that can be further aggregated in the next moment.
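
That additive, time-directional growth can be sketched in a few lines (a hypothetical illustration; the pairing rule and names are mine): each step binds the aggregates already present into super-aggregates, and nothing is ever rewritten, so the hierarchy itself records the order of construction.

```python
# A hypothetical sketch of learning as directional, additive growth:
# each moment binds the aggregates already present into super-aggregates
# available to the next moment. Nothing is rewritten; structure only
# accretes, so the history of growth is preserved in the hierarchy.

def grow(aggregates: list) -> list:
    """One time step: bind adjacent structures into super-aggregates."""
    return [(a, b) for a, b in zip(aggregates[::2], aggregates[1::2])]

layer = ["a", "b", "c", "d", "e", "f", "g", "h"]  # today's aggregates
history = [layer]
while len(layer) > 1:
    layer = grow(layer)
    history.append(layer)  # growth is additive: earlier strata remain

for t, stratum in enumerate(history):
    print(f"t={t}: {stratum}")
```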

I will go so far as to suggest that definitions of both evolution and complexity hinge on some metric of a system's capacity to physically abstract salient aspects of the environment in which it is situated. This abstraction might be as complex as experience stored as memory in mind, and it might be as simple as a shape that maximizes (or minimizes) surface area.

A growth system is a system that cannot help but be organized ontologically. A system that is laid up through time is a system that reflects the hierarchy of influence by which its environment is organized. Think of it this way: the strongest forces affecting an environment will overwhelm and wipe out structures based on less energetic forces. Cosmological evolution provides an easy-to-understand example. The heat and pressure right after the big bang allow only aggregates based on the most powerful forces. Quarks form first; this lowers the temperature and pressure enough for subatomic particles, then atoms. Once the heat and pressure are low enough, once the environmental energy is less than the relatively weak electrical bonds of chemistry, molecules can precipitate from the atomic soup.

The point is that evolved systems (all systems) are morphological ontologies that accurately abstract the energy histories of the environments from which they evolved. The layered grammars that define the shape and structure (and behavior) of any molecule reflect the energy epochs in which they were formed. This is learning. It is exactly the same phenomenon that produces any abstraction and processing system. Mind and molecule, at least with regard to structure (data) and processing (environment), are the result of identical processes, and as a result, will (statistically) represent the energy ontology that is the environment from which they were formed.
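
The cosmological example can be caricatured in a few lines of code (a toy model; the binding energies and cooling rate are invented, purely illustrative): an aggregate persists only once the ambient energy falls below its binding energy, so the order of precipitation records the energy history of the environment.

```python
# A toy model (invented, illustrative numbers) of structure
# precipitating from a cooling environment: an aggregate persists only
# once the ambient energy drops below its binding energy, so the layered
# result is a readable record of the environment's energy epochs.

bonds = {            # binding energies, strongest first (arbitrary units)
    "quarks": 1e12,
    "nucleons": 1e9,
    "atoms": 1e5,
    "molecules": 1e1,
}

ambient = 1e15       # just after the big bang: too hot for any structure
epochs = []
while ambient > 1:
    ambient /= 1000  # the universe expands and cools
    for structure, binding_energy in bonds.items():
        if ambient < binding_energy and structure not in epochs:
            epochs.append(structure)  # precipitates in this epoch

print(epochs)  # ['quarks', 'nucleons', 'atoms', 'molecules']
```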

It is for this reason that the ontological structure of any growth system is always and necessarily organized semantically. Regardless of domain, if a system grew into existence, an observer can assume overwhelming semantic relevance that differentiates those things that appeared earlier (causally more energetic) from those things that appeared later (causally less energetic).

This is true of all systems. All systems exhibit semantic contingency as a result of growth. Cognition systems included (but not special). The mind (a mind, any mind) is an evolving system. Intelligence evolves over the life span of an individual in the same way that the proclivity toward intelligence evolves over the life span of the species (or deeper). Evolving systems cannot be expressed as equations. If they could, evolution wouldn't be necessary, wouldn't happen. Math-obsessed people have a tendency to confuse the feeling of the concept of pure abstraction with the causal reality of processing (the very processing that allows them to experience this confusion).

Just as important, data is only intelligible (process-able, representative, a model, an abstraction) if it is made of parts in a specific and stable arrangement with respect to one another. The zeroth law of computation is that information or data or abstraction must be made of physical parts. The crazies who advocate a "pure math" form of mind or information simply sidestep this most important aspect of information. This is why quantum computing is in reality something completely different from the information-as-ether inclination of the dualists and metaphysics nuts. While it may indeed be true that the universe (any universe) has to, in principle, be describable, abstract-able by a self-consistent system of logic, that is not at all the same as the claim that the universe IS (purely and only) math.

Logic is an abstraction. As such, it needs a physical realm in which to hold its concepts as parts in steady and constant and particular relation to each other.

My guess is that we confuse the FEELING of math as ethereal, non-corporeal pure concept with the reality, which of course necessitates both a physical REPRESENTATION (in neural memory, or on paper, chip, or disc) and a set of physical PROCESSING MACHINERY to crawl it and perform transforms on it.

What feels like "pure math" only FEELS like anything because of the physicality that is our brains as corporeal machinery, as they represent and process a very physical entity that IS logic.

We make this mistake all day long. When the only access to reality we have is through our abstraction mechanism, we begin to confuse the theater that is processing with that which is being processed, and ultimately with the things that the processed abstractions represent.

Some of the things the mind (any mind) processes are abstractions, stand-ins for external objects and processes. Other things the mind processes only and ever exist in the mind. But that doesn't make them any less physical. Alfred Korzybski is famous for truthfully declaring, "The map is not the territory!" But this statement is not logically equivalent to the false declaration, "The map is not territory!" Abstractions are always and only physical things. The physics of a map, an abstraction system, a language, a grammar, is rarely the same as the physics of the things that map is meant to represent, but the map always obeys, and is consistent with, some set of physical causal forces and the structures built of them.

What one can say is that abstraction systems are either lossy or they aren't useful as abstraction systems. The point of an abstraction is flexibility and processing efficiency. A map of a mountain range could be built out of rocks and made larger than the original it represents, but that would very much defeat the purpose. On the other hand, one is advised to understand that the tradeoff for the flexibility of an effective map is that a great deal of detail has been excluded.
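
A small sketch of that tradeoff (my own illustration, with invented numbers): a useful map keeps a fraction of its territory's detail, and pays for its flexibility with exactly that loss.

```python
# A hypothetical sketch of the lossy-map tradeoff: a useful abstraction
# discards most of its territory's detail in exchange for cheap storage
# and cheap processing; reconstruction from the map is only approximate.

terrain = list(range(10_000))        # stand-in for dense elevation data

def make_map(territory: list, stride: int) -> list:
    """Abstract the territory by keeping one sample in every `stride`."""
    return territory[::stride]

sparse_map = make_map(terrain, stride=100)

print(len(terrain), "->", len(sparse_map))  # 10000 -> 100: cheap to carry
# The map is physical (it occupies memory), useful (it is small), and
# lossy (99% of the territory's detail is gone).
```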

Yet, again and again, we ourselves, as abstraction machines, confuse the all-too-important difference between representation and what is represented.

Until we get clear on this, any and all attempts to square up against the problem of machine intelligence will fail.

[more later…]

Randall Reetz