Wednesday, March 12, 2008

Why Your Point Looks Like My Point

I've spent years thinking about ways to represent (diagram, explain,
summarize, illustrate, simplify) systems in ways that expose deep and
real truths or salient aspects that would otherwise be hidden behind
the full cacophony of the whole system in situ. That is, after all,
what abstraction is all about: a filtering away of what doesn't
matter and toward what does. Sounds simple enough... but filtering
and representation as simplified abstraction that magnifies deep
patterns and vacuums away the rest is, to my reckoning, exactly why
evolution is such a slow, step-wise, intractable, and energy-hungry
process. Of course it is instructive to remember that abstractions
are no less real than the systems they represent. In fact, one can
make an argument that evolution produces abstractions, that the
current state of any system is a layering of abstractions that shape
the morphology of a system so that it can more effectively exploit
the resources in its environment. And while the focus of this
discussion is the kind of mental or notational mapping done by
humans, this is only one way that nature has found to represent, and
thus simplify, the access and processing of external resources.

In particular I am focusing on diagrammatic representations of
systems as network maps that represent the causal relationships
between the parts or sub-systems that make up a system. I call these
ontologies "hierarchies of influence". Influence hierarchy maps
naturally take the shape of cones, and I am interested in qualitative
differences between the point-y and funnel-y ends of these cones. In
particular, I am curious about the apparent commonalities of these
cones at their lowest or most causal points.

So what exactly IS an "influence cone"? The concept is based on the
idea that all elements in a system proportionally affect, or are
affected by, all other elements in that system, and that these
relationships can be represented by a network diagram or ontology.
Once all elements in the resulting influence hierarchy map are
optimally arranged to minimize link length, a spatial arrangement
will emerge with cause and effect ranged across the dominant axis
(things that cause at one end, things that are affected at the
other).

Think about it: given any two interacting elements, one is always
going to exert more control over the other; it causes more than it is
affected by the other. If you take all of the elements of a system,
all of the parts and subsystems that together result in the shape and
behavior of that system, and arrange them so that the ones that cause
more find their way to one end and the ones that are more affected
sit at the other end of the spectrum, then an ontology as a network
of influence will result, and this network will be shaped like a
cone. Because it is easier to be affected than it is to cause, there
will always be far more elements at the affected (or wide) end of the
cone, while the other end tapers to a point where sit the one or two
elements that end up affecting everything above them and are not
themselves controlled by any other elements. These most causal
elements at the base of the influence cone are one-way linked to
other elements... instructions travel out from them but rarely travel
back the other way. The same (in reverse) is of course true of the
affected end of the cone: its elements are more likely to be one-way
linked to the elements that affect them.
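
The arrangement described above can be sketched in code. This is a
minimal illustration, not anything from the original post: the toy
system, its edges, and the net-influence scoring rule are all
invented assumptions. Each element is ranked by how much it causes
minus how much it is affected, and the sorted result is the cone's
axis, causal point first, wide affected end last.

```python
from collections import defaultdict

# Hypothetical toy system. Each directed edge reads (causer, affected).
edges = [
    ("thermostat", "furnace"),
    ("thermostat", "fan"),
    ("furnace", "room_temp"),
    ("fan", "room_temp"),
    ("outside_temp", "room_temp"),
    ("outside_temp", "thermostat"),
]

out_deg = defaultdict(int)  # how often an element causes
in_deg = defaultdict(int)   # how often an element is affected

for causer, affected in edges:
    out_deg[causer] += 1
    in_deg[affected] += 1

nodes = set(out_deg) | set(in_deg)

# Net influence: positive means more causal (the point of the cone),
# negative means mostly affected (the wide end of the cone).
ranking = sorted(nodes, key=lambda n: out_deg[n] - in_deg[n], reverse=True)

for n in ranking:
    print(n, out_deg[n] - in_deg[n])
```

In this toy run the purely causal element (`outside_temp`, which
nothing in the system feeds back into) sorts to the point of the
cone, and the purely affected one (`room_temp`) sorts to the wide
end, matching the one-way-linked behavior described above.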

Depending on which relationship parameter is being scrutinized, a
system can of course be represented simultaneously by many influence
cone abstractions. Further complicating matters, the definition of a
system is arbitrary, and the same element or subsystem can appear in
an almost infinite number of systems, each with its own almost
limitless set of influence cone mappings.

One can imagine building influence cones of other influence cones
or, more provocatively, superimposing multiple influence cones,
building an n-dimensional super-cone of all possible influence cones.
In such a super-cone, an element shared by multiple cones would exist
not as a point but as a probability cloud. Nevertheless, one can
imagine that the causal end of the resulting super-influence cone
would share, in some rough probabilistic way, the causal ends of many
sub-cones.
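
The superposition idea can be sketched as well. Again the cones,
element names, and depth values below are invented for illustration:
each cone assigns a shared element a different depth (0 = most
causal), so in the combined super-cone the element is described by a
distribution of positions, the "probability cloud", rather than by a
single point.

```python
from collections import defaultdict

# Three hypothetical influence cones, each mapping elements to a depth
# in that cone (0 = the causal point, larger = the wide affected end).
cones = {
    "thermodynamic": {"energy": 0, "entropy": 1, "temperature": 2},
    "informational": {"entropy": 0, "signal": 1, "noise": 2},
    "linguistic":    {"signal": 0, "grammar": 1, "entropy": 2},
}

# Collect every depth at which each element appears across all cones.
positions = defaultdict(list)
for cone, depths in cones.items():
    for element, depth in depths.items():
        positions[element].append(depth)

# An element shared by several cones ("entropy" here appears in all
# three, at different depths) shows up as a spread of positions: a
# cloud, not a point.
for element, ds in sorted(positions.items()):
    mean = sum(ds) / len(ds)
    print(element, ds, round(mean, 2))
```

Elements that sit near depth 0 in many cones would, on average, sink
toward the causal end of the super-cone, which is the rough
probabilistic sharing of causal ends described above.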

This most likely explains the coincidences and serendipitously shared
concepts that scientists and philosophers frequently expose when
comparing multiple domains and disciplines at their deepest, most
causal ends.

Anyone who has kept abreast of progress made in the sciences over the
past 100 years will have been curiously struck by strange conceptual
parallels that show up across such seemingly separate fields of study
as information science, thermodynamics, linguistics, bioinformatics,
genetics, genomics, evolution, AI, and many of the attempts to build
a Grand Unified Theory (GUT) to explain the universe from first
principles. Another way of explaining away the apparent parallels
across the causal base of all domains is to say they are a byproduct
of the ignorance that naturally accompanies exploring at the edge of
the known. It is by definition grey and blurry out at the fringe of
the known. So the question we have to ask ourselves constantly is
"just what is it that seems familiar: real patterns, or the fact that
all patterns blurred sufficiently will appear equivalent?" Truth, or
just another shade of grey?

Yet I don't think we are being fooled by our own senses, because I
see patterns come into sharper and sharper focus as our knowledge
increases. But there is a third and more problematic reading. The
third reading is built on the assumption that we are making progress
teasing accurate abstractions from the order inherent in nature.
Patterns seen at the base of all domains of study are assumed to
reflect actual similarities... but these similarities are assumed to
be attributes of mapping, of limitations built into our abstraction
process, and to say nothing about the actual shape and behavior of
the systems they represent. This is the postmodern position. Hard
postmodernists refuse to acknowledge the possibility that any real
pattern exists beyond our mapping. Medium postmodernists say reality
might have pattern, but because we cannot see it without abstraction
it doesn't matter either way. Soft postmodernists think the reality/
mapping activity will introduce signal/noise confusions that are
unavoidable, but that working knowledge grows as our maps get better
and better at understanding and mitigating this problem.

Personally, I loathe the obstinate arrogance and human-centricity
that practically ooze from the self-inflicted wound that is
postmodernism. Modern humans have been here to share this corner of
the universe for only fourteen millionths of the history of this
universe. If reality is dependent on our experience of reality (this
is the honest-to-god position of the hardest postmodernists), then
how did the universe go about its business long enough to create us
in the first place? However, to the extent that abstraction methods
do end up clouding and obscuring our view of reality, we must
continue to pledge vigilance against the demon that works tirelessly
to confuse understanding. The very fact that this category of noise
generation has been exposed is proof that the hard relativism of the
hardest postmodernists is wrong. The fact that we continue to learn
to recognize (and mitigate the destructive effects of) more and more
subtle sources of measurement, observation, and mapping noise means
that our maps will become more and more accurate. Postmodernist
cautions have led to protections that have made science that much
more accurate and authoritative.

Mapping methods can indeed superimpose their own grammatical
structures over any raw subject being mapped. Plus, we have a
tendency to re-use familiar mapping techniques (description
languages). If these English words have worked to communicate the
shape and behavior of a mouse, why not use them to describe the solar
system, or the English language itself? Again we come to a crossroads
of mapping cautions, again we are visited by the taunting ghosts of
Penrose, Gödel, and Turing. Mapping, abstraction, and useful
understanding are stronger for it.
