Real-Time Observation Is Always More Efficient Than After-The-Fact Parsing

Non-random environments (systems):

- have evolved (from a simpler past)
- are (variously) optimized to input conditions and output demands
- are sequentially constructed in layers
- are re-constructed periodically
- are derived from the constraints of pre-existing environments

Understanding (extracting pattern rules and instances of those rules) is more efficient when observations are made over the course of an environment's construction. Extracting patterns after the fact requires inferring a construction sequence from the existing artifact. The number of possible developmental paths (programmed algorithms) that will result in a particular artifact is infinite. Parsing through this infinite set toward a statistically biased guess at the most likely progenitor is lossy at best and computationally prohibitive at worst.
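The many-paths-to-one-artifact point can be made concrete with a toy sketch. The artifact and the three generative rules below are invented for illustration; the point is only that distinct construction histories converge on an identical end state, so the artifact alone cannot identify which history actually occurred.

```python
# Toy illustration: several distinct "developmental paths" (programs)
# that all terminate in the same artifact.

artifact = "ABABABAB"

def repeat_pair(n):
    # History 1: stamp out the unit "AB" n times.
    return "AB" * n

def interleave(n):
    # History 2: build n A's and n B's separately, then interleave them.
    return "".join(a + b for a, b in zip("A" * n, "B" * n))

def append_alternating(n):
    # History 3: grow one character at a time, alternating at each step.
    s = ""
    for i in range(2 * n):
        s += "A" if i % 2 == 0 else "B"
    return s

# All three histories converge on the same artifact:
assert repeat_pair(4) == interleave(4) == append_alternating(4) == artifact
```

Observing construction in real time distinguishes these histories immediately; parsing the finished string cannot.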

For instance, the best candidate (the one of shortest algorithmic complexity) produced by post-construction parsing may indeed be the more likely (least-energy) progenitor, but it may not reflect the actual causal chain that produced the environment. Projections based on a statistically optimal history will diverge from the futures the environment actually produces.
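This divergence can be sketched with a contrived pair of models (both invented for this example): the shortest-description candidate and a longer "actual" progenitor that agree on every observation made so far, yet predict different futures.

```python
# Two candidate histories that fit the observed past equally well
# but diverge in their projections.

observed = [0, 1, 4, 9]  # the artifact as seen to date

def model_short(n):
    # Shortest-description candidate: n squared.
    return n * n

def model_actual(n):
    # Hypothetical true progenitor: agrees with n squared for n < 4
    # because the product term vanishes there, then departs.
    return n * n + n * (n - 1) * (n - 2) * (n - 3)

# Both models reproduce the observed history exactly:
assert [model_short(n) for n in range(4)] == observed
assert [model_actual(n) for n in range(4)] == observed

# But their next predictions diverge (16 versus 40):
assert model_short(4) != model_actual(4)
```

Selecting the statistically optimal (shortest) model is a reasonable bet, but nothing in the artifact guarantees it matches the causal chain that was actually followed.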

The only time that a statistical (minimum algorithm) parsing of an environment is guaranteed to match reality is when that parsing includes the whole system (the entire Universe).

Observing the genesis of an environment minimizes the mandatory errors inherent in statistical, after-the-fact (Solomonoff) algorithmic-probability parsing of a pre-existing system.

Said more succinctly: if you want to grow an optimal system, use algorithmic probability and algorithmic complexity as metrics toward optimization; but if you want to describe a pre-existing system, it is best to build that description by observing its genesis.
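As a practical note on the first half of that claim: true algorithmic (Kolmogorov) complexity is uncomputable, but compressed length is a standard computable upper bound, so a general-purpose compressor can serve as the optimization metric the text suggests. A minimal sketch, assuming `zlib` as the compressor of convenience:

```python
import hashlib
import zlib

def complexity_proxy(data: bytes) -> int:
    # Compressed length upper-bounds algorithmic complexity:
    # highly patterned data compresses well, incompressible data does not.
    return len(zlib.compress(data, 9))

# A highly structured artifact (1000 bytes of a repeated unit)...
structured = b"AB" * 500

# ...versus deterministic but effectively incompressible bytes
# (1024 bytes built from SHA-256 digests).
incompressible = b"".join(hashlib.sha256(bytes([i])).digest() for i in range(32))

# The structured artifact scores far lower on the complexity metric.
assert complexity_proxy(structured) < complexity_proxy(incompressible)
```

A metric like this can rank candidate systems while they are being grown; the essay's caveat is that the same metric, applied after the fact, only guesses at history.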

Randall Reetz