Comment on COMPLEXITY METRIC: Friendly AI? by Randall Lee Reetz, 2009-03-10

And now for my latest piecemeal and evolving understanding of Tim's algorithm (though I am probably still getting it wrong). I now think I understand that his method involved an interplay between history string length (what he referred to as "horizon") and various methods that loop toward a best mathematical or statistical model, generating a closer and closer match. He suggested (but did not explain) that a conditional statement (a test) could be placed in the main repeat loop, so that failed attempts at approximating a model that matches the string would cause the history string to be truncated in steps. If all attempts at fitting an equation to the history string fail, it makes sense to start again with a shorter history string (bringing the horizon in tighter to the present), since short strings are easier to model. I now think his seemingly arbitrary tests for all zeros, all ones, repeated one/zero/one... strings and their opposites were an attempt to cover the simplest possible patterns of the shortest possible history strings. As the algorithm tries, fails, and chops the string shorter, the history string eventually becomes only a few bits long, and it is more and more likely that the simplest repetitive patterns will match. Like I said, it is hard to imagine that his very basic scheme would ever result in the kinds of experience we all associate with human-level intelligence.
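The truncate-and-retry loop described above might be sketched roughly as follows. This is a speculative reconstruction of my reading of the algorithm, not Tim's actual code: the function names, the halving step, and the choice of which trivial patterns to test are all my own assumptions.

```python
def fit_simple_model(window):
    """Check the trivial patterns a short window is most likely to match."""
    if all(b == 0 for b in window):
        return 0                              # all zeros: predict another 0
    if all(b == 1 for b in window):
        return 1                              # all ones: predict another 1
    # alternating 0101... or 1010...: predict the flip of the last bit
    if all(window[i] != window[i + 1] for i in range(len(window) - 1)):
        return 1 - window[-1]
    return None                               # no simple model matched

def predict_next(history):
    """Predict the next bit, shrinking the horizon on each failed fit."""
    horizon = len(history)
    while horizon > 0:
        window = history[-horizon:]           # most recent `horizon` bits
        prediction = fit_simple_model(window)
        if prediction is not None:
            return prediction
        horizon //= 2                         # fit failed: pull the horizon in
    return None                               # even a 1-bit window failed

print(predict_next([1, 0, 1, 1, 0, 1, 0, 1]))  # full string fails; the
                                               # 4-bit tail 0101 fits -> 0
```

The point of the sketch is the interplay: the full history is tried first, and only when no model fits does the horizon contract, which is why the trivial pattern tests dominate at the shortest lengths.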
On a large enough machine, it might work at predicting short strings, but he never really showed how these short-string patterns would be layered into a grammar model that could ever grow to the level of complex-systems semantics. I am hoping he writes me, because I am very curious to understand where my assessment of his work is wrong, and how.

There are those who argue that a linear Turing-tape AI will never be intelligent, no matter how well it can mimic the actions of an intelligent being. Sentience, it is argued, requires real-time self-awareness... in effect, watching intelligence as it emerges within the self. This would be hard to approximate in a strict Turing machine, which by definition has only one read/write head and only one tape. Our brains aren't Turing machines, and our memory sure as hell isn't a Turing tape. But that doesn't mean we couldn't some day build a graph brain like ours. Who knows, maybe a network-graph brain like ours is a limited hack compared to some other thinking topology yet to be discovered or invented? Much of biology has already been shown to be easily bested... why should we expect the brain to be any more perfect than the historical hacks and refinements along a scheme that are the rest of our body parts and systems?