George Dafermos -- Re: Osaka Organis design/ Waking the Planet

Date: 2003/03/28 10:20
From: "George Dafermos" <georgedafermos@discover.org>
To: Gerry Gleason <gerry@geraldgleason.com>


> >
> >However, I can't really connect the last few words
> >"and therefore the insolvability of the general halting problem is exactly the equivalent of Goedel's incompleteness theorem from math" to the rest of the paragraph. Maybe my social sciences background is to be blamed for this. I know what the Halting Problem conveys but still I can't make any sense. So, please give me some help on this.
> >
> Although I haven't had time to read through that chapter, the key
> theoretical move is the "principle of computational equivalence".
> Mathematicians use the term "isomorphic" to describe an exact
> one-to-one mapping between two axiomatic systems: once you have that,
> you automatically know that any result proved in one system is also
> true in the other. In the bulk of the book, he is examining one formal
> system after another, and showing the chaos that is present. He is
> saying that the chaos is there because there is computation there, and
> wherever there is computation there is a fundamental irreducibility.
>
> Maybe the point I'm making is too cryptic to be useful here, but it
> really struck me that he is casting a much wider net with this than
> Turing or Goedel were likely to have contemplated (although such
> geniuses may well have done so, just not proved it). The first step in
> these theoretical frameworks is to establish through isomorphism that
> all the systems of interest are equivalent, so that further results
> can be taken to apply to the entire class (Turing-machine-equivalent
> logical machines, or mathematical systems equivalent to basic
> arithmetic for Goedel). Douglas Hofstadter's book Goedel, Escher,
> Bach: An Eternal Golden Braid is a wonderful way to learn about how
> this works even if you aren't particularly mathematical.
>
> Now, with the wider net being cast, I'm suggesting that these two
> results are the same result, that the two classes (and many others
> including physical and biological systems) are equivalent because it's
> all computation at the bottom (it's turtles all the way down ;-).
> Really, how different is it to ask whether an arbitrary computer
> program will terminate vs. asking whether an arbitrary mathematical
> statement can be proved? Although the two proofs are different, the
> result is the same one. Even more mind-bending, Goedel is saying that
> we can't prove that we won't uncover a contradictory result this way
> (i.e. that one of these problems turns out to be solvable while the
> other isn't).
>
> Hope this isn't just more confusing.
>
> WRT

It all makes sense now. The all-pervasive underlying pattern... I know. I should also confess that most of the times I've tried to engage with Wolfram's book, I've ended up looking at the pictures instead of reading it.
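
In fact, many of those pictures are of simple cellular automata such as Rule 30, and a few lines of Python are enough to reproduce the flavour of them (my own toy reconstruction, of course, not Wolfram's code):

    # A few rows of Rule 30: a one-line update rule whose output already
    # looks chaotic, which is the kind of picture that fills the book.
    RULE = 30
    cells = [0] * 31 + [1] + [0] * 31      # start from a single black cell
    for _ in range(16):
        print("".join("#" if c else " " for c in cells))
        # The new value of each cell is the bit of RULE indexed by its
        # (left, centre, right) neighbourhood read as a 3-bit number.
        cells = [(RULE >> (4 * l + 2 * c + r)) & 1
                 for l, c, r in zip([0] + cells[:-1], cells, cells[1:] + [0])]

And to make the halting-problem half of your point concrete for myself, I wrote out the usual diagonal argument as a rough Python sketch. The names halts and contrary are made-up placeholders; the whole point is that no such halts can ever actually be written:

    # Sketch of Turing's diagonal argument (illustrative only).
    # Suppose, for contradiction, we had a perfect oracle that decides
    # whether running program_source on input_data ever terminates.

    def halts(program_source, input_data):
        """Hypothetical oracle: True if the program halts, False if not."""
        raise NotImplementedError("no total, always-correct oracle can exist")

    def contrary(program_source):
        """Do the opposite of whatever the oracle predicts about a program
        that is fed its own source code as input."""
        if halts(program_source, program_source):
            while True:          # oracle says "halts" -> loop forever
                pass
        return "done"            # oracle says "loops forever" -> halt at once

    # Asking the oracle about contrary applied to its own source is
    # self-defeating: whichever answer it gives, contrary does the
    # opposite, so no correct halts can exist. Goedel's theorem plays the
    # same self-referential trick with "provable" in place of "halts".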

> >
> Very interesting. I wonder if you could find some similar relationships
> in, say, nerve cell groupings in brains and the like. I have some
> partially formed thoughts about how the small groups would be densely
> connected internally, but the external connections to larger groupings
> might be "function specific", so that any individual would have a scope
> < 150 but the small group as a whole would be connected to a much
> larger network.
>
> If the larger community (surrounding?) is similarly organized, I can see
> the scope being very wide indeed, but more in terms of replicating (with
> variation) the 15 -> 150 units in physically separated spheres.
>
In fact, the number 150 has everything to do with brains. It originates from the work of the British anthropologist Robin Dunbar on social channel capacity. Having examined several kinds of primates (monkeys, humans, chimps, etc.) in order to identify why we humans have a bigger neocortex (the region of the brain that deals with complex reasoning), Dunbar concluded that group size is what determines the size of the neocortex. In other words, the larger the groups we live in, the larger the neocortex. Dunbar's main thesis is that our brains evolved to cope with the complexities of larger social groups. For instance, if you belong to a group of five people, you have to keep track of ten separate relationships, but if you belong to a group of twenty people, you have to keep track of 190 relationships (every pair of people is a potential relationship, so a group of n has n(n-1)/2 of them). So while the group is only four times as large, the burden on one's social channel capacity has grown nineteenfold. Dunbar developed an equation, based on the ratio of neocortex size to the size of the whole brain, which works out the maximum group size for any given animal. In the case of humans, the number is roughly 150.
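
As a quick sanity check on those numbers (my own back-of-the-envelope illustration, not Dunbar's), the pairwise count is just n(n-1)/2:

    # Number of two-way relationships in a group of n people: n choose 2.
    def pairwise_relationships(n):
        return n * (n - 1) // 2

    for n in (5, 20, 150):
        print(n, "people ->", pairwise_relationships(n), "relationships")
    # 5 people -> 10 relationships
    # 20 people -> 190 relationships
    # 150 people -> 11175 relationships
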
And yes, as Francois correctly pointed out, the theory of small worlds and networks is definitely relevant.
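
Along those lines, here is a toy model of what you describe, densely connected small groups with only a few "function specific" links outward (the parameters are my own guesses, nothing from Dunbar or Wolfram): each person's direct ties stay near 15, yet the groups jointly span a community of 150.

    import itertools
    import random

    # Toy parameters for the 15 -> 150 idea: ten groups of fifteen people,
    # dense inside each group, with only a few outward links per group.
    GROUP_SIZE, NUM_GROUPS, EXTERNAL_LINKS = 15, 10, 3
    random.seed(1)

    people = list(range(GROUP_SIZE * NUM_GROUPS))
    groups = [people[g * GROUP_SIZE:(g + 1) * GROUP_SIZE]
              for g in range(NUM_GROUPS)]

    edges = set()
    for group in groups:                   # everyone knows everyone in-group
        edges.update(itertools.combinations(group, 2))
    for i, group in enumerate(groups):     # a few "function specific" ties out
        for _ in range(EXTERNAL_LINKS):
            a = random.choice(group)
            b = random.choice(groups[(i + 1) % NUM_GROUPS])
            edges.add(tuple(sorted((a, b))))

    degree = {p: 0 for p in people}
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1

    print("largest individual scope:", max(degree.values()))  # well under 150
    print("size of the whole community:", len(people))        # 150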

George

