Archive for March 2008

That’s some distributed temperature right there, Dude!

March 31, 2008

I’ve been thinking about massively parallel FARG, distributed temperature, and distributed coderacks:

Now, whenever a codelet is about to change something up, why add it to the global, central, unique coderack? I don’t see a good reason here, besides the “that’s what we’ve always done” one. If a codelet is about to change some structures in STM, why not (i) have a list (or a set, or a collection, etc.) of the structures in question & (ii) create a list-subordinated coderack on the fly? Instead of throwing codelets into a central repository, they go directly to the places in which they were deemed necessary. There are multiple repositories for codelets, multiple coderacks.

I argued that I liked the idea because (i) it enables parallelism of the true variety, (ii) it helps us to solve the stale codelets issue, and (iii) programming can (in principle) be done gradually, still in simulated parallel.

Now, I was wrong about temperature all along. Here’s a new idea:

Imagine that each of the coderacks has the following behavior: Get a RANDOM codelet, then run it.

That’s massively parallel temperature right there. Have a nice day. Thanks for stopping by.

Unconvinced? Think about this: some coderacks will start to become really small (as Abhijit pointed out in the comments previously), with one or two codelets, before being emptied and destroyed. That means that at that particular point (or thing) in STM, temperature is really low. However, other coderacks will be full of stuff waiting to run, which means that there, temperature is running high. Distributed temperature, with high randomness in hot spots and low randomness in cool spots.

Maybe this has to be coupled with some information about concepts, but I’m not sure anymore. I think that it just might be one of those wonderful, marvelous, emergent effects we take so much pleasure in playing with.
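
The idea above can be sketched in a few lines of Python (all names here are hypothetical, not actual FARG code): a per-structure coderack whose only selection policy is “pick a random codelet and run it”, with local temperature falling out of how crowded the rack is.

```python
import random

# Hypothetical sketch, not FARG code: a per-structure coderack whose
# ONLY behavior is "pick a RANDOM codelet, then run it".

class Coderack:
    def __init__(self):
        self.codelets = []  # codelets waiting to act on this structure

    def post(self, codelet):
        self.codelets.append(codelet)

    def step(self):
        """Pick a random waiting codelet and run it; False when empty."""
        if not self.codelets:
            return False  # rack emptied: this spot in STM has gone cold
        codelet = self.codelets.pop(random.randrange(len(self.codelets)))
        codelet()
        return True

    def temperature(self):
        # Emergent, local temperature: a crowded rack behaves very
        # randomly (hot), a near-empty one almost deterministically (cool).
        return len(self.codelets)
```

A rack holding a single codelet has exactly one possible next action (zero randomness); a rack holding fifty is a hot spot, no global temperature variable required.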

Relativity in words of four letters or less

March 28, 2008

Now, see, this is FARG-tastic.

Language Log: X is the Y of Z

March 26, 2008

Language Log looks at the old X-is-the-Y-of-Z meme, with a great deal of focus on Switzerland. Harry, what is the Switzerland of Athens? You could be the first person in the world to answer that burning question!

Also today: The fractal theory of Canada. The Canada of the electron is the neutrino.

Syntactic analogy example

March 25, 2008

So Charlie Stross’s blog today had a very strange syntactic construction. Charlie was the guest of honor at an SF convention. Next year, he’s looking forward to attending, while not being a guest of honor.

Well, next year the eastercon is going to be held in Bradford, a city with which I am not unacquainted, and I’m really looking forward to not going to be one of the guests of honour!

Of course, it’s easy to see what he means — but something about “looking forward to not going to be” doesn’t ring true. (Partly this is because English has no future tense, despite all intuition to the contrary.) Clearly, another “ing” is needed. I submit the obvious solution (which, in a sense, even sounds felicitous if you don’t think too hard):

I’m really looking forward to not goinging to be one of the guests of honour!

Works for me, anyway.

This is NOT an animal. It is NOT alive. But is it like your toaster?

March 23, 2008

Recently on our internal mailing lists we have discussed hyperbole in cognitive science, and all the fantastic claims that numerous cognitive scientists make. Every would-be Dr. Frankenstein out there seems to claim to have grasped the fundamental theory of the mind, and next year we will finally have the glorious semantic web, we will be translating War and Peace into Hindi in 34 milliseconds, we will be having love and sex with robots, and, of course, we will be able to download our minds into a 16GB iPhone and finally achieve humanity’s long-sought ideal of immortality.

Doug Hofstadter, of course, has long been dismissing these scenarios as nothing short of fantastic.  

I think it’s safe to say that, in these sacred halls of CRCC, we are Monkeyboy-Darwinist-Gradualists who are really disgusted by “excluded middle” theories: either something understands language or it doesn’t; either something has consciousness or it doesn’t; either something is alive or it isn’t; either something thinks or it doesn’t; either something feels pain or it doesn’t.

I guess it’s safe to say that we believe in gradualism. The lack of gradualism, and the jump from interesting ideas to “next year this will become a human being”, goes deeply against my views. So my take on the whole issue of grand statements in cognitive science is that much more gradualism is needed. People seem to have enormously simplistic views of the human mind. As gradualists, we do, however, believe in the longer-term possibility of the theories being developed, cognitive mechanisms being advanced, and machines becoming more and more human-like.

In fact, Harry has even stopped (but note that “stopping” is temporary, and is different from “quitting”, or “forever leaving”) his work on Bongard problems. Harry feels that our work will lead to dreadful military stuff. In fact, it is already happening, as he points out, and here is an eerie example. (Look at how this thing recovers from a near-certain fall on the ice.)

This “baby” is called BigDog, and, yes, it is funded by DARPA. So there we have it, Harry: already happening. The military will get their toys, with or without us. And this is gradualism at its best.

Remember: this thing is not an animal.

It is not alive.

But is it just as mechanical as a toaster?

The State of Seqsee

March 17, 2008

I am relieved to have reached a stage where Seqsee sees all the sequences that I wanted it to see in the initial release. This does not mean that the work is done. There is still a long way to go.

So what sequences can it see? If you allow me to include sequences that it sometimes sees, it is a long list. Many of these it can reliably extend, and making Seqsee reliable on the other sequences is the main work left.

The sequences:

  • 1, 2, 3, 4…
  • 1, 1, 2, 2, 3, 3, 4, 4…
  • 1, 2, 2, 3, 3, 3, 4, 4, 4, 4…
  • 1, 7, 2, 8, 3, 9…
  • 1, 7, 1, 2, 8, 1, 2, 3, 9…
  • 1, 1, 2, 1, 2, 3, 1, 2, 3, 4…
  • 1, 1, 1, 2, 1, 1, 2, 1, 2, 3, 1, 1, 2, 1, 2, 3, 1, 2, 3, 4…
  • 1, 1, 1, 2, 1, 3, 1, 4…
  • 2, 1, 2, 2, 2, 2, 2, 3, 2, 2, 4, 2…
  • 1, 2, 3, 2, 3, 4, 5, 3, 4, 5, 6, 7…
  • 1, 1, 2, 1, 1, 2, 3, 2, 1…
  • 1, 17, 17, 1, 1, 1, 17, 17, 17, 17…
  • 1, 1, 1, 2, 3, 3, 3, 4, 5, 5, 5, 6
  • 2, 3, 5, 7, 11… (when primes are allowed in the domain via a switch)
  • 2, 3, 2, 3, 4, 5, 4, 3, 5, 6, 7, 6, 5, 7, 8, 9, 10, 11, 10, 9, 8, 7…
  • 1, 2, 1, 2, 3, 1, 2, 3, 4, 5, 1, 2, 3, 4, 5, 6, 7…
  • 1, 2, 2, 3, 3, 5, 4, 7, 5, 11, 6, 13, 7, 17…
  • 1, 2, 3, 17, 4, 5, 6, 17, 7, 8, 9, 17…
  • 1, 2, 3, 17, 3, 4, 5, 17, 5, 6, 7, 17…
  • 1, 1, 1, 2, 3, 2, 3, 2, 3, 4, 5, 6, 4, 5, 6, 4, 5, 6…
  • 1, 1, 2, 3, 1, 2, 2, 3, 4, 1, 2, 3, 3, 4, 5…
  • 1, 2, 3, 1, 2, 2, 3, 1, 2, 2, 2, 3…
  • 1, 1, 2, 3, 1, 2, 2, 2, 3, 4, 1, 2, 3, 3, 3, 3, 4, 5…
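
To make the structure of a couple of these sequences explicit, here are illustrative Python generators. These are descriptions of the patterns only, not Seqsee’s own mechanisms (Seqsee discovers such structure on its own rather than being handed a formula).

```python
from itertools import count, islice

def doubled():
    # 1, 1, 2, 2, 3, 3, 4, 4, ...: each natural number twice
    for n in count(1):
        yield n
        yield n

def ascending_runs():
    # 1, 7, 1, 2, 8, 1, 2, 3, 9, ...: the k-th group is the run 1..k
    # followed by the single interleaved term 6 + k
    for k in count(1):
        for i in range(1, k + 1):
            yield i
        yield 6 + k
```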

Now that Seqsee has reached a feature-freeze, I will begin in earnest the task of fine-tuning. I have spent a large part of my work building tools, and these had better serve me well here.

I hope that this story does not end with me likened to George W. Bush when he said (five years ago!) “My fellow Americans: major combat operations in Iraq have ended”. There are likely to be many surprises as I fine-tune (and redo substantial chunks of various codelet families, remove accumulated cruft, and so forth). That fine-tuning is a story unto itself, and it will get its own post.


On massively parallel coderacks

March 11, 2008

Here’s a question: how to make FARG massively parallel? I’ve written about parallel temperature, and here I’d like to ask readers to consider parallel coderacks.

Like temperature, the coderack is another global, central structure. While it only models what would happen in a massively parallel mind, it does keep us from a more natural, truly parallel, model. Though I’m not coding this right now, I think my sketched solution might even help with the stale codelet problem Abhijit mentions:

We need the ability to remove stale codelets. When a Codelet is added to the Coderack, it may refer to some structure in the workspace. While the codelet is awaiting its turn to run, this workspace structure may be destroyed. At the very least, we need code to recognize stale codelets to prevent them from running.

Consider that most codelets fit into one of three kinds: (i) they can propose something to be created/destroyed, (ii) they can evaluate the quality of such change, and (iii) they can actually carry it out.
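
That three-way division can be sketched like this (illustrative Python with hypothetical names; real codelet families carry far more machinery). Each kind, when run, posts the next kind to the same local rack, and only proposals judged good enough ever reach a builder.

```python
# Hypothetical sketch of the three codelet kinds: proposer -> evaluator
# -> builder, each posting its successor to the same (local) coderack.

def make_proposer(rack, workspace, change):
    def proposer():
        # (i) propose a change to some STM structure
        rack.append(make_evaluator(rack, workspace, change))
    return proposer

def make_evaluator(rack, workspace, change):
    def evaluator():
        # (ii) judge the quality of the proposed change; only good
        # enough proposals get a builder (0.5 is an arbitrary cutoff)
        if change["strength"] > 0.5:
            rack.append(make_builder(rack, workspace, change))
    return evaluator

def make_builder(rack, workspace, change):
    def builder():
        # (iii) actually carry the change out in the workspace
        workspace[change["target"]] = change["value"]
    return builder
```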

Now, whenever a codelet is about to change something up, why add it to the global, central, unique coderack? I don’t see a good reason here, besides the “that’s what we’ve always done” one. If a codelet is about to change some structures in STM, why not (i) have a list (or a set, or a collection, etc.) of the structures in question & (ii) create a list-subordinated coderack on the fly? Instead of throwing codelets into a central repository, they go directly to the places in which they were deemed necessary. There are multiple repositories for codelets.

Why do I like this idea? First, because it enables parallelism of the true variety. Each of these STM-structure-list-bound coderacks can run in its own thread. If some crazy codelet wants to change some set of structures, it needs to find the proper coderack, or to create it during a run. This means codelets cannot interfere with STM structures to which they haven’t been assigned access; & the STM-structure-list coderack will ensure that only one codelet goes at a time.
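
As a sketch of the threading story (again, hypothetical names, not FARG code): give each rack a lock, let one thread drain it, and only that thread ever touches the structure the rack guards.

```python
import threading

# Hypothetical sketch: each per-structure coderack runs in its own
# thread; a lock guards the queue, and since only this rack's thread
# runs codelets, only one codelet at a time touches the structure.

class ThreadedCoderack:
    def __init__(self, structure):
        self.structure = structure
        self.codelets = []
        self.lock = threading.Lock()

    def post(self, codelet):
        with self.lock:
            self.codelets.append(codelet)

    def run_all(self):
        while True:
            with self.lock:
                if not self.codelets:
                    return
                codelet = self.codelets.pop(0)
            # run outside the lock: posting stays possible meanwhile
            codelet(self.structure)
```

With one thread per rack, structures never need global synchronization: whatever a codelet may touch is exactly what its rack guards.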

Moreover, it helps us solve the stale codelets issue, by simply destroying the coderack when something needed inside its lists is gone. If a structure is destroyed, and a codelet was waiting to work on it, the codelet (in fact all the coderacks associated with the structure) can go back to cyberspace heaven.
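
A minimal sketch of this destroy-the-rack-with-the-structure idea (hypothetical names; a real workspace would be far richer):

```python
# Hypothetical sketch: coderacks keyed by the STM structures they serve.
# Destroying a structure discards its rack, so stale codelets never run
# and never even need to be recognized as stale.

class Workspace:
    def __init__(self):
        self.structures = {}  # name -> structure data
        self.coderacks = {}   # name -> list of waiting codelets

    def add_structure(self, name, data):
        self.structures[name] = data
        self.coderacks[name] = []

    def post(self, name, codelet):
        # codelets go directly to the place where they are needed
        self.coderacks[name].append(codelet)

    def destroy(self, name):
        # the structure is gone: its coderack, and every codelet
        # waiting in it, goes along with it
        del self.structures[name]
        del self.coderacks[name]
```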

How about programming this thing? It doesn’t even have to be too hard. It can be done gradually, as all the coderacks can run in simulated parallel and be fully tested before we venture into the dangerous waters into which threads lead us.

(I don’t know when I’ll be able to try this idea out, but hopefully, one day soon.)

Does that make any sense?