Thursday, October 29, 2020

Hello, World!

OK, so it's time to make a real system, a beginner's system, and see if we can get it to exhibit any interesting behavior at all. We will describe a simple perceptual scenario, and then describe the system behavior we would like to see in response to it. 

This story-telling post will be the last step before beginning real implementation.

 

The Perceptual Scenario

Let's make this Real Easy. We have a perfectly motionless star field, and an asteroid traversing it. The asteroid is easily visible: a medium-bright spot three pixels across. And it is traveling slowly: five pixels per frame. 

The top-level goal is to identify the asteroid in every frame, and build an abstraction that links all those instances of the asteroid together. That is, the top-level Abstractor recognizes that the asteroid's appearances across the sequence of frames are all the same object. It ties them together in a data structure and tells us the direction and velocity of the asteroid's movement.
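As a concreteness check, this perceptual scenario is easy to synthesize. Here is a minimal sketch in Go; the frame dimensions, star positions, brightness values, and all names are invented for illustration. It renders a motionless star field plus a three-pixel asteroid advancing five pixels per frame:

```go
package main

import "fmt"

const (
	width, height = 64, 64
	asteroidSpeed = 5 // pixels per frame, as in the scenario
)

// Frame is a simple grayscale image, row-major.
type Frame [][]uint8

// NewFrame renders the motionless star field plus the asteroid at
// its position for frame n.
func NewFrame(n int, stars [][2]int) Frame {
	f := make(Frame, height)
	for y := range f {
		f[y] = make([]uint8, width)
	}
	for _, s := range stars {
		f[s[1]][s[0]] = 200 // the stars never move
	}
	// The asteroid: a medium-bright spot three pixels across,
	// crossing left to right at asteroidSpeed pixels per frame.
	x, y := n*asteroidSpeed, height/2
	for dx := -1; dx <= 1; dx++ {
		if x+dx >= 0 && x+dx < width {
			f[y][x+dx] = 128
		}
	}
	return f
}

func main() {
	stars := [][2]int{{10, 10}, {40, 20}, {25, 50}}
	f0, f1 := NewFrame(0, stars), NewFrame(1, stars)
	// The asteroid's center moves from x=0 to x=5 between frames.
	fmt.Println(f0[height/2][0], f1[height/2][5])
}
```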


The System Scenario

How should the system behave in response to the above perceptual scenario?

  • The Sensor starts firing, producing an image every second.

  • The THRESHOLD Abstractor fires, and produces a reasonable threshold for binarizing the image.

  • STARS takes the threshold and uses it to produce a list of star-like objects. This list includes information like centroid, diameter, and total energy.

  • BLINKER examines sets of results from STARS, for example five at a time, looking for out-of-place objects. It reports on objects that are at a particular location in frame N but were not there at frame N-1, and on objects that were at a particular location in frame N but are no longer there at frame N+1.

  • ASTEROIDS uses the data from BLINKER to construct sequences of locations that it believes to be consecutive images of the same object. It concludes its abstraction when the object departs the camera's field of view. This is the top-level goal of this system: when we produce one such Abstraction, the system shuts down.
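Of the steps above, BLINKER is the only one with any algorithmic subtlety, and even it reduces to set differences between consecutive frames. A minimal Go sketch, assuming exact centroid matches (real detections would need a position tolerance, and the Object fields here are a stand-in for the centroid/diameter/energy records from STARS):

```go
package main

import "fmt"

// Object is a star-like detection from STARS (centroid only, for brevity).
type Object struct{ X, Y int }

// blink compares detections in consecutive frames and reports
// appearances (in curr but not prev) and disappearances (in prev
// but not curr).
func blink(prev, curr []Object) (appeared, vanished []Object) {
	in := func(o Object, set []Object) bool {
		for _, s := range set {
			if s == o {
				return true
			}
		}
		return false
	}
	for _, o := range curr {
		if !in(o, prev) {
			appeared = append(appeared, o)
		}
	}
	for _, o := range prev {
		if !in(o, curr) {
			vanished = append(vanished, o)
		}
	}
	return
}

func main() {
	// Two stars hold still; the object at (50,40) moves to (55,40).
	prev := []Object{{30, 12}, {50, 40}}
	curr := []Object{{30, 12}, {55, 40}}
	a, v := blink(prev, curr)
	fmt.Println(a, v) // appeared at (55,40), vanished from (50,40)
}
```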

 

What interesting behavior will we see?

In keeping with the idea of starting out small, we want this demonstration to exhibit only a single interesting property of the eventual system. Namely, one of the pillars of an intelligent system: self-organization. The sequence of events outlined in The System Scenario, above, will not be hard-wired into the system at programming time, but rather created during run time.

How will that work?

 

  1. The Bulletin Board gets instantiated before anything else. It is an independently running goroutine. All Abstractors know how to talk to it.

  2. The Sensor will start firing as soon as it is created, because that's what Sensors do. It takes a major intervention by a high-level control system (which we do not have yet) to get a Sensor to stop firing.

  3. All the other Abstractors, when they are first instantiated, send Work Requests to the Bulletin Board. These are Messages, like everything else, so their format is very extensible. In the future these Work Requests will be able to express much more specific kinds of work. But for now, they only say "I want images", "I want thresholds", "I want stars" -- things like that.

  4. After sending out their own Work Requests, the non-Sensor Abstractors also start perusing whatever Work Requests the Bulletin Board has. When they find one that looks interesting, they send out a Work Offer in response -- and it gets accepted. Soon the system has wired itself up at run time, and starts working as outlined in The System Scenario.
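In Go, the Bulletin Board and the request/offer handshake can be sketched with goroutines and channels. Everything here -- the Message fields, the Abstractor names, the channel protocol -- is an assumed shape for illustration, not a finished design:

```go
package main

import "fmt"

// A Message is the common currency; Work Requests and Work Offers
// are just Messages with a kind and a reply channel.
type Message struct {
	Kind  string       // e.g. "want-images", "want-thresholds"
	From  string       // the Abstractor that sent it
	Reply chan Message // where Work Offers arrive
}

// BulletinBoard runs as its own goroutine, collecting Work Requests
// and letting other Abstractors browse them. Browsers get a copy of
// the list, so the board keeps sole ownership of its state.
func BulletinBoard(post <-chan Message, browse chan<- []Message) {
	var requests []Message
	for {
		select {
		case m := <-post:
			requests = append(requests, m)
		case browse <- append([]Message(nil), requests...):
		}
	}
}

func main() {
	post := make(chan Message)
	browse := make(chan []Message)
	go BulletinBoard(post, browse)

	// THRESHOLD posts its Work Request...
	reply := make(chan Message, 1)
	post <- Message{Kind: "want-images", From: "THRESHOLD", Reply: reply}

	// ...and another Abstractor, perusing the board, finds it
	// interesting and sends a Work Offer in response.
	for _, req := range <-browse {
		if req.Kind == "want-images" {
			req.Reply <- Message{Kind: "offer-images", From: "SENSOR"}
		}
	}
	offer := <-reply
	fmt.Println(offer.From, "->", "THRESHOLD")
}
```

Because a Work Request is just a Message, extending it later (more specific kinds of work, priorities, capabilities) means adding fields, not changing the protocol.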

OK, so in this example the Work Requests and the Work Offers are pretty trivial, and don't really give us anything more interesting than hard-wiring at programming time would. But the hope is that this provides a foundation that we can elaborate later to yield much more interesting runtime behavior.

This sounds like a plan.

Let us begin.

