Some sources of variable behavior in this architecture are Abstractor parameterization, alliance-forming, and reproduction.
Two different instances of a given Abstractor type can show very different behavior based on different parameterization. Larger-scale changes in system behavior will result from varying the Abstractor chains that come together to create the top-level Abstractions.
Somehow these sources of variable behavior must be made to operate spontaneously, without programmer intervention. But how?
Parameterization and Inheritance
Parameterization is the initial set of variable values that an Abstractor is instantiated with, plus further values that it learns during its processing career. The values of this set of variables determine how the Abstractor processes its data, how it behaves in the marketplace, and possibly how it decides to reproduce. All of these values are subject to change by the Abstractor itself as it gains processing experience.
If an Abstractor reproduces, the daughter Abstractor will begin as a clone of the parent, with the same parameterization that the parent has at the time of reproduction. Strictly Lamarckian.
The daughter will then begin its own career, probably diverging from its parent as it progresses.
Some examples of Abstractor parameters:
- A low-level region growing Abstractor has a parameter that controls how large a gap can exist between two foreground pixels for them to still be considered part of the same region.
- An object tracking Abstractor has a parameter controlling what type of region growing Abstractor it will use during periods of high-noise images.
- An Abstractor has a parameter that controls its economic risk-taking behavior in situations where it has the choice of investing its own savings to produce an Abstraction in the hope of receiving payment.
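As a rough sketch of how parameterization and Lamarckian reproduction might look in code (the class, parameter names, and methods here are hypothetical illustrations drawn from the examples above, not a fixed design):

```python
import copy

class Abstractor:
    """Minimal sketch of a parameterized Abstractor.

    The parameter names below (gap_tolerance, risk_appetite) are
    illustrative stand-ins for the examples above, not a real API.
    """

    def __init__(self, params=None):
        # Initial parameterization; the Abstractor adjusts these values
        # as it gains processing experience.
        self.params = params if params is not None else {
            "gap_tolerance": 2,    # max pixel gap inside one region
            "risk_appetite": 0.5,  # willingness to invest savings on spec
        }

    def adjust(self, name, value):
        # Experience-driven change to a single parameter.
        self.params[name] = value

    def reproduce(self):
        # Strictly Lamarckian: the daughter is a clone carrying the
        # parent's current (learned) parameterization, and then diverges
        # on its own as it gains experience.
        return Abstractor(copy.deepcopy(self.params))
```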
Searching Parameter Space
If an Abstractor has, for example, five parameters that control how it applies its algorithm to its inputs, you can understand those five variables as its parameter space. Any particular parameterization is a point in that space. It is likely that only a small fraction of the entire possible parameter space will produce useful results.
When a parentless Abstractor first begins working, it does not know what region of its parameter space will yield useful results, so it will start searching through its entire parameter space.
What it's searching for is a region of parameterization space that produces meaningful results. An Abstractor can make its own judgments about what results are meaningful. For example, an Abstractor that finds foreground and background regions in a binary image knows that it does not have a meaningful result if it has classified the entire image as a single region. The Abstractor may even have a metric that allows it to judge some results as better than others. For example, an image thresholder may prefer a threshold that is near the middle of a region of grayvalues over which the foreground area of the thresholded image shows little change.
But the results that an Abstractor judges to be meaningful are not necessarily useful to higher level Abstractors. To learn the needs of its prospective customers, the Abstractor will have to take its best guess, create its Abstraction, and then submit it to the judgement of the market. What it then learns from the market will constitute a new phase of narrowing its parameter space in the Abstractor's attempt to become profitable.
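Here is a minimal sketch of that two-phase search, assuming a simple random sampler; the is_meaningful and market_score callables are hypothetical stand-ins for the Abstractor's own metric and the market's verdict:

```python
import random

def search_parameter_space(bounds, is_meaningful, market_score,
                           rounds=100, shrink=0.25):
    """Sketch of the two-phase search described above.

    bounds        -- {name: (low, high)} search range per parameter
    is_meaningful -- the Abstractor's own judgment of a result
    market_score  -- payment received when the Abstraction is posted
    (both callables are hypothetical stand-ins)
    """
    best, best_score = None, float("-inf")
    for _ in range(rounds):
        # Sample a point from the current search region.
        point = {k: random.uniform(lo, hi) for k, (lo, hi) in bounds.items()}
        # Phase 1: discard results the Abstractor itself judges meaningless
        # (e.g. the whole image classified as a single region).
        if not is_meaningful(point):
            continue
        # Phase 2: submit the Abstraction and let the market judge it.
        score = market_score(point)
        if score > best_score:
            best, best_score = point, score
            # Narrow the search window around the best-paying point.
            bounds = {k: (max(lo, point[k] - shrink * (hi - lo)),
                          min(hi, point[k] + shrink * (hi - lo)))
                      for k, (lo, hi) in bounds.items()}
    return best
```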
Alliance-Forming
Costs and rewards imposed by the Market drive alliance-forming behavior among Abstractors. There will probably be small costs associated with inspecting potential inputs, but the largest cost incurred by Abstractors is the creation of Abstractions 'on consignment'. These are Abstractions that are created without any certain customer and sent to the Bulletin Board in the hope that they will find use in some higher-level work and thus be paid for. Because it must defray the costs of speculative Abstractions, an Abstractor must charge higher prices for those that are accepted.
But Abstractors strive at all times to maximize profit, and a large component of that effort is cost reduction. A great way to reduce cost is to produce more of what your customers want to buy, and less of what they don't. When an Abstractor gets paid for some of its work, it will produce more work like that in succeeding frames.
What is 'more work like that', exactly? Probably a region of the Abstractor's parameter-space.
Alliances form from both ends. When higher-level Abstractors find suppliers that they like, they start checking those suppliers first. When lower-level Abstractors notice that they are getting paid for particular results, they will narrow their search window to focus on the types of results that are getting bought. This allows them to produce results less expensively, which means they can maintain their profit margin while reducing prices. They can also accept a lower profit margin when they have a higher expectation of frequent business. Lowering prices makes them still more attractive to their higher-level customers, and an alliance has formed.
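One way to picture both ends of an alliance (all class names, numbers, and methods here are illustrative assumptions, not part of the design):

```python
from collections import defaultdict

class CustomerAbstractor:
    """Higher-level end of an alliance: checks favored suppliers first."""

    def __init__(self):
        self.paid_so_far = defaultdict(float)   # supplier -> total paid

    def suppliers_in_order(self, suppliers):
        # Suppliers that have earned the most are inspected before strangers.
        return sorted(suppliers, key=lambda s: self.paid_so_far[s],
                      reverse=True)

    def pay(self, supplier, amount):
        self.paid_so_far[supplier] += amount
        supplier.record_sale()


class SupplierAbstractor:
    """Lower-level end of an alliance: narrows its output toward what sells."""

    def __init__(self, price=1.0):
        self.price = price
        self.sales = 0

    def record_sale(self):
        self.sales += 1
        # A narrower search window means lower production cost, so the
        # supplier can cut its price while keeping its margin (modeled
        # here as a simple per-sale discount).
        self.price *= 0.95
```

The point of the sketch is only the feedback loop: purchases raise a supplier's rank with its customer, and each sale lets the supplier narrow its work and lower its price, which in turn makes the next purchase more likely.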
An Example of Shifting Alliances
Let's say we have an Abstractor that is designed to track a tumbling asteroid across a time-sequence of images. It buys Abstractions from a region-growing Abstractor and looks for regions that are moving smoothly across the stack.
At first the asteroid is well-lit and the region-grower has no trouble seeing it in each image. The tracker is happy because it finds the asteroid in each image, always the same delta from one image to the next. It gets to put out high-certainty Abstractions that get bought for a good price. Life is good.
But, alas, the Tracker's happiness is soon to fade.
As the asteroid continues to rotate, it begins to occasionally show a face that is covered with some dark stuff. So dark that the faint dot of the asteroid in our images becomes occasionally invisible for a few frames at a time.
The Region Grower is no longer able to locate the asteroid in all images, so the Tracker is no longer able to create a high-certainty sequence across each stack of images. Because it is no longer able to post high-certainty results, its customers become unhappy. They cannot afford to pay as well for low-certainty results.
So! The Tracker starts to look for a new supplier of region-growing, specifically for the images in which the asteroid has not been found. It advertises its desire for region-finding work to be done on specific small squares of the troublesome images, where it calculates the asteroid ought to be.
This advertising of a request for work gets the attention of an exotic region-grower that finds regions not by gray-values but by noise analysis. Attracted by the Tracker's advertisement, it tries its technique on the images in question and succeeds! It finds regions in which the distribution of gray-values has a significantly different standard deviation than does the general background.
When the Tracker sees these new results, it finds that the new region-grower has found regions in just the expected places, and of the expected size. It pays for the work, and a new alliance is formed.
If a time comes when it is once again easy for the grayvalue region finder to provide all the results, the Tracker will stop using the more expensive exotic region finder. But if the bad images return at some point, it will remember whom to call.
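A tiny sketch of that supplier-switching logic, with cheap_grower and exotic_grower as hypothetical stand-ins for the two region-growers:

```python
class Tracker:
    """Sketch of the supplier-switching behavior in the example above.

    cheap_grower and exotic_grower are hypothetical callables that take
    an image and return a found region, or None on failure.
    """

    def __init__(self, cheap_grower, exotic_grower):
        self.cheap_grower = cheap_grower
        self.exotic_grower = exotic_grower
        self.exotic_has_delivered = False   # "it will remember whom to call"

    def find_region(self, image):
        # Prefer the cheaper grayvalue supplier whenever it succeeds.
        region = self.cheap_grower(image)
        if region is not None:
            return region
        # Cheap supplier failed: send the work to the exotic
        # noise-analysis grower and remember whether it delivered.
        region = self.exotic_grower(image)
        if region is not None:
            self.exotic_has_delivered = True
        return region
```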
Market forces are central to adaptation
I have thought about adding market behavior to Abstractors for a long time, but I have always thought of it as a bit of a frill. Maybe a way to make the system use fewer resources, but nothing more.
Now I see that, for intelligent behavior to emerge from a complex system, you need more than simply many interacting agents. You need some force that makes the many interactions tend in adaptive directions.
That force is supplied by the Market.