an example of actor science


Consider an adaptive Unix shell project as a particular example. The actors' universe is the user of the shell. The objects and relations in that universe are things like pages, directories, and programs, and the relations among them (pages can be in directories, directories can't be in pages, and so on). This list may also include keyboard layout and window layout, depending on how much the programmer wishes to try to model. The user's actions are phenomena. Initially, the actors might be limited to detecting and suggesting corrections for incorrect command or page accesses. It might even be beneficial for the system to run silently for a while first, simply observing the user's actions and generating and testing hypotheses only internally.
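
To make the setup concrete, here is a minimal sketch (in Python, with purely illustrative names) of how a single user action might be recorded as a phenomenon before anything is posted to a forum:

    from dataclasses import dataclass, field
    import time

    @dataclass
    class Phenomenon:
        kind: str        # e.g. "command", "page-access"
        text: str        # the raw text the user typed
        outcome: str     # e.g. "ok", "not-found", "permission-denied"
        timestamp: float = field(default_factory=time.time)

    # The shell wrapper would emit one of these for every user action, e.g.
    # Phenomenon(kind="page-access", text="cat reprot.txt", outcome="not-found")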

As soon as the universe does something, that phenomenon is posted to all the forums. But each forum is dedicated only to certain things. For example, one forum might worry only about commands that alter pages (mv, rm, anything with ">", and so on), others might worry only about finding the right page when there's an error, yet others might be trying to predict the user's typing characteristics, and so on.
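
A forum, then, is little more than a filter plus a subscriber list. The sketch below (again with illustrative names, and assuming phenomena carry a text field as above) shows the general shape, plus one forum that cares only about page-altering commands:

    class Forum:
        def __init__(self, name, is_relevant):
            self.name = name
            self.is_relevant = is_relevant   # predicate deciding which phenomena this forum cares about
            self.actors = []                 # actors that exchange information through this forum

        def subscribe(self, actor):
            self.actors.append(actor)

        def post(self, phenomenon):
            # Every phenomenon is posted to every forum, but only relevant
            # ones are passed on to the forum's actors.
            if not self.is_relevant(phenomenon):
                return []
            return [h for actor in self.actors
                      for h in actor.consider(phenomenon)]

    # A forum dedicated to commands that alter pages:
    altering = Forum("page-altering",
                     lambda p: any(tok in p.text.split() for tok in ("mv", "rm", ">")))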

Once a forum has a relevant phenomenon, all the actors who exchange information through it examine the phenomenon to see if they have anything credible to say about it. To do so, they must do some computation, comparing the phenomenon with their internal set of beliefs. If their beliefs have little of relevance to add, they say nothing, hoping that the next phenomenon may be more interesting.
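
From the actor's side, that check might look like the following sketch, where the relevance score is only a stand-in for whatever comparison a real actor would do against its belief network:

    class Actor:
        RELEVANCE_THRESHOLD = 0.5

        def __init__(self, concepts):
            self.concepts = set(concepts)     # stand-in for the actor's internal beliefs

        def relevance(self, phenomenon):
            # Placeholder comparison: what fraction of the actor's concepts
            # show up in the phenomenon at all.
            words = set(phenomenon.text.split())
            return len(words & self.concepts) / max(len(self.concepts), 1)

        def consider(self, phenomenon):
            if self.relevance(phenomenon) < self.RELEVANCE_THRESHOLD:
                return []                     # nothing credible to say; wait for the next phenomenon
            return self.hypothesize(phenomenon)

        def hypothesize(self, phenomenon):
            return []                         # specialized actors override this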

Those actors that find a good fit between the phenomenon and their beliefs generate hypotheses to try to explain it. For example, if the phenomenon is that the user typed something that couldn't be found, and some actor has a theory about the user's characteristic typing patterns, that actor might suggest what the user intended to type. This is a hypothesis. If other actors specialized for that subdomain find the hypothesis interesting, they might then suggest experiments to test it---or suggest hypotheses of their own.
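
Whatever form the actor's internal reasoning takes, the hypothesis it posts back to the forum can be a small, shareable record. A sketch, with illustrative field names:

    from dataclasses import dataclass

    @dataclass
    class Hypothesis:
        actor: str          # which actor proposed it
        correction: str     # e.g. the page name the user probably intended
        confidence: float   # how strongly the actor's beliefs support it
        reason: str         # a short note on why, e.g. which slippage was assumed

    # A typing-pattern actor seeing "cat reprot.txt" fail might post:
    # Hypothesis(actor="transposer", correction="report.txt",
    #            confidence=0.7, reason="typed 'ro' for 'or'")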

Consider one particular subdomain of the problem, say spell-checking page names. Each actor might be a Slipnet-like network of beliefs. Each belief is implicit in the structure of the Slipnet, but its nodes and edges are labeled with concepts found in the subdomain. This commonality of labeling lets actors communicate their beliefs through forums. In this particular case, the shared concepts might be the page names the user often types, and each actor is trying to find a way to slip the incorrect page name into one of the accepted page names. Its structure gives it a basic set of predispositions: it assumes the user has some predilection to mistype certain things and not others.
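
One way to read "a Slipnet-like network of beliefs" here is as a set of weighted, one-way links between shared concepts (page names, or the letters inside them), where the weights encode how willing the actor is to slip one concept into another. A rough sketch:

    class Slipnet:
        def __init__(self):
            self.links = {}                      # (from_concept, to_concept) -> slip weight

        def strengthen(self, a, b, amount=0.1):
            # One-way: slipping a into b gets easier, not the reverse.
            self.links[(a, b)] = self.links.get((a, b), 0.0) + amount

        def slip_weight(self, a, b):
            return 1.0 if a == b else self.links.get((a, b), 0.0)

        def slippability(self, typed, accepted):
            # Crude measure of how easily one sequence of concepts (say, the
            # letters of a mistyped page name) slips into another; for brevity
            # this only handles sequences of equal length.
            if len(typed) != len(accepted):
                return 0.0
            score = 1.0
            for a, b in zip(typed, accepted):
                score *= self.slip_weight(a, b)
            return score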

For example, say an actor has an implicit model of the keyboard and worries only about letter transpositions. The space of letter transpositions is large (26x25 = 650), and since each actor has only a small memory it can't keep that many possibilities around and do a statistical analysis over the entire space to determine which one the user meant---assuming the error was a transposition at all; it may have been something else (say, a letter deletion). Instead, it builds its Slipnet of transposition probabilities on the fly, keeping only the most likely transpositions. Each detected transposition (found by comparing an incorrect page name with the correction that probably follows it) biases the actor to slip between that pair of letters with a slightly higher probability next time. The bias might be weighted by how often the user has slipped between the two letters (transposing "ai" into "ia" would result in a slight strengthening of a one-way link between "a" and "i"), and only those slippages noted in the past little while would influence the actor's decisions. In essence, this actor is trying to build a tiny, implicit model of how the user behaves, restricted to a tiny part of the overall universe.
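
Under those constraints, the transposition actor might look something like this sketch: a tiny table of letter-pair links, strengthened whenever a failed name is followed by a correction that differs by one adjacent swap, decayed so only recent slippages carry much weight, and truncated so it never holds more than a handful of the 650 possibilities.

    class TranspositionActor:
        MAX_LINKS = 32          # tiny memory: far fewer than the 650 possible ordered pairs
        DECAY = 0.95

        def __init__(self):
            self.links = {}     # (x, y) -> strength of belief that the user types "xy" meaning "yx"

        def observe(self, failed_name, corrected_name):
            # Decay first, so old slippages fade from influence.
            self.links = {k: v * self.DECAY for k, v in self.links.items()}
            pair = self._transposed_pair(failed_name, corrected_name)
            if pair is not None:
                self.links[pair] = self.links.get(pair, 0.0) + 1.0
            # Keep only the strongest links, forgetting the rest.
            if len(self.links) > self.MAX_LINKS:
                keep = sorted(self.links.items(), key=lambda kv: -kv[1])[:self.MAX_LINKS]
                self.links = dict(keep)

        @staticmethod
        def _transposed_pair(failed, corrected):
            # Returns the swapped letter pair if the two names differ by
            # exactly one adjacent transposition, else None.
            if len(failed) != len(corrected):
                return None
            diffs = [i for i, (a, b) in enumerate(zip(failed, corrected)) if a != b]
            if len(diffs) == 2 and diffs[1] == diffs[0] + 1:
                i = diffs[0]
                if failed[i] == corrected[i + 1] and failed[i + 1] == corrected[i]:
                    return (failed[i], failed[i + 1])
            return None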

Other actors working on the same task might see the same phenomenon and conclude something different. Perhaps one actor views the world by grouping letters into consonants and vowels and looking for slippages between them. Another might build a model of slippages across the keyboard (typing "d" instead of "c", for example, because one is above the other on standard keyboards). Of course, all this knowledge (vowels, keyboard distances, letters) has to be fed to the actors at the beginning by the second-stage programmer of the system.
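
The keyboard-distance actor, for instance, would be seeded with layout knowledge up front. A sketch of the kind of scoring it might do, assuming a plain QWERTY grid and ignoring row stagger:

    QWERTY_ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]

    def key_position(ch):
        for row, letters in enumerate(QWERTY_ROWS):
            if ch in letters:
                return (row, letters.index(ch))
        return None

    def adjacency_score(typed_ch, intended_ch):
        # How plausible it is that typed_ch was a slip for intended_ch:
        # the same key scores 1.0, neighbouring keys 0.5, distant keys less.
        a, b = key_position(typed_ch), key_position(intended_ch)
        if a is None or b is None:
            return 0.0
        distance = abs(a[0] - b[0]) + abs(a[1] - b[1])
        return 1.0 / (1.0 + distance)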

If a lot of hypothesis actors agree on some correction, even if for different "reasons," then another set of actors who watch the hypothesis actors handle that error might suggest the correction as an experiment. The hypothesis actors act as idea generators; the experiment actors act as idea discarders, funneling only the most promising ideas up for further testing. If the experiment is actually done (in this case, if it makes it onto a list presented to the user asking "Is this what you meant?"), it leads to reinforcement for the actors that allowed it through and for the actors that took part in suggesting the hypothesis. That reinforcement may be positive or negative depending on the experiment's outcome and may affect either the kind or number---or both---of future ideas.
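
Put together, the funnel from hypotheses to experiments might be sketched like this (reusing the illustrative Hypothesis record from above): an experiment actor promotes a correction only when enough hypothesis actors converge on it, and turns the user's answer into positive or negative reinforcement for the actors that backed it.

    from collections import Counter

    class ExperimentActor:
        AGREEMENT_NEEDED = 3     # how many hypothesis actors must converge on a correction

        def select_experiments(self, hypotheses):
            votes = Counter(h.correction for h in hypotheses)
            # Only widely-agreed-on corrections make it onto the
            # "Is this what you meant?" list shown to the user.
            return [c for c, n in votes.items() if n >= self.AGREEMENT_NEEDED]

        def reinforce(self, hypotheses, correction, accepted):
            # Reward (or penalize) every actor that backed the tested
            # correction, depending on whether the user accepted it.
            delta = +1.0 if accepted else -1.0
            return {h.actor: delta for h in hypotheses if h.correction == correction}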


