Experimental Rete implementation

Last week I tried a "new" approach for pyRete's Rete implementation. At the moment I'm using a number of objects linked together so that they form a directed acyclic graph, which also happens to be the way most other implementations handle it (I think).

Take for example object type tests. In pyRete they are implemented such that the Rule Compiler instantiates an ObjectTypeNode and initializes it with the type to test against. Later on it connects the instance to the "next" nodes in the Rete Network (usually Alpha Nodes). The ObjectTypeNode basically looks like this:

>>> class ObjectTypeNode(Node):
...     def __init__(self, type):
...         self.type = type
...         self.successors = []
...     def activate(self, tag, timestamp, obj):
...         if obj.__class__.__name__ == self.type:
...             for node in self.successors:
...                 node.activate(tag, timestamp, obj)
The above is of course grossly simplified (for starters, it doesn't have any error checking), but it's enough to convey the general idea.

There are several other types of nodes as well. They all have a list of successors and at least one activation method (Join Nodes have two). When a fact is asserted, it is propagated through the Rete Network via the activation methods of each node.
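To make the propagation concrete, here is a self-contained sketch along the lines of the snippet above. The `Node` base class, the `TerminalNode` leaf and the `Customer` fact are my own illustrative stand-ins, not pyRete's actual classes:

```python
class Node:
    """Minimal base class: every node keeps a list of successors."""
    def __init__(self):
        self.successors = []

class ObjectTypeNode(Node):
    def __init__(self, type):
        Node.__init__(self)
        self.type = type
    def activate(self, tag, timestamp, obj):
        # Only facts of the matching class are propagated further.
        if obj.__class__.__name__ == self.type:
            for node in self.successors:
                node.activate(tag, timestamp, obj)

class TerminalNode(Node):
    """Illustrative leaf node that just records what reaches it."""
    def __init__(self):
        Node.__init__(self)
        self.activations = []
    def activate(self, tag, timestamp, obj):
        self.activations.append((tag, timestamp, obj))

class Customer:
    pass

root = ObjectTypeNode("Customer")
leaf = TerminalNode()
root.successors.append(leaf)

root.activate('+', 1, Customer())   # passes the type test
root.activate('+', 2, "a string")   # filtered out

print(len(leaf.activations))        # 1
```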

There are several problems with my current approach, and it's no secret that I've been trying to clean up and simplify the Rete implementation, the Rule Parser and the Rule Compiler. However, a simple, efficient implementation using an object-based DAG has so far eluded me. So last week I tried using functions (instead of objects) together with a hash table that provides, among other things, storage (it replaces the memory node objects).

So far, I've only got some very simple tests working, but the code is smaller, cleaner and faster. The ObjectTypeNode above is replaced by:
>>> class Rete(dict):
...     def __init__(self):
...         self["OBJ"] = {}
...         self["MEM"] = {}
...         self["NEXT"] = {}
...     def make_object_type_node(self, var, value):
...         key = (var, '__class__.__name__', value)
...         def activate(tag, o):
...             if getattr(getattr(o, '__class__'), '__name__') == value:
...                 token = Token()
...                 token[var] = o
...                 if tag == '+':
...                     self["MEM"][key].append(token)
...                 elif tag == '-':
...                     self["MEM"][key].remove(token)
...                 self.propagate(tag, token, key)
...         self["OBJ"][key] = activate
...         self["MEM"][key] = []
...         self["NEXT"][key] = []
...         return key
What happens here is that the Rule Compiler calls make_object_type_node instead of instantiating an object. The method calculates a unique key, defines the activate function and registers it in the common hash table. It also registers two empty lists which are used as output memory ("MEM") and as a list of successor nodes ("NEXT").

The object type test is performed within the activate method and uses a common propagate method, which looks like this:
>>> def propagate(self, tag, obj, key):
...     [self[dictionary][_key](tag, obj) for _key, dictionary in self["NEXT"][key]]
One of the *really* good things about it is that those two lines can be used for all types of nodes. The Rule Compiler only needs to register which dictionary the next node is found in (usually "ALPHA" for object type tests) and which key it has.
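To see the pieces working together, here is a runnable sketch in the spirit of the code above. Token is stubbed as a plain dict subclass, the "TERM" dictionary and the terminal function registered in it are my own illustrative additions, and the constructor initializes "MEM" and "NEXT" up front so the node-maker can use them:

```python
class Token(dict):
    """Stub: a token is just a variable-to-object binding."""
    pass

class Rete(dict):
    def __init__(self):
        # One dictionary per node kind, plus memory and successor tables.
        self["OBJ"] = {}
        self["MEM"] = {}
        self["NEXT"] = {}
        self["TERM"] = {}   # hypothetical table for terminal functions

    def make_object_type_node(self, var, value):
        key = (var, '__class__.__name__', value)
        def activate(tag, o):
            if o.__class__.__name__ == value:
                token = Token()
                token[var] = o
                if tag == '+':
                    self["MEM"][key].append(token)
                elif tag == '-':
                    self["MEM"][key].remove(token)
                self.propagate(tag, token, key)
        self["OBJ"][key] = activate
        self["MEM"][key] = []
        self["NEXT"][key] = []
        return key

    def propagate(self, tag, obj, key):
        # Dispatch to each successor: (its key, the dictionary it lives in).
        [self[dictionary][_key](tag, obj)
         for _key, dictionary in self["NEXT"][key]]

class Customer:
    pass

matched = []
net = Rete()
key = net.make_object_type_node('?c', 'Customer')
# Wire up a successor: register a terminal function under "TERM".
net["TERM"]['end'] = lambda tag, token: matched.append((tag, token))
net["NEXT"][key].append(('end', "TERM"))

net["OBJ"][key]('+', Customer())   # stored in MEM and propagated
net["OBJ"][key]('+', 42)           # filtered out by the type test

print(len(net["MEM"][key]))   # 1
print(len(matched))           # 1
```

The nice part, as noted above, is that propagate never needs to know what kind of node comes next; the (key, dictionary) pair in "NEXT" is the only wiring.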

One of the bad things with this approach is that all of the function calls are nested, so it might not work too well for large Rete networks (deep nesting could eventually run into Python's recursion limit).

The idea was actually to generate one function that would have *all* tests (for one rule) laid out as a procedural function, which would update a storage of some sort common to all rules. I'm not 100% sure what it would look like, and since Python doesn't have macros I can't really do it either, so this will do for now.
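To illustrate the idea, here is what such an inlined rule function might look like if written by hand for a one-condition rule. Everything here (the rule name, the shared `memory` dict, the variable name) is purely hypothetical, not generated code:

```python
# Storage shared between all rules, keyed by rule name (hypothetical).
memory = {'rule-1': []}

def rule_1(tag, o):
    """All tests for one rule, flattened into a single function."""
    # Inlined object type test (would normally be a node activation).
    if o.__class__.__name__ != 'Customer':
        return
    token = {'?c': o}
    if tag == '+':
        memory['rule-1'].append(token)
    elif tag == '-':
        memory['rule-1'].remove(token)

class Customer:
    pass

rule_1('+', Customer())   # matches and is stored
rule_1('+', "nope")       # rejected by the inlined test

print(len(memory['rule-1']))   # 1
```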

Once I get the more complicated things working (Not Nodes) I'll let you know if it's better or worse than my current implementation.

2 comments:

Mark Proctor said...

What we did to make things clean is work with Abstract implementations and Interfaces, not objects.

We have two abstract classes - ObjectSource and TupleSource. ObjectSource propagates Objects to its Sink and TupleSource propagates Tuples to its sink.

We then have two interfaces, ObjectSink and TupleSink - a source propagates to a sink.

An AlphaNode will extend ObjectSource and implement ObjectSink, while a BetaNode such as a JoinNode will extend TupleSource but implement both TupleSink and ObjectSink, since a JoinNode receives Tuples on the left and Objects on the right.

Everything from the root Rete node to the leaf Terminal node obeys this. So that means you can traverse the entire graph with only knowledge of those 2 abstract classes and 2 interfaces.
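A rough Python sketch of the layout Mark describes might look like this. The class names follow his comment, but the method names are my guesses, and since Python has no interfaces the two "interfaces" are plain base classes:

```python
class ObjectSink:
    """Interface: anything that can receive propagated objects."""
    def assert_object(self, obj):
        raise NotImplementedError

class TupleSink:
    """Interface: anything that can receive propagated tuples."""
    def assert_tuple(self, tup):
        raise NotImplementedError

class ObjectSource:
    """Abstract: propagates objects to its sinks."""
    def __init__(self):
        self.sinks = []
    def propagate_object(self, obj):
        for sink in self.sinks:
            sink.assert_object(obj)

class TupleSource:
    """Abstract: propagates tuples to its sinks."""
    def __init__(self):
        self.sinks = []
    def propagate_tuple(self, tup):
        for sink in self.sinks:
            sink.assert_tuple(tup)

class AlphaNode(ObjectSource, ObjectSink):
    """Receives objects, tests them, propagates the matches."""
    def __init__(self, test):
        ObjectSource.__init__(self)
        self.test = test
    def assert_object(self, obj):
        if self.test(obj):
            self.propagate_object(obj)

class JoinNode(TupleSource, TupleSink, ObjectSink):
    """Receives tuples on the left and objects on the right."""
    def assert_tuple(self, tup):
        ...  # join against right memory, then self.propagate_tuple(...)
    def assert_object(self, obj):
        ...  # join against left memory, then self.propagate_tuple(...)
```

The payoff is the one Mark points out: any code that walks the network only needs to know these two sources and two sinks, never the concrete node types.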

Overall, through good OO design, we have a very clean Rete implementation.


Johan Lindberg said...

Hi Mark,

and thanks for the comment. I've looked at your implementation quite a bit (as well as Jamocha's). I can imagine that your OO design really has made your life simpler and allows for an easy way to add functionality.

However, the reason I even started to think about alternatives to OO was because I'm worried about performance. What I really want is a way to inline all of the activations in the Nodes of the Rete Network into one large function (function calls are expensive in Python).

Anyway, this is all largely experimental and should not be taken too seriously unless it actually outperforms my current design of course ;-)