Corrected profiling results

It's official: I'm a dimwit!

The inline versions of the profiling tests I presented last week are wrong. The correct time for the plain inline version is in fact somewhere between a third and a half of that of the OO implementation.

The inlined version was incorrectly using eval calls to test the conditions. The whole speed-up stems from replacing, for example,

>>> eval('a.n > 10', token)

with

>>> a = token['a']
>>> a.n > 10
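The gap between the two forms is easy to reproduce with a quick timeit run. The `Node` class and `token` dict below are stand-ins for illustration, and absolute numbers vary by machine, so only the direction matters:

```python
import timeit

class Node:
    n = 12

token = {"a": Node()}

# eval parses and compiles the string on every single call.
t_eval = timeit.timeit("eval('a.n > 10', token)",
                       globals={"token": token}, number=20000)

# The inline form runs as already-compiled bytecode.
t_plain = timeit.timeit("a = token['a']; a.n > 10",
                        globals={"token": token}, number=20000)

print(t_plain < t_eval)  # → True
```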
The reason I can lose the eval in the inline version is that it works by translating source code from the rule definition into regular Python code, which is then compiled into a function object that is invoked at run-time.

In the OO version, the test to be performed is bound to the node instance, and since the activate method is "pre-defined" we need eval in order to execute whatever test is stored in the instance. Most activate methods basically look like this:
>>> def activate(self, tag, token):
...     if eval(self.test, token):
...         # store in mem and propagate
There might be a way of pulling the same sort of "trick" to get rid of the eval in the OO implementation as well. I'll have to think about that, though.
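One possible trick, sketched here as an assumption rather than the actual fix: compile the test string once when the node is constructed, and eval the resulting code object in activate. eval on a pre-compiled code object skips the per-call parse/compile step, which is where most of the cost goes. The class and attribute names below are illustrative:

```python
class Node:
    def __init__(self, test):
        # Compile once at construction; 'test' is e.g. 'a.n > 10'.
        self.test = compile(test, "<rule>", "eval")

    def activate(self, tag, token):
        # eval of a code object only executes the bytecode.
        if eval(self.test, token):
            return True   # would store in mem and propagate here
        return False

class Fact:
    n = 12

node = Node("a.n > 10")
print(node.activate("tag", {"a": Fact()}))  # → True
```

This keeps the activate method generic, so the OO structure stays intact and only the redundant per-call compilation goes away.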

1 comment:

cnayoung said...

You are officially wrong. You are not a dimwit! Well, no more than anyone else, anyway.