Lately I have been reading a lot of articles about polynomial representation. I was interested in finding an efficient representation for polynomials to be used in my AKS implementation. Most of these representations are quite straightforward, like using arrays of coefficients, arrays of (coefficient, exponent) pairs or simply using integers in which every $n$ bits represent a coefficient. I decided to ask on StackOverflow how I could optimize my current polynomial implementation, and a comment pointed out that Minato wrote an article in which he describes a new way to represent polynomials. I decided to give this idea a try and, even though I don't think it will make my AKS implementation faster, I found it really interesting.
Minato's representation is really different from all the "classic" representations. The idea is to represent the polynomial as a DAG (Directed Acyclic Graph), in particular as a special form of BDD (Binary Decision Diagram) called a Zero-Suppressed BDD. The result is that it is possible to represent polynomials with millions of terms using only thousands of nodes (it could even be fewer, depending on the polynomial). Since operations such as addition or equality have a complexity proportional to the size of the graph, and not to the number of terms, the representation seems promising (it can be thousands of times faster if the polynomials are particularly "regular").
Brief description of ZBDDs
Each node of a ZBDD has a label $v$ and two outgoing edges, the 1-edge and the 0-edge (which will be drawn with a dashed line in the images). The 1-edge is connected to a child node called the high child, while the 0-edge connects to the low child. So we can represent a node $A$ as a triplet $(v, low, high)$.

The ZBDD is a rooted DAG: there is a node with indegree 0 from which there is a path to every other node in the graph (and it is the only node with this property). The order of the labels is fixed, so that in any path from the root different labels always appear in the same order.

In a ZBDD there are also two special nodes, called sinks, which are labelled $\top$ and $\bot$. The former is called the true sink, the latter the false sink. These two nodes have outdegree 0.
The graph must also abide by two reduction rules:
- In a graph $G$ there must not be two nodes $A$ and $B$ such that $v_A = v_B$, $low_A = low_B$ and $high_A = high_B$ [no isomorphic subgraphs].
- Every node whose high child is the false sink must be removed (attaching its low child directly to its parent, if it has one).

It's quite simple to abide by these rules: we just need a procedure, which I'll call ZUNIQUE, that controls node creation. Whenever we want to create a node $(v, low, high)$ we call ZUNIQUE, which checks whether an isomorphic node already exists (and returns it in that case) or, if the high child is the false sink, returns the low child.
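To make this concrete, here is a minimal Python sketch of such a procedure; the names (`Node`, `zunique`, `TRUE`, `FALSE`) are my own scaffolding, not Minato's, and the real implementation at the end of the post folds the same logic into the `Poly.__new__` method.

from collections import namedtuple

# A node is a label plus its two children; the sinks have no children at all.
Node = namedtuple('Node', 'label low high')
FALSE = Node(False, None, None)   # the false sink
TRUE = Node(True, None, None)     # the true sink

_unique = {}                      # (label, low, high) -> node

def zunique(label, low, high):
    # Second rule: a node whose high child is the false sink is suppressed.
    if high is FALSE:
        return low
    # First rule: never create two isomorphic nodes.
    key = (label, id(low), id(high))
    if key not in _unique:
        _unique[key] = Node(label, low, high)
    return _unique[key]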
An example of a ZBDD is the following:
Operations on these graphs are quite easy to write in a recursive fashion. Let us suppose that we want to apply a binary operation $\diamond$ (which I'll call meld) between two ZBDDs. We know the results of $\top \diamond \top$, $\top \diamond \bot$, $\bot \diamond \top$ and $\bot \diamond \bot$. An algorithm, MELD($F$, $G$), that applies such a generic operation to two ZBDDs $F$ and $G$ can be described by these steps:
- If $F$ and $G$ are both sinks, return $F \diamond G$: it's a base case.
- If $v_F = v_G$, return ZUNIQUE($v_F$, MELD($low_F$, $low_G$), MELD($high_F$, $high_G$)).
- Else if $v_F < v_G$, return ZUNIQUE($v_F$, MELD($low_F$, $G$), MELD($high_F$, $\bot$)): $G$ has no node labelled $v_F$, which in a ZBDD means that its cofactor for $v_F$ is the false sink.
- Otherwise return ZUNIQUE($v_G$, MELD($F$, $low_G$), MELD($\bot$, $high_G$)).

By plugging in different base-case results we can produce any boolean binary operation, such as AND, OR or XOR.
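To see how this translates into code, here is a hedged Python sketch of MELD built on the `zunique` factory sketched above; the `op` argument (my own addition) supplies the base-case results and is assumed to be symmetric, as AND, OR and XOR are.

def meld(F, G, op):
    # Base case: both operands are sinks, apply the operation's truth table.
    if F.low is None and G.low is None:
        return TRUE if op(F.label, G.label) else FALSE
    # Make F the operand whose root label comes first, treating sinks as "last"
    # (a sink contains no variables at all).
    if F.low is None or (G.low is not None and G.label < F.label):
        F, G = G, F
    if G.low is not None and F.label == G.label:
        return zunique(F.label, meld(F.low, G.low, op), meld(F.high, G.high, op))
    # G has no node labelled F.label, so its cofactor for that label is the false sink.
    return zunique(F.label, meld(F.low, G, op), meld(F.high, FALSE, op))

For example, the XOR of two ZBDDs is meld(F, G, lambda a, b: a != b).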
Polynomial representation
Minato had a simple yet brilliant idea. Let us consider the polynomial $x^4 + x^3 + x$. Since any natural number $n$ can be written uniquely as a sum of different powers of two, we can rewrite it as $x^4 + x^2 \cdot x^1 + x^1$. Now we can consider $x^4$, $x^2$ and $x^1$ as three different boolean variables. Grouping on $x^1$ we get $x^4 + x^1 \cdot (x^2 + 1)$; now, if we replace sums with 0-edges and products with 1-edges, we obtain the following ZBDD:
If we consider the paths from the root $x^1$ to the sinks we can recover the original polynomial in this way: if two nodes are connected by a 1-edge, multiply the two labels together; if they are connected by a 0-edge, simply skip that label. Then sum the results over all the paths.

So we can see that we have the path $x^1 - x^2 - \top$ (where a dash denotes a 1-edge and dots denote a 0-edge), which is replaced by $x^1 \cdot x^2 \cdot 1 = x^3$; then there is the path $x^1 - x^2 \cdots \top$, which is replaced by $x^1 \cdot 1 = x^1$; and there is also the path $x^1 \cdots x^4 - \top$, which yields $x^4$. The path $x^1 \cdots x^4 \cdots \bot$ yields $0$. If we sum these results together we get $x^4 + x^3 + x^1 + 0$, which is our polynomial.
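In code, this reading rule is just an enumeration of the root-to-$\top$ paths. A small sketch, reusing the `Node`/`TRUE`/`FALSE` scaffolding from the ZUNIQUE sketch above:

def terms(node, factors=()):
    # Yield, for every path that reaches the true sink, the labels met along
    # 1-edges; each such tuple of labels is one product of the sum.
    if node is FALSE:          # this path contributes nothing
        return
    if node is TRUE:           # this path contributes one term
        yield factors
        return
    yield from terms(node.high, factors + (node.label,))   # 1-edge: multiply by the label
    yield from terms(node.low, factors)                     # 0-edge: skip the label

For the graph above this yields the three paths ending in $\top$, i.e. the terms $x^3$, $x$ and $x^4$.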
To represent the integer coefficients we can use the same trick. Every natural number can be written uniquely as a sum of powers of two, so every power of two can be considered as a boolean variable and used just like the $x^i$ before.
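For instance (an example of mine, not one from the article), the term $5x^3$ would be decomposed using both tricks at once:

$$5x^3 = (4 + 1)\cdot x^2 \cdot x^1 = 4 \cdot x^2 \cdot x^1 + x^2 \cdot x^1,$$

so the boolean variables involved are $4$, $x^2$ and $x^1$, and the ZBDD for $5x^3$ has exactly one path to $\top$ per addend.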
Is the representation compact?
You may wonder whether this funny representation is compact and/or efficient. For example, if we take the polynomial $24x^7+4x^6+3x^3+16x^2+15x$, the resulting ZBDD is:

And it does not seem so compact: it has 15 nodes but it represents a degree-7 polynomial with only 5 non-zero terms. But if we consider a polynomial such as $257x^{55}+769x^{54}+8x^{52}+257x^{43}+769x^{42}+8x^{40}+257x^{23}+769x^{22}+8x^{20}+257x^{11}+769x^{10}+8x^{8}$, then we obtain the following ZBDD:
This ZBDD contains only 12 nodes and represents a polynomial of degree 55 with 12 terms. This polynomial would require 56 "slots" to be represented as an array of coefficients, and 12 pairs to be represented as an array of (coefficient, exponent) pairs, but in that case the operations would be slower. Also, we can modify it slightly to obtain something like this:

Which represents $257x^{55}+769x^{54}+8x^{52}+257x^{43}+769x^{42}+8x^{40}+16x^{37}+16x^{33}+16x^{32}+257x^{23}+769x^{22}+8x^{20}+257x^{11}+769x^{10}+8x^{8}+16x^{5}+16x+16$, a polynomial with 18 terms, using only 15 nodes.
This may seem a small gain, but it becomes really important when we want to deal with big polynomials. For example, the polynomial $\prod_{k=1}^{8}(x^k+1)^8$, which has $9^8 = 43046721$ terms, can be represented with only $26279$ nodes, which is about four times the square root of the number of terms.
Operations on Polynomials
We will now see that it's pretty easy to devise algorithms that compute the sum or product of two polynomials represented as ZBDDs.

First of all we have to devise an algorithm that computes the product of a polynomial and a variable. The algorithm MULVAR($F$, $v$) is pretty straightforward if we think carefully about this example:

Basically, we have to notice that to multiply by $v$ a polynomial whose root is labelled $v$, we simply have to swap its children and multiply the new low child (which previously was the high child) by $v^2$. This allows us to write a simple recursive function, like the following:
- If $v < v_F$ then return ZUNIQUE($v$, $\bot$, $F$).
- Else if $v = v_F$ then return ZUNIQUE($v$, MULVAR($high_F$, $v^2$), $low_F$).
- Else return ZUNIQUE($v_F$, MULVAR($low_F$, $v$), MULVAR($high_F$, $v$)).
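Here is a rough Python sketch of MULVAR on top of the earlier `zunique` helper; I represent a variable as a (base, exponent) pair such as ('x', 4) for $x^4$, so that squaring it just doubles the exponent (the actual implementation at the end of the post encodes variables essentially the same way).

def mulvar(F, v):
    # Multiply the polynomial F by the single variable v.
    if F.low is None:                      # F is a sink: 0*v = 0 and 1*v = v
        return F if F is FALSE else zunique(v, FALSE, TRUE)
    if v < F.label:                        # v comes before the root in the variable order
        return zunique(v, FALSE, F)
    if v == F.label:                       # swap the children, multiply the new low child by v^2
        v2 = (v[0], v[1] * 2)
        return zunique(F.label, mulvar(F.high, v2), F.low)
    # v comes after the root: push the multiplication into both children.
    return zunique(F.label, mulvar(F.low, v), mulvar(F.high, v))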
Addition
If we consider two polynomials $F$ and $G$, we can easily see that if they have no term in common, their sum is simply their union (which we can compute as $F$ OR $G$, using the ZBDD algorithm). If there are common terms, however, the union would count them only once. So what we have to do is write $F+G$ as $(F \oplus G) + 2 \times (F \wedge G)$, where $\oplus$ is the XOR of two ZBDDs, $\wedge$ is their AND, and $2 \times F$ can be computed by multiplying $F$ by the variable $2$.
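With the hypothetical `meld` and `mulvar` helpers from the earlier sketches, this identity translates almost literally into code (the memoized version is in the implementation below):

def add(F, G):
    # F + G = (F xor G) + 2*(F and G)
    common = meld(F, G, lambda a, b: a and b)       # terms appearing in both polynomials
    disjoint = meld(F, G, lambda a, b: a != b)      # terms appearing in exactly one of them
    if common is FALSE:                             # nothing in common: the union is the sum
        return disjoint
    return add(disjoint, mulvar(common, ('2', 1)))  # double the common part and add it back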
Multiplication
Let $F$ and $G$ be polynomials such that $v_F = v_G = v$; then we can compute their product with the relation:

$$F \times G = (low_F \times low_G) + (high_F \times high_G) \times v^2 + ((high_F \times low_G) + (low_F \times high_G)) \times v$$

We can already compute this expression, since it uses only additions and multiplications by a variable (plus the recursive calls). What happens if $v_F \neq v_G$? Suppose $v_F < v_G$: then we simply multiply $low_F$ and $high_F$ by $G$ and we are done.
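Putting everything together, a sketch of the full product following this recurrence might look like the code below; it reuses the hypothetical `zunique`, `mulvar` and `add` helpers, while the real, memoized implementation is given in the next section.

def mul(F, G):
    # A sink: 0*G = 0 and 1*G = G (and symmetrically for G).
    if F.low is None:
        return G if F is TRUE else FALSE
    if G.low is None:
        return F if G is TRUE else FALSE
    # Make F the operand with the smaller root label.
    if G.label < F.label:
        F, G = G, F
    if F.label == G.label:
        v = F.label
        v2 = (v[0], v[1] * 2)
        return add(add(mul(F.low, G.low),
                       mulvar(mul(F.high, G.high), v2)),
                   mulvar(add(mul(F.high, G.low), mul(F.low, G.high)), v))
    # v_F < v_G: G does not contain v_F, so multiply both children of F by G.
    return zunique(F.label, mul(F.low, G), mul(F.high, G))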
An implementation in Python
Writing an implementation in Python is quite straightforward: it takes only about 130 lines of code. The only thing we have to decide is where to put the "caching system". We could create a "Polynomial" factory that returns unique polynomials, just like ZUNIQUE, while still allowing the creation of multiple independent factories, or we can just provide a single factory. My implementation uses a single factory, which is actually provided as the __new__ method. All operations are cached using a simple decorator (to avoid rewriting boilerplate code every time).
def memoize(name=None, symmetric=True):
    """Cache a binary method in Poly.CACHE under `name` (defaults to the
    method name); if `symmetric` is True the result is also stored for the
    swapped pair of operands."""
    def cached(meth):
        if name is None:
            name_ = meth.__name__.strip('_')
        else:
            name_ = name
        def decorator(self, other):
            try:
                return self.CACHE[name_][(self, other)]
            except KeyError:
                result = meth(self, other)
                self.CACHE[name_][(self, other)] = result
                if symmetric:
                    self.CACHE[name_][(other, self)] = result
                return result
        return decorator
    return cached

class Poly(object):
    # Shared caches: 'new' enforces the ZUNIQUE uniqueness rule, the others
    # memoize the binary operations.
    CACHE = {
        'new': {},
        'and': {},
        'or': {},
        'xor': {},
        'add': {},
        'mul': {},
    }

    def __new__(cls, label, low=None, high=None):
        # Zero-suppression rule: a node whose high child is the false sink is
        # replaced by its low child.
        if high is not None and high.label is False:
            return low
        try:
            return cls.CACHE['new'][(label, low, high)]
        except KeyError:
            r = cls.CACHE['new'][(label, low, high)] = object.__new__(cls)
            r.label, r.low, r.high = label, low, high
            return r

    def __init__(self, label, low=None, high=None):
        # All initialisation happens in __new__, which may return an already
        # existing node (or the low child, because of zero-suppression), so the
        # node must not be re-initialised here.
        pass

    @memoize()
    def __and__(self, other):
        # Make f the terminal operand, if there is one.
        if other.is_terminal():
            f, g = other, self
        else:
            f, g = self, other
        if f.is_terminal():
            if f.label is False:
                return f                      # the false sink: nothing in common
            # f is the true sink (the constant term): it can only be shared
            # through the low chain of g.
            return g if g.is_terminal() else f & g.low
        elif f.label == g.label:
            return Poly(f.label, f.low & g.low, f.high & g.high)
        # The node with the smaller label cannot appear in the other operand,
        # so only its low branch can contribute common terms.
        return f.low & g if f.label < g.label else f & g.low

    @memoize()
    def __or__(self, other):
        # Make f the terminal operand, if there is one.
        if other.is_terminal():
            f, g = other, self
        else:
            f, g = self, other
        if f.is_terminal():
            if f.label is False:
                return g                      # the false sink: the union is just g
            # f is the true sink: add the constant term to g, pushing it down
            # the low chain.
            return f if g.label is False else (
                g if g.is_terminal() else Poly(g.label, f | g.low, g.high))
        elif f.label == g.label:
            return Poly(f.label, f.low | g.low, f.high | g.high)
        return (Poly(f.label, f.low | g, f.high) if f.label < g.label else
                Poly(g.label, f | g.low, g.high))

    @memoize()
    def __xor__(self, other):
        # Make f the terminal operand, if there is one.
        if other.is_terminal():
            f, g = other, self
        else:
            f, g = self, other
        if f.is_terminal():
            # XOR against a sink: either combine the two sink labels or push
            # the sink down the low chain of g.
            return (Poly(f.label ^ g.label) if g.is_terminal() else
                    Poly(g.label, f ^ g.low, g.high))
        elif f.label == g.label:
            return Poly(f.label, f.low ^ g.low, f.high ^ g.high)
        return (Poly(f.label, f.low ^ g, f.high) if f.label < g.label else
                Poly(g.label, f ^ g.low, g.high))

    @memoize()
    def __mul__(self, other):
        if isinstance(other, Poly):
            # Polynomial * polynomial.
            if other.is_terminal():
                f, g = other, self
            else:
                f, g = self, other
            if f.is_terminal():
                # 1 * g = g, 0 * g = 0
                return g if f.label is True else f
            elif f.label == g.label:
                # F*G = low*low + high*high*v^2 + (high*low + low*high)*v
                return (f.low * g.low +
                        (f.high * g.high) * (f.label[0], f.label[1] * 2) +
                        (f.high * g.low + f.low * g.high) * f.label)
            return (Poly(f.label, f.low * g, f.high * g) if f.label < g.label else
                    Poly(g.label, f * g.low, f * g.high))
        else:
            # Polynomial * variable, where `other` is a (base, exponent) pair: MULVAR.
            if self.is_terminal():
                return (Poly(other, Poly(False), Poly(True)) if self.label is True
                        else self)
            elif self.label < other:
                return Poly(self.label, self.low * other, self.high * other)
            elif self.label == other:
                # Swap the children and multiply the new low child by v^2.
                return Poly(self.label, self.high * (other[0], other[1] * 2), self.low)
            return Poly(other, Poly(False), self)

    @memoize()
    def __add__(self, other):
        if self.is_terminal() and other.is_terminal():
            # 0+0=0, 0+1=1+0=1 and 1+1=2 (a carry into the constant variable "2").
            if self.label and other.label:
                return Poly(('2', 1), Poly(False), Poly(True))
            return self | other
        # F + G = (F xor G) + 2*(F and G): the terms the two polynomials share
        # must be counted twice.
        xor = self ^ other
        intersect = self & other
        if intersect.is_terminal() and not intersect.label:
            result = xor
        else:
            result = xor + (intersect * ('2', 1))
        return result

    def is_terminal(self):
        # Only the two sinks have neither a low nor a high child.
        return self.low is self.high is None