Monday, April 27, 2015

Three Little Points, All In a Row (Or Not)

In computer graphics, it is not uncommon to need to determine whether three points are collinear; i.e., whether they all lie on a single line. In the course of some unrelated algebra, my research led me to an interesting trick that can be used to find the answer.

This trick is the shoelace formula. In its general form, it can be used to find the area of an arbitrary simple polygon in the plane. It works by constructing, for each polygon edge, a parallelogram, and then summing the signed parallelogram areas. Then, because the desired answer is built from triangles and not parallelograms, you just divide the sum by two.
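As a concrete illustration of the general formula, here is a minimal Python sketch (the function name and the absolute value to get an unsigned area are my own choices):

```python
# A quick sketch of the shoelace formula: sum the cross-product term
# for each edge (wrapping from the last vertex back to the first),
# then halve. Works for any simple polygon given in vertex order.
def polygon_area(points):
    total = 0
    n = len(points)
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        total += x1 * y2 - x2 * y1
    return abs(total) / 2
```

A unit square comes out as 1, and a 4-by-3 right triangle as 6, as you would hope.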

My Wikipedia browsing also tells me that the shoelace formula is a special case of a more general result, known as Green's Theorem. I happen to have prior experience with that theorem from my job, where it was mentioned in the first ten pages of a textbook that I really, honestly tried to read and understand. I consider myself good at math, but when you're throwing around volume integrals and second partial derivatives like candy from a broken piñata, even I don't get far.

Getting back on topic: how can you use the shoelace formula to test three points for collinearity? The premise is straightforward: a happy, well-behaved triangle has a non-zero area. The only way for a triangle to have zero area is to be degenerate; i.e., all three points lie on a line. The collinearity test thus falls out as simply calculating the area, and then checking whether it is zero.

As a concrete example, say you have three points A, B and C in the real 2D plane. According to the shoelace formula, the area of triangle ABC is equal to:

area = | AxBy + BxCy + CxAy - BxAy - CxBy - AxCy | / 2

We are not actually concerned about the specific area of the triangle. Since we only wish to know whether the area is nonzero, we can drop the division and the absolute value operation:

AxBy + BxCy + CxAy - BxAy - CxBy - AxCy = 0 ⇔ A, B, C are collinear
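In code, that test might look like this (a sketch; the comparison is exact for integer coordinates, while floating-point inputs would want a tolerance instead of a strict comparison with zero):

```python
def collinear(a, b, c):
    # Twice the signed area of triangle ABC, via the shoelace terms.
    # The points are (x, y) tuples; a result of zero means the
    # triangle is degenerate, i.e. the points are collinear.
    ax, ay = a
    bx, by = b
    cx, cy = c
    return ax * by + bx * cy + cx * ay - bx * ay - cx * by - ax * cy == 0
```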

The neat thing I found while playing with this expression is that it factors into several somewhat similar expressions. You could, for example, group all the X factors together:

Ax(By - Cy) + Bx(Cy - Ay) + Cx(Ay - By)

If you are feeling saucy, however, you might group all the Y factors together:

Cy(Bx - Ax) + By(Ax - Cx) + Ay(Cx - Bx)

These two expressions look similar; in one case the coefficients loop around in the order A,B,C, while in the other it is C,B,A. Cubists might take a different approach, and group terms together by pairs of points:

(AxBy - BxAy) + (BxCy - CxBy) + (CxAy - AxCy)
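To convince yourself that all three groupings really are the same expression, you can write each one down directly and compare (the function names are hypothetical; the scalar arguments match the symbols above):

```python
def by_x(ax, ay, bx, by, cx, cy):
    # X factors grouped together, coefficients cycling A, B, C.
    return ax * (by - cy) + bx * (cy - ay) + cx * (ay - by)

def by_y(ax, ay, bx, by, cx, cy):
    # Y factors grouped together, coefficients cycling C, B, A.
    return cy * (bx - ax) + by * (ax - cx) + ay * (cx - bx)

def by_pairs(ax, ay, bx, by, cx, cy):
    # Terms grouped by pairs of points.
    return (ax * by - bx * ay) + (bx * cy - cx * by) + (cx * ay - ax * cy)
```

Feed any sample points through all three and you get three identical results.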

These expressions ring a very large bell in the minds of people who have spent a long time with 2D linear transformations, because they have the shape of a determinant calculation. That is:

det(
  a  b
  c  d
) = ad - bc

Recasting in this light makes the expression look like this:

det(
  Ax  Ay
  Bx  By
) + det(
  Bx  By
  Cx  Cy
) + det(
  Cx  Cy
  Ax  Ay
)

Linear algebra teaches us that if you swap two rows in a matrix, you negate its determinant. As the penultimate move, I will do that to one of the matrices and reorder the terms:

det(
  Ax  Ay
  Bx  By
) - det(
  Ax  Ay
  Cx  Cy
) + det(
  Bx  By
  Cx  Cy
)

This expression rings an equally big bell in the minds of people who have spent a lot of time with 3D linear transformations, because it ALSO has the shape of a determinant calculation, only with a 3D matrix. Recasting in this light makes the expression look like this:

det(
  Ax  Ay  1
  Bx  By  1
  Cx  Cy  1
)

What I found interesting about this expression is that it produces all the other expressions, depending on how you expand the determinant! The expression with all the X factors together results from expanding the determinant down the first column, while the expression with all the Y factors together results from expanding down the second column. We get back our triple of 2D determinants by expanding down the third column.

What do we get if we expand the determinant across the first row?

Ax(By - Cy) + Ay(Cx - Bx) + (BxCy - CxBy)
= AxBy - AxCy + CxAy - BxAy + BxCy - CxBy
= AxBy + BxCy + CxAy - BxAy - CxBy - AxCy

Predictably, we get the same expression.
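If you would rather let a library do the arithmetic, the 3D form drops straight into a numeric determinant routine. A sketch with NumPy (note that np.linalg.det works in floating point, so a real collinearity check should compare against a small tolerance rather than exact zero):

```python
import numpy as np

def twice_signed_area(a, b, c):
    # One row per (x, y) point, with a column of ones appended.
    m = np.array([
        [a[0], a[1], 1.0],
        [b[0], b[1], 1.0],
        [c[0], c[1], 1.0],
    ])
    # Twice the signed area of triangle ABC; near zero means collinear.
    return np.linalg.det(m)
```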

The power of this determinant form is that we can craft other, less obvious expressions. For example, linear algebra teaches us that adding a multiple of one row (or column) to another does not change the value of a matrix's determinant. Subtracting the third row from the first two, and then clearing out the third row with the matching column operations, yields this equivalent test:

det(
  Ax-Cx  Ay-Cy  0
  Bx-Cx  By-Cy  0
  0      0      1
)

The determinant here is easier to calculate, because of all the zeros in the last row and column:

(Ax-Cx)(By-Cy) - (Bx-Cx)(Ay-Cy)

This expression is also faster to calculate. Observe that the original expression required six multiplications and five additions/subtractions. This new expression still requires five additions/subtractions, but only two multiplications.
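Here is that cheaper test as code (a sketch; the eps tolerance is my own addition to make it safe for floating-point inputs):

```python
def collinear_fast(a, b, c, eps=1e-9):
    # Two multiplications and five additions/subtractions, straight
    # from the reduced determinant (Ax-Cx)(By-Cy) - (Bx-Cx)(Ay-Cy).
    ax, ay = a
    bx, by = b
    cx, cy = c
    return abs((ax - cx) * (by - cy) - (bx - cx) * (ay - cy)) < eps
```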

I will end this post with an observation: in three dimensions, a common test for collinearity of three points is to construct two vectors from the three points (using point subtraction) and compute their cross product. The notion of a cross product in two dimensions isn't well defined, but the fast expression just derived has the flavour of a 2D cross product between two such 'difference vectors'. Alternatively, you can think of it as a translation of the three points that sends point C to the origin. This translation takes away a degree of freedom, making the problem easier.
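That reading can be made literal: translate everything so C sits at the origin, then take what amounts to the z-component of a 3D cross product of the two difference vectors. A sketch (the function names are mine):

```python
def cross2d(u, v):
    # The scalar 'cross product' of two 2D vectors: the z-component
    # of the 3D cross product of (u, 0) and (v, 0).
    return u[0] * v[1] - u[1] * v[0]

def collinear_via_cross(a, b, c):
    # Translate so point C sits at the origin, then cross the
    # resulting difference vectors; zero means collinear.
    u = (a[0] - c[0], a[1] - c[1])
    v = (b[0] - c[0], b[1] - c[1])
    return cross2d(u, v) == 0
```

Expanding cross2d(u, v) here reproduces exactly the two-multiplication expression from above.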
