Archive for November, 2012

Precomputed node sorting in BV-tree traversal

Friday, November 23rd, 2012

Here is another post about a small optimization I just came up with. This time the context is BV-tree traversal, for raycasts or sweeps.

So let’s say you have some raycast traversal code for an AABB-tree. If you do not do any node sorting to drive the tree traversal, your code may look like this (non-recursive version):

const AABBTreeNode* node = root;
udword Nb=1;
const AABBTreeNode* Stack[256];
Stack[0] = node;

while(Nb)
{
    node = Stack[--Nb];

    if(TestSegmentAABBOverlap(node))
    {
        if(node->IsLeaf())
        {
            if(TestLeaf(node))
                ShrinkRay();
        }
        else
        {
            Stack[Nb++] = node->GetNeg();
            Stack[Nb++] = node->GetPos();
        }
    }
}

This should be pretty clear. We fetch nodes from the stack and perform segment-AABB tests against the nodes’ bounding boxes.

If the node is a leaf, we test its primitive(s). In case of a hit, we shrink the query segment, which reduces the total number of nodes visited afterwards.
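TestLeaf() and ShrinkRay() are placeholders above. As a minimal sketch of the shrinking step (hypothetical code, assuming the query is stored as an origin, a unit direction and a current maximum distance, not the article’s actual implementation):

struct RaycastQuery
{
    float mOrigin[3];
    float mDir[3];     // assumed unit length
    float mMaxDist;    // remaining segment length, shrunk on each hit
};

inline void ShrinkRay(RaycastQuery& query, float hitDist)
{
    // Only ever shrink: a shorter segment can only cull more nodes,
    // it can never make us miss a hit closer than the one just found.
    if(hitDist < query.mMaxDist)
        query.mMaxDist = hitDist;
}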

If the node is not a leaf, we simply push its children to our stack, in reverse order so that “Pos” gets tested first, and continue. Easy peasy.

Now this is a version without “node sorting”: we always push the 2 children to the stack in the same order, and thus we will always visit a node’s “Positive” child P before the node’s “Negative” child N. This is sometimes just fine, in particular for overlap tests where ordering does not usually matter. But for raycasts, and especially for sweeps, it is better to “sort the nodes” and make sure we visit the “closest node” first, i.e. the one that is “closer” to the ray’s origin. The reason is obvious: because we “shrink the ray” when a hit is found, if we visit the closest node P first and shrink the ray there, the shrunk segment may not collide with N at all, and we will thus avoid visiting an entire sub-tree.

Node sorting is not strictly necessary. But it is a good way to “optimize the worst case”, and make sure the code performs adequately for all raycast directions. It has, nonetheless, an overhead, and it is likely to make the best case a little bit slower. A good read about this is the recently released thesis from Jacco Bikker, which contains a nice little code snippet to implement SIMD node sorting for packet tracing.

When dealing with simpler one-raycast-at-a-time traversals, there are usually 2 ways to implement the sorting, depending on your implementation choice for the segment-AABB test. If your segment-AABB test produces “near” and “far” distance values as a side-effect of the test, all you need to do is compare the “near” values of the two children (a sketch of that variant is given after the next code snippet). If however you are using a SAT-based segment-AABB test, those near and far values are typically not available, and an extra distance computation has to be performed. It is not necessary to use a very accurate distance test, so one option is simply to project the nodes’ centers onto the ray direction and use the resulting values. If we modify the code above to do that, we now get something like:

const AABBTreeNode* node = root;

udword Nb=1;
const AABBTreeNode* Stack[256];
Stack[0] = node;

while(Nb)
{
    node = Stack[--Nb];

    if(TestSegmentAABBOverlap(node))
    {
        if(node->IsLeaf())
        {
            if(TestLeaf(node))
                ShrinkRay();
        }
        else
        {
            const Point& BoxCenterP = node->GetPos()->mBoxCenter;
            const Point& BoxCenterN = node->GetNeg()->mBoxCenter;
            if(((BoxCenterP - BoxCenterN).Dot(RayDir))<0.0f)
            {
                Stack[Nb++] = node->GetNeg();
                Stack[Nb++] = node->GetPos();
            }
            else
            {
                Stack[Nb++] = node->GetPos();
                Stack[Nb++] = node->GetNeg();
            }
        }
    }
}
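As for the first option mentioned above, here is a sketch of a slab-based segment-AABB test that produces the entry distance “tnear” as a by-product, so that sorting boils down to comparing the two children’s tnear values. The parameters are hypothetical (InvRayDir is assumed to be the precomputed 1/RayDir); this is not the article’s code:

static bool SegmentAABB(const float BoxMin[3], const float BoxMax[3],
                        const float RayOrig[3], const float InvRayDir[3],
                        float MaxDist, float& tnear, float& tfar)
{
    tnear = 0.0f;
    tfar  = MaxDist;
    for(int i=0; i<3; i++)
    {
        // Distances to the two slab planes along this axis.
        // (Degenerate axes, where a ray component is 0, need extra care in production code.)
        float t0 = (BoxMin[i] - RayOrig[i]) * InvRayDir[i];
        float t1 = (BoxMax[i] - RayOrig[i]) * InvRayDir[i];
        if(t0 > t1) { const float Tmp = t0; t0 = t1; t1 = Tmp; }
        if(t0 > tnear) tnear = t0;
        if(t1 < tfar)  tfar  = t1;
        if(tnear > tfar)
            return false;   // the segment misses the box
    }
    return true;    // tnear is the entry distance, usable for sorting
}

During the traversal, the child whose box returns the smaller tnear would simply be pushed last, so that it gets popped and visited first.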

This projection-based version could be improved: the branch could be removed, the last push to the stack could be avoided since it would otherwise probably create a Load-Hit-Store (LHS), etc. But this post is about node sorting, so I will only focus on that part.

It does not look like much, but it turns out that this sorting can have a very measurable performance impact when the rest of the function is already highly optimized. It fetches the 2 child nodes (cache misses), it performs a float compare (very slow on Xbox), and that dot product is annoying.

So, let’s get rid of all of these.

In order to do that, we will need to go back in time a bit, to the days of the painter’s algorithm, before Z-buffers, when it was mandatory to render opaque polygons back-to-front. At that time even radix-sorting all triangles was considered too slow, so we often just… precomputed the sorting. We had 8 precomputed “index buffers” for the 8 possible main view directions, and the whole sorting business became free. There are still various traces of those early algorithms online. This thread mentions both Iq’s version, called “Volumetric sort”, and the similar article I wrote some 10 years before that. That was back in 1995, so the idea itself is nothing new.

What is new, however, I think, is applying the same strategy to BV-tree traversals. I have not seen this done before.

So there are 8 possible main view directions. For each of them, and for each node, we need to precompute the closest child. Since each node of the binary tree has only 2 children, a single bit is enough to record which one is closest, and thus we need 8 bits per node to encode the precomputed sorting. That is the memory overhead for the technique, and it may or may not be acceptable to you depending on how easy it is to squeeze one more byte into your nodes.
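As a purely illustrative example (this is not the article’s actual AABBTreeNode), many node layouts already contain padding or spare bits, in which case the extra byte is essentially free:

typedef unsigned int    udword;
typedef unsigned char   ubyte;

// Hypothetical node layout, just to show where the extra byte could live.
struct ExampleTreeNode
{
    float   mBoxCenter[3];  // box center (also used by the precomputed sort)
    float   mBoxExtents[3]; // box half-extents
    udword  mData;          // packed children pointer / primitive index
    ubyte   mCode;          // the 8 precomputed "closest child" bits
    ubyte   mPad[3];        // padding this layout would waste anyway
};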

The precomputation part is trivial. A vanilla non-optimized version could look like the following, performed on each node after the tree has been built:

static bool gPrecomputeSort(AABBTreeNode* node)
{
    if(node->IsLeaf())
        return true;

    const AABBTreeNode* P = node->GetPos();
    const AABBTreeNode* N = node->GetNeg();
    const Point& C0 = P->mBoxCenter;
    const Point& C1 = N->mBoxCenter;

    // Normalization is not strictly needed, since only the sign of the dot
    // products matters, but this is the vanilla, non-optimized version.
    Point DirPPP(1.0f, 1.0f, 1.0f);     DirPPP.Normalize();
    Point DirPPN(1.0f, 1.0f, -1.0f);    DirPPN.Normalize();
    Point DirPNP(1.0f, -1.0f, 1.0f);    DirPNP.Normalize();
    Point DirPNN(1.0f, -1.0f, -1.0f);   DirPNN.Normalize();
    Point DirNPP(-1.0f, 1.0f, 1.0f);    DirNPP.Normalize();
    Point DirNPN(-1.0f, 1.0f, -1.0f);   DirNPN.Normalize();
    Point DirNNP(-1.0f, -1.0f, 1.0f);   DirNNP.Normalize();
    Point DirNNN(-1.0f, -1.0f, -1.0f);  DirNNN.Normalize();

    // For each direction, "true" means the Positive child is the closest one,
    // i.e. the traversal should push Neg first so that Pos gets popped first.
    const bool bPPP = ((C0 - C1).Dot(DirPPP))<0.0f;
    const bool bPPN = ((C0 - C1).Dot(DirPPN))<0.0f;
    const bool bPNP = ((C0 - C1).Dot(DirPNP))<0.0f;
    const bool bPNN = ((C0 - C1).Dot(DirPNN))<0.0f;
    const bool bNPP = ((C0 - C1).Dot(DirNPP))<0.0f;
    const bool bNPN = ((C0 - C1).Dot(DirNPN))<0.0f;
    const bool bNNP = ((C0 - C1).Dot(DirNNP))<0.0f;
    const bool bNNN = ((C0 - C1).Dot(DirNNN))<0.0f;

    // Bit indices match ComputeDirMask() below: bit = Z | (Y<<1) | (X<<2),
    // where X/Y/Z are 1 for a negative component of the direction.
    udword Code = 0;
    if(bPPP)    Code |= (1<<0); // Bit 0: PPP
    if(bPPN)    Code |= (1<<1); // Bit 1: PPN
    if(bPNP)    Code |= (1<<2); // Bit 2: PNP
    if(bPNN)    Code |= (1<<3); // Bit 3: PNN
    if(bNPP)    Code |= (1<<4); // Bit 4: NPP
    if(bNPN)    Code |= (1<<5); // Bit 5: NPN
    if(bNNP)    Code |= (1<<6); // Bit 6: NNP
    if(bNNN)    Code |= (1<<7); // Bit 7: NNN

    node->mCode = Code;
    return true;
}

Then the traversal code simply becomes:

const AABBTreeNode* node = root;

udword Nb=1;
const AABBTreeNode* Stack[256];
Stack[0] = node;

const udword DirMask = ComputeDirMask(RayDir);

while(Nb)
{
    node = Stack[--Nb];

    if(TestSegmentAABBOverlap(node))
    {
        if(node->IsLeaf())
        {
            if(TestLeaf(node))
                ShrinkRay();
        }
        else
        {
            if(node->mCode & DirMask)
            {
                Stack[Nb++] = node->GetNeg();
                Stack[Nb++] = node->GetPos();
            }
            else
            {
                Stack[Nb++] = node->GetPos();
                Stack[Nb++] = node->GetNeg();
            }
        }
    }
}

As you can see, all the bad bits are gone, and node sorting is now a single AND. The “direction mask” is precomputed once before the traversal starts, so its overhead is completely negligible. An implementation could be:

//! Integer representation of a floating-point value.
#define IR(x)   ((udword&)(x))

static udword ComputeDirMask(const Point& dir)
{
    const udword X = IR(dir.x)>>31;
    const udword Y = IR(dir.y)>>31;
    const udword Z = IR(dir.z)>>31;
    const udword BitIndex = Z|(Y<<1)|(X<<2);
    return 1<<BitIndex;
}
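For example, for a ray direction with a negative x and positive y and z, X=1, Y=0 and Z=0, so BitIndex is 4 and the returned mask is 1<<4, which corresponds to the “NPP” bit set by gPrecomputeSort above.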

And that’s it. Gains vary from one scene to another and especially from one platform to another, but this is another useful trick in our quest for “speed of light” BV-tree traversals.

Restrict this

Tuesday, November 20th, 2012

A quick post about a little-known feature of the “restrict” keyword…

I assume you all know about “restrict” already, but if not, let’s start with a simple example of what it’s useful for.

Say we have a function in some class looking like this:

class RTest
{
public:
            RTest() : mMember(0)    {}

    void    DoStuff(int nb, int* target);

    int     mMember;
};

void RTest::DoStuff(int nb, int* target)
{
    while(nb--)
    {
        *target++ = mMember;
        mMember++;
    }
}

Looking at the disassembly in Release mode, you get something like the following (the isolated block in the middle is the loop):

00E9EEA0  mov         eax,dword ptr [esp+4]
00E9EEA4  test        eax,eax
00E9EEA6  je          RTest::DoStuff+1Fh (0E9EEBFh)
00E9EEA8  mov         edx,dword ptr [esp+8]
00E9EEAC  push        esi
00E9EEAD  lea         ecx,[ecx]

00E9EEB0 mov         esi,dword ptr [ecx] // Load mMember
00E9EEB2  mov         dword ptr [edx],esi // *target = mMember
00E9EEB4 inc         dword ptr [ecx] // mMember++
00E9EEB6  dec         eax
00E9EEB7  add         edx,4 // target++
00E9EEBA  test        eax,eax
00E9EEBC  jne         RTest::DoStuff+10h (0E9EEB0h)

00E9EEBE  pop         esi
00E9EEBF  ret         8

So as you can see, there is a read-modify-write operation on mMember each time, and then mMember is reloaded once again to write it to the target buffer. This is not very efficient: loads and stores to memory are slower than operations on registers, for example. But more importantly, this creates a lot of LHS, since we load what we just wrote. On a platform like the Xbox, where an LHS is a ~60-cycle penalty on average, this is a killer. Generally speaking, any piece of code doing “mMember++” is a potential LHS, and something to keep an eye on.

There are various ways to do better than that. One way would be to simply rewrite the code so that mMember is explicitly kept in a local variable:

void RTest::DoStuffLocal(int nb, int* target)
{
    int local = mMember;
    while(nb--)
    {
        *target++ = local;
        local++;
    }
    mMember = local;
}

This produces the following disassembly:

010AEED0  mov         edx,dword ptr [esp+4]
010AEED4 mov         eax,dword ptr [ecx] // Load mMember
010AEED6  test        edx,edx
010AEED8  je          RTest::DoStuffLocal+1Ch (10AEEECh)
010AEEDA  push        esi
010AEEDB  mov         esi,dword ptr [esp+0Ch]
010AEEDF nop

010AEEE0  mov         dword ptr [esi],eax // *target = mMember
010AEEE2  dec         edx
010AEEE3  add         esi,4 // target++
010AEEE6 inc         eax // mMember++
010AEEE7  test        edx,edx
010AEEE9  jne         RTest::DoStuffLocal+10h (10AEEE0h)

010AEEEB  pop         esi
010AEEEC mov         dword ptr [ecx],eax // Store mMember
010AEEEE  ret         8

This is pretty much what you expect from the source code: you see that the load has been moved outside of the loop, our local variable has been mapped to the eax register, the LHS are gone, and mMember is properly updated only once, after the loop has ended.

Note that the compiler inserted a nop just before the loop. This is simply because loops should be aligned to 16-byte boundaries to run most efficiently.

Another way to achieve the same result without modifying the main code is to use the restrict keyword. Just mark the target pointer as restricted, like this:

void RTest::DoStuffRestricted(int nb, int* __restrict target)
{
    while(nb--)
    {
        *target++ = mMember;
        mMember++;
    }
}

This produces the following disassembly:

010AEF00  mov         edx,dword ptr [esp+4]
010AEF04  test        edx,edx
010AEF06  je          RTest::DoStuffRestricted+1Eh (10AEF1Eh)
010AEF08 mov         eax,dword ptr [ecx] // Load mMember
010AEF0A  push        esi
010AEF0B  mov         esi,dword ptr [esp+0Ch]
010AEF0F  nop

010AEF10  mov         dword ptr [esi],eax // *target = mMember
010AEF12  dec         edx
010AEF13  add         esi,4 // target++
010AEF16 inc         eax // mMember++
010AEF17  test        edx,edx
010AEF19  jne         RTest::DoStuffRestricted+10h (10AEF10h)

010AEF1B mov         dword ptr [ecx],eax // Store mMember
010AEF1D  pop         esi
010AEF1E  ret         8

In other words, this is almost exactly the same disassembly as for the solution using the local variable, but without the need to actually modify the main source code.

What happened here should not be a surprise: without __restrict, the compiler had no way to know that the target pointer was not potentially pointing to mMember itself. So it had to assume the worst and generate “safe” code that would work even in that unlikely scenario. Using __restrict, however, tells the compiler that the memory pointed to by “target” is accessed through that pointer only (and pointers copied from it). In particular, it promises the compiler that “this”, the implicit pointer of the RTest class, does not point to the same memory as “target”. It is thus now safe to keep mMember in a register for the duration of the loop.
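To make the aliasing hazard concrete, here is the kind of (contrived, hypothetical) call the non-restricted version has to stay correct for:

RTest test;
// 'target' aliases mMember itself: the write through 'target' changes the
// value the next read of mMember must see, so without __restrict the compiler
// has to reload mMember from memory every iteration. Marking 'target' as
// __restrict is a promise that this never happens (such a call would then
// invoke undefined behaviour).
test.DoStuff(1, &test.mMember);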

So far, so good. This is pretty much a textbook example of how to use __restrict and what it is useful for. The only important point until now, really, is this: as you can see from the disassembly, __restrict has a clear, real impact on generated code. Just in case you had any doubts…

Now the reason for this post is something more subtle than that: how do we “restrict this”? How do we restrict the implicit “this” pointer in C++?

Consider the following, modified example, where our target pointer is now a class member:

class RTest
{
public:
            RTest() : mMember(0), mTarget(0) {}

    int     DoStuffClassMember(int nb);

    int     mMember;
    int*    mTarget;
};

int RTest::DoStuffClassMember(int nb)
{
    while(nb--)
    {
        *mTarget++ = mMember;
        mMember++;
    }
    return mMember;
}

Suddenly we can’t easily mark the target pointer as restricted anymore, and the generated code looks pretty bad:

0141EF60  mov         eax,dword ptr [esp+4]
0141EF64  test        eax,eax
0141EF66  je          RTest::DoStuffClassMember+23h (141EF83h)
0141EF68  push        esi
0141EF69  mov         edx,4
0141EF6E  push        edi
0141EF6F  nop

0141EF70  mov         esi,dword ptr [ecx+4] // mTarget
0141EF73  mov         edi,dword ptr [ecx] // mMember
0141EF75  mov         dword ptr [esi],edi // *mTarget = mMember;
0141EF77  add         dword ptr [ecx+4],edx // mTarget++
0141EF7A  inc         dword ptr [ecx] // mMember++
0141EF7C  dec         eax
0141EF7D  test        eax,eax
0141EF7F  jne         RTest::DoStuffClassMember+10h (141EF70h)

0141EF81  pop         edi
0141EF82  pop         esi
0141EF83  mov         eax,dword ptr [ecx]
0141EF85  ret         4

That’s pretty much as bad as it gets: 2 loads, 2 read-modify-writes, 2 LHS for each iteration of that loop. This is what Christer Ericson refers to as the “C++ abstraction penalty”: generally speaking, accessing class members within loops is a very bad idea. It is usually much better to load those class members into local variables before the loop starts, or to pass them to the function as external parameters.
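For completeness, here is what that advice could look like applied to DoStuffClassMember: a hypothetical variant (not from the original post) that caches both members in locals and writes them back once after the loop:

int RTest::DoStuffClassMemberLocal(int nb)
{
    int* target = mTarget;  // one load per member, before the loop
    int  local  = mMember;
    while(nb--)
    {
        *target++ = local;
        local++;
    }
    mTarget = target;       // one store per member, after the loop
    mMember = local;
    return local;
}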

As we saw in the previous example, an alternative would be to mark the target pointer as restricted. In this particular case though, it seems difficult to do since the pointer is a class member. But let’s try this anyway, since it compiles:

class RTest
{
public:
            RTest() : mMember(0), mTarget(0)   {}

    int     DoStuffClassMember(int nb);

    int     mMember;
    int* __restrict mTarget;
};

Generated code is:

00A8EF60  mov         eax,dword ptr [esp+4]
00A8EF64  test        eax,eax
00A8EF66  je          RTest::DoStuffClassMember+23h (0A8EF83h)
00A8EF68  push        esi
00A8EF69  mov         edx,4
00A8EF6E  push        edi
00A8EF6F  nop

00A8EF70  mov         esi,dword ptr [ecx+4]
00A8EF73  mov         edi,dword ptr [ecx]
00A8EF75  mov         dword ptr [esi],edi
00A8EF77  add         dword ptr [ecx+4],edx
00A8EF7A  inc         dword ptr [ecx]
00A8EF7C  dec         eax
00A8EF7D  test        eax,eax
00A8EF7F  jne         RTest::DoStuffClassMember+10h (0A8EF70h)

00A8EF81  pop         edi
00A8EF82  pop         esi
00A8EF83  mov         eax,dword ptr [ecx]
00A8EF85  ret         4

Nope, that didn’t work: this is exactly the same code as before.

What we really want here is to mark “this” as restricted, since “this” is the pointer we use to access both mTarget and mMember. With that goal in mind, a natural thing to try is, well, exactly that:

int RTest::DoStuffClassMember(int nb)
{
    RTest* __restrict RThis = this;
    while(nb--)
    {
        *RThis->mTarget++ = RThis->mMember;
        RThis->mMember++;
    }
    return RThis->mMember;
}

This produces the following code:

0114EF60  push        esi
0114EF61  mov         esi,dword ptr [esp+8]
0114EF65  test        esi,esi
0114EF67  je          RTest::DoStuffClassMember+26h (114EF86h)
0114EF69 mov         edx,dword ptr [ecx] // mMember
0114EF6B mov         eax,dword ptr [ecx+4] // mTarget
0114EF6E  mov         edi,edi

0114EF70  mov         dword ptr [eax],edx // *mTarget = mMember
0114EF72  dec         esi
0114EF73 add         eax,4 // mTarget++
0114EF76 inc         edx // mMember++
0114EF77  test        esi,esi
0114EF79  jne         RTest::DoStuffClassMember+10h (114EF70h)

0114EF7B mov         dword ptr [ecx+4],eax // Store mTarget
0114EF7E mov         dword ptr [ecx],edx // Store mMember
0114EF80  mov         eax,edx
0114EF82  pop         esi
0114EF83  ret         4
0114EF86  mov         eax,dword ptr [ecx]
0114EF88  pop         esi
0114EF89  ret         4

It actually works! Going through a restricted this, despite the unusual and curious syntax, does solve all the problems from the original code. Both mMember and mTarget are loaded into registers, kept there for the duration of the loop, and stored back only once in the end.

Pretty cool.

If we ignore the horrible syntax, that is. Imagine a whole codebase full of “RThis->mMember++;”: this wouldn’t be very nice.

There is actually another way to “restrict this”. I thought it only worked with GCC, but this is not true. The following syntax actually compiles and does the expected job with Visual Studio as well. Just mark the function itself as restricted:

class RTest
{
public:
            RTest() : mMember(0), mTarget(0)   {}

    int     DoStuffClassMember(int nb) __restrict;

    int     mMember;
    int*    mTarget;
};

int RTest::DoStuffClassMember(int nb) __restrict
{
    while(nb--)
    {
        *mTarget++ = mMember;
        mMember++;
    }
    return mMember;
}

This generates exactly the same code as with our fake “this” pointer:

0140EF60  push        esi
0140EF61  mov         esi,dword ptr [esp+8]
0140EF65  test        esi,esi
0140EF67  je          RTest::DoStuffClassMember+26h (140EF86h)
0140EF69  mov         edx,dword ptr [ecx]
0140EF6B  mov         eax,dword ptr [ecx+4]
0140EF6E mov         edi,edi

0140EF70  mov         dword ptr [eax],edx
0140EF72  dec         esi
0140EF73  add         eax,4
0140EF76  inc         edx
0140EF77  test        esi,esi
0140EF79  jne         RTest::DoStuffClassMember+10h (140EF70h)

0140EF7B  mov         dword ptr [ecx+4],eax
0140EF7E  mov         dword ptr [ecx],edx
0140EF80  mov         eax,edx
0140EF82  pop         esi
0140EF83  ret         4
0140EF86  mov         eax,dword ptr [ecx]
0140EF88  pop         esi
0140EF89  ret         4

This is the official way to “restrict this”, and until recently I didn’t know it worked in Visual Studio. Yay!
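For the record, GCC documents the same feature with the __restrict__ spelling used as a member-function qualifier, which marks the implicit “this” pointer as unaliased. A sketch (not from the original post):

class RTest
{
public:
    // GCC spelling of a "restricted this" member function.
    int     DoStuffClassMember(int nb) __restrict__;

    int     mMember;
    int*    mTarget;
};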

A few closing comments about the disassembly above. Astute readers will have noticed a few things that I did not mention yet:

The curious “mov edi, edi” clearly doesn’t do anything, and it would be easy to blame the compiler here for being stupid. Well, the compiler is stupid and does generate plenty of foolish things, but this is not one of them. Notice how it happens right before the loop starts? This is the equivalent of the “nop” we previously saw. The reason why the compiler chose not to use nops here is that a nop takes only 1 byte (its opcode is “90”), so we would have needed 2 of them to align the loop to 16 bytes. Using a useless 2-byte instruction achieves the same goal, but with a single instruction.

Finally, note that the main loop actually touches 3 registers instead of 2:

  • esi, the loop counter (nb--)
  • eax, the target address mTarget
  • edx, the data member mMember

This is not optimal: there is no need to touch the loop counter there. It would probably have been more efficient to store the final value of edx (the loop limit) in esi, something like:

    add     esi, edx            // esi = loop limit
Loop:
    mov     dword ptr [eax], edx
    add     eax, 4
    inc     edx
    cmp     edx, esi
    jne     Loop

This removes the ‘dec esi’ from the loop entirely (the ‘test’ becomes a ‘cmp’), which might have been a better strategy. Oh well. Maybe the compiler is stupid after all :)
