Archive for August, 2011

Multi-SAP redux

Saturday, August 27th, 2011

Some years ago I wrote a bit about the Multi-SAP approach in my SAP document. I think I also said I would release some code, but never did. Erwin Coumans tried to implement one version in Bullet, but it’s still marked as “experimental” some years later, and it looks a bit like everybody gave up on that technique - including us in PhysX 3.0.

Well I didn’t.

I still think it’s a very nice method, and very easy to implement once you have a regular sweep-and-prune working. So I’m restarting this effort, refreshing my old prototype, adding some new tricks learnt since last time, etc.

As a teaser, and to show non-believers that yes, it rocks, I quickly added it to my old CDTestFramework, as found in Bullet 2.78. I’ll let you guys test it on your respective machines, but at least here at home, it beats the alternative algorithms in Bullet’s default scene.

[image: msap_demo]

Stay tuned…

Clear / reset dynamic arrays

Thursday, August 25th, 2011

The typical STL-vector-style dynamic array usually has two functions to “clear the array”. One releases the objects and deallocates the memory (size and capacity both become 0), the other simply releases the objects and sets the size to 0 without actually deallocating the memory (capacity remains the same).

Let’s pretend the function names are “clear” and “reset”, and let’s pretend that “reset” is the version deallocating the memory, while “clear” does not.
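
To make the distinction concrete, here is a minimal sketch of what such a class could look like. This is just an illustration, not code from any actual engine: it assumes POD elements and a plain malloc-based buffer.

#include <cstdlib>

// Hypothetical minimal dynamic array (elements assumed POD for brevity).
class Array
{
public:
    Array() : mData(NULL), mSize(0), mCapacity(0)   {}
    ~Array()                                        { reset(); }

    // "clear": size goes back to 0 but the allocation is kept,
    // so refilling the array later causes no reallocation.
    void    clear()     { mSize = 0; }

    // "reset": size AND capacity go back to 0, and the memory is
    // actually returned to the system.
    void    reset()
    {
        ::free(mData);
        mData = NULL;
        mSize = mCapacity = 0;
    }

    int     size()      const   { return mSize;     }
    int     capacity()  const   { return mCapacity; }

    // Decides between the two; defined later in this post.
    void    resetOrClear();

private:
    void*   mData;
    int     mSize;
    int     mCapacity;
};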

“reset” is good to save memory and make sure you don’t get arrays that grow out of control, never actually releasing huge amounts of memory that they may not need anymore.

“clear” is good to limit the number of array resizes when the same array is reused over and over, say from one frame to the next.

But of course the truth is that both are actually bad. “reset” is bad because it makes you resize and reallocate stuff over and over again. “clear” is bad because it will waste memory at some point (Murphy’s Law being what it is, one of those never-deallocated arrays WILL grow out of control even if you think “it never happens”).

However, a lot of programmers continue to use dynamic arrays, and continue to call either “clear” or “reset” depending on what they think is the best option in each particular case.

I would argue that in most cases, except maybe the most trivial ones, one can’t really tell ahead of time whether “reset” or “clear” should be used. And thus, maybe we should not choose at all.

Instead, just create a small wrapper doing the proper thing depending on how many objects are actually contained in the array:

void Array::resetOrClear()
{
    const int s = size();
    const int c = capacity();
    if(s > c/2)
        clear();    // more than half the capacity was used: keep the memory
    else
        reset();    // most of the capacity was unused: release the memory
}

In other words:

  • if we used more than half the capacity, it’s likely that the memory is not wasted and we may need a similar amount next frame => keep the memory => “clear”
  • if we used less than half the capacity, maybe we’re an array that grew for some time during a spike but now the memory isn’t used anymore => discard the memory => “reset”

You don’t need to pause and think for each array whether to call “clear” or “reset”, just call “resetOrClear” all the time, and it will likely make the right decision, limiting both the number of resizes and the amount of wasted memory.
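
For example, with a hypothetical array that gets refilled each frame (the names below are placeholders, and “resetOrClear” is assumed to be the member function of the Array class sketched earlier):

// Hypothetical usage: the same array is reused from frame to frame.
Array gContacts;

void runFrame()
{
    // ... fill and process gContacts during the frame ...

    // End of frame: no per-array decision needed. The memory is kept
    // if more than half of it was used this frame, released otherwise.
    gContacts.resetOrClear();
}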

Or better, of course: don’t use bloody dynamic arrays.
