Archive for the 'Rant' Category

A tire is not a rigid body

Wednesday, April 20th, 2016

I don’t know how many times I’ve had this discussion, so I’ll just write it down here once.

No, you do not get the most realistic vehicles using vanilla rigid bodies + joints, and it is pointless to implement the Coulomb friction model perfectly because it is only a model, and it does not work well for tires.

See here for example.

You need to use a dedicated vehicle SDK, like the one in PhysX. Using perfect rigid bodies and joints, with a perfect solver, implementing perfect Coulomb friction, will not give you a perfect car simulation.

PhysX is Open Source (EDIT: or is it?)

Thursday, March 5th, 2015

https://developer.nvidia.com/content/latest-physx-source-code-now-available-free-github

Note that contrary to what the post says, this is only the second-best version (3.3.3). We are currently working on 3.4, which already contains significant changes and significant speedups (for example it includes Opcode2-based mesh collision structures, which provide faster raycast, overlap and sweep queries). I think we will eventually open source 3.4 too, when it is released.

EDIT:

I’ve been reading the internet and receiving private emails after that. Apparently what I wrote is technically wrong: it is not “Open Source” because it does not have a proper open source license, it comes with a EULA, etc.

I notice now that both NVIDIA’s press release (above) and EPIC’s (here) are carefully worded. They actually never say “Open Source”, or even “open source”. They just say things like:

“NVIDIA opens PhysX code”

“PhysX source code now available free”

“The PhysX SDK is available free with full source code”

The weird thing, then, is that many internet posts make the same mistake I did, and present the news as if PhysX was indeed “Open Source”:

http://techreport.com/news/27910/nvidia-physx-joins-the-open-source-party

http://www.dvhardware.net/article62067.html

http://forums.guru3d.com/showthread.php?p=5024001

https://forum.teksyndicate.com/t/physx-made-open-source/75101

http://hardforum.com/showthread.php?t=1854357

(etc, etc)

Why is everybody making this mistake, if indeed none of the official press releases actually said that?

I’ll tell you why.

That’s because the distinction between “NVIDIA opens PhysX source” and “PhysX is open source” is so subtle that only pedantic morons (sorry, misguided souls) would be bold enough to complain about it when given something for free.

Give them a finger, they’ll take the whole hand, and slap you with it.

I have the feeling this is the only industry where people are so insane and out of touch with reality. You’re given a free Porsche, and then you complain that it is not “really free” because you still have to respect the “strings attached” traffic code. Hello? Just say “thank you”, enjoy what you’re given, or go buy a Ferrari if you don’t like Porsche. Jeeez.

Closed comments

Thursday, June 20th, 2013

Sigh.

Yes I’ve closed the comments after a recent spam attack that left me with 750+ comments to moderate in just one day. Sorry, I have no time for this.

Firefox

Thursday, May 23rd, 2013

Looks like this blog doesn’t render properly in Internet Explorer. Oh well, no idea why. Use Firefox.

The evolution of PhysX - Addendum

Monday, May 13th, 2013

I got a bunch of questions about my last series of blog posts so I thought I’d add a quick note here - at the risk of confusing people even more.

The figures I posted are for the CPU part of PhysX only. This does not concern or affect the GPU parts of PhysX in any way. Those things are orthogonal. If we optimize the CPU parts and get a 10X speedup, it does not mean your GPU will suddenly provide 10X less value, because it is running other parts of PhysX anyway (not the rigid bodies, and not the raycasts/sweeps).

Only a few features are GPU-accelerated, e.g. cloth or particles, mainly because they are the ones that map well to GPUs, and they are the ones for which GPUs provide real speedup factors.

Now as shown in the recent “destruction video” I posted, people here are also working on GPU-optimized rigid bodies. This new module is called “GRB”, and it is currently not part of PhysX. But it does provide a speedup compared to our current CPU solution. In other words, it is still faster than PhysX 3.3 on CPU. You might have a hard time believing it, but people are trying to be fair and honest here. One of our tasks is to optimize the CPU rigid bodies as much as we possibly can, just to make sure that the GPU rigid bodies do provide some actual benefit and speedups. If you don’t do that, you release your GPU solution, it’s slower than a CPU solution, and you look like a fool. Like AGEIA. We are not making that mistake again. The CPU solution is here as a reality check for ourselves. I suppose we could just use Bullet or Havok for this, but… well… we think we can do better :)

Meanwhile, it is correct that the features that do work on GPU are currently only working on NVIDIA cards, simply because they are implemented using CUDA. There are both obvious political and technical reasons for this. It should be pretty clear that at the end of the day, NVIDIA would like you to choose one of their GPUs. If you are actually complaining about that, then there is little discussion left to have. Of course they want to sell their products, like every other company in the world. And of course they are going to use their own technology, CUDA, to do so. To me this is pretty much the same as what we had in the past with D3D caps. Some cards supported cubemaps, or PN-triangles, or whatever, and some didn’t. GPU PhysX is the same. It’s just an extra cap supported by some cards, and not by others. Complaining about this is silly to me. It would be like complaining that ATI didn’t make any effort to make PN-triangles work on NVIDIA cards. Seriously, what?

The deal is simple. NVIDIA gives you a free, efficient, robust physics engine. In exchange, if possible, add some extra GPU effects to give people an incentive to buy NVIDIA cards. Fair enough, right? I don’t see what the fuss is all about.

----

Anyway, the usual disclaimer applies here: I’m not a spokesperson for NVIDIA, what I write are my own thoughts about it, and for all I know I may be completely wrong about their intentions. What I know for a fact though, is that most of the stuff I read online about PhysX is just completely insane (sorry, wrong).

I’ve been optimizing rigid body simulation in NovodeX/PhysX for a long time now, and there’s no big conspiracy behind it. Again, all those engines are free and publicly available so I invite you to run your own experiments, do your own benchmarks, and see for yourselves. We really have nothing to hide.

Morons

Friday, June 15th, 2012

Somebody posted this at work:

www.rockpapershotgun.com/2012/06/09/neal-stephensons-making-a-game-called-clang/

At 1′59″ in the video, check out the Google search with its highlighted 1,990,000 results. This is supposed to show that there is a high demand for a knight-vs-samurai game.

Ok. So I am going to kickstart a game project featuring hamburgers shooting Americans. This gives me 54,200,000 results at the time of writing, so clearly there must be a high demand for such a game!

Fucking bloody morons.

The prophet programmer

Friday, September 9th, 2011

It seems there is one in every team. He is living by the book, following the rules to the letter. He considers himself bright and smart because he always knows the latest trends, the latest official Right Way to write things according to the C++ Standard. He follows it religiously, even the new rules not implemented yet by any compiler. And he looks down on you if you do not write “proper” code. He is the prophet. The Book is right. You must follow the rules.

His zealotry has no limits. He will conscientiously rewrite your “illegal” C++ when you are not looking. For your own good of course.

He will go and meticulously replace all your “i++” with “++i” in your vanilla integer for-loops.

He will jump through incredible hoops to get rid of a lonely “goto” used to skip a large block of code and jump to your function’s epilogue.

He will use unreadable, cryptic, unbelievable templates to replace your simple define, because “define is bad”.

He will tell you with a straight face that he went ahead and replaced all the “NULL” in the codebase with “0” or “nullptr”, because “NULL is bad C++”.

He filled his head with many of those mantras, and he is obsessed with them. They are the rules. They must be followed.
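For the record, the “lonely goto” he removes is typically the classic single-epilogue cleanup pattern. A hypothetical sketch (the function and its names are made up for illustration):

```cpp
#include <cstdio>
#include <cstdlib>

// Hypothetical resource loader using the goto-to-epilogue pattern:
// every failure path jumps to a single cleanup block instead of
// duplicating the teardown code on each early return.
bool loadResource(const char* filename, char** outBuffer, long* outSize)
{
    bool success = false;
    char* buffer = NULL;
    FILE* fp = fopen(filename, "rb");
    if(!fp)
        goto epilogue;

    if(fseek(fp, 0, SEEK_END) != 0)
        goto epilogue;
    *outSize = ftell(fp);
    if(*outSize <= 0 || fseek(fp, 0, SEEK_SET) != 0)
        goto epilogue;

    buffer = (char*)malloc(*outSize);
    if(!buffer)
        goto epilogue;
    if(fread(buffer, 1, *outSize, fp) != (size_t)*outSize)
        goto epilogue;

    *outBuffer = buffer;
    buffer = NULL;    // ownership transferred to the caller
    success = true;

epilogue:
    // Single exit point: all the cleanup lives here, once.
    free(buffer);
    if(fp)
        fclose(fp);
    return success;
}
```

Unrolling this into nested ifs, or duplicating the cleanup on every early return, is arguably worse than the goto it replaces.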

Well, my dear prophet programmer, I have news for you: you are not bright. You are not smart. You are not clever. You’re a fucking robot. It does not take a genius to blindly follow recipes from your cookbook. You are a brain-washed moron doing a machine’s job. If you blindly follow the Standard, you end up with standard code, which by definition anybody can write.

The best programmers are not the ones blindly following anything. They are exactly the opposite of you. The best programmers are the ones who know when rules should be bent, when boundaries should be broken, and when envelopes should be pushed. The best programmers are the ones who, constantly, on a case-by-case basis, a hundred times a day, stop for a moment and think about how best to solve a problem. They are not the ones turning off their brain to follow a recipe. They are not the ones trying to fit a preconceived solution (design pattern?) to everything. If a preconceived solution solves your problem, it was probably not really a problem worth solving - that is, it is such a common and tired issue that anybody can look up a standard answer in a book. How does solving such a thing in such a way make you “smart”?

The best programmers are creative. They have a big imagination, and they are not afraid to use it. They borrow techniques from one field and apply them successfully to an apparently unrelated field, discovering subtle links and connections between them in the process. They are never satisfied with the status quo.

The best programmers, the heroes, the top coders, like Nick of TCB did with the sync-scrolling eons ago, are the ones who invent new techniques to solve problems that nobody solved before them. By definition they are not standard. They are the very opposite of what you preach.

Venting some steam

Tuesday, June 7th, 2011

After maybe a decade using Win2K, I finally joined the modern world and bought a new PC, which came with a pre-installed, 64-bit version of Win7.

Man, what a disaster.

After just a few hours I suddenly remembered why I had not upgraded the machine for all those years.

  • The search function (you know, the basic stuff to find files) doesn’t work anymore. Ridiculous.
  • The machine only has two times 289 Gb, even though it’s advertised as “640 Gb” (there’s even a sticker on the side claiming this).
  • Where the fuck are my Windows CDs or DVD?! You mean I can’t reinstall it when needed? You think it will never get corrupted or something?
  • The warranty is voided if I open the machine, LOL. Of course opening the PC was the first thing I did, breaking some seal in the process that was there to prevent me from doing so. Day 1, warranty is already dead, yay.
  • 3DS MAX refuses to install on this OS. Bloody marvellous. It’s a legal version and everything, but no, of course, doesn’t work. Yes it’s 3DS MAX 6, so fucking what? I’m a programmer, I don’t need all the fancy stuff from new versions, and all those plug-ins don’t work in newer versions of MAX anyway. Right now I’m totally screwed, I can’t create new levels for Konoko Payne. Don’t fucking tell me it’s hard to make something work on both old Windows and new Windows, my own engine does that perfectly, it’s just fucking laziness or stupidity if you can’t even do that.
  • Don’t get me started on the new retarded features that keep getting in the way. Give me a bloody search feature that works, then we’ll talk about the fancy bullshit.

I’m very tempted to just format the whole thing and re-install Win2K over the Win7 crap.

Don’t trust the compiler

Wednesday, February 9th, 2011

How clean your source code looks is not as important as how clean your generated code looks.

When supporting multiple platforms, your code not only has to run on the weakest machine; it also has to be compiled by the weakest compiler.

Don’t trust the compiler. No, it will not magically optimize your crap.
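One classic illustration, assuming nothing about your particular toolchain: a strlen() call in a loop condition. A good compiler may hoist it out of the loop; the weakest compiler you support will not, and the loop quietly becomes quadratic.

```cpp
#include <cassert>
#include <cstring>

// The compiler is not guaranteed to hoist strlen() out of the loop:
// unless it can prove the buffer is never modified, this is O(n^2).
void toUpperNaive(char* s)
{
    for(size_t i = 0; i < strlen(s); i++)   // length potentially recomputed each iteration
        if(s[i] >= 'a' && s[i] <= 'z')
            s[i] -= 'a' - 'A';
}

// Same result, but the length is computed once, by hand. Don't rely
// on the optimizer to do this for you.
void toUpperHoisted(char* s)
{
    const size_t len = strlen(s);
    for(size_t i = 0; i < len; i++)
        if(s[i] >= 'a' && s[i] <= 'z')
            s[i] -= 'a' - 'A';
}
```

Both versions produce the same output; only one of them is guaranteed to produce the same generated code everywhere.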

A bold statement

Friday, January 7th, 2011

Let me make a bold statement:

All functions taking a “const String&” parameter as input are bad.

…where String is either your own custom string class, or std::string, whatever. This is bad, of course, because it forces your users to have a String around. If they get the string from an external source (typically as a “const char*”), you just forced them to create a temporary String to call the function (or worse, the temporary gets created without them even noticing). Since you pass it as a const, you will not modify the text, and there is no reason to force people to use a String. The proper API just takes a “const char*” parameter.

(Yeah, yeah, I know, it ignores Unicode stuff and loses some minor optimizations when the String class caches the length of the text; whatever, you get the idea of the bold statement. And also, any const function in the String class is stupid, as a classic old GOTW entry showed eons ago.)
