Archive for September, 2011

Pensées du jour

Friday, September 30th, 2011

To iterate is human, to recurse is divine, and the Big Bang was a stack overflow.

---

I should have a patent on patents. People would pay me recursively and I would get a cash overflow!

---

The true meaning of LHS is “Little Hidden Stall”.

---

A stupid magician is a Garcimoron!  (*)

(*) I realize only 1% of the audience will get that one :)

Simulationism

Sunday, September 18th, 2011

Simulationism is basically the belief that we actually live in a computer simulation. While I do not seriously believe in it, I find it interesting from a programmer’s point of view. Bizarrely, a lot of things “make sense” from this perspective:

- We live in a giant, cosmic version of a computer simulation. It started with a Big Bang, the equivalent of a Big Boot. It might end with a reboot. The universe is simply the sandbox in which the simulation runs.

- The entity that created this sandbox can be seen as the First Scientist, or the First Programmer, or whatever you want to call it. It is a being outside of the universe, i.e. outside of the simulation. We might have been created in its image, or not. It depends on what the simulation is trying to prove or achieve, which is something we may never know.

- The laws of physics are simply the rules that have been hardcoded to make the simulation work. There might be no particular reason why the rules are what they are. Maybe they just make for an interesting simulation, in the same way totally artificial and arbitrary game rules create interesting gameplay.

- Fundamental physical constants appear meticulously tweaked because they actually are. Every programmer knows about those “magic values” that most of us used at one point or another to make everything work well. The First Programmer may have tweaked the force of gravity, etc., with a cosmic slider, just like we would tweak the lacunarity or the fractal increment when generating a procedural landscape.

- We are probably not the first simulation run. We are just one particular run, and if those constants all work out ok, this is simply because a large number of failures might have preceded us. Things might not be perfect yet because we are a work in progress, in a way similar to what Teilhard de Chardin was writing (e.g. about the Omega Point). In this iteration humans may not be very wise yet, but we may do better in the next run.

- As a consequence, there might also be no reason for the existence of some things in our universe, in the same way there is often “dead code” remaining in a codebase. Introns in our DNA might be just that. As programmers know, there is no reason to optimize or even clean up your code before it even works. In other words we’re still prototype code, not production code.

-  We have no way to know what the outside world, where the First Programmer lives, looks like. We are quite simply “evolved virtual creatures” similar to the ones Karl Sims created a while back. Only much, much more involved, to the point that our consciousness emerged. But then again, consciousness might just be a convenient label for what appears, to us, like unimaginable complexity (which reminds me of what Jean Guitton was saying about randomness: there is no randomness, things just appear random to us because the forces that acted to produce a given event are beyond our analytical capabilities). So while consciousness might appear like a miraculous trait to us, that might just be because we lack the proper capacities to grasp it. In the same way a blind person cannot really understand what the color blue or red is, we might lack the sense needed to properly understand consciousness. But it might end up being a simple thing to program for a higher being like our First Programmer, in the same way programming “eyes” for a game AI is relatively easy in our own computer simulations. The bottom line, anyway, is that we cannot imagine the world outside of our universe, any more than a game AI can imagine our “real” world beyond the walls of the computer memory.

- We only see our world through imperfect sensors (our eyes do not see UV or infrared light, our ears cannot hear infrasound or ultrasound, our sense of touch is good on our fingertips but lousy on our back, etc.; all our sensors are pretty limited). In the same way, a game AI sees the computer world through imperfect sensors like raycasts, sound volumes, collision detection checks, etc. Our limited sensors cannot even sample the world inside our universe accurately (our brain does its best to construct something from the limited-accuracy inputs), so they are totally inadequate to figure out the real world of the First Programmer, outside the universe. Similarly, a game AI only “sees” and “hears” and “feels” what its limited sensors have been programmed to feed it with. Those sensors only let the game AI capture a small part of the program it lives in. If we succeeded in creating a real AI whose consciousness emerged, it would first discover the concept of a computer, i.e. the universe beyond its game world. But it still would not be able to imagine our world beyond that, in the same way our limited sensors cannot tell us much about the world of the First Programmer.

- “Rien ne se perd, rien ne se crée, tout se transforme” (“Nothing is lost, nothing is created, everything is transformed”). This is because memory is limited, really. Atoms or subatomic particles are counterparts of bits in the computer memory. When an object is deleted and a different object gets created at the same memory location, it is the same as when our bodies die, decompose, and go back to cosmic dust. Our giant cosmic simulation has a finite amount of resources, in the same way a computer has a finite amount of memory.

- The First Programmer does not intervene in human affairs, does not answer prayers, does not perform miracles, does not spy on each simulated entity, simply because this is not how simulations work. You usually do not mess with a simulation while it is running. You tweak the settings, run the simulation to the end, check the output, adjust the parameters and run another one, until you get the desired results.

- A special note must be written about time, which is a very relative concept. We already know from Einstein, Langevin and others that time slows down when you go faster (see e.g. the twin paradox). In the limit, when you reach the speed of light, time does not flow anymore, it stops: the photon knows its complete history from birth to death in an instant. For our computer simulations, time passes a lot quicker than for us; a lot of things happen inside a computer simulation in a few nanoseconds, a lot of history. The same might be true for the First Programmer and his simulation, i.e. us. Many centuries for us might pass in the blink of the First Programmer’s eye. This does not favor interventions or reactions to events happening in our simulated world, simply because it all goes too quickly for the First Programmer to react. For example the whole of modern civilization might be simulated in one frame of the First Programmer’s game, so there is not much he can do about punishing sins or rewarding good deeds (if he even cares; after all we are only game AIs in this). The only thing he can do is record the simulation results and analyze them later when the simulation has ended, i.e. after the end of our world and before it is reborn/rebooted for a new iteration.
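For reference, the time dilation mentioned above is the standard Lorentz factor from special relativity (a textbook formula, nothing specific to this post):

```latex
\Delta t' = \gamma \, \Delta t = \frac{\Delta t}{\sqrt{1 - v^2/c^2}}
```

As v approaches c, the denominator goes to zero and the factor diverges: one tick of the moving clock corresponds to an arbitrarily long interval for an outside observer, which is the sense in which time “stops” at the speed of light.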

Static/Bipartite SAP

Wednesday, September 14th, 2011

Another idea for optimizing the SAP when we have a small number of dynamic objects moving amongst a large number of static objects:

Put the dynamics in their own vanilla SAP. Get the dynamic-vs-dynamic overlaps from there, nothing new.

Put the dynamics & statics in a special SAP (“Static SAP”, or “Bipartite SAP”) modified to only report dynamic-vs-static overlaps:

  • The statics are stored in the SAP as usual (same box/endpoint structures as before)
  • The dynamics however are stored in a separate array, using a new box structure.
  • When the dynamic objects are inserted in the SAP, their endpoints are not inserted into the same arrays as the static ones. They are stored directly in the new box structure, i.e. in the (dynamic) box class itself. They keep links to the static arrays of endpoints: the virtual positions they would occupy within those arrays, had they been inserted there as usual.
  • When running “updateObject” on those dynamic objects, we move their endpoints virtually instead of for real. That is, we still look for the new sorted positions within the arrays of endpoints, but we don’t actually shift the buffers anymore. The update is a const function for the main SAP data-structure; the only thing that gets modified is the data within the dynamic box we’re updating. In other words we don’t have real “swaps” anymore. No more memmoves, thus a reduced number of LHS (load-hit-store) stalls, etc.
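The steps above can be sketched roughly as follows. This is a minimal, single-axis illustration with hypothetical names (a real SAP has three axes, interleaved min/max endpoints, box pruning and pair reporting); the point is only to show a const update that recomputes virtual positions without shifting any buffer.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Sorted endpoint values for the static objects, built once at insertion
// time and never modified afterwards.
struct StaticAxis
{
    std::vector<float> endpoints;
};

// Dynamic boxes live in their own array. Their endpoints are stored in the
// box itself, together with their *virtual* sorted positions, i.e. where
// they would sit inside the static array had they been inserted for real.
struct DynamicBox
{
    float    mMin, mMax;
    uint32_t mVirtualMinIndex;
    uint32_t mVirtualMaxIndex;
};

// "updateObject" for a dynamic box: we only recompute the virtual sorted
// positions with binary searches. The static axis is untouched (const), so
// there are no swaps or memmoves at all.
inline void updateDynamicBox(const StaticAxis& axis, DynamicBox& box, float newMin, float newMax)
{
    box.mMin = newMin;
    box.mMax = newMax;

    const std::vector<float>& ep = axis.endpoints;
    box.mVirtualMinIndex = uint32_t(std::lower_bound(ep.begin(), ep.end(), newMin) - ep.begin());
    box.mVirtualMaxIndex = uint32_t(std::upper_bound(ep.begin(), ep.end(), newMax) - ep.begin());
}
```

The virtual indices are then enough to cull static candidates along the axis before running the actual dynamic-vs-static overlap tests.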

The drawback is that we update dynamic objects twice, once in the “static SAP”, once in the normal SAP.

Didn’t try it, don’t know about the perf, but might be interesting.

Comments

Monday, September 12th, 2011

The comments have been re-activated, after a few people complained. Oh well.

The prophet programmer

Friday, September 9th, 2011

It seems there is one in every team. He is living by the book, following the rules to the letter. He considers himself bright and smart because he always knows the latest trends, the latest official Right Way to write things according to the C++ Standard. He follows it religiously, even the new rules not implemented yet by any compiler. And he looks down on you if you do not write “proper” code. He is the prophet. The Book is right. You must follow the rules.

His zealotry has no limits. He will conscientiously rewrite your “illegal” C++ when you are not looking. For your own good of course.

He will go and meticulously replace all your “i++” with “++i” in your vanilla integer for-loops.

He will jump through incredible hoops to get rid of a lonely “goto” used to skip a large block of code and jump to your function’s epilogue.

He will use unreadable, cryptic, unbelievable templates to replace your simple define, because “define is bad”.

He will tell you with a straight face that he went ahead and replaced all the “NULL” in the codebase with “0” or “nullptr”, because “NULL is bad C++”.

He filled his head with many of those mantras, and he is obsessed with them. They are the rules. They must be followed.

Well, my dear prophet programmer, I have news for you: you are not bright. You are not smart. You are not clever. You’re a fucking robot. It does not take a genius to blindly follow recipes from your cookbook. You are a brain-washed moron doing a machine’s job. If you blindly follow the Standard, you end up with standard code, which by definition anybody can write.

The best programmers are not the ones blindly following anything. They are exactly the opposite of you. The best programmers are the ones who know when rules should be bent, when boundaries should be broken, and when envelopes should be pushed. The best programmers are the ones who, constantly, on a case by case basis, a hundred times a day, stop for a moment and think about how to best solve a problem. They are not the ones turning off their brain to follow a recipe. They are not the ones trying to fit a preconceived solution (design pattern?) to everything. If a preconceived solution solves your problem, it was probably not really a problem worth solving - that is, it is such a common and tired issue that anybody can look up a standard answer in a book. How does solving such a thing in such a way make you “smart”?

The best programmers are creative. They have a big imagination, and they are not afraid to use it. They borrow techniques from one field and apply them successfully to an apparently unrelated field, discovering subtle links and connections between them in the process. They are never satisfied with the status quo.

The best programmers, the heroes, the top coders, like Nick of TCB did with the sync-scrolling eons ago, are the ones who invent new techniques to solve problems that nobody solved before them. By definition they are not standard. They are the very opposite of what you preach.
