Saturday, March 12, 2011

New Place for the Blog

I decided to move this blog back to my home page:
www.tomasarce.com
I think it will simplify things... even though it is a bit risky.

Friday, March 11, 2011

Navier Stokes - Simulating fluid dynamics

Interesting article about Fluid Dynamics: http://www.infi.nl/blog/view/id/71/Navier_Stokes_iPhone_vs_iPad

Free Download of a 3D Scan Head

Link: http://www.ir-ltd.net/infinite-3d-head-scan-released
I think one day I will need it. ;-)

Network Architecture Client Server vs P2P

This is a question I have been thinking about these days. I love pushing what is normal into something new, but I also hate to be wrong, mainly because I do not get a cookie at the end. So while everyone is building client-server architectures, I want to really know why.
  1. Is it because punching a hole through NATs could be a problem? Not really; everyone has Skype or other P2P programs and they work just fine.
  2. Is it because cheating is a problem? Not really, since you can cheat in both architectures, and people are not so concerned about how they die but rather why.
  3. Is it because P2P is more complicated? Not really...
  4. Is it because there will be more overall traffic, which increases the chances of something going wrong? I don't think this is valid either, because the internet handles plenty of traffic already. What about local traffic? Again, not a big deal on the download side, because it should be about the same, especially if other people in your home are already using the local network. But something related to this may be the real reason...

I think it really comes down to one simple thing: the upload rate. A peer in a pure P2P setup has to upload roughly X * (Nplayers - 1), whereas a client in a C/S setup only has to upload X. So to make something **new**, it can not be a pure P2P and definitely not a pure C/S.
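
To make the numbers concrete, here is a minimal back-of-the-envelope sketch (my own, with made-up packet sizes and tick rates) of the per-node upload cost in each architecture:

```cpp
#include <cstdio>

// Back-of-the-envelope upload cost per node (illustrative numbers only).
// Assume every player produces a state update of 'bytesPerUpdate' bytes,
// 'updatesPerSec' times per second, that every other player needs to see.
int main()
{
    const int   nPlayers       = 16;
    const float bytesPerUpdate = 64.0f;  // hypothetical packet size
    const float updatesPerSec  = 20.0f;  // hypothetical tick rate

    // Client/Server: a client only uploads its own state to the server...
    const float csClientUpload = bytesPerUpdate * updatesPerSec;

    // ...but the server relays everyone's state to everyone else.
    const float csServerUpload = bytesPerUpdate * updatesPerSec
                               * nPlayers * (nPlayers - 1);

    // Pure P2P: every peer uploads its own state to every other peer.
    const float p2pPeerUpload  = bytesPerUpdate * updatesPerSec * (nPlayers - 1);

    std::printf("C/S client upload: %8.0f B/s\n", csClientUpload);
    std::printf("C/S server upload: %8.0f B/s\n", csServerUpload);
    std::printf("P2P peer   upload: %8.0f B/s\n", p2pPeerUpload);
    return 0;
}
```

With 16 players, each peer uploads 15 times what a C/S client does, while the lone server shoulders 16 times what a single peer does, which is why neither extreme feels quite right.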

I read an interesting post about how you could think outside the box and make the decision based on the actual data. That sounds like something worth exploring. However, there is a potential issue here: you could end up with either the best or the worst of both worlds.

On a related topic, I am convinced that there is an untapped resource in the network components of games: the fact that they could share computations. Usually in a C/S setup the server does the whole CPU work and the clients just get the results, which is great for the clients but not so great for the server. There is some sharing of complexity, since the clients usually handle all the rendering crap, but the model does not scale well.

The peer-to-peer approach by its nature hints at the possibility of sharing computation across nodes. If that is the case, then we could have a cloud-based architecture which gets more and more powerful the more nodes we add. I definitely believe this is possible, and because of this it may merit looking at the P2P approach, even if only in a hybrid way.
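
As a rough illustration of what I mean by sharing computation, here is a small sketch (entirely my own; the names are invented and the actual networking is left out) where each node simulates only the slice of entities it owns:

```cpp
#include <vector>
#include <cstddef>

struct Entity { float x, y, vx, vy; };

// Minimal sketch of splitting simulation work across peers.  Every node
// holds the same entity list; node 'nodeIndex' of 'nodeCount' advances
// only its own slice and would then broadcast the results to the others
// (the broadcast itself is omitted here).
void SimulateOwnedSlice(std::vector<Entity>& entities,
                        std::size_t nodeIndex,
                        std::size_t nodeCount,
                        float dt)
{
    for (std::size_t i = nodeIndex; i < entities.size(); i += nodeCount)
    {
        entities[i].x += entities[i].vx * dt;
        entities[i].y += entities[i].vy * dt;
    }
}
```

Adding a node shrinks everyone's slice, so the total simulation budget grows with the number of peers, which is exactly the scaling a pure client/server model lacks.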

I will keep this blog updated on what I find.

Thursday, March 10, 2011

Funny Russian experiment

This experiment: Link
It is a little A HEAD of its time. It seems disgusting, funny, interesting, crazy, etc. However, if you think the march toward the machine has stopped, you better think again.

Tuesday, March 8, 2011

Doubling of computer power is underestimated

An article in the NY Times called Software Progress Beats Moore's Law talks about how software evolves much faster than Moore's Law. As a game programmer I can testify that this is true. If you look at the original software releases on consoles versus the newest games, you really see how much more powerful the latest games are, yet the hardware has not changed at all. In fact, this is what makes consoles so nice to develop for. The software improvements are clearly visible as programmers learn to take advantage of the hardware, and not just by learning how to use the hardware better: the underlying algorithms which drive the games get better as well.

In game development there is a concept called virtualization... well, actually I am not sure whether it is a standard concept or just my own. Basically, it is the idea that you operate on data which does not exist; put differently, you deal with the data as if it existed, when it really does not. This concept can be used in things like sparse voxel trees, or the way we did the collision spaces in Area 51. In parallel programming this is taken a step further by assuming that the data not only exists but is also changing, and not just changing, but changing at all frequencies and across all values. Whether that is actually happening or not is usually irrelevant to the algorithms.
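
To show what I mean, here is a tiny sketch (my own names and a made-up density function, not how we actually did it in Area 51) of code that samples a voxel field as if every voxel were stored, even though nothing is ever allocated:

```cpp
#include <cmath>

// A "virtual" voxel field: callers sample it as if it were a dense 3D
// grid of densities, but each value is computed on demand and no grid
// is ever stored.
struct VirtualVoxelField
{
    float Density(int x, int y, int z) const
    {
        // Made-up implicit shape: a sphere of radius 10 around the origin.
        const float r = std::sqrt(float(x * x + y * y + z * z));
        return 10.0f - r;   // positive inside, negative outside
    }
};

// Collision or rendering code can treat the data as if it existed,
// touching only the handful of voxels it actually needs.
bool IsSolid(const VirtualVoxelField& field, int x, int y, int z)
{
    return field.Density(x, y, z) > 0.0f;
}
```

The rest of the engine never needs to know whether the data is stored, streamed, or generated on the fly.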

Concepts as strange as those make hardware evolution look like a turtle. What gives software evolution its big advantage is that it is all virtual: there is no need to create any physical object, which would involve dealing with factories and so on. Also, unlike the hardware, the software has a lot of bugs, or said differently, it is more imperfect. This too is an advantage of software, because it can allocate resources where they are needed the most.

Free Market Failure

The most honest explanation I have heard so far of what happened and what is going on now.