It's because of that flattening Moore's Law curve
After the short digression on the Java / GPL affair (it looks like Sun will clarify everything in a matter of days), I'm switching back to my original topic.
As I told you a few days ago, one of the purposes of the blueMarine cluster of projects is to research new technologies and understand some possible future scenarios of computing. One of these is parallel-computing-made-easier. I'm getting convinced that in the next few years parallel computing (that is, the capability of performing intense computation tasks in a shorter time by distributing the load over multiple CPUs) will become more pervasive than it is today - not because I've dreamt about it after eating too many onions, but because I'm seeing techy people talking more and more about it. Consider the following points:
- James Gosling, at the latest Italian Java Conference, pointed out how the clock rate increase curve is flattening - that is, it's becoming more and more difficult to increase the clock frequency of a microprocessor, because we're getting close to some physical limits. Chip manufacturers are working around this: since they are still pretty much capable of stuffing more and more transistors into their chips, they have started delivering multi-core chips. This does have an impact on our work: the same Java application that runs at a given speed on my 1GHz iBook gets all of the 1.5x performance gain when run on a 1.5GHz PowerBook; but it can't exploit all the computing power of my dual-core MacBook Pro if it's single-threaded. That is, an architectural change is required, and it will become more and more important as we get 4- and 6-core computers.
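To make the point concrete, here's a minimal sketch (the class and method names are my own invention, not from any real application) of the kind of restructuring I mean: a computation that a single-threaded program would run in one loop is split into one chunk per available core, using the java.util.concurrent API we got with Java 5.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelSum
  {
    // Splits the range [0, n) into one chunk per available core and
    // sums the squares of each chunk in its own thread.
    public static long parallelSumOfSquares (final long n) throws Exception
      {
        final int cores = Runtime.getRuntime().availableProcessors();
        final ExecutorService executor = Executors.newFixedThreadPool(cores);
        final List<Future<Long>> results = new ArrayList<Future<Long>>();
        final long chunkSize = (n + cores - 1) / cores;

        for (int i = 0; i < cores; i++)
          {
            final long start = i * chunkSize;
            final long end = Math.min(start + chunkSize, n);
            results.add(executor.submit(new Callable<Long>()
              {
                public Long call()
                  {
                    long sum = 0;

                    for (long k = start; k < end; k++)
                      {
                        sum += k * k;
                      }

                    return sum;
                  }
              }));
          }

        long total = 0;

        for (final Future<Long> result : results)
          {
            total += result.get(); // blocks until that chunk is done
          }

        executor.shutdown();
        return total;
      }

    public static void main (final String[] args) throws Exception
      {
        System.out.println(parallelSumOfSquares(1000)); // 332833500
      }
  }
```

On a single-core machine this runs just like the sequential version; on a dual-core it can roughly halve the wall-clock time - which is exactly the gain a single-threaded loop leaves on the table.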
- Another thing I'm considering is that people - at least some kinds of professionals - are more and more likely to own multiple computers. For instance, I own seven: two laptops (MacBook Pro, iBook), three PCs and two Mac Minis (PPC and Intel). This is because I'm a Software Architect, of course; but I know that for professional photographers, for example, owning two or three computers is a usual thing. Now, most desktop applications that come to my mind aren't able to run in a distributed fashion: that's a pity, as we can't exploit all the computing power we have at hand.
- Think, for instance, about what could be done if a group of friends, or a community, decided to share the computing power they own at home. This shouldn't sound like a new concept, indeed: it was the basic idea of the Seti@Home project - distribute a computing application that runs in the idle cycles of people's computers. The concept has since been expanded further, and a specific piece of software named BOINC is today able to work for a number of different research projects (there are lots of similar projects; blog readers are encouraged to add references to other projects they know in the comments below). One of the pitfalls of the BOINC architecture is that there's no (or only limited) support for sandboxing: in this scenario Java has a clear advantage (sure, this doesn't mean that all the security issues vanish magically, but they are less hard to address). Somebody is even pushing the idea further: abandon the centralized model of BOINC (a "server side", which is the only entity that can offer tasks, and a "client side", where volunteers' computers run others' tasks) and go for a "peer-to-peer" model, where everybody can both offer tasks and run others' tasks. For instance, GridEcon is an EU-funded research project to find out whether this could lead to a new economic model.
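The peer-to-peer model I just described fits Java quite naturally: a task shipped from one peer to another is basically a serializable Callable, moved as bytecode and run inside the receiving JVM. Here's a tiny sketch of that idea - all the names (GridTask, SumTask, PeerDemo) are hypothetical, just for illustration, and a real peer would of course receive the task from the network and run it in a sandbox rather than instantiate it locally.

```java
import java.io.Serializable;
import java.util.concurrent.Callable;

// A unit of work that one peer can offer and another can run.
// Serializable, so it can be shipped over the network between peers.
interface GridTask<T> extends Callable<T>, Serializable
  {
  }

// An example task: sum the integers in the range [from, to].
class SumTask implements GridTask<Long>
  {
    private final long from;
    private final long to;

    SumTask (final long from, final long to)
      {
        this.from = from;
        this.to = to;
      }

    public Long call()
      {
        long sum = 0;

        for (long i = from; i <= to; i++)
          {
            sum += i;
          }

        return sum;
      }
  }

public class PeerDemo
  {
    public static void main (final String[] args) throws Exception
      {
        // A real peer would deserialize a task received from another peer
        // and run it inside a sandbox; here we just run it in-process.
        final GridTask<Long> task = new SumTask(1, 100);
        System.out.println(task.call()); // 5050
      }
  }
```

The point of the interface is the contract, not the plumbing: once tasks are plain serializable objects, the same code can be run by a central server, a volunteer's client, or a peer - and the JVM's class loading and security model is what makes running a stranger's task less scary than running native code.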
- Last but not least, some companies have started selling hosting services on massively-parallel architectures, where customers pay proportionally to the number of CPU hours they consume. The Sun Grid is an example (you guessed it: you can access it with Java).