http://www.mini-itx.com/projects/cluster/
Looks way sweet.
"achieving a performance of around 3.6 GFLP"
Doesn't it seem grossly underpowered?
I've clocked my 1GHz G4 PowerBook at 4 gigaflops...
Why isn't that cluster a screamer?
I'd like to see a nano-ITX cluster, filling up an ANS chassis with as many nodes as will fit (lots and lots).
http://news.bbc.co.uk/2/hi/technology/3532706.stm
The new NASA cluster should be interesting, but why, oh why, do they have to fall in with the penguinistas?
this one, too
http://arrakis.ncsa.uiuc.edu/ps2/cluster.php
The reason that thing is so slow is that they're all VIA C3-series processors. Very low-power chips, best used for basic computing. But at such low cost, making a cluster out of them can be appealing. I doubt he spent much more than $1500 on that cluster.
only 120 Watts at idle? not bad
odd...
"looks to be equivalent to at least 4 (maybe 6) 2.4Ghz Pentium IV boxes in parallel on a similar network "
but... this just seems wrong... 4 X 2.4GHz Pentium IVs, as crappy as I think they are, would have to do better than 3.6 gigaflops, right?
A gigaflop is a lot more processing power than you think. I honestly doubt that you got 3 GFLOPS on your PowerBook... the highest-end Power Macs don't get much better than 1 GFLOPS. 3.6 GFLOPS for four 2.4GHz P4s sounds about right to me.
Hey, I'm no expert; I got this piece of software and it said so...
It's not always above 4000 megaflops, usually around 3600, but sometimes it's up there.
Know of any simple, say, command line benchmarking utilities?
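No idea what software the poster above was using, but a toy one is easy to hack up yourself. The C sketch below (my own, not any particular utility) just times a big loop of multiply-adds; a real benchmark like LINPACK is far more careful, so treat the number it prints as a ballpark at best.

/* Toy flops counter -- a rough sketch, not a real benchmark.
 * Build with something like: gcc -O2 flops.c -o flops
 * It times a loop of multiply-adds and divides by elapsed CPU time. */
#include <stdio.h>
#include <time.h>

#define N 100000000UL   /* iterations; 2 floating-point ops per iteration */

int main(void)
{
    volatile double a = 1.000001, x = 0.5;  /* volatile so the loop isn't optimized away */
    clock_t start = clock();

    for (unsigned long i = 0; i < N; i++)
        x = x * a + 0.000001;               /* one multiply + one add */

    double secs = (double)(clock() - start) / CLOCKS_PER_SEC;
    printf("result=%f  ~%.1f MFLOPS\n", x, (2.0 * N) / secs / 1e6);
    return 0;
}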
Only problem is that there aren't many apps for clustered computing that the average (Applefritter) user would use. What I mean is that there are no cluster-based games. Of course, this is because the market is so small for these games. What would be cool is an intermediate program that would take instructions from a program not designed for clusters and divide the instructions among the different nodes. Then the program would think it was running on one really fast processor while in reality it would be running on many processors. You'd probably have to write an entirely different OS, or do some serious software hacking on a current one, to make this work, but it would be cool.
The problem is a compile-time issue. Taking it on at run time would be slower. When writing apps that use clustering or parallel processing, you write the code and specify the chunks that can be done in parallel. Convincing a run-time program to decide what needs to be done in sequence and what needs to be done in parallel would require a lot of analysis of the code by the program, essentially training it to do what should have been done when the program was compiled. Modern processors and compilers already do some of this, but it is all tuned for single processors.
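For what it's worth, here's roughly what "specifying the parallel chunks yourself" looks like with OpenMP. It's only a sketch of the idea (a cluster like the mini-ITX one would more likely use MPI to pass messages between boxes), but the point is the same: the programmer marks the parallel region before the program is compiled.

/* Rough OpenMP sketch: the programmer marks the loop as parallel,
 * and the compiler/runtime split it across CPUs.
 * Build with something like: gcc -fopenmp -O2 sum.c -o sum */
#include <stdio.h>
#include <omp.h>

#define N 10000000

int main(void)
{
    static double data[N];
    double sum = 0.0;

    for (int i = 0; i < N; i++)          /* fill the array serially */
        data[i] = i * 0.001;

    /* This pragma is the "chunk that can be done in parallel":
     * each thread sums part of the array, and the reduction
     * clause combines the partial sums at the end. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++)
        sum += data[i];

    printf("sum = %f (using up to %d threads)\n", sum, omp_get_max_threads());
    return 0;
}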
A decent modern OS can run multiple threads on different CPUs to get better performance, but each thread will only run as fast as the CPU it is on. If the texturing code or whatever is running on one thread, that's the single-CPU bottleneck again, because everything else (input, audio, network, etc.) has to wait for the textures to be processed before the game can continue.
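To put that in code terms, here's a bare-bones pthreads sketch (the "texture" and "other" work are made-up stand-ins, not from any real engine): the other threads can run on spare CPUs and finish quickly, but the frame still can't be drawn until the slow texture thread joins.

/* Minimal pthreads sketch of the single-thread bottleneck.
 * Build with something like: gcc -O2 frame.c -o frame -lpthread */
#include <stdio.h>
#include <pthread.h>
#include <unistd.h>

/* stand-in for the slow texturing work (hypothetical) */
static void *texture_work(void *arg)
{
    (void)arg;
    sleep(1);                       /* pretend this takes a long time */
    return NULL;
}

/* stand-in for audio/network/input work that finishes quickly */
static void *other_work(void *arg)
{
    (void)arg;
    return NULL;
}

int main(void)
{
    pthread_t tex, aud;

    pthread_create(&tex, NULL, texture_work, NULL);
    pthread_create(&aud, NULL, other_work, NULL);

    pthread_join(aud, NULL);        /* audio finished almost instantly... */
    pthread_join(tex, NULL);        /* ...but the frame waits on textures */
    printf("frame drawn -- only as fast as the slowest thread\n");
    return 0;
}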