Comments

http://www.clustermonkey.net//content/view/211/1/1/1/
It's only 4 3800+ X2 processors..... wtf.

Leonardo · Eagle River, Alaska · Icrontian · edited September 2007
Mason, I think you got taken for a ride on this.
Having followed RWB's link, I could hardly call this computer a "supercomputer." I'm also thinking that this computer had nowhere near the power of the 1997 supercomputer mentioned in the article.
Maybe I just no longer know the definition of "supercomputer."
According to Wikipedia, the price per GFLOPS in 1997 was $30,000 (i.e., 1 gigaflop of processing power in a machine at the time would have cost you $30,000). In 2007 that price was $0.42.
Performance of a $5M 1997 era supercomputer for under $2K doesn't seem that unreasonable.
I'm sure it was the thing to do to get this performance at the time it was built. If this is the case, I think a pair of Q6600s might smoke this unit today.
(PS- linky would not come up for me).
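For what it's worth, here is a rough back-of-the-envelope check of those figures, just a sketch in Python using the numbers quoted above (the $30,000 and $0.42 per-GFLOPS prices and the $5M machine), nothing authoritative:

# Back-of-the-envelope check using the numbers quoted in this thread.
PRICE_PER_GFLOPS_1997 = 30000.0     # USD per GFLOPS in 1997 (the Wikipedia figure above)
PRICE_PER_GFLOPS_2007 = 0.42        # USD per GFLOPS in 2007
BUDGET_1997 = 5000000.0             # the $5M 1997-era supercomputer mentioned above

gflops_bought_in_1997 = BUDGET_1997 / PRICE_PER_GFLOPS_1997
cost_of_same_gflops_in_2007 = gflops_bought_in_1997 * PRICE_PER_GFLOPS_2007

print("%.0f GFLOPS for $5M in 1997" % gflops_bought_in_1997)                   # ~167 GFLOPS
print("same raw GFLOPS at 2007 prices: ~$%.0f" % cost_of_same_gflops_in_2007)  # ~$70

By that crude measure, matching a $5M 1997 machine for under $2K is not a stretch at all on paper, though raw FLOPS per dollar says nothing about the interconnect or memory, which is the part being argued about here.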
There was no ride involved. "Yeah, but does it fold?" was me attempting to sarcastically point out that there does not exist an F@H client that could fully utilize such a system. I never understood why a client has not been produced that would support my whole farm working on the same project together; they are all wired w/ Gigabit anyway, and I would assume it would be VERY productive.
Leonardo · Eagle River, Alaska · Icrontian · edited September 2007
No no, what I meant by "taken for a ride" was that article's author calling the Beowulf cluster (4 x AMD dual core) a supercomputer. Makes me think the writer's total hardware experience is adding a RAM module to an off-the-shelf beige box.
I suppose it all depends on the definition of the word "supercomputer."
Yeah... but you gotta admit, you have at least a slight urge to try to build your own, bigger, better "supercomputer" or Beowulf cluster for that matter. I mean this thing is rather cheap. Dunno what I'd use it for though.
Yeah, calling a makeshift 4 socket system using Ethernet as an interconnect a "supercomputer" is quite a stretch.
Seriously tho, does anyone know why a F@H client does not exist that would distribute a single project amongst small clusters? I could be very wrong but I would assume that if well executed such a client would greatly increase the output of those of us with farms.
I'm not even sure it matters what I would use it for, it would be worth the money just for the fun of building and having it. Plus, I'm sure I'd find some uses for it besides the obvious.

You could...... use it for TEXT EDITING! :p
At least until the SMP client, FAH was basically 100% linear, meaning that to do the next % of the WU it needed the calculations from the previous step. So having 10 slower CPUs doesn't make 1 fast CPU for FAH. The topic was often brought up back when computers with 133MHz CPUs were more common and schools were giving them away. Believe me, I among many others would have liked to harness the power of the many junk PCs we have come across through the years to make a cluster.
Think of FAH as a simulation of atoms, the atoms in a protein molecule. The atoms are at position 1, some calculations are made on those atoms based on environmental conditions, and they end up at position 2. Do the calculations again and you are at step 3. You need to know where everything is during step 2 to get to step 3. So having a CPU trying to work on step 10 when it doesn't know where everything was during steps 4-7 is kinda pointless.
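A toy sketch of that step-to-step dependence, purely illustrative Python (made-up functions, not the real FAH core):

# Each step of the simulation needs the positions produced by the previous
# step, so the steps themselves cannot be handed out to different machines.
def next_positions(positions):
    # Pretend physics: move every atom based on where it is right now.
    return [x + 0.1 * ((i % 3) - 1) for i, x in enumerate(positions)]

positions = [0.0] * 8          # starting positions of 8 pretend atoms
for step in range(10):
    # Step N+1 cannot start until step N is finished, no matter how many
    # CPUs are sitting idle on the network.
    positions = next_positions(positions)
print(positions)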
I thought the point of using a cluster was that applications weren't aware of the multiple cores... the clustering software in the OS handles it?
So an application like F@H would feel it's running on a single-core machine, and the clustering software distributes the load across the four nodes in the background?
Before SMP the load was linear. It could not have been split up at all. Take a step in any direction 10 times: even if you hand each movement to a different CPU, only 1 CPU is ever working at once, because you need the previous location to continue to the next position.
Making the FAH program see 3GHz of CPU even when it was really 10 300MHz computers does nothing, as 1 calculation was done on 1 CPU. Then FAH decides where that protein moves next and another command is given for a CPU to work on. Think of it as not being threaded.
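To put the "not threaded" point another way, here is a rough illustrative sketch (again made-up Python, not the actual SMP client): the per-atom work inside one step can be split across workers, but the outer loop of steps still has to run one after another.

from concurrent.futures import ThreadPoolExecutor

def force_on_atom(i, positions):
    # Pretend per-atom force that depends on all current positions.
    return sum(positions) / len(positions) - positions[i]

def advance_one_step(positions, pool):
    # The per-atom calculations for THIS step are independent of each other,
    # so they can be farmed out to several workers...
    forces = list(pool.map(lambda i: force_on_atom(i, positions),
                           range(len(positions))))
    return [x + 0.01 * f for x, f in zip(positions, forces)]

positions = [float(i) for i in range(8)]
with ThreadPoolExecutor(max_workers=4) as pool:
    for step in range(10):
        # ...but this loop is still strictly serial: step N+1 needs the
        # positions produced by step N before it can do anything.
        positions = advance_one_step(positions, pool)
print(positions)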