Need advice - losing 90 lab machines
the_technocrat
Hey all,
Just wanted to let you all know that I'll be losing all of my labs in the next few months. My workplace isn't giving me enough of a budget to properly maintain 400 workstations, so I've been left with no choice but to go to a thin-client architecture. Thin clients can't run their own installation of F@H, so as far as a bunch of workstations goes, this is a total loss of F@H capability for me.
(Don't freak out)
However...
In a thin-client architecture, you have several dumb terminals logged into a single master machine. Windows XP only allows 15 concurrent users on a single master machine, and we have to stay on Windows XP for the kiddies. (Maybe next summer we'll look at Vista...) That means that for every lab I lose (30 machines), I need two master machines that 15 workstations each will be logged into.
And those master machines have to be beasts. So I'm losing 15 workstations at 1-1.5GHz with about 512MB of 333 or 400MHz RAM that were turned on weekdays from 7:30am-4:00pm (42.5 hours/week). I'm gaining a monster server that I can keep on 24/7 (168 hours/week). I've been pricing out machines, and my main candidate right now is an IBM x3550 - two dual-core 3GHz Xeons and 4GB of 667MHz memory.
Here's where I could use some advice. This 15-thin-clients-to-one-server setup is cheaper to purchase and maintain than getting 15 Dell workstations, which is why I'm doing the thin-client thing. But keeping in mind that I'm going to have at least 6 (probably 10) of these setups (3, maybe 5 labs), is it cheaper still to go with a blade setup? My thinking is: I have to support six 15-machine subgroups with six x3550s. Or, if I can do it for the same price, I'd much prefer to do nine 10-machine subgroups.
Does anyone have any experience with the BladeCenters? I'm thinking of performance here. I'm posting here because the better the performance I can give my workstations for the same dollars, the better the F@H numbers will be as well.
So, how about it? My primary goal is to get as much computing power to the workstations as I can for a given price, knowing that I'm going to be limited to 15 concurrent logons per master server. Although my concern is with giving good service to the workstations, in the back of my mind the F@H potential is there too... :bigggrin: Since both goals come down to the same thing (performance per dollar), I don't feel that there's any conflict of interest here...
This is my first experience with thin-client architecture, besides the little tests I've been doing over the last two months with workstations linked to other workstations. Any advice / ideas / experience would help me (and the PPD) out a lot.
Thanks!
edit:
Oh, and just FYI, the cheapest Dimension workstations you can get from Dell right now are about $500, which makes the cost of a 30-machine lab $15,000. Thin clients are $300 each, which means the two servers have to come in under $6,000 combined to beat the purchase cost of an entire lab. The maintenance costs speak for themselves - 30 public workstations vs. 2 secured servers... If you extrapolate that out for an entire campus, let's say a high school with 3 labs, that means I've got to stay under $18,000 and get at least 6 servers out of it. (That's why I'm looking into a shared-infrastructure platform like the blades...)
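For anyone following the numbers, here's that break-even math laid out as a quick sketch (Python just for the arithmetic, using the round figures above - nothing here is new data):

# Break-even math for one 30-seat lab, using the round numbers from the post.
DELL_WORKSTATION = 500     # cheapest Dimension, per seat
THIN_CLIENT = 300          # per terminal
SEATS_PER_LAB = 30
SEATS_PER_SERVER = 15      # XP concurrent-logon ceiling

workstation_lab = DELL_WORKSTATION * SEATS_PER_LAB        # $15,000
thin_client_hw = THIN_CLIENT * SEATS_PER_LAB              # $9,000
servers_needed = SEATS_PER_LAB // SEATS_PER_SERVER        # 2
server_budget = workstation_lab - thin_client_hw          # $6,000 for both servers

print(f"Workstation lab:  ${workstation_lab:,}")
print(f"Thin clients:     ${thin_client_hw:,}")
print(f"{servers_needed} servers must total under ${server_budget:,} "
      f"(~${server_budget // servers_needed:,} each) to beat the purchase price.")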
Comments
Bump.
It would seem that way. I wonder, though, whether F@H cedes processing power as quickly when the demand comes from external thin clients as it does when the load is local to the machine it's installed on. Also, remember the big memory utilization of F@H, especially with many of the newer work units. I have no clue how that would affect the thin clients' processing requests to the server and the server's communication back to the clients.
I was thinking about this; I would have to test it. If F@H didn't step down, I'd have to program it to run only from 4pm to 7am. Still, not too shabby...
How CPU intensive is it?
What kind of thin-client machines are you looking at?
You'll run F@H from the console; the SMP version will handle four processors.
Definitely - as much as I want to fold for T93, it doesn't do anyone any good if I get canned. :-) It's no big deal for me to get a good test; half the time the lab is full of study hall students just surfing, so I've 'volunteered' them to help me out with a few tests in the past... :-)
BTW, I configured a blade to the same memory/HD specs as the x3550 I configured above. The blade is $4,170 with dual 2GHz Xeons, while the x3550 is $3,475 with dual 3GHz Xeons... I think that pretty much tosses the BladeCenter idea out now that I look at it. I can't justify the savings on maintenance when the difference is nearly $700 per machine (and the blade's CPUs are slower).
So it looks like we'll still go the x3550 route. I've got the rack space, power, and UPS capacity for it, so no problems there.
OK, well, just wanted to let you guys know what's going to be up with my production; it might get a little jumpy in the next few months, so if you see me drop a day here or there, that's why...
They're running the MS Office suite, Internet Explorer, and a few educational programs originally made for Win2000, so no big deal there. I've been running 3 workstations off of a 1.5GHz/512MB machine as a test, and it never gets above 40% utilization on either end.
We're looking at the Wyse S10s. The network infrastructure was recently updated to gigabit as well, so I think we're OK there...
Yeah, I run the workstations from the console now (see the Group Policy link in my sig). Four processors - does that mean 2x dual-core, or does the SMP client automatically look at what you have and do what it needs to do?
On my dual-core rig, it still runs four folding cores, but two run on each physical core.
Are you allowed to build it yourself?
Have you considered Clovertown X53xx series or are they too pricey? If you build it yourself, I'm pretty sure these would be a better bang+fold for your dollar.
The issue here is that I might not be at this place forever, and they don't pay enough to get high-end personnel in here. (I took a 30% pay cut to have a position that boosts the resume...) So I'm trying to keep everything pretty standard and easily supportable under a contract if I were ever to leave... that's why all of our switches are unmanaged, and why we've got all IBM servers, an automated tape library, etc.
Also, where there are issues with programs or profiles, it's easier to fix on a regular machine. We've also seen issues where some programs (nothing mainstream) don't function properly because they act as if they're not seeing input from the user. (If I'm connected to the session via a program like WebEx, I can enter input with my keyboard and mouse, but the user can't.)
Using a thin client may save you money, but make sure you go into this with your eyes wide open. Make sure that you double check for licensing issues. Talk to the manufacturer of the products and verify they will work in a thin client environment.
Hrm, thanks for the input. You think it's a Citrix thing? A network thing? We recently upgraded to gigabit here, and I was planning on going with Terminal Services over something like Citrix...
Citrix and Terminal Services setups are definitely looking to be a PITA... My latest idea is to set up the server with VMware and have a bunch of WinXP virtual machines that the thin clients can automatically RDP into. No licensing issues (we already have the WinXP licenses in the form of 400 full install CDs) and no Citrix messiness.
The server has two gigabit NICs, load-balanced between them, with room for two more on PCI if I need them. Worth checking out, I think.
Hope it works out for you.
32-bit Win2K3 Server Enterprise Edition can apparently address more than 4GB (via PAE), though I'm not sure how efficient that is. I'll probably get a 64-bit host OS.
Anyway, I installed the Win2K3 Standard Edition we already had just to test, and IT WORKS. WORKS GREAT!!
Essentially, I made a virtual machine and installed it like I would any other machine in the computer lab. Then I took a snapshot of it and saved that image as the 'Default Lab' snapshot.
Then I made a few other virtual machines and made them clones of my default lab virtual machine. Any change to the master virtual machine, and the other machines are changed also.
So basically, I'll have 30 virtual machines cloning themselves from a single master image. Revert the master image back to its snapshot (easy) and all 30 virtual lab machines are "re-imaged".
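If anyone wants to script that workflow instead of clicking through the VMware console, here's a rough sketch of the idea built around VMware's vmrun command-line tool. The paths, VM names, and seat count are made up, and the 'clone' verb and the '-T' product flag depend on which VMware product and vmrun version you're running, so treat it as a sketch rather than a recipe:

import subprocess

VMRUN = r"C:\Program Files\VMware\vmrun.exe"       # adjust to your install
MASTER = r"D:\VMs\DefaultLab\DefaultLab.vmx"       # the master lab image
SNAPSHOT = "Default Lab"

def vmrun(*args):
    subprocess.run([VMRUN, "-T", "ws", *args], check=True)

# 1. Freeze the installed master image as the 'Default Lab' snapshot.
vmrun("snapshot", MASTER, SNAPSHOT)

# 2. Stamp out a linked clone per seat; each clone is based on the snapshot,
#    so it only stores its own changes on disk.
for seat in range(1, 31):
    clone_vmx = rf"D:\VMs\Lab{seat:02d}\Lab{seat:02d}.vmx"
    vmrun("clone", MASTER, clone_vmx, "linked",
          f"-snapshot={SNAPSHOT}", f"-cloneName=Lab{seat:02d}")

# 3. "Re-image": roll the master back to its snapshot whenever it drifts.
vmrun("revertToSnapshot", MASTER, SNAPSHOT)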
And it's working great with the one thin client I had time to test on Friday. The end user has no indication it's a thin client, since I have it set up to automatically connect to its assigned WinXP virtual machine on boot.
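The Wyse units pull their connection settings from their own central config, but for anyone trying this with generic RDP terminals, the seat-to-VM mapping boils down to handing each seat a fixed target - something like the .rdp files this little sketch spits out. The hostnames, naming scheme, and usernames are all invented for the example:

# Generates one standard .rdp file per seat, pointing each seat at "its" VM.
# All names below are hypothetical; substitute your own VM hostnames.
for seat in range(1, 31):
    rdp = (
        "screen mode id:i:2\n"                          # full screen
        f"full address:s:lab-vm-{seat:02d}.school.local\n"
        f"username:s:labuser{seat:02d}\n"
    )
    with open(f"seat{seat:02d}.rdp", "w") as f:
        f.write(rdp)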
The only issue I'm having so far is that the thin clients we have seem to be doing video in OpenGL instead of DirectX. Google Earth complains, but still works under OpenGL, albeit rather jumpy.
So yeah, the test is a success, but I'll need to redo the whole thing when I figure out what 64-bit host OS I can get my hands on. Oh, and when I can buy the right memory for the server - doh! (Running on the 1GB it comes with right now, because I ordered the wrong kind.)
RDP and graphics don't go well together. Remember, your virtual machines don't have a high/medium end graphics card to rely on. Don't expect much to work in the area of 3d graphics and video.
7 thin clients in the lab right now - they are working great. The kids just do internet/MS Office stuff, so the low video bandwidth is no prob. They try to watch YouTube, but they're not supposed to do that anyway... :-)
Disk utilization is at about 45% busy with 7 kids on; processor and memory are fine. Looking good for a single 250GB disk, 4GB of memory, and a single dual-core 3GHz Xeon (the test rig).
Had to make a group policy so they couldn't "shut down", though - they were shutting down the virtual machines... which can't be restarted from the lab. Doh! Now they can only log off, so no problems.
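For reference, the Group Policy setting involved is "Remove and prevent access to the Shut Down command" under User Configuration > Administrative Templates > Start Menu and Taskbar. If you'd rather script it per profile instead, the equivalent registry value looks roughly like this - a sketch, assuming it runs under the student's account on the XP VMs:

import winreg

# Hide the Shut Down command for the current user (the value the GPO sets).
KEY = r"Software\Microsoft\Windows\CurrentVersion\Policies\Explorer"

with winreg.CreateKey(winreg.HKEY_CURRENT_USER, KEY) as k:
    winreg.SetValueEx(k, "NoClose", 0, winreg.REG_DWORD, 1)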
Looks like a single disk in JBOD can handle about 5-10 sessions at ~30% utilization... so maybe I can get a server with 6 disks, make 3 logical drives with RAID 0, and split 30 machines across those 3 "drives"... Just need to make sure the RAID controller doesn't become a bottleneck...
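Rough sanity check on that sizing, using the 45%-busy-with-7-sessions number from the test rig above. How well a two-disk array scales versus a single disk is a guess here, so take it as a ballpark only:

# Ballpark disk sizing from the single-disk test above.
busy_with_7 = 45                          # % busy observed with 7 sessions
per_session = busy_with_7 / 7             # ~6.4% busy per active session

vms_total = 30
arrays = 3                                # 3 logical drives across 6 disks
vms_per_array = vms_total / arrays        # 10 per array

# Assume (optimistically) that a two-disk array handles roughly twice what
# the single test disk did.
est_busy_per_array = vms_per_array * per_session / 2
print(f"~{per_session:.1f}% busy per session, "
      f"~{est_busy_per_array:.0f}% busy per array at {vms_per_array:.0f} VMs")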
RAID 0 will make it nice and fast, but do you have a fallback plan if a drive fails? If a drive fails, you'll have to rebuild the OS... though since you're using VMware, have you made a backup of the VM images? How are you backing up any saved work of the students? You may want to consider RAID 5, where you get speed and redundancy, or a well thought-out backup and recovery strategy.
Whoops, brain freeze. I meant RAID 1 for the production machine, of course. Why I continue to call it by the number when I could just say 'mirrored array' is beyond me.
If 3 logical drives running RAID 1 would overwhelm the bandwidth of the RAID controller, I'd probably go with 2 logical RAID 5 arrays. Is that the best option when I've got 6 disks and am looking for the fastest disk I/O?
The kids save their junk on their network drive, and that's backed up daily, no problem there.
Just got in the Server 2003 x64 CD, will be installing it next week.
RAID 5 is much more overhead for the controller... it has to do parity calculations on every write.
you guys must be on some serious salaries!
techno - The last serious drive-controller review I saw didn't show a penalty for RAID 5. If I find it, I'll post the link. Though RAID 5 does like more than three drives.
I can't speak for everyone else, but I can tell you that I have to take every dollar of mine seriously.
Yeah, I'm more concerned about controller throughput. If handling 3 logical drives would max it out, that's not good...