SM18 down... sorta.
Just found out SM18 was down for a day or two thanks to its password "expiring"...
Anyway, it's back up now, although it seems it (and all my machines) can't connect to get WUs.
EDIT: Never mind, it just got its WU, although it seems they haven't been showing up in the stats for a couple of days.
Comments
I've read that Stanford has once again revamped the way they release stats. I've noticed they seem to slow down periodically, then all show up at once in a big fat clump of points.
I just love the way Stanford messes with their stats server and doesn't tell anyone that they are doing it. They messed up Jason at EOC with their latest shenanigans, just when he was getting ready to roll out his new stats pages.
You can change it so that the password doesn't expire.
Basically, the stats server has reached capacity. So they feed updates when server load is lower, and priority is given to short-term web lookups by users in http:// sessions. Stanford simply does not have the funds for a stats cluster right now, not a big one, and points are aggregating so fast with Gromacs that the derivative update feeds have to carry larger point totals. The server does not have the capacity to generate custom snapshots for all folders on demand. So predictions will float more unless the time from update to update is taken into account in the calcs, with a floating algorithm based on the hours between updates and an average points-per-hour over the update period imposed on each "actual" prediction range.
Furthermore, if the servers are busy accepting and sending WUs, they cannot link up to update the stats server, so we have two things going on here: the busiest servers are seeing increased traffic per hour as more and more fast boxes come online. So points percolate to Stanford's stats server unevenly, and you can expect points to take 3-8 hours or more to show up in Stanford's displayed stats as the points are updated there.
One way for Jason to solve this would be a more complex update file, but it would be a variable-length record-per-user file that would be a PITA to parse if we wanted true hour-by-hour stats mirrored. That is why I use Arachnid; that site can deal better with predictions over uneven periods between updates. Updating there takes a back seat to serving the last snapshot per user....
Jason needs to save the last update file, or the timestamp from it, grab the new timestamp, then calc per period, and the period will float. Getting research results is Folding's priority, and resources are limited, so stats will lag. Faster boxes mean more stats lag for the same boxes, and many fast boxes exacerbate the problem, but results flow to Folding faster. Dropping Genome@Home lets Folding concentrate more on one area, all while using limited funds internally for WU processing and evaluation, for gating the web feed, and for stats updating (bandwidth internally and externally, securing same, and the actual updating of stats all cost money).
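The per-period calc described above can be sketched in a few lines. This is just an illustration of the idea, not Jason's or Arachnid's actual code; the function and variable names are made up:

```python
from datetime import datetime

def points_per_hour(prev_ts: datetime, prev_points: int,
                    new_ts: datetime, new_points: int) -> float:
    """Average points per hour over a variable-length update period.

    Because the gap between stats updates floats, the rate has to be
    normalized by the actual hours elapsed, not a fixed interval.
    """
    hours = (new_ts - prev_ts).total_seconds() / 3600.0
    if hours <= 0:
        raise ValueError("update timestamps must be increasing")
    return (new_points - prev_points) / hours

# Two snapshots 5 hours apart: 250 points gained -> 50.0 points/hour,
# no matter how uneven the gaps between updates are.
rate = points_per_hour(datetime(2003, 1, 1, 0, 0), 1000,
                       datetime(2003, 1, 1, 5, 0), 1250)
```

Predictions would then multiply that floating rate by the hours until the next expected update, instead of assuming a fixed update schedule.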
Folding itself is in a squeeze money-wise; it has to prioritize processing incoming work to justify its existence to Stanford the university. And power itself is expensive, as CA is having rolling blackouts and the core machines have to be up 24/7 mostly. That means fuel costs for generators among other things, both to keep machines cool and to power them and the routers that reach out to the web and link internally.
One reason Genome@Home is being discontinued is that Folding is being focused, partly due to funding limits and the need for more and more machines locally at Stanford to handle the Folding@Home workflow, look at it, calc new WUs, and figure out new approaches. Folding does not have unlimited funds, and almost never will.
John D.
There was a bug because Stanford changed user stats to integers only instead of having decimals. They had to take the data out and put it back in. Hopefully it should be up today, he said.