SSD caching strategies

Garg Purveyor of Lincoln Nightmares Icrontian
edited September 2011 in Hardware
I just read about a new OCZ SSD. It's intended to work with their Dataplex software to store more frequently accessed data on the SSD, while using the HDD for the bulk of the storage.

How do you think this would compare with the caching on Intel Z68 chipsets? I'm assuming that's a hardware-only solution, which seems potentially faster.

It'd be awesome to do a review comparing the two. The OCZ drive + Dataplex could be a good option for those happy with their current mobo.

Comments

  • RootWyrm Icrontian
    edited September 2011
    Patent dodging at work.
    Compare to Seagate Momentus XT hybrid drives, really. Except slower. Because it'll be in software. And almost certainly buggier. (That's just how it goes.)
  • Thrax 🐌 Austin, TX Icrontian
    edited September 2011
    Hybrid storage has been around a hell of a lot longer than the Momentus XT; market implementations pre-date Vista at the very latest.

    I'd like to see some benchmarks, too, but I suspect the difference isn't substantial once the cache is populated.
  • RootWyrm Icrontian
    edited September 2011
    Thrax wrote:
    Hybrid storage has been around a hell of a lot longer than the Momentus XT; market implementations pre-date Vista at the very latest.

    Actually, try "predate YOU." And no, I'm not joking in the least bit. They're called hierarchical storage architectures. (IBM was delivering it in the late '60s.)

    This will probably end up looking pretty much the same, with folks ignoring the increased risk of data loss that's inherent in a software-caching architecture built for consumers. (Race to the bottom, hooray.)
  • Garg Purveyor of Lincoln Nightmares Icrontian
    edited September 2011
    I read an earlier review of the Momentus XT that was lukewarm, but it looks like it's actually pretty decent (and dead simple). I also didn't realize it was a 2.5" drive. It'd be a great upgrade for my old laptop, but I don't think the body of the thing will hold together much longer.

    I'm thinking more about what would be the better option for upgrading my home rig. I kind of want to go Z68, but it'll be a while before I can afford the board and the 2500K, so I'm looking at all the options.
  • Basil Nubcaek England Icrontian
    edited September 2011
    Gargoyle wrote:
    I read an earlier review of the Momentus XT that was lukewarm

    Are you sure it was the XT?
    There was another hybrid drive a while back, the Momentus PSD IIRC, and it was pretty lacklustre.
  • RootWyrm Icrontian
    edited September 2011
    Gargoyle wrote:
    I read an earlier review of the Momentus XT that was lukewarm, but it looks like it's actually pretty decent (and dead simple). I also didn't realize it was a 2.5" drive. It'd be a great upgrade for my old laptop, but I don't think the body of the thing will hold together much longer.

    I'm thinking more about what would be the better option for upgrading my home rig. I kind of want to go Z68, but it'll be a while before I can afford the board and the 2500K, so I'm looking at all the options.

    Generally folks are very lukewarm on the Momentus XT because, honestly, it's a very lukewarm product. The 4GB of "hybridization" that's offered just isn't enough. Sure, you might boot Windows faster, but that's about it. It won't help game performance, and the caching strategy (most-frequently accessed blocks only) results in mostly filling it with Windows files. Seriously, consider how large Windows 7 is - it's more than 4GB by itself.

    Ultimately, adding more 'cache' isn't going to help anything unless the caching strategy is smarter. Which means software. Which means reduced performance in cache migration. Which means increased data risk. Which means potentially reduced disk performance overall. Which means increased CPU utilization and dependency. Which means increased erase+write activity on the SSD. There aren't exactly any upsides here other than not having to pay attention to where you install things.

    Honestly, my recommendation remains going with an SSD large enough to hold your OS and your most frequently used applications, and using a 7200RPM disk for user profiles and the rest of your software. If you've got a huge Steam install, use Windows 7 hardlinks to put individual games on the SSD if it's that important. (Yes, it does work. I do it myself - quick sketch below.) UEFI's SSD "goodies" are still too immature and buggy, especially with the poor job motherboard manufacturers have done implementing it in general. Software caching strategies just don't have enough upsides to be worth it.
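    A rough sketch of the Steam trick, since people always ask. Strictly speaking, NTFS hardlinks are per-file, so for a whole game folder I leave a directory link behind instead; the paths and game name here are examples only, nothing Steam- or Dataplex-specific:

    ```python
    # Sketch: move one Steam game folder to the SSD and leave a directory
    # link at the old path so Steam still finds it. Paths are examples only.
    # Run from an elevated prompt on Windows 7 - creating links needs admin rights.
    import os
    import shutil

    GAME = "Team Fortress 2"                    # hypothetical example game
    HDD_LIB = r"D:\Steam\steamapps\common"      # assumed HDD Steam library
    SSD_LIB = r"C:\SteamOnSSD"                  # assumed SSD target folder

    src = os.path.join(HDD_LIB, GAME)
    dst = os.path.join(SSD_LIB, GAME)

    os.makedirs(SSD_LIB, exist_ok=True)
    shutil.move(src, dst)                            # game now lives on the SSD
    os.symlink(dst, src, target_is_directory=True)   # old path still resolves
    ```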
  • RootWyrm Icrontian
    edited September 2011
    http://www.theregister.co.uk/2011/09/23/ocz_synapse_cache/
    NOT INDEPENDENT BENCHMARKS, JUST MARKETING.

    As expected, it's purely brute-forcing it with capacity (64GB and 128GB). Logically it's not dissimilar to the Momentus XT; it just adds a software abstraction layer that redirects all writes to the SSD so they complete more quickly - entirely expected and anticipated behavior. Past that point, it's hot blocks for reads. Data is likely mirrored/copied rather than actively migrated between points (kinda given away in the name) to reduce the data loss risk somewhat - a risk that gets cranked right back up by all writes landing on the SSD first. Basically, your data-safe point for writes, no matter how much hue and cry people raise about it, is when the data is committed to its final resting point and confirmed good. Period. What if the software bugs out, loses track of writes, and doesn't flush when you restart? (No, it is not "impossible.") There's a toy sketch of the general shape at the bottom of this post.
    This is not new stuff to me, especially not with SSDs as a landing zone, and to be honest a half dozen runs of various benchmarks aren't going to show you the expected performance degradation curve. You won't actually see that curve until the drive starts filling up with both reads and writes and it has to get more selective about what stays in the read cache. Presumably it's a high write bias (percentage of the disk reserved for 'write' cache) based on the performance numbers, so if you mostly want one or two apps that you use constantly to load faster? Probably fine. If you're doing a lot of mixed reads and not many writes, wait for someone from the enterprise space to evaluate how it handles the biasing in depth. If you're write-biased, the potential is there, but the question is still long-term handling along with the software.

    Nutshell: it's a 1.0. If you like your data safe, nope.avi. True of any 1.0 but especially Windows software, doubly so stuff that messes with storage. If you don't care if you have to restore from backups and have nothing on the disk you can't afford to lose? Worth a look.

    (WTB time to review these things as they hit my desk. Also sleep.)
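    Since "landing zone" and "hot blocks" keep coming up, here's a toy model of the general shape of this kind of caching layer. It is emphatically not Dataplex's code - just a sketch under the assumptions above: hot-block promotion for reads, every write staged on the SSD first, background flush to the mechanical disk.

    ```python
    # Toy model of a consumer SSD caching layer - not Dataplex, just the shape.
    from collections import Counter

    class ToyCacheLayer:
        def __init__(self, hot_threshold=3):
            self.hdd = {}            # block -> data; the final resting point
            self.ssd = {}            # block -> data; cache / write landing zone
            self.reads = Counter()   # read frequency per block
            self.dirty = set()       # written to SSD, not yet flushed to HDD
            self.hot_threshold = hot_threshold

        def read(self, block):
            self.reads[block] += 1
            if block in self.ssd:                      # fast path: serve from SSD
                return self.ssd[block]
            data = self.hdd.get(block)                 # slow path: spinning disk
            if self.reads[block] >= self.hot_threshold:
                self.ssd[block] = data                 # promote: copy, don't move
            return data

        def write(self, block, data):
            self.ssd[block] = data                     # every write lands on SSD
            self.dirty.add(block)                      # ...at risk until flushed

        def flush(self):
            for block in list(self.dirty):             # background destage to HDD
                self.hdd[block] = self.ssd[block]
                self.dirty.discard(block)              # safe once committed to HDD
    ```

    Anything still sitting in `dirty` when the software bugs out or the box BSODs is exactly the data I'm talking about.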
  • Garg Purveyor of Lincoln Nightmares Icrontian
    edited September 2011
    Would a journaling file system adapted for this application address problems with software bugging out?
  • Ryder Kalamazoo, Mi Icrontian
    edited September 2011
    Wow.. people haven't even used the software yet and you make it sound like the apocalypse.
  • Thrax 🐌 Austin, TX Icrontian
    edited September 2011
    I like the part where he guaranteed inevitable data loss without knowing anything about how the software functions.
  • BuddyJ Dept. of Propaganda OKC Icrontian
    edited September 2011
    Typical...

    While we're on the topic of hybrid storage, would there be any benefit to this: an SSD with OCZ's new fancy Dataplex stuff and a secondary SATA port/controller on the device where users could attach their own mechanical HDD of suitable size. The SSD would then use the powers of data magic to control what gets to live on the SSD and what it writes to the HDD.

    tl;dr
    SSD magic box plugs into mobo
    HDD plugs into magic SSD box

    Victory ensues?
  • Ryder Kalamazoo, Mi Icrontian
    edited September 2011
    BuddyJ wrote:
    tl;dr
    SSD magic box plugs into mobo
    HDD plugs into magic SSD box

    Victory ensues?
    So like the Revo Hybrid drive, only you get to select which spinner. I suggested the Revo Hybrid be sold with no spinner on it, but that would be harder to warranty then. We will see.
  • RootWyrm Icrontian
    edited September 2011
    Gargoyle wrote:
    Would a journaling file system adapted for this application address problems with software bugging out?

    Journalling only goes so far. I've had JFS2 blow up on me, and I've had GPFS blow up on me. Generally speaking, if you BSOD, all bets are off. Journalling can narrow the loss window significantly (or widen it, depending on how it breaks), but I'm presuming they're doing it the same way as Diskeeper, which means straight up intercepting NTFS writes.
    As I mentioned, the read caching is more or less going to be 100% safe. The writes are going to be the risk area. So say there's a 70/30 read/write bias on the 64GB model, the flush to spinning disk is subject to fragmentation, and you're doing constant writes at 400MB/s. Here's what you get (the same math is written out at the bottom of this post).
    Write space is 19.2GB total; divided by 400MB/s, that's 49.15 seconds before it's full. Mechanical disk intake rate is an optimistic 45MB/s, meaning 437 seconds to write that 19.2GB. Say you're doing a total of 12GB of writes - that's 30.72 seconds to the SSD. Not bad. But it's 273 seconds to flush to disk, and that's your "at risk" time; the amount of data at risk drops by 45MB every second. (Again, for the read caching your data is absolutely safe - unless the mechanical disk blows up.)

    Now, obviously 98% of people are not going to write 12GB chunks. They just aren't - that's a hefty chunk of data. Certainly, I've done it and continue to do it with VMware Workstation VMs, but realistically you aren't going to be doing that on a regular basis. Unfortunately, smaller mixed writes tend to drop the disk write rate, since they increase random seeking. If you're doing 12GB of photos, your disk write rate is likely going to be nearer 20MB/s effective, which is 614.4 seconds of at-risk time. To be clear, it's not like this risk doesn't already exist - it's called the cache on the mechanical disk. It's just that the amount of data at risk goes from a potential 16-32MB to 19.2-38.4GB.

    It also raises the question of how to deal with the mechanical disk, which will, of course, need defragmenting. Presumably it's a traditional intercept cache, which means there'd likely be an attempt to cache the blocks during both the read and rewrite phases when using a third-party defragmenter (e.g. Diskeeper, O&O, etc.).
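    Same back-of-the-envelope math as above, written out so you can plug in your own numbers. The bias and the SSD/HDD rates are the assumptions I stated, not measured figures:

    ```python
    # At-risk window arithmetic for a write-staging SSD cache.
    # All figures are the assumptions from the post above, not benchmarks.
    MIB_PER_GB = 1024               # the GB figures above are binary (GiB)

    cache_gb      = 64              # Synapse 64GB model
    write_bias    = 0.30            # assumed 70/30 read/write split
    ssd_write_mbs = 400.0           # assumed sustained write rate to the SSD
    hdd_seq_mbs   = 45.0            # optimistic sequential intake on the HDD
    hdd_rand_mbs  = 20.0            # effective rate for small/random writes
    burst_gb      = 12              # size of the write burst in the example

    write_space_mb = cache_gb * MIB_PER_GB * write_bias   # 19,660.8 MB (19.2GB)
    burst_mb       = burst_gb * MIB_PER_GB                # 12,288 MB

    print(write_space_mb / ssd_write_mbs)   # ~49.2 s at 400MB/s before the write space is full
    print(burst_mb / ssd_write_mbs)         # ~30.7 s for the 12GB burst to land on the SSD
    print(burst_mb / hdd_seq_mbs)           # ~273 s until it's all flushed = at-risk time
    print(burst_mb / hdd_rand_mbs)          # ~614 s at risk if random writes drop the HDD to 20MB/s
    ```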