Thread: SSD RAID 0!

  1. #41
    Thanks for the replies everyone! It seems there's quite a bit of dispute about whether it's worth it or not, and unfortunately I'm not smart enough to decipher who's right, since most of you sound like scholars talking about this stuff. I submitted some support tickets to OCZ about my question and got some replies, which I'll post here. (I already posted in another thread, which I believe was locked, rightfully so; my apologies for the multiple threads.) This is copied and pasted from my OCZ tickets:

    OCZ first reply: "While you can put different drives into a RAID 0 together, the performance of the RAID volume will be limited by the speed of the slower drive, so the Agility drive will cause the Vertex drive to run slower, at Agility speeds. TRIM does not work in a RAID volume; the RAID controller will block the TRIM command from getting to the drives. Even without TRIM, the drives will still maintain themselves automatically. Performance in a RAID 0 is the combined sum of the speed and capacity of all drives in the RAID volume, so it would be approximately double speed if you put two drives into a RAID 0 together. As the RAID controller isn't 100% efficient you don't get quite double, but it should be close."

    My response: "Thanks for the quick reply! Now, I was under the impression that without TRIM, the SSD's performance would just deteriorate over time?"

    OCZ response: "TRIM is just one part of maintenance, and it actually performs a relatively minor role in drive maintenance. The drive automatically cleans out unused space when it is powered on but idle."
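
    To put rough numbers on OCZ's "limited by the slower drive" point, here's a quick back-of-the-envelope sketch in Python (the drive speeds are made-up placeholders, not benchmarks):

    Code:
    # Rough model of the OCZ explanation: a RAID 0 stripe runs each member at
    # the pace of the slowest drive, times the number of drives, times a
    # controller-efficiency fudge factor. All speeds here are assumptions.
    def raid0_throughput(drive_speeds_mbs, efficiency=0.9):
        return min(drive_speeds_mbs) * len(drive_speeds_mbs) * efficiency

    vertex3 = 500   # hypothetical sequential MB/s
    agility3 = 450  # hypothetical sequential MB/s

    # ~810 MB/s: close to double the slower drive, never double the faster one.
    print(raid0_throughput([vertex3, agility3]))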

    Evildeffy has the same idea as me. I'm down to 6 GB of space left on my Vertex 3, so half of my games are on my regular 250 GB HDD, and I just want all the speed and performance of SSDs. What happened is I bought the Agility 3 as a deal from Newegg (on sale, with a rebate) and was going to put it into a laptop, but decided I was gonna wait until late this year to get a cheaper Ivy Bridge one... so in the meantime I have this Agility 3 SSD which can't be returned, and I want to make use of it.

    EDIT: * I just want to be clear that data loss is of no concern for me, since 80% of the games I play save data online (I mostly play multiplayer games, or versions of a game, not much single player stuff). A drive failing would not have dire consequences for me, so that con can be ruled out in a pro/con list if you will.
    Last edited by Uncle Julian; 2012-04-20 at 07:57 PM.

  2. #42
    Deleted
    All this talk and I don't think I've seen actual performance gains in games mentioned. AnandTech and Tom's Hardware have done articles on HDDs in RAID 0 performance, and sure, it looks good in synthetic tests, but I don't remember a single game gaining more than a 10% load time decrease from using RAID 0. With the extra risk you're taking by using it (if one drive or your mobo/RAID card/RAID controller fails, then you've probably lost any data that hasn't been backed up) and the cost (one bigger drive is cheaper than two smaller ones), I can't see it being worth it in the slightest. Even if you're not worried about losing data, there's still the setup time to get everything running again.

  3. #43
    Quote Originally Posted by emanresu View Post
    All this talk and I don't think I've seen actual performance gains in games mentioned. AnandTech and Tom's Hardware have done articles on HDDs in RAID 0 performance, and sure, it looks good in synthetic tests, but I don't remember a single game gaining more than a 10% load time decrease from using RAID 0. With the extra risk you're taking by using it (if one drive or your mobo/RAID card/RAID controller fails, then you've probably lost any data that hasn't been backed up) and the cost (one bigger drive is cheaper than two smaller ones), I can't see it being worth it in the slightest. Even if you're not worried about losing data, there's still the setup time to get everything running again.
    When I had my RAID 0 set up, I ran WoW exclusively off it (it's almost all incompressible data, so it's a great "worst case scenario" test for the SF based SSDs I have). I gained an average of 1.5 seconds shaved off the load screens compared to my single drive setup. That's it. Part of it is the incompressible data being shoveled through an SF controller which is trying to compress it but can't (thus wastes three cycles on each data packet trying, failing, and sending it on), but the bigger part of it is that WoW's files are read almost entirely in 4k chunks. The MPQs may be one big file, but the game doesn't read the entire thing, only the parts it needs for your area, so you get a ton of random 4k reads, which even on a RAID 0 makes things seem not as speedy.
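
    To visualize why those random 4k reads don't scale across the stripe, here's a toy sketch (the 128 KB stripe size is an assumption; the point is that any single 4 KB request is served by just one member drive, so you need deep queues to keep both drives busy):

    Code:
    # Toy model: map random 4 KB-aligned read offsets onto a 2-drive RAID 0
    # with an assumed 128 KB stripe. Each request lands on exactly one drive.
    import random

    STRIPE_BYTES = 128 * 1024
    DRIVES = 2

    def drive_for(offset):
        # Which member drive holds the stripe chunk containing this offset?
        return (offset // STRIPE_BYTES) % DRIVES

    for _ in range(8):
        offset = random.randrange(0, 10**9 // 4096) * 4096  # random 4 KB-aligned read
        print(f"offset {offset:>10}: served by drive {drive_for(offset)}")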

    It's definitely speedier than anything outside of a VelociRaptor, but it isn't going to be the earth shattering double performance one really wants out of a RAID 0 setup, due to how the games are installed and read from. That's why my second Vertex 3 stores all my other games and the first is for WoW only. Incidentally, I've actually hit the "provisioning wall" on my Vertex 3 four times now. The way WoW patches, it writes to an insane number of cells needlessly as it does its "applying non-critical data" step or rebuilds its entire cache (why the hell aren't the files already in their proper format when downloaded, so we can avoid this crap?). The larger patches have caused all cells to get written to on a 120 GB drive because of all the "shuffling" WoW's install and patch process does. When that happens, it's time to copy the WoW folder over to my HDD, wipe the SSD with the Zero All Data option, and then copy WoW back to regain my SSD's write performance.

    They've certainly streamlined how you install and patch, but I sure wish they'd streamline what is installed as well.

    Just things to consider when using SSDs.

  4. #44
    How would that same test fare with Battlefield 3, League of Legends, the Orange Box, Skyrim, and Minecraft loads, Squishy? That's mostly what I play.

  5. #45
    Deleted
    Quote Originally Posted by Squishy Tia View Post
    Part of it is the incompressible data being shoveled through an SF controller which is trying to compress it but can't (thus wastes three cycles on each data packet trying, failing, and sending it on), but the bigger part of it is that WoW's files are read almost entirely in 4k chunks.
    The controller doesn't compress the data when reading though, surely?

  6. #46
    Quote Originally Posted by emanresu View Post
    The controller doesn't compress the data when reading though, surely?
    The controller attempts to decompress the data on a read cycle, but if the data was incompressible it cannot be decompressed either, so it'll waste those three cycles in both directions of the file transfer process. This results in incompressible data having a 15-25% lower maximum throughput than compressible data. For SSDs whose controller uses a DRAM cache, especially a ginormous one like the Everest 2 controllers have, the maximum throughput becomes much higher for either data type, and they both tend to run evenly (especially in writes, as the data coming in passes through the cache before hitting the SSD's NAND array).

    Speaking of which, somebody mentioned that the SSD controllers are just Marvell chips (or equivalents) with custom firmware. This is only partially correct. The core technology inside is a major brand (usually Marvell) RAID controller, but the compression technology is most definitely not from Marvell. Firmware can only control what the device chip has access to for functionality; it does not actually control how it does it. That's done on-die.

    The SF controllers are effectively enhanced RAID controllers, as each NAND chip on the SSD is a RAID device, acting in a self-contained RAID 0. This is very important to know, because when using SSDs in a RAID environment, you're using what is called nested RAID. That is, the SSD has its own RAID 0, and you're using two SSDs in tandem to create another level of RAID 0. Because of this, while you still get a speed boost, you do not get the same percentage speed boost as you would comparing a single spindle motor drive against a RAID 0 spindle motor drive pair. Nested RAIDs have diminishing returns the deeper the nesting goes, and each level past the initial RAID 0 (the mobo or RAID controller card's array) compounds the diminishing returns. This is one of the reasons that DRAM based SSDs have tended to fare better than SF based SSDs (from the same generation, of course): they have enough of a buffer to even out any data flow irregularities caused by having multiple nested RAID controllers.

    BTW, there are a few SSDs out there that use not one but two controllers onboard, resulting in a pair of RAID 0s (the NAND chip arrays in each respective SSD) inside a RAID 0 (the two controllers in each SSD), inside yet another RAID 0 (the RAID card/motherboard level RAID array). These have fantastic single drive throughput, but lose some of their edge in RAIDs vs. single controller SSDs because of extreme RAID nesting.

    Ain't the RAID scene fun? ^_^

  7. #47
    Evildeffy
    Quote Originally Posted by Squishy Tia View Post
    The controller attempts to decompress the data on a read cycle, but if the data was incompressible it cannot be decompressed either, so it'll waste those three cycles in both directions of the file transfer process. This results in incompressible data having a 15-25% lower maximum throughput than compressible data. For SSDs whose controller uses a DRAM cache, especially a ginormous one like the Everest 2 controllers have, the maximum throughput becomes much higher for either data type, and they both tend to run evenly (especially in writes, as the data coming in passes through the cache before hitting the SSD's NAND array).
    No, it does not waste cycles attempting to decompress it; it just reads it. That's the very reason why SandForce scores so high on reads. This is rather bollocks you're shooting out, as the controller only attempts compression as it writes, and it stores a tag with the data indicating whether it is compressed or not.
    Reading incurs no penalty on SandForce controllers; how you came up with that is beyond me.

    If you were to equip SandForce with a DRAM cache it'd be useless; the controller isn't designed for that architecture. It was designed to read the same way any other SSD controller does. Writing invokes the compression, and only then is that data tagged as compressed, to be decompressed when it's read back. Much like in RAID arrays, it requires a lot of processing power to execute some commands internally (most prominently in RAID 5 systems, for example), hence the cache, for a big reason. Have you never wondered why read speeds are so similar for any SSD regardless of SSD controller?

    Speaking of which, somebody mentioned that the SSD controllers are just Marvell chips (or equivalents) with custom firmware. This is only partially correct. The core technology inside is a major brand (usually Marvell) RAID controller, but the compression technology is most definitely not from Marvell. Firmware can only control what the device chip has access to for functionality; it does not actually control how it does it. That's done on-die.
    No, it is entirely correct; I was the one who said it. Marvell built the controller to OCZ's specifications and OCZ created the firmware completely; the firmware decides how and what the controller can do, which is the entire point of firmware/OS. Compression does not count against this either, considering the Vertex 4 in question is what I just stated and does not use compression. Hardware simply sends signals the way the software tells it to: if OCZ specified that channels A, B and C, which were used in other similar drives for reading, have to be used for writing, then the controller does just that, as hardware is useless without software. This is computing 101.
    All SSD manufacturers will tell you one thing, Intel especially, FIRMWARE IS EVERYTHING.

    The SF controllers are effectively enhanced RAID controllers, as each NAND chip on the SSD is a RAID device, acting in a self-contained RAID 0. This is very important to know, because when using SSDs in a RAID environment, you're using what is called nested RAID. That is, the SSD has its own RAID 0, and you're using two SSDs in tandem to create another level of RAID 0. Because of this, while you still get a speed boost, you do not get the same percentage speed boost as you would comparing a single spindle motor drive against a RAID 0 spindle motor drive pair. Nested RAIDs have diminishing returns the deeper the nesting goes, and each level past the initial RAID 0 (the mobo or RAID controller card's array) compounds the diminishing returns. This is one of the reasons that DRAM based SSDs have tended to fare better than SF based SSDs (from the same generation, of course): they have enough of a buffer to even out any data flow irregularities caused by having multiple nested RAID controllers.
    Not entirely correct, but not incorrect either. The reason you see increased diminishing returns in scaling (which is what you are referring to) is simply the controller you are using, and whether it has the complexity and power to push more out of the array.
    If I were to get a corporate Adaptec RAID controller with 24 hookups, you can bet your sweet ass that I would see linear scaling until that RAID card reaches its maximum potential, which of course is reached earlier with SSDs than with HDDs. This is also the reason why PCIe SSDs exist; they are completely modeled after this very principle. There is absolutely no scaling difference between DRAM based controllers (Marvell, Samsung, JMicron, Indilinx etc.) and SandForce based ones; it is all dependent on the controller.

    BTW, there are a few SSDs out there that use not one but two controllers onboard, resulting in a pair of RAID 0s (the NAND chip arrays in each respective SSD) inside a RAID 0 (the two controllers in each SSD), inside yet another RAID 0 (the RAID card/motherboard level RAID array). These have fantastic single drive throughput, but lose some of their edge in RAIDs vs. single controller SSDs because of extreme RAID nesting.

    Ain't the RAID scene fun? ^_^
    Yes, these are called PCIe SSDs; they have multiple controllers and work somewhere along those lines. However, if it were completely RAID 0 then TRIM would not work at all because of the locations of each cell etc. How is it, then, that OCZ has this very design AND can pass TRIM when it's doing just the above?

    Personally I think you are not fully aware of the scene yourself, or are at least overestimating yourself.

    SSDs are complicated pieces of machinery, but some of the information you are giving simply is not correct.
    For example, if what you stated about the SSD controllers in the case of the Vertex 4 were completely correct, then how is it that the OCZ Octane, whose Marvell controller is identical to the one in the Crucial M4, behaves entirely differently from it?
    How is it that the Corsair Performance Pro, which is armed with the same controller as the one in the Crucial M4, also behaves completely differently in every way from that drive?

    You are severely underestimating the importance of Firmware and what it does.

  8. #48
    Just why... do you really need that?

  9. #49
    You might lose TRIM unless you are using an Intel controller; I think either the latest or the beta (SATA) drivers support TRIM with RAID.

  10. #50
    Evildeffy
    Quote Originally Posted by v12dock View Post
    You might lose TRIM unless you are using an Intel controller; I think either the latest or the beta (SATA) drivers support TRIM with RAID.
    You will lose TRIM for the moment regardless; the controllers of the SSDs are irrelevant, as it's related to the chipset of the mobo. Soon the RST drivers will support TRIM in RAID setups, as long as you have an Intel mobo.

    However... Garbage Collection will keep it maintained.

  11. #51
    Fun test with two new 256 GB Vertex 4s in a RAID 0 today (yeah, I went and got them) vs. my Vertex 3s:

    Load time in WoW from character select screen to fully loaded in-game world

    Vertex 3 120 GB x2 RAID 0: 12-20 seconds (depending on character, as I have nearly 110 individual addon components that have to load, though not all are active on all chars; the addons are listed at the end of the post to show why so much loads)

    Vertex 4 256 GB x2 RAID 0: 5-12 seconds

    3.76 GB Disk Image (InstallESD.dmg - the Mac OS X Lion self-contained installer, fully compressed already)

    Vertex 3 RAID 0: 28.5 seconds (≈132 MB/s)
    Vertex 4 RAID 0: 6.5 seconds (!!) (≈578 MB/s)

    The disk image is a test of incompressible data (where the SF controllers fall behind). File was transferred as a Write operation from a 4 GB RAM disk to ensure the SSD was the bottleneck in the transfer process (i.e. the weakest link).

    4 GB All Zero file (4000 MB exactly), fully compressible

    RAM Disk to Vertex 3 RAID 0: 9.8 seconds (≈408 MB/s)

    RAM Disk to Vertex 4 RAID 0: 6.5 seconds (≈615 MB/s)

    Vertex 3 RAID 0 to RAM Disk: 10.25 seconds (≈390 MB/s)

    Vertex 4 RAID 0 to RAM Disk: 12.1 seconds (!!) (≈331 MB/s)
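
    For anyone who wants to check my math, the MB/s figures are just megabytes transferred divided by seconds taken; a quick sketch using the numbers above:

    Code:
    # Reproduce the throughput figures above (treating 3.76 GB as 3760 MB).
    tests = {
        "InstallESD.dmg -> Vertex 3 RAID 0": (3760, 28.5),
        "InstallESD.dmg -> Vertex 4 RAID 0": (3760, 6.5),
        "RAM disk -> Vertex 3 RAID 0":       (4000, 9.8),
        "RAM disk -> Vertex 4 RAID 0":       (4000, 6.5),
        "Vertex 3 RAID 0 -> RAM disk":       (4000, 10.25),
        "Vertex 4 RAID 0 -> RAM disk":       (4000, 12.1),
    }
    for name, (mb, secs) in tests.items():
        print(f"{name}: {mb / secs:.0f} MB/s")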

    So why did the Vertex 3 beat the Vertex 4 on the read test? It would seem that AnandTech's review of the firmware was spot on: at low queue depths (in this case, 1), the sequential read speeds are abysmal. My results pretty much mirror the ones from the AnandTech review. My Vertex 4 trounces the Vertex 3 everywhere but sequential reads (i.e. streams/copying). It also explains why my WoW load times were faster even though the fully compressible (best case scenario) file was read excruciatingly slowly: WoW's reads are the 4k random reads, which are excellent and are likely at a queue depth of >3, while the lowly queue depth of just 1 for the simple file read test brought the SSD to its knees. Thankfully the firmware (saviour of SSDs the world over) can be altered to open up NCQ streaming at low queue depths, so I'm not too worried right now. And my random read/write performance shot through the roof. Woot.

    One thing to note: for anybody running a Mac Pro and still using 10.6.8 (most likely because they hate the three unkillable features in Lion that make people go bananas), these drives' high IOPS combined with insane 4k random performance virtually eliminate the "stutter" problem that users on 10.6.8 experience. I just ran WoW off my Vertex 3 RAID and the Vertex 4 RAID, and the Vertex 4 setup virtually eliminated the initial stutter upon zone load (as in, it's 90% better!). I used Zangarmarsh as the test zone, as it is the worst offender and caused upwards of 15 seconds of stutter before it would fully load on my Vertex 3s. Less than 1.5 seconds max stutter at zone load from a fresh client start, and then it's butter smooth (the computer's max FPS capability factored in, of course).

    So if you're interested in RAID 0 and do lots of A/V streams or file copies, the Vertex 3s will win most of the time. If you mainly play games, which are almost all incompressible data and 4k random reads/writes, the new Vertex 4 (and competitor equivalents that use the same Marvell controller) will be a better bet.

    Sidenote: The Vertex 4 256 GB cost less than the Vertex 3 250 GB. Go figure.

    BTW, the low read performance can be overcome by using a RAID card with a configurable stripe size, like the higher end Areca cards and some CalDigit cards. Sadly my NewerTech/HPT 2721 doesn't offer that configurability.

    Anybody want to guess how long it'll take before a firmware is out or some method available to enable that "unused" 512 MB DRAM in the initial shipments of Vertex 4s? ^_^

    Edit: I forgot to list the addons to show why my load times are long. Here they are:

    Code:
    _NPCScan
    _NPCScan.Overlay
    !BugGrabber
    !Swatter
    Accountant
    ACP
    Archy
    Armory
    ArmoryGuildBank
    ArmoryQuickLink
    Atlas
    Atlas_Battlegrounds
    Atlas_BurningCrusade
    Atlas_ClassicWoW
    Atlas_DungeonLocs
    Atlas_OutdoorRaids
    Atlas_Transportation
    Atlas_WrathoftheLichKing
    AtlasLoot
    AtlasLoot_BurningCrusade
    AtlasLoot_Cataclysm
    AtlasLoot_ClassicWoW
    AtlasLoot_Crafting
    AtlasLoot_Loader
    AtlasLoot_WorldEvents
    AtlasLoot_WrathoftheLichKing
    BalancePowerTracker
    BalancePowerTracker_Log
    BalancePowerTracker_Options
    BalancePowerTracker_Pipe
    BamMod
    BetterItemCount
    Blizzard_AchievementUI
    Blizzard_ArchaeologyUI
    Blizzard_ArenaUI
    Blizzard_AuctionUI
    Blizzard_BarbershopUI
    Blizzard_BattlefieldMinimap
    Blizzard_BindingUI
    Blizzard_Calendar
    Blizzard_ClientSavedVariables
    Blizzard_CombatLog
    Blizzard_CombatText
    Blizzard_CompactRaidFrames
    Blizzard_CUFProfiles
    Blizzard_DebugTools
    Blizzard_EncounterJournal
    Blizzard_GlyphUI
    Blizzard_GMChatUI
    Blizzard_GMSurveyUI
    Blizzard_GuildBankUI
    Blizzard_GuildControlUI
    Blizzard_GuildUI
    Blizzard_InspectUI
    Blizzard_ItemAlterationUI
    Blizzard_ItemSocketingUI
    Blizzard_LookingForGuildUI
    Blizzard_MacroUI
    Blizzard_MovePad
    Blizzard_RaidUI
    Blizzard_ReforgingUI
    Blizzard_TalentUI
    Blizzard_TimeManager
    Blizzard_TokenUI
    Blizzard_TradeSkillUI
    Blizzard_TrainerUI
    Blizzard_VoidStorageUI
    BuffTimers
    BugSack
    ColoredTooltips
    CombatStats
    ComboPointsRedux
    ComboPointsRedux_Options
    CombustionHelper
    Comergy
    Comergy_Options
    CooldownCount
    CT_PartyBuffs
    DBM-AQ20
    DBM-AQ40
    DBM-BaradinHold
    DBM-BastionTwilight
    DBM-BlackTemple
    DBM-BlackwingDescent
    DBM-BWL
    DBM-ChamberOfAspects
    DBM-Coliseum
    DBM-Core
    DBM-DragonSoul
    DBM-EyeOfEternity
    DBM-Firelands
    DBM-GUI
    DBM-Hyjal
    DBM-Icecrown
    DBM-Karazhan
    DBM-MC
    DBM-Naxx
    DBM-Onyxia
    DBM-Outlands
    DBM-Party-BC
    DBM-Party-Cataclysm
    DBM-Party-WotLK
    DBM-PvP
    DBM-Serpentshrine
    DBM-Sunwell
    DBM-TheEye
    DBM-ThroneFourWinds
    DBM-Ulduar
    DBM-VoA
    DBM-WorldEvents
    DocsDebugRunes
    EasyMail
    EquipCompare
    FreeBagSlots
    Gatherer
    Gatherer_HUD
    GathererDB_Wowhead
    HunterFocusBar
    ingelasrapture
    LightMyMAcro
    LootHog
    MageManaBar
    MoveAnything
    NeedToKnow
    Omen
    OPie
    OPie_WorldMarkers
    Outfitter
    PassLoot
    PlayerXPBar
    Possessions
    Prat-3.0
    Prat-3.0_HighCPUUsageModules
    Prat-3.0_Libraries
    Quartz
    RangeColors
    RatingBuster
    Recount
    ReforgeLite
    SatchelScanner
    sct
    sct_options
    sctd
    sctd_options
    SlideBar
    Talented
    Talented_Inspect
    TrinketMenu
    Last edited by Squishy Tia; 2012-04-24 at 04:36 AM.

  12. #52
    Thanks for that post Squishy, very interesting. I would love you long time if you could test that RAID setup against just a single Vertex 3 120 GB. I understand if you don't want to; that's quite a bit of hassle.

  13. #53
    Quote Originally Posted by Uncle Julian View Post
    Thanks for that post Squishy, very interesting. I would love you long time if you could test that RAID setup against just a single Vertex 3 120 GB. I understand if you don't want to; that's quite a bit of hassle.
    Figure about a 20-40% variance depending on data type (incompressible vs. compressible) when going from a Vertex 3 RAID 0 to a standalone Vertex 3. The Vertex 3 in RAID 0 doesn't really benefit WoW much, due to it being almost completely incompressible data, which is where the SF controllers fall behind in performance. It'll shave maybe a second or two off your load times, but it won't be as tangible as going to the Vertex 4 (or its equivalents that use said Marvell controller).

    Evildeffy: I'd like to thank you for sparring with me. It got me to delve further into who actually makes the dies for the "controllers" in SSDs. I guess I didn't really know about the close relation of SandForce/OCZ to Marvell. Why the folks posting in the AnandTech comments on those reviews are so upset about the rebadging kind of confuses me, since technically the Everest 2 project was fully in-house (OCZ did the firmware, but as you mentioned, the hardware is off-the-shelf, or slightly modified off-the-shelf, components from Marvell or some other RAID controller manufacturer, though I'm guessing mostly Marvell).

    Thanks again for getting me to look into it more closely. I chose the Vertex 4 despite the low sequential reads, since that issue is just a matter of enabling NCQ streaming at low queue depths, something easily fixed in firmware. That'll make these pretty much trounce everything else out there until the competition releases their versions.

    And knock on wood, I've never had a BSOD or other problem with any OCZ branded SSD from the Summit on up. Who knows, maybe I can chalk that up to the problems only being issues on Windows systems. Either way, I've been pleasantly surprised by the reliability. Not that I don't know of OCZ's craptacular customer service, though, since I did get the shaft when inquiring about an OS X firmware updater tool.

  14. #54
    Simca
    Quote Originally Posted by Evildeffy View Post
    Point 1:
    No, you cannot. Steam is one such example, Origin as well, and Direct2Drive (not too sure about this last one); they all work with the install directory. Some people play multiple games, and reinstalling them every time is useless; that, and you want the programmes you run on it too. It will eat a 128 GB SSD rather fast, hence doubling it with RAID 0 is good.
    You could just use NTFS junction links.



    I think everyone who uses SSDs for gaming should know how to make and manipulate junction links because it will save you loads of time when you get bored of a game but don't want to uninstall it.
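
    For anyone who hasn't used them, here's a minimal sketch of the idea in Python (the paths are hypothetical examples; mklink /J is the cmd built-in that creates the junction, and junctions don't require admin rights):

    Code:
    # Sketch: move a game folder to the SSD, then leave an NTFS junction at the
    # old HDD path so Steam (or any launcher) still finds it where it expects.
    # Windows only; paths below are hypothetical.
    import shutil, subprocess

    hdd_path = r"D:\Steam\steamapps\common\SomeGame"  # hypothetical original location
    ssd_path = r"C:\Games\SomeGame"                   # hypothetical SSD destination

    shutil.move(hdd_path, ssd_path)                   # relocate the game folder
    subprocess.run(f'mklink /J "{hdd_path}" "{ssd_path}"', shell=True, check=True)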

  15. #55
    I'm assuming a junction link is just another name (Microsoft's) for an alias, letting you have the game on the SSD but putting a link to the SSD directory in the physical location on the HDD where Steam expects it to be?

  16. #56
    kidsafe


    You actually made me copy my WTF\Account directory to a friend's computer so I could see just how bad SandForce controllers really are. Here are two 240 GB Force GTs using Intel Matrix Storage RAID 0.

    I'll let you time the sequence with your own stopwatch... I mean, the performance here is truly terrible. It's all the compression's fault. Of course, I will lay out some differences:

    • These are almost new drives, I think he bought them less than a month ago.
    • These are 240GB drives, so memory density is at play.
    • It's Windows 7 vs Mac OS X.
    • Add-on details are different.

    EDIT: Why are we still talking about SandForce controllers in a generic RAID 0 thread? I would have mentioned before that SandForce drives don't incur a read speed penalty with incompressible data, but Evildeffy got it for me. I propose we change the thread title to "I hate SandForce."

    ---------- Post added 2012-04-24 at 05:43 AM ----------

    Quote Originally Posted by Squishy Tia View Post
    I'm assuming a junction link is just another name (Microsoft's) for an alias, letting you have the game on the SSD but putting a link to the SSD directory in the physical location on the HDD where Steam expects it to be?
    A Mac OS X alias is closer to a Windows shortcut. An NTFS junction point is akin to a symbolic link in OS X.
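
    For the OS X side, the equivalent trick is a symlink; a one-line sketch with hypothetical paths:

    Code:
    # Sketch of the OS X equivalent: a symbolic link from the expected location
    # to the real install on the SSD. Paths are hypothetical examples.
    import os
    os.symlink("/Volumes/SSD/Games/World of Warcraft",  # real location (SSD)
               "/Applications/World of Warcraft")       # where the game is expected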
    Last edited by kidsafe; 2012-04-24 at 08:26 AM.

  17. #57
    Evildeffy
    Quote Originally Posted by Simca View Post
    You could just use NTFS junction links.



    I think everyone who uses SSDs for gaming should know how to make and manipulate junction links because it will save you loads of time when you get bored of a game but don't want to uninstall it.
    RAID0 = Less effort and faster.

    Quote Originally Posted by kidsafe View Post


    You actually made me copy my WTF\Account directory to a friend's computer so I could see just how bad SandForce controllers really are. Here are two 240 GB Force GTs using Intel Matrix Storage RAID 0.

    I'll let you time the sequence with your own stopwatch... I mean, the performance here is truly terrible. It's all the compression's fault. Of course, I will lay out some differences:

    • These are almost new drives, I think he bought them less than a month ago.
    • These are 240GB drives, so memory density is at play.
    • It's Windows 7 vs Mac OS X.
    • Add-on details are different.

    EDIT: Why are we still talking about SandForce controllers in a generic RAID 0 thread? I didn't even want to mention the fact that SandForce drives do not rely on compression for reads, but Evildeffy got it for me. I propose we change the thread title to "I hate SandForce."

    ---------- Post added 2012-04-24 at 05:43 AM ----------


    A Mac OS X alias is closer to a Windows shortcut. An NTFS junction point is akin to a symbolic link in OS X.
    Oef, Kid, damn dude, be gentle. I think you may have just made his head 'splode.

  18. #58
    Elim Garak
    Quote Originally Posted by Evildeffy View Post
    RAID0 = Less effort and faster.
    And also less efficient, because sooner or later (sooner) you will run out of SSD space and will have to upgrade to keep the setup running. With junction points you only use one SSD, for the games you are playing NOW, and the rest of your installs are stored on a huge HDD (maybe even a RAID 0 of HDDs for speedy transfers of installs to/from the SSD).

    For instance, all your Steam games are on the HDD. You decide to play a new game from Steam: you copy that game's folder from the Steam install to the SSD and create a JP between them (sketched below). Now your new game is on the SSD and the rest of your Steam games are on the HDD, and you can continue to buy new games and download them right away until you reach the end of your HDD, which will happen much, much later compared to the SSD, and which can be easily expanded via RAID or non-RAID options.

    I do not understand the reasoning behind storing all the games on the SSD. It's an inefficient use of SSD space.
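
    A rough sketch of that rotation in Python (hypothetical paths again; note that deleting a junction with os.rmdir removes only the link, never the game files it points to):

    Code:
    # Rotate a game off the SSD when you're done with it, freeing the space for
    # the next one. Windows only; paths below are hypothetical.
    import os, shutil

    junction = r"D:\Steam\steamapps\common\OldGame"  # junction left on the HDD
    ssd_copy = r"C:\Games\OldGame"                   # actual files on the SSD

    os.rmdir(junction)               # removes just the junction, not the files
    shutil.move(ssd_copy, junction)  # park the game back at its HDD location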

  19. #59
    kidsafe
    We use drive space in very different ways, then. I have all my user directories on my HDD, mounted as E:\. I have my OS installed on my 2x 256 GB Samsung 830s in RAID 0 as C:\. Even with multiple games, I am not close to filling my RAID 0 volume.

    It's quite easy for me to upgrade as well. All I have to do is create an image of the RAID 0 volume, including the EFI and Windows system partitions, with the Backup & Restore control panel. I can save that to an external backup HDD and then restore it in minutes.

  20. #60
    The Unstoppable Force Elim Garak's Avatar
    10+ Year Old Account
    Join Date
    Apr 2011
    Location
    DS9
    Posts
    20,297
    My game collection exceeds the 450 GB you have available on your SSD RAID 0 setup. (Steam, Origin, MMOs, standalone.)

    All these games are stored nicely on HDD.

    Filling an HDD is harder than filling an SSD - you can't argue that.
    Extending an HDD setup with no OS on it is easier - you can't argue that either.
    Upgrading is easy both ways, but it's less frequent and less costly with HDDs.

    You can use whatever tricks you know to live with your setup - that doesn't make it the most efficient, though.
