
  • Umer - Tuesday, June 25, 2019 - link

    I know it may not be a huge deal to many, but Kingston, as a brand, left a really sour taste in my mouth after the V300 fiasco, since I bought those SSDs in bulk for a new build back then.
  • Death666Angel - Tuesday, June 25, 2019 - link

    Let's put it this way: they have to be quite a bit cheaper than the nearest known competitor (Crucial, Corsair, ADATA, Samsung, Intel...) to be considered as a purchase by me.
  • mharris127 - Thursday, July 25, 2019 - link

    I don't expect Kingston to be any less expensive than ADATA, as Kingston serves the mid-price market and ADATA the low-price one. Samsung is a supposedly premium product with a premium price tag to match. I haven't used a Kingston SSD, but I have some of their other products and haven't had a problem with any of them that wasn't caused by me. As far as picking my next SSD: I have had one ADATA SSD fail. They replaced it once I filled out some paperwork; they sent an RMA and I sent the defective drive back to them. The second one, and a third one I bought a couple of months ago, are working fine so far. My Crucial SSDs work fine. I have a Team Group SSD that works wonderfully after a year of service. I think my money is on either Crucial or Team Group the next time I buy that product.
  • Notmyusualid - Tuesday, June 25, 2019 - link

    ...wasn't it OCZ that released the 'worst known' SSD?

    I had almost forgotten about those days.

    I believe the only customers that got any value out of them were those on PERC and other known RAID controllers, which were not writing < 128kB blocks - and I wasn't one of them. I RMA'd it, insta-sold the return, and bought an X25-M.

    What a 'mare that was.
  • Dragonstongue - Tuesday, June 25, 2019 - link

    The SandForce controller was ahead of its time - and not always in the most positive ways, either...

    I had an Agility 3 60GB that I used for just over 2 years in my system; my mom then used it for over 2.5 more years. It was either starting to have issues, though, or the way my mom used it caused it to "forget" things now and then.

    I fixed it with a Crucial MX100 or MX200 (I forget which, LOL) that still has over 90% life left either way. The Agility 3 was showing a "warning" even though it still reported over 75% life left (Christmas '18-'19). Definitely a massive speed-up from swapping to something more modern, as well as doing some cleaning.

    SSDs have come A LONG way in a short amount of time. Sadly, the memory/controller/flash producers are the problem: bad drivers, poor performance where there shouldn't be any, drives not working in every system when they should, etc.
  • thomasg - Tuesday, June 25, 2019 - link

    Interestingly, I still have one of the OCZs with the first pre-production SandForce, the Vertex Limited 100 GB, which has been running for many years at high throughput, through many, many terabytes of writes.
    Still works perfectly.
    I'm not sure I remember correctly, but I think the major issues started showing up for the production SandForce model that was used later on.
  • Chloiber - Tuesday, June 25, 2019 - link

    I still have a Super Talent UltraDrive MX 32GB - not working properly anymore, but I don't even remember how many firmware updates I put that one through :)
    They just had really bad, buggy firmware throughout.
  • leexgx - Wednesday, June 26, 2019 - link

    The main problem with SandForce was that the compression layer and the NAND layer were never managed correctly, which made TRIM ineffective; garbage collection had to run on new writes, resulting in high access times and half-speed writes after one full drive's worth of writes. Note that the drive did not have to be filled: if it was a 240GB SSD, all you had to do to slow it down permanently was write 240-300GB of data over time (more than the capacity, due to compression, to get an actual full drive write). The only way to reset it was a secure erase. (I'm unsure if that was ever fixed in the SandForce SF3000, used in Seagate's enterprise SSDs and the IronWolf NAS SSD.)

    The other issue, which was more or less fixed, was rare BSODs (two systems I managed did not like them), or the drive eating itself and becoming a 0MB drive (extremely rare, but it did happen). The 0MB bug was fixed, I think, if you owned an Intel drive, but the BSOD fix had limited success.
  • Gunbuster - Tuesday, June 25, 2019 - link

    Indeed. Not going to support a company with a track record of shady practices.
  • kpxgq - Tuesday, June 25, 2019 - link

    The V300 fiasco is nothing compared to the Crucial V4 fiasco... quite possibly the worst SSDs ever made, right along with the early OCZ Vertex drives. Over half the ones I bought for a project just completely stopped working a month in. I bought them trusting the Crucial brand name alone.
  • Samus - Wednesday, June 26, 2019 - link

    I agree; I hated how they changed the internals without leaving any indication of a change on the label.

    But here's the thing that doesn't stop me from recommending them: has anyone ever actually seen a Kingston drive fail?

    It seems their firmware and chip binning are excellent. The latter of which is easy for a company that makes so many God damn USB flash drives and can use the shitty NAND elsewhere...
  • jabber - Tuesday, June 25, 2019 - link

    Kingston are my go-to budget SSD brand. I bought dozens of those much-moaned-about V300 SSDs back in the day. Did I care? No, because they were light years better than any 5400rpm pile of junk in a laptop or desktop.

    The other reason? Not one of them to date has failed. Including the V400 and onwards.

    They may not be the fastest (what's 30MBps between friends) but they are solid drives.

    Nothing more boring than a top end enthusiast SSD that is bust.

    Recommended.
  • GNUminex_l_cowsay - Tuesday, June 25, 2019 - link

    This whole article raises a question for me: why is SATA still locked to 6Gbps? I get that there is an alternative higher-performance interface, but considering how frequently USB 3 has had its bandwidth upgraded lately, a maximum bandwidth increase seems like it should be reasonable.
  • thomasg - Tuesday, June 25, 2019 - link

    There's just no point in updating SATA.
    6 Gbps is plenty for low-performance systems; SATA works well, and it's cheap and simple.

    For everyone who needs more performance, the market has moved to PCIe and NVMe in their various form factors, which is just a lot more expensive (especially due to the numerous and frequently changing form factors).

    USB, as not just an external port but THE external port that all users face, has a lot more pressure behind it to get updated.
    Users touch USB all the time, and there's demand for a lot of things over USB; most users never touch internal drives (in fact, most users actively buy hardware without replaceable internal drives), so there's no point in updating the standard.
    The manufacturers can just spin new ports and new connectors, since they ship only complete systems anyway.
  • Dug - Tuesday, June 25, 2019 - link

    "There's just no point in updating SATA."
    That could be said for USB, PCI, etc.
    There is a very good reason to go beyond an interface that is already saturated, and it doesn't have to be relegated to low-performance systems.
  • Samus - Wednesday, June 26, 2019 - link

    SATA is an ancient way of transferring data. Why have a host controller on the PCI bus when you can have a native PCIe device like NVMe? Furthermore, SATA, even with AHCI, simply lacks optimization for flash storage. There doesn't seem to be an elegant way of adding NVMe features to SATA without either losing backwards compatibility with AHCI devices or adding unnecessary complexity.
  • TheUnhandledException - Saturday, June 29, 2019 - link

    SATA, the protocol, was built around supporting spinning discs. Making it work at all for solid state drives was a hack - a hack with a lot of unnecessary overhead. It was useful because it provided a way to put flash drives in existing systems. Future flash will use NVMe over PCIe directly. The only reason for upgrading SATA would be if hard drives actually needed >600 MB/s, and they likely never will. So while we will have faster and faster interfaces for drives, they won't be SATA. It would be like saying: because we made HDMI/DP faster and faster, why not enhance the VGA port to support 8K? In theory we could, but VGA driving a digital display is a hack that largely existed for backwards and forwards compatibility with analog displays.
  • MDD1963 - Tuesday, June 25, 2019 - link

    Why limit yourself to 550 MB/sec? I think having 6-8 ports of a SATA4/SAS spec (12 Gbps) would breathe new life into local storage solutions... (Certainly a NAS so equipped would be limited by even a 10 Gbps network, but you've gotta start somewhere with incremental improvements, and many SATA3-spec drives have been stuck at 500-550 MB/sec for years!)
  • Spunjji - Wednesday, June 26, 2019 - link

    You kinda covered the reason right there - where the performance is really needed, SAS (or PCIe) is where it's at. There really is no call for a higher-performing SATA standard.
  • KAlmquist - Tuesday, June 25, 2019 - link

    Good points. I'd add that there is an upgrade to SATA called "SATA Express" which basically combines two PCIe lanes and traditional SATA into a single cable. It never really took off for the reasons you explained: it's simpler just to switch to PCIe.
  • MDD1963 - Tuesday, June 25, 2019 - link

    It would be nice indeed to see a new SATA4 spec at SAS speeds, 12 Gbps....
  • TheUnhandledException - Saturday, June 29, 2019 - link

    Why? Why not just use PCIe directly? Flash drives don't need the SATA interface, and ultimately the SATA interface becomes PCIe at the SATA controller anyway. It is just adding a pointless extra translation to fit a round peg into a square hole. Connect your flash drive to PCIe and it is as slow or fast as you want it to be: with 2x PCIe 3.0 lanes you've got ~2GB/s to work with, and 4 lanes get you ~4GB/s. Upgrade to PCIe 4.0 and you now have ~8GB/s.
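    A quick back-of-the-envelope check of those lane figures (the per-lane transfer rates and 128b/130b encoding come from the PCIe 3.0/4.0 specs; the rounding here is illustrative, not a measurement):

    ```python
    # Approximate one-direction PCIe link bandwidth after 128b/130b encoding overhead.
    # PCIe 3.0 runs at 8 GT/s per lane, PCIe 4.0 at 16 GT/s per lane.
    GBPS_PER_LANE = {"3.0": 8 * (128 / 130) / 8,   # ~0.985 GB/s per lane
                     "4.0": 16 * (128 / 130) / 8}  # ~1.969 GB/s per lane

    def link_bandwidth_gbs(gen: str, lanes: int) -> float:
        """Theoretical usable bandwidth of a PCIe link, in GB/s."""
        return GBPS_PER_LANE[gen] * lanes

    for gen, lanes in [("3.0", 2), ("3.0", 4), ("4.0", 4)]:
        print(f"PCIe {gen} x{lanes}: ~{link_bandwidth_gbs(gen, lanes):.1f} GB/s")
    # PCIe 3.0 x2: ~2.0 GB/s; PCIe 3.0 x4: ~3.9 GB/s; PCIe 4.0 x4: ~7.9 GB/s
    ```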
  • jabber - Wednesday, June 26, 2019 - link

    They could stay with 6Gbps just fine. I'd say work on reducing the latency instead.

    Bandwidth is done. Latency is more important now IMO. Ultra low latency SATA would do fine for years to come.
  • RogerAndOut - Friday, July 12, 2019 - link

    In an enterprise environment, the 6Gbps speed is not much of an issue, as deployment does not involve individual drives. Once you have 8, 16, 32, etc. in some form of RAID configuration, the overall bandwidth increases (see the sketch below). Such systems may also have NVMe-based modules acting as a cache, to allow fast retrieval of frequently accessed blocks and to speed up the 'commit' time of writes.
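    A rough illustration of that aggregate-bandwidth point, assuming ~550 MB/s per SATA SSD, ideal striping with no RAID overhead, and a raw ~1250 MB/s for 10GbE (all assumed round numbers):

    ```python
    # Aggregate sequential bandwidth of a striped SATA SSD array vs. a 10GbE pipe.
    SATA_DRIVE_MBS = 550         # per-drive SATA 6Gbps ceiling, MB/s (assumed)
    TEN_GBE_MBS = 10_000 / 8     # 10 Gbps network in MB/s, ignoring protocol overhead

    for drives in (8, 16, 32):
        array_mbs = drives * SATA_DRIVE_MBS  # ideal striping, no RAID overhead
        print(f"{drives:2d} drives: ~{array_mbs / 1000:.1f} GB/s "
              f"(~{array_mbs / TEN_GBE_MBS:.1f}x a 10GbE link)")
    ```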
  • Dug - Tuesday, June 25, 2019 - link

    I would like to see the Intel and Micron Pro included.
    We need drives with power loss protection.
    And I don't think write-heavy is relegated to NVMe territory. That's just not in the cards for small businesses, or even large businesses: 1) because of cost, 2) because of size, 3) because of scalability.
  • MDD1963 - Tuesday, June 25, 2019 - link

    1.3 DWPD endurance (9,100+ TB of writes!) for a 3.8 TB drive? Impressive! $800+... lower it to $399 and count me in! :)
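    That endurance figure is straightforward to verify, assuming the 3.84 TB model and the drive's 5-year warranty period:

    ```python
    # DWPD (drive writes per day) -> total terabytes written over the warranty period.
    capacity_tb = 3.84      # DC500M 3.84 TB model
    dwpd = 1.3              # rated drive writes per day
    warranty_years = 5

    tbw = dwpd * capacity_tb * 365 * warranty_years
    print(f"~{tbw:,.0f} TB of writes over {warranty_years} years")  # ~9,110 TB
    ```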
  • m4063 - Tuesday, September 8, 2020 - link

    LISTEN! The most important feature, and the reason to buy these drives, is that they have a power-loss-protected (PLP) cache - not for protecting your data, BUT FOR SPEED!
    I believe the most important thing about PLP is that it should improve direct synchronous I/O (ESX and SQL), because the drive can report back that the data is "written to disk" as soon as the data hits the cache, whereas a non-PLP drive actually needs to write the data to the NAND before reporting "OK"!
    And for that reason, the size of the PLP-protected cache is obviously pretty important.
    Neither of those factors is considered or tested in this review, which deserves criticism.
    This is the main reason you should go for these drives. I asked Kingston about the PLP-protected cache size and I got:
    SEDC500M/480 - 1GB
    SEDC500M/960 - 2GB
    SEDC500M/1920 - 4GB
    These sizes could make a huge difference in synchronous-I/O-intensive systems/applications.
    AnandTech: please cover these factors in your tests/reviews!
    (Admittedly, I haven't done any benchmarks myself, for lack of PLP drives.)
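    A minimal sketch of the synchronous-write pattern described above: each write is followed by fsync(), so the drive must report the data durable before the next write is issued. A PLP drive can acknowledge once the data hits its DRAM cache, while a drive without PLP has to wait for the NAND program. (The file path and iteration counts below are hypothetical placeholders.)

    ```python
    import os
    import time

    def timed_sync_writes(path: str, count: int = 1000, size: int = 4096) -> float:
        """Average latency of synchronous 4K writes (write + fsync), in milliseconds."""
        buf = os.urandom(size)
        fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)
        try:
            start = time.perf_counter()
            for _ in range(count):
                os.write(fd, buf)
                os.fsync(fd)  # blocks until the drive reports the data durable
            return (time.perf_counter() - start) / count * 1000
        finally:
            os.close(fd)

    # Point this at a file on the drive under test; PLP drives typically show
    # far lower fsync latency than consumer drives that must flush to NAND first.
    print(f"{timed_sync_writes('/mnt/testdrive/syncfile.bin'):.3f} ms per synced 4K write")
    ```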
