Mixed I/O Performance

Our tests of mixed read/write IO vary the workload from pure reads to pure writes in 10% increments. Each mix is tested for up to 1 minute or 32GB of data transferred, whichever comes first. The mixed random IO test uses a queue depth of 4 while the mixed sequential IO test uses a queue depth of 1. The tests are confined to a 64GB span of the drive, and the drive is given up to one minute of idle time in between each mix tested.
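
For illustration, a sweep like this could be scripted around fio on Linux, as in the minimal sketch below. The device path, the block sizes (4kB random, 128kB sequential), and the use of fio itself are assumptions made for the example; the review's exact tooling isn't described here.

    import json
    import subprocess
    import time

    DEV = "/dev/sdX"  # hypothetical device under test

    def run_mix(read_pct: int, sequential: bool) -> float:
        """Run one read/write mix: up to 60 s or 32GB of I/O, whichever comes first."""
        cmd = [
            "fio", "--name=mixed", f"--filename={DEV}",
            "--direct=1", "--ioengine=libaio",
            f"--rw={'rw' if sequential else 'randrw'}",
            f"--rwmixread={read_pct}",
            f"--bs={'128k' if sequential else '4k'}",  # assumed block sizes
            f"--iodepth={1 if sequential else 4}",     # QD1 sequential, QD4 random
            "--size=64G",      # confine the test to a 64GB span of the drive
            "--io_size=32G",   # stop after 32GB transferred...
            "--runtime=60",    # ...or after 60 seconds, whichever comes first
            "--output-format=json",
        ]
        out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
        job = json.loads(out)["jobs"][0]
        return job["read"]["bw_bytes"] + job["write"]["bw_bytes"]  # total B/s

    for read_pct in range(100, -1, -10):   # pure reads down to pure writes, 10% steps
        bw = run_mix(read_pct, sequential=False)
        print(f"{100 - read_pct}% writes: {bw / 1e6:.1f} MB/s")
        time.sleep(60)                     # idle time between mixes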

Mixed IO Performance
[Charts: Mixed Random IO; Mixed Sequential IO]

The mixed sequential IO performance of the TeamGroup L5 LITE 3D is as good as any other SATA drive, and on the mixed random test it's only slightly slower overall than the Crucial MX500, the fastest TLC SATA drive in this bunch.

Mixed IO Efficiency
[Charts: Mixed Random IO; Mixed Sequential IO]

The power efficiency of the L5 LITE 3D during the mixed IO tests is average or slightly better. In both cases there's a TLC SATA drive with a substantial efficiency advantage, and the Samsung 860 PRO sets a high bar for efficiency.

[Charts: Mixed Random IO and Mixed Sequential IO performance profiles]

The performance profiles of the L5 LITE 3D on the mixed IO tests are both fairly typical for mainstream SATA drives. The random IO performance is fairly flat until the workload is at least 70% writes, after which it picks up the pace. The sequential IO performance instead shows a gradual decline as the workload shifts toward writes.

Idle Power Measurement

SATA SSDs are tested with SATA link power management disabled to measure their active idle power draw, and with it enabled for the deeper idle power consumption score and the idle wake-up latency test. Our testbed, like any ordinary desktop system, cannot trigger the deepest DevSleep idle state.
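
On a Linux host, SATA link power management can be toggled per port through sysfs, which is one way to reproduce the two idle states described above. A minimal sketch, assuming the drive sits on host0 and that power is sampled with an external meter (neither of which is specified by the review):

    from pathlib import Path

    # Per-port SATA LPM policy knob exposed by the Linux AHCI driver.
    POLICY = Path("/sys/class/scsi_host/host0/link_power_management_policy")  # host0 assumed

    def set_lpm(policy: str) -> None:
        """policy: 'max_performance' (LPM off), 'medium_power', or 'min_power'."""
        POLICY.write_text(policy)

    set_lpm("max_performance")   # active idle: link power management disabled
    # ... sample the external power meter here ...

    set_lpm("min_power")         # low-power idle: allow Partial/Slumber link states
    # ... sample the external power meter again ...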

Note: We recently upgraded our power measurement equipment and switched to measuring idle power on our Coffee Lake desktop, our first SSD testbed to have fully-functional PCIe power management. The measurements below are all new and are not directly comparable to the older measurements in our previous reviews and the Bench database.

[Charts: Idle Power Consumption - No PM; Idle Power Consumption - Desktop; Idle Wake-Up Latency]

The active idle power consumption of the TeamGroup L5 LITE 3D is higher than most SSDs, but not by much. Unfortunately, enabling power management barely has any effect. The L5 LITE 3D doesn't appear to have functional SATA Link Power Management. The only upside here is that without working power management, there's no extra latency when waking up. (It's possible that DevSleep power management might work on the L5 LITE 3D, but that feature cannot be tested on a desktop system.)
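
To illustrate what the wake-up latency test captures: after an idle period long enough for the link to drop into a low-power state, the first read should take measurably longer than normal. A rough sketch follows (Linux-only; the device path and idle interval are placeholder assumptions, and O_DIRECT requires root and an aligned buffer):

    import mmap
    import os
    import time

    DEV = "/dev/sdX"      # hypothetical device node
    IDLE_SECONDS = 10     # assumed long enough for the link to enter Slumber

    buf = mmap.mmap(-1, 4096)  # anonymous mmap is page-aligned, as O_DIRECT requires
    fd = os.open(DEV, os.O_RDONLY | os.O_DIRECT)

    os.preadv(fd, [buf], 0)        # warm-up read while the link is active
    time.sleep(IDLE_SECONDS)       # let SATA LPM (if functional) park the link

    t0 = time.perf_counter()
    os.preadv(fd, [buf], 4096)     # first read after idle pays the wake-up cost
    wakeup_ms = (time.perf_counter() - t0) * 1e3

    print(f"first read after idle: {wakeup_ms:.3f} ms")
    os.close(fd)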

Comments

  • flyingpants265 - Friday, September 20, 2019 - link

    Why promote this drive without mentioning anything about the failure rates? Some Team Group SSDs have 27% 1-star reviews on Newegg. That's MUCH higher than other manufacturers... That's not worth saving $5 at all... Is Anandtech really that tone-deaf now?

    -I would not recommend this drive to others -- 5 months, dead.
    -Not safe for keep your data. Highly recommend not to store any important data on it
    -DO NOT BUY THIS SSD! Total lack of support for defective products! Took days to reply after TWO requests for support, and then I am expected to pay to ship their defective product back when it never worked!?
    -Failed and lost all data after just 6 months.
    ...
  • Ryan Smith - Friday, September 20, 2019 - link

    "Is Anandtech really that tone-deaf now?"

    Definitely not. However there's not much we can say on the subject with any degree of authority. Obviously our test drive hasn't failed, and the drive has survived The Destroyer (which tends to kill obviously faulty drives very early). But that's the limit to what we have data for.

    Otherwise, customer reviews are a bit tricky. They're a biased sample, as very happy and very unhappy people tend to self-report the most. Which doesn't mean what you state is untrue, but it's not something we can corroborate.

    * We've killed a number of SSDs over the years. I don't immediately recall any of them being Team Group.
  • eastcoast_pete - Friday, September 20, 2019 - link

    Ryan, I appreciate your response. Question: which SSDs have given up the ghost when challenged by The Destroyer? Any chance you can name names? Might be interesting for some of us, even in a historical context. Thanks!
  • keyserr - Friday, September 20, 2019 - link

    Yes, anecdotes are interesting. In an ideal world we would have 1,000 drives of each model put through their paces. We don't.

    It's a lesser-known brand, and it wouldn't make much sense for them to ship bad drives in the long term.
  • Billy Tallis - Friday, September 20, 2019 - link

    I don't usually keep track of which test a drive was running when it failed. The Destroyer is by far the longest test in our suite so it catches the blame for a lot of the failures, but sometimes a drive checks out when it's secure erased or even when it's hot-swapped.

    Which brands have experienced an SSD failure during testing is determined more by how many of their drives I test than by their failure rate. All the major brands have contributed to my SSD graveyard at some point: Crucial, Samsung, Intel, Toshiba, SanDisk.
  • eastcoast_pete - Friday, September 20, 2019 - link

    Billy, I appreciate the reply, but would really like to encourage you and your fellow reviewers to "name names". An SSD going kaplonk when stressed is exactly the kind of information that I really want to know. I know that such an occurrence might not be typical for that model, but if the review unit provided by a manufacturer gives out during testing, it doesn't bode well for regular buyers like me.
  • Death666Angel - Friday, September 20, 2019 - link

    You can read every article; I remember a lot of them discussing the death of a sample (Samsung comes to mind). But it really isn't indicative of anything: the sample size is crap, and these are early production samples (hardware) with early firmware (software). Most SSDs come with 3 years of warranty. Just buy from a reputable retailer, pick a brand that actually honors its warranty, and make sure to back up your data. Then you're fine. If you don't follow those rules, even the very limited data Billy could give you won't help you out in any way.
  • eastcoast_pete - Friday, September 20, 2019 - link

    To add: I don't just mean the manufacturers' names, but especially the exact model name, revision and capacity tested. Clearly, a major manufacturer like Samsung or Crucial has a higher likelihood of the occasional bad apple, just due to the sheer number of drives they make. But, even the best big player produces the occasional stinker, and I'd like to know which one it is, so I can avoid it.
  • Kristian Vättö - Saturday, September 21, 2019 - link

    One test sample isn't sufficient to conclude that a certain model is doomed.
  • bananaforscale - Saturday, September 21, 2019 - link

    This. One data point isn't a trend. Hell, several data points aren't a trend if they aren't representative of the whole *and you don't know if they are*.
