Review: ADATA S511 120 GByte SSD

Published by Marc Büchel on 24.08.11

How do we test?

Test environment

We recommend that readers who aren't interested in the test procedure skip this page and head straight to the test results.
Models tested
  • ADATA S511 120 GByte SSD
  • Corsair Force3 120 GByte
  • Kingston HyperX SSD 120 GByte
  • ADATA S511 60 GByte MLC
  • OCZ Agility 3 240 GByte MLC
  • OCZ Vertex 3 240 GByte MLC
  • OCZ IBIS 240 GByte MLC
  • OCZ Revo Drive X2 480 GByte MLC
  • Samsung SSD 64 GByte MLC
  • Corsair F100 100 GByte MLC
  • Corsair X128 128 GByte MLC
  • Corsair P128 128 GByte MLC
  • Intel X25-M 80 GByte MLC
  • Intel X25-M Gen2 160 GByte MLC
  • Intel X25-M Gen2 160 GByte MLC Raid0
  • Intel X25-E 32 GByte SLC
  • OCZ Vertex 120 GByte MLC
  • Samsung SSD PM800 256 GByte MLC
  • Samsung SSD PM800 64 GByte MLC
  • Kingston SSDNow V+ 64 GByte MLC
  • OCZ Agility 128 GByte MLC
  • OCZ Apex 120 GByte MLC
  • Photofast G-Monster 120 GByte V2 MLC
  • Kingston SSDNow VSeries 40 GByte MLC
     

    Test environment

    Motherboard         ASUS P8P67 Deluxe
    Chipset             Intel P67 1'333 MHz
    CPU                 Intel Core i7 2600k 3.4 GHz
    Memory              Kingston HyperX 2133 4 GByte
    Graphics card       Gigabyte GeForce GTX 285
    Storage (system)    Seagate Barracuda 640 GByte
    Operating system    Ubuntu 10.04
    Filesystem          XFS


    We think everybody reading this article can imagine the following scenario: you just bought a hard drive which, according to the spec sheet, should reach 120 MByte/s reading and writing. In the reviews you read about astonishing 110 MByte/s, but after you put the drive into your system it feels much slower. The whole story gets even worse when you start a benchmark that does random reads/writes of 4 KByte blocks: there you only get two to three MByte/s.

    Because of this we don't want to just publish screenshots of standard programs like HD-Tach, HD-Tune and so on. We want our tests to be

    • reproducible,
    • accurate,
    • meaningful and
    • varied.

    We test with caches and NCQ (Native Command Queuing) enabled, because they are also enabled in daily use. The amount of data tested is, however, always at least twice the size of the system memory, so the caches cannot interfere with the results.
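
    If you want to verify these settings on a Linux machine like our Ubuntu test system, a minimal sketch looks as follows (the device name /dev/sdb is only a placeholder for the drive under test):

      # Show whether the drive's write cache is enabled
      sudo hdparm -W /dev/sdb

      # NCQ is in use when the reported queue depth is greater than 1
      cat /sys/block/sdb/device/queue_depth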

    We found that the measurement error stays consistently within ±2%, which is why we mention it only once here.

    Additionally we evaluate the S.M.A.R.T. data to check whether the drive already reports errors.
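
    As a rough sketch, such a check can be done with smartmontools under Linux (again, the device name is only a placeholder):

      # Print the drive's S.M.A.R.T. health assessment and all attributes
      sudo smartctl -a /dev/sdb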

    The following table gives you a brief overview of the points we pay particular attention to.

    Test                           Observations

    Sequential read/write tests    • Are the values within the specifications?
                                   • What influence does the block size have?
                                   • What influence does the filesystem's block size have?

    Random read/write tests        • How much does random access reduce the theoretically possible (sequential) data rate?
                                   • What influence does the block size have?
                                   • What influence does the block size have on the filesystem?


    iozone3

    iozone3 is a benchmark suite for storage devices which runs natively under Linux.
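
    On an Ubuntu system like our test machine, the suite can usually be installed straight from the package archives; a minimal sketch (the package sits in the multiverse repository and its exact name may vary between releases):

      # Install iozone from the Ubuntu repositories
      sudo apt-get install iozone3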

    We test the throughput with different block sizes using the following commands (an annotated example follows below):

    KByte/s

    • iozone -Rb test4k.xls -i0 -i1 -i2 -+n -r 4k -s4g -t32
    • iozone -Rb test16k.xls -i0 -i1 -i2 -+n -r 16k -s4g -t32
    • iozone -Rb test32k.xls -i0 -i1 -i2 -+n -r 32k -s4g -t32
    • iozone -Rb test64k.xls -i0 -i1 -i2 -+n -r 64k -s4g -t32
    • iozone -Rb test128k.xls -i0 -i1 -i2 -+n -r 128k -s4g -t32
    • iozone -Rb test256k.xls -i0 -i1 -i2 -+n -r 256k -s4g -t32

    IOPS

    • iozone -Rb test4ko.xls -i0 -i1 -i2 -+n -r 4k -s4g -t32 -O
    • iozone -Rb test16ko.xls -i0 -i1 -i2 -+n -r 16k -s4g -t32 -O
    • iozone -Rb test64ko.xls -i0 -i1 -i2 -+n -r 64k -s4g -t32 -O
    • iozone -Rb test96ko.xls -i0 -i1 -i2 -+n -r 96k -s4g -t32 -O
    • iozone -Rb test128ko.xls -i0 -i1 -i2 -+n -r 128k -s4g -t32 -O
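
    For readers who don't run iozone regularly, here is the 4 KByte command from the list above with our reading of the individual flags as comments (see the iozone documentation for the authoritative description):

      # -R           generate a report suitable for spreadsheet processing
      # -b file      write that report to the given Excel-compatible file
      # -i0 -i1 -i2  run test 0 (write/rewrite), test 1 (read/re-read) and test 2 (random read/write)
      # -+n          skip the re-test passes
      # -r 4k        record (block) size transferred per operation
      # -s4g         size of the test file
      # -t32         number of threads working in parallel
      # -O           report results in operations per second instead of KByte/s
      iozone -Rb test4ko.xls -i0 -i1 -i2 -+n -r 4k -s4g -t32 -O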

     

    Why do we test different block sizes?

    It is important to reproduce scenarios from daily usage. To make a meaningful statement about a product, certain parameters have to be varied during the test; in our case this parameter is the block size. It defines how many KBytes are written to or read from the drive in a single transaction.
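
    As a purely illustrative calculation (the numbers are taken from the scenario above, not from a measurement), the block size ties throughput and operations per second together like this:

      # 2.5 MByte/s of random I/O at a block size of 4 KByte corresponds to roughly 640 operations per second
      awk 'BEGIN { printf "%.0f ops/s\n", 2.5 * 1024 / 4 }'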

    With this method one can test reading and writing of both small and large files. In a typical personal computer environment you usually don't find many files smaller than 4 KByte; the share of small files is much bigger on a mail or database server. Tests with small block sizes are therefore of particular interest for database-driven applications.

    In larger RAID arrays the hard disk's own cache is usually disabled and the RAID controller takes over the caching. Precisely in such setups drives need to be very fast at reading and writing small amounts of data; sequential throughput is of little interest there.



    Page 1 - Introduction / Specs / Delivery
    Page 2 - Impressions
    Page 3 - How do we test?
    Page 4 - Sequential write KByte/s
    Page 5 - Sequential read KByte/s
    Page 6 - Random write KByte/s
    Page 7 - Random read KByte/s
    Page 8 - Sequential write ops
    Page 9 - Sequential read ops
    Page 10 - Random write ops
    Page 11 - Random read ops
    Page 12 - Conclusion


