A solid-state drive (SSD) is a solid-state storage device that uses integrated circuit assemblies as memory to store data persistently. It is also sometimes called a solid-state disk, for historical reasons. SSDs primarily use electronic interfaces compatible with traditional hard disk drive (HDD) input/output (I/O), which allows simple replacement in common applications. Newer I/O interfaces, such as SATA Express and M.2, have been designed to address specific requirements of SSD technology.
SSDs have no mechanical moving parts. This distinguishes them from conventional electromechanical drives such as hard disk drives (HDDs) or floppy disks, which contain spinning disks and movable read/write heads. Compared with electromechanical drives, SSDs are typically more resistant to physical shock, run silently, and have faster access times and lower latency. However, while SSD prices have continued to decline over time, SSDs were (as of 2018) still more expensive per unit of storage than HDDs, and this is expected to remain the case for the next decade.
As of 2017, most SSDs use 3D TLC NAND-based flash memory, a type of non-volatile memory that retains data when power is lost. For applications requiring fast access but not data persistence after power loss, SSDs may be constructed from random-access memory (RAM). Such devices may employ a battery as an integrated power source to retain data for a certain period of time after external power is lost.
However, all SSDs still store data in electrical charges, which slowly leak over time if left without power. This causes worn-out drives (that have exceeded their endurance rating) to typically start losing data after one year (if stored at 30 °C) to two years (at 25 °C) in storage; for new drives it takes longer. Therefore, SSDs are not suited for archival storage.
Hybrid drives or solid-state hybrid drives (SSHDs), such as the Apple Fusion Drive, combine features of SSDs and HDDs in the same unit, containing a large hard disk drive and an SSD cache to improve the performance of frequently accessed data.
Development and history
Early SSDs using RAM and similar technology
Arguably the first semiconductor storage device compatible with a hard drive interface (i.e., an SSD as defined) was the 1978 StorageTek 4305. The StorageTek 4305, a plug-compatible replacement for the IBM 2305 fixed head disk drive, initially used charge-coupled devices (CCDs) for storage and consequently was reported to be seven times faster than the IBM product at about half the price. It later switched to DRAM. Before the StorageTek SSD there were many DRAM and core products (e.g., the DATARAM BULK Core, 1976) sold as alternatives to HDDs, but these products typically had memory interfaces and were not SSDs as defined.
In the late 1980s, Zitel, Inc. offered a family of DRAM-based SSD products under the trade name "RAMDisk", for use on systems by UNIVAC and Perkin-Elmer, among others.
Flash-based SSDs
In 1991, SanDisk Corporation (then known as SunDisk) shipped the first flash-based SSD, a 20 MB drive that sold to OEMs for about $1,000. It was used by IBM in a ThinkPad laptop. In 1995, STEC, Inc. entered the flash memory business for consumer electronic devices.
In 1995, M-Systems introduced flash-based solid-state drives as HDD replacements for the military and aerospace industries, as well as for other mission-critical applications. These applications require the exceptional mean time between failures (MTBF) rates that solid-state drives achieve, by virtue of their ability to withstand extreme shock, vibration and temperature ranges.
In 1999, BiTMICRO made a number of introductions and announcements about flash-based SSDs, including a 3.5-inch SSD. In 2007, Fusion-io announced a PCIe-based solid-state drive with 100,000 input/output operations per second (IOPS) of performance from a single card, with capacities up to 320 gigabytes.
At Cebit 2009, OCZ Technology demonstrated a 1 terabyte (TB) flash SSD using a PCI Express ×8 interface. It achieved a maximum write speed of 654 megabytes per second (MB/s) and a maximum read speed of 712 MB/s. In December 2009, Micron Technology announced an SSD using a 6 gigabits per second (Gbit/s) SATA interface.
Enterprise flash drives
Enterprise flash drives (EFDs) are designed for applications requiring high I/O performance (IOPS), reliability, energy efficiency and, more recently, consistent performance. In most cases, an EFD is an SSD with a higher set of specifications than SSDs that would typically be used in notebook computers. The term was first used by EMC in January 2008, to help them identify SSD manufacturers who would provide products meeting these higher standards. There are no standards bodies that control the definition of EFDs, so any SSD manufacturer may claim to produce EFDs when in fact the product may not actually meet any particular requirements.
An example is the Intel DC S3700 drive series, introduced in the fourth quarter of 2012, which focuses on achieving consistent performance, an area that had not previously received much attention but which Intel claimed was important for the enterprise market. Specifically, Intel claims that, at steady state, the S3700 drives would not vary their IOPS by more than 10-15%, and that 99.9% of all 4 KB random I/Os are serviced in less than 500 µs.
Another example is the Toshiba PX02SS enterprise SSD series, announced in 2016, which is optimized for use in server and storage platforms requiring high endurance from write-intensive applications such as write caching, I/O acceleration, and online transaction processing (OLTP). The PX02SS series uses a 12 Gbit/s SAS interface, featuring MLC NAND flash memory and achieving random write speeds of up to 42,000 IOPS, random read speeds of up to 130,000 IOPS, and an endurance rating of up to 30 drive writes per day (DWPD).
Architecture and functions
The main components of an SSD are the controller and the memory used to store the data. The primary memory component in SSDs was traditionally volatile DRAM memory, but since 2009 it is more commonly non-volatile NAND flash memory.
Controller
Every SSD includes a controller that incorporates the electronics that bridge the NAND memory components to the host computer. The controller is an embedded processor that executes firmware-level code and is one of the most important factors in SSD performance. Some of the functions performed by the controller include:
- Bad block mapping
- Read and write caching
- Encryption
- Error detection and correction via error-correcting code (ECC)
- Garbage collection
- Read scrubbing and read disturb management
- Wear leveling
SSD performance can scale with the number of parallel NAND flash chips used in the device. A single NAND chip is relatively slow, due to its narrow (8/16-bit) asynchronous I/O interface and the additional high latency of basic I/O operations (typical for SLC NAND: ~25 µs to fetch a 4 KB page from the array to the I/O buffer on a read, ~250 µs to commit a 4 KB page from the I/O buffer to the array on a write, ~2 ms to erase a 256 KB block). When multiple NAND devices operate in parallel inside an SSD, the bandwidth scales, and the high latencies can be hidden, as long as enough outstanding operations are pending and the load is distributed evenly between devices.
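As a rough worked example (using the SLC timings quoted above and ignoring bus-transfer overhead): a single chip streaming 4 KB pages at ~25 µs per read tops out near 4 KB / 25 µs ≈ 160 MB/s, so an SSD interleaving eight such chips could in principle approach 8 × 160 MB/s ≈ 1.3 GB/s of internal read bandwidth, provided enough outstanding requests keep every chip busy.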
Micron and Intel initially made faster SSDs by implementing data striping (similar to RAID 0) and interleaving in their architecture. This enabled the creation of ultra-fast SSDs with 250 MB/s effective read/write speeds on the SATA 3 Gbit/s interface in 2009. Two years later, SandForce continued to leverage this parallel flash connectivity, releasing consumer-grade SATA 6 Gbit/s SSD controllers which supported 500 MB/s read/write speeds. SandForce controllers compress the data before sending it to the flash memory. This process may result in less writing and higher logical throughput, depending on the compressibility of the data.
Memory
Flash-based memory
Most SSD manufacturers use non-volatile NAND flash memory in the construction of their SSDs because of its lower cost compared with DRAM and its ability to retain data without a constant power supply, ensuring data persistence through sudden power outages. Flash memory SSDs are slower than DRAM solutions, and some early designs were even slower than HDDs after continued use. This problem was resolved by controllers that came out in 2009 and later.
Flash-based memory solutions are typically packaged in standard disk drive form factors (1.8-, 2.5-, and 3.5-inch), but also in smaller, more compact and unique layouts made possible by the small size of flash memory.
Lower-priced drives usually use triple-level cell (TLC) or multi-level cell (MLC) flash memory, which is slower and less reliable than single-level cell (SLC) flash memory. This can be mitigated or even reversed by the internal design structure of the SSD, such as interleaving, changes to writing algorithms, and higher over-provisioning (excess capacity) with which the wear-leveling algorithms can work.
DRAM-based
SSDs based on volatile memory such as DRAM are characterized by very fast data access, generally less than 10 microseconds, and are used primarily to accelerate applications that would otherwise be held back by the latency of flash SSDs or traditional HDDs.
DRAM-based SSDs typically incorporate either an internal battery or an external AC/DC adapter, along with a backup storage system, to ensure data persistence while no power is supplied to the drive from an external source. If power is lost, the battery provides power while all information is copied from random-access memory (RAM) to backup storage. When power is restored, the information is copied back to the RAM from the backup storage, and the SSD resumes normal operation (similar to the hibernate function used in modern operating systems).
SSDs of this type are usually fitted with DRAM modules of the same type used in regular PCs and servers, which can be swapped out and replaced with larger modules. Examples include the i-RAM, HyperOs HyperDrive, DDRdrive X1, etc. Some manufacturers of DRAM SSDs solder the DRAM chips directly to the drive, and do not intend the chips to be swapped out, such as the ZeusRAM, Aeon Drive, etc.
A remote indirect memory-access disk (RIndMA Disk) uses a secondary computer with a fast network or (direct) InfiniBand connection to act like a RAM-based SSD, but the newer, faster, flash-memory-based SSDs already available in 2009 made this option less cost-effective.
While the price of DRAM continues to fall, the price of flash memory falls even faster. The "flash becomes cheaper than DRAM" crossover point occurred around 2004.
Other
Some SSDs, called NVDIMM or Hyper DIMM devices, use both DRAM and flash memory. When the power fails, the SSD copies all the data from its DRAM to flash; when the power comes back on, the SSD copies all the data from its flash to its DRAM. In a somewhat similar way, some SSDs use the form factor and buses actually designed for DIMM modules, while using only flash memory and making it appear as if it were DRAM. Such SSDs are usually known as ULLtraDIMM devices.
Drives known as hybrid drives or solid-state hybrid drives (SSHDs) use a hybrid of spinning disks and flash memory. Some SSDs use magnetoresistive random-access memory (MRAM) for storing data.
In 2015, Intel and Micron announced 3D XPoint as a new non-volatile memory technology. Intel planned to produce 3D XPoint SSDs with a PCI Express interface in 2016, which would operate faster and with higher endurance than NAND-based SSDs, while the areal density would be comparable at 128 gigabits per chip. In price per bit, 3D XPoint would be more expensive than NAND, but cheaper than DRAM.
Cache or buffer
A flash-based SSD typically uses a small amount of DRAM as a volatile cache, similar to the buffers in hard disk drives. A directory of block placement and wear-leveling data is also kept in the cache while the drive is operating. One SSD controller manufacturer, SandForce, does not use an external DRAM cache in their designs, but still achieves high performance. Such elimination of the external DRAM reduces power consumption and enables further size reduction of SSDs.
Wear leveling
If a particular block were programmed and erased repeatedly without writing to any other blocks, that block would wear out before all the other blocks, thereby prematurely ending the SSD's life. For this reason, SSD controllers use a technique called wear leveling to distribute writes as evenly as possible across all the flash blocks in the SSD.
In a perfect scenario, this would enable every block to be written to its maximum life so they all fail at the same time. Unfortunately, the process of distributing writes evenly requires data previously written and not changing (cold data) to be moved, so that data which changes more frequently (hot data) can be written into those blocks. Each time data is relocated without being changed by the host system, write amplification increases and the life of the flash memory is reduced. The key is to find an optimal algorithm which maximizes them both.
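A common way to quantify this overhead (a general definition, not tied to any one controller) is the write amplification factor:

    write amplification = (data written to the flash memory) / (data written by the host)

A factor of 1 is ideal; wear leveling and garbage collection push it higher, while compression (as in the SandForce controllers mentioned earlier) can push it below 1.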
Battery or supercapacitor
Another component in higher-performing SSDs is a capacitor or some form of battery, which is necessary to maintain data integrity so that the data in the cache can be flushed to the drive when power is lost; some may even hold power long enough to maintain data in the cache until power is resumed. In the case of MLC flash memory, a problem called lower page corruption can occur when MLC flash memory loses power while programming an upper page. The result is that data written previously and presumed safe can be corrupted if the memory is not supported by a supercapacitor in the event of a sudden power loss. This problem does not exist with SLC flash memory.
Most consumer-grade SSDs do not have built-in batteries or capacitors; among the exceptions are the Crucial M500 and MX100 series, the Intel 320 series, and the more expensive Intel 710 and 730 series. Enterprise-class SSDs, such as the Intel DC S3700 series, usually have built-in batteries or capacitors.
Host interface
The host interface is physically a connector, with its signaling managed by the SSD's controller. It is most often one of the interfaces found in HDDs. They include:
- Serial Attached SCSI (SAS, 12.0 Gbit/s) - generally found on servers
- Serial ATA (SATA, 6.0 Gbit/s)
- PCI Express (PCIe, 31.5 Gbit/s)
- Fibre Channel (128 Gbit/s) - almost exclusively found on servers
- USB (10 Gbit/s)
- Parallel ATA (UDMA, 1064 Mbit/s) - mostly replaced by SATA
- (Parallel) SCSI (> 40 Mbit/s) - generally found on servers, mostly superseded by SAS; the last SCSI-based SSD was introduced in 2004
SSDs support various logical device interfaces, such as ATAPI, the Advanced Host Controller Interface (AHCI), NVM Express (NVMe), and other proprietary interfaces. A logical device interface defines the command set used by operating systems to communicate with SSDs and host bus adapters (HBAs).
Configuration
The size and shape of any device are largely driven by the size and shape of the components used to make up that device. Traditional HDDs and optical drives are designed around the rotating platter or optical disc, along with the spindle motor inside. Since an SSD is made up of various interconnected integrated circuits (ICs) and an interface connector, its shape is no longer limited to the shape of rotating media drives. Some solid-state storage solutions come in a larger chassis that may even be a rack-mount form factor with numerous SSDs inside. They all connect to a common bus inside the chassis and connect to the box externally through a single connector.
For general computer use, the 2.5-inch form factor (typically found in laptops) is the most popular. For desktop computers with 3.5-inch hard disk drive slots, a simple adapter plate can be used to make such a drive fit. Other types of form factors are more common in enterprise applications. An SSD can also be completely integrated in the other circuitry of the device, as in the Apple MacBook Air (starting with the fall 2010 models). As of 2014, mSATA and M.2 form factors have also gained popularity, primarily in laptops.
Standard HDD form factor
The benefit of using a current HDD form factor is that it takes advantage of the extensive infrastructure already in place to mount and connect drives to host systems. These traditional form factors are known by the size of the rotating media (e.g., 5.25-inch, 3.5-inch, 2.5-inch, 1.8-inch) and not by the dimensions of the drive casing.
Standard card form factor
For applications where space is at a premium, such as ultrabooks or tablet computers, a few compact form factors have been standardized for flash-based SSDs.
There is the mSATA form factor, which uses the physical layout of the PCI Express Mini Card. It remains electrically compatible with the PCI Express Mini Card interface specification, while requiring an additional connection to the SATA host controller through the same connector.
The M.2 form factor, formerly known as the Next Generation Form Factor (NGFF), is a natural transition from mSATA and the physical layout it used, to a more usable and more advanced form factor. While mSATA took advantage of an existing form factor and connector, M.2 was designed to maximize usage of the card space, while minimizing the footprint. The M.2 standard allows both SATA and PCI Express SSDs to be fitted onto M.2 modules.
Disk-on-a-module form factor
A disk-on-a-module (DOM) is a flash drive with a 40/44-pin Parallel ATA (PATA) or SATA interface, intended to be plugged directly into the motherboard and used as a computer hard disk drive (HDD) replacement. DOM devices emulate a traditional hard disk drive, so no special drivers or other specific operating system support are required. DOMs are commonly used in embedded systems, often deployed in harsh environments where mechanical HDDs would simply fail, or in thin clients because of their small size, low power consumption and silent operation.
In 2016, storage capacities ranged from 64 GB to 128 GB, with variations in physical layout, including vertical or horizontal orientation.
Box form factor
Many of the DRAM-based solutions use a box that is often designed to fit in a rack-mount system. The number of DRAM components required to get sufficient capacity to store the data, along with the backup power supplies, requires more space than traditional HDD form factors.
Bare-board form factor
Form factors which were more common for memory modules are now being used by SSDs to take advantage of their flexibility in laying out the components. Some of these include PCIe, mini PCIe, mini-DIMM, MO-297, and more. The SATADIMM from Viking Technology uses an empty DDR3 DIMM slot on the motherboard to provide power to the SSD, with a separate SATA connector to provide the data connection back to the computer. The result is an easy-to-install SSD with a capacity equal to drives that typically take a full 2.5-inch drive bay. At least one manufacturer, Innodisk, has produced drives that sit directly on the SATA connector (SATADOM) on the motherboard without needing a power cable. Some SSDs are based on the PCIe form factor and connect both the data interface and power through the PCIe connector to the host. These drives can use either direct PCIe flash controllers or a PCIe-to-SATA bridge device which then connects to SATA flash controllers.
Ball grid array form factor
In the early 2000s, a few companies introduced SSDs in Ball Grid Array (BGA) form factors, such as M-Systems' (now SanDisk) DiskOnChip, Silicon Storage Technology's NANDrive (now produced by Greenliant Systems), and Memoright's M1000, for use in embedded systems. The main benefits of BGA SSDs are their low power consumption, small chip package size to fit into compact subsystems, and that they can be soldered directly onto a system motherboard to reduce adverse effects from vibration and shock.
Comparison with other technologies
Hard disk drive
Making a comparison between SSDs and ordinary (spinning) HDDs is difficult. Traditional HDD benchmarks tend to focus on the performance characteristics that are poor with HDDs, such as rotational latency and seek time. As SSDs do not need to spin or seek to locate data, they may prove vastly superior to HDDs in such tests. However, SSDs have challenges with mixed reads and writes, and their performance may degrade over time. SSD testing must start from the (in use) full drive, as a new and empty (fresh, out of the box) drive may show much better write performance than it would after only weeks of use.
Most of the advantages of solid-state drives over traditional hard drives are due to their ability to access data completely electronically instead of electromechanically, resulting in superior transfer speeds and mechanical ruggedness. On the other hand, hard disk drives offer significantly higher capacity for their price.
Field failure rates indicate that SSDs are significantly more reliable than HDDs. However, SSDs are uniquely sensitive to sudden power interruption, resulting in aborted writes or even cases of the complete loss of the drive. The reliability of both HDDs and SSDs varies greatly among models.
As with HDDs, there is a tradeoff between cost and performance of different SSDs. Single-level cell (SLC) SSDs, while significantly more expensive than multi-level cell (MLC) SSDs, offer a significant speed advantage. At the same time, DRAM-based solid-state storage is currently considered the fastest and most costly, with average response times of 10 microseconds instead of the average 100 microseconds of other SSDs. Enterprise flash devices (EFDs) are designed to handle the demands of tier-1 applications with performance and response times similar to less-expensive SSDs.
In traditional HDDs, a rewritten file will generally occupy the same location on the disk surface as the original file, whereas in SSDs the new copy will often be written to different NAND cells for the purpose of wear leveling. The wear-leveling algorithms are complex and difficult to test exhaustively; as a result, one major cause of data loss in SSDs is firmware bugs.
The following table shows a detailed overview of the advantages and disadvantages of both technologies. Comparisons reflect typical characteristics, and may not hold for a specific device.
Memory card
While both memory cards and most SSDs use flash memory, they serve very different markets and purposes. Each has a number of different attributes which are optimized and adjusted to best meet the needs of particular users. Some of these characteristics include power consumption, performance, size, and reliability.
SSDs were originally designed for use in a computer system. The first units were intended to replace or augment hard disk drives, so the operating system recognized them as a hard drive. Originally, solid-state drives were even shaped and mounted in the computer like hard drives. Later SSDs became smaller and more compact, eventually developing their own unique form factors such as the M.2 form factor. The SSD was designed to be installed permanently inside a computer.
In contrast, memory cards (such as Secure Digital (SD), CompactFlash (CF), and many others) were originally designed for digital cameras and later found their way into cell phones, gaming devices, GPS units, etc. Most memory cards are physically smaller than SSDs, and designed to be inserted and removed repeatedly. There are adapters that enable some memory cards to interface to a computer, allowing use as an SSD, but they are not intended to be the primary storage device in the computer. The typical CompactFlash card interface is three to four times slower than an SSD. As memory cards are not designed to tolerate the amount of reading and writing which occurs during typical computer use, their data may get damaged unless special procedures are taken to reduce the wear on the card to a minimum.
SSD failure
SSDs have very different failure modes from traditional magnetic hard drives. Because of their design, some kinds of failure are inapplicable (motors cannot wear out and magnetic heads cannot fail, because these are not needed in an SSD). Instead, other kinds of failure are possible (for example, incomplete or failed writes due to sudden power failure can be more of a problem than with HDDs, and if a chip fails then all the data on it is lost, a scenario not applicable to magnetic drives). On the whole, however, statistics show that SSDs are generally highly reliable, and often continue working far beyond the expected lifetime stated by their manufacturer.
Reliability and SSD failure modes
An early investigation by Techreport.com, which ran for 18 months during 2013-2015, tested a number of SSDs to destruction to identify how and at what point they failed. The tests found that "All of the drives surpassed their official endurance specifications by writing hundreds of terabytes without issue", volumes described as far beyond the typical amounts for "ordinary consumers". The first SSD to fail was a TLC-based drive, a design type expected to be less durable than SLC or MLC; the SSD in question managed to write over 800,000 GB (800 TB, or 0.8 petabytes) before failing. Three of the SSDs in the test managed to write almost three times that amount (almost 2.5 PB) before they too failed. The test thus demonstrated the remarkable reliability of even consumer-grade SSDs.
A 2016 study of "millions of drive days" of SSDs in production use over a six-year period found that SSDs failed at a "much lower" rate than HDDs, but had a potential for localized data loss, because unreadable blocks were more of a problem than with HDDs. It arrived at a number of "unexpected conclusions":
- In the real world, MLC-based designs - believed to be less reliable than SLC designs - are often as reliable as SLC. (The findings state that "SLC [is] not generally more reliable than MLC".)
- Device age, measured by days in use, is the main factor in SSD reliability, not the amount of data read or written. Because this finding persists after controlling for early failures and other factors, it is likely that factors such as "silicon aging" are the cause of this trend. The correlation is significant (roughly 0.2-0.4).
- The raw bit error rate (RBER) grows much more slowly than is usually believed, is not exponential as often assumed, and is not a good predictor of other errors or of SSD failure.
- The uncorrectable bit error rate (UBER) is widely used but is not a good predictor of failure either. However, SSD UBER rates are higher than those for HDDs, so although they do not predict failure, they can lead to data loss, since unreadable blocks are more common on SSDs than HDDs. The conclusion states that although SSDs are more reliable overall, the rate of uncorrectable errors that can affect users is higher.
- "Bad blocks on new SSDs are common, and drives with a large number of bad blocks are much more likely to lose hundreds of other blocks, most likely due to dead or chip failure 30-80 percent of SSDs develop at least one bad block and 2-7 percent developing at least one bad chip in the first four years of deployment. "
- There is no sharp increase in errors after the expected lifetime is reached.
- Most SSDs develop no more than a few bad blocks, perhaps 2-4. SSDs that develop many bad blocks often go on to develop far more (perhaps hundreds), and may be prone to failure. However, most drives (99%) ship with factory bad blocks. The overall finding was that bad blocks are common: 30-80% of drives will develop at least one in use, and even a few bad blocks (2-4) are a predictor of up to hundreds of bad blocks at a later time. The factory bad block count correlates with the later development of further bad blocks. The report's conclusion added that SSDs tended to have either "less than a handful" of bad blocks or "a large number", and suggested that this might be a basis for predicting eventual failure.
- About 2-7% of SSDs will develop bad chips in their first four years of use. Over two-thirds of these chips will have breached their manufacturers' tolerances and specifications, which typically guarantee that no more than 2% of the blocks on a chip will fail within its expected write lifetime.
- Of the SSDs that require repair (warranty servicing), 96% need repair only once in their life. Days between repairs varied from "a couple of thousand days" to "nearly 15,000 days", depending on the model.
Data recovery and secure deletion
Solid-state drives have set new challenges for data recovery companies, as the method of storing data is non-linear and much more complex than that of hard disk drives. The strategy by which the drive operates internally can vary largely between manufacturers, and the TRIM command zeroes the whole range of a deleted file. Wear leveling also means that the physical address of the data and the address exposed to the operating system are different.
To securely delete data, the ATA Secure Erase command can be used. A program such as hdparm can be used for this purpose.
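As an illustrative sketch (assuming a Linux system where the target SSD appears as /dev/sdX and is not in a "frozen" security state; the device name is a placeholder):

    # verify that the drive supports the ATA security feature set and is "not frozen"
    hdparm -I /dev/sdX
    # set a temporary user password, which the security-erase command requires
    hdparm --user-master u --security-set-pass p /dev/sdX
    # issue the ATA Secure Erase command, destroying all data on the drive
    hdparm --user-master u --security-erase p /dev/sdX

The drive's controller then resets all cells internally, which on an SSD typically completes far faster than overwriting an HDD would.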
Endurance
The JEDEC Solid State Technology Association (JEDEC) has published standards for SSD reliability metrics:
- Unrecoverable Bit Error Rate (UBER)
- Terabytes Written (TBW) - the number of terabytes that can be written to a drive within its warranty period
- Drive Writes Per Day (DWPD) - the number of times the drive's total capacity may be written per day within its warranty period
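The last two metrics are directly related. As a worked example with assumed figures (a hypothetical 1 TB drive rated for 1,825 TBW over a 5-year warranty):

    DWPD = TBW / (365 × warranty in years × capacity in TB)
         = 1,825 / (365 × 5 × 1)
         = 1 drive write per day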
Applications
Until 2009, SSDs were mainly used in those aspects of mission-critical applications where the speed of the storage system needed to be as high as possible. Since flash memory has become a common component of SSDs, falling prices and increased densities have made it more cost-effective for many other applications. Organizations that can benefit from faster access to system data include equity trading firms, telecommunications companies, and streaming media and video editing companies. The list of applications which could benefit from faster storage is vast.
Flash-based solid-state drives can be used to create network appliances from general-purpose personal computer hardware. A write-protected flash drive containing the operating system and application software can substitute for larger, less reliable disk drives or CD-ROMs. Appliances built this way can provide an inexpensive alternative to expensive router and firewall hardware.
SSDs based on an SD card with a live SD operating system are easily write-locked. Combined with a cloud computing environment or other writable medium, to maintain persistence, an OS booted from a write-locked SD card is robust, rugged, reliable, and impervious to permanent corruption. If the running OS degrades, simply turning the machine off and then on returns it to its initial uncorrupted state, and it is thus particularly solid. The OS installed on the SD card does not require the removal of corrupted components, since it was write-locked, though any written media may need to be restored.
Hard drive cache
In 2011, Intel introduced a caching mechanism for their Z68 chipset (and mobile derivatives) called Smart Response Technology, which allows a SATA SSD to be used as a cache (configurable as write-through or write-back) for a conventional, magnetic hard disk drive. A similar technology is available on HighPoint's RocketHybrid PCIe card.
Solid-state hybrid drives (SSHDs) are based on the same principle, but integrate some amount of flash memory on board of a conventional drive instead of using a separate SSD. The flash layer in these drives can be accessed independently of the magnetic storage by the host, using ATA-8 commands, allowing the operating system to manage it. For example, Microsoft's ReadyDrive technology explicitly stores portions of the hibernation file in the cache of these drives when the system hibernates, making the subsequent resume faster.
Dual-drive hybrid systems combine the use of separate SSD and HDD devices installed in the same computer, with overall performance optimization managed by the computer user, or by the computer's operating system software. Examples of this type of system are bcache and dm-cache on Linux, and Apple's Fusion Drive.
File system support for SSD
Usually the same file systems used on hard disk drives can also be used on solid-state drives. It is generally expected that the file system supports the TRIM command, which helps the SSD recycle discarded data (support for TRIM arrived some years after SSDs themselves, but is now nearly universal). This means the file system does not need to manage wear leveling or other flash memory characteristics, as they are handled internally by the SSD. Some flash file systems using log-based designs (F2FS, JFFS2) help to reduce write amplification on SSDs, especially in situations where only very small amounts of data are changed, such as when updating file system metadata.
While not a file system feature, operating systems should also aim to align partitions correctly, which avoids excessive read-modify-write cycles. A typical practice for personal computers is to have each partition aligned to start at a 1 MB (= 1,048,576 bytes) mark, which covers all common SSD page and block size scenarios, as it is divisible by all commonly used sizes - 1 MB, 512 KB, 128 KB, 4 KB, and 512 bytes. Modern operating system installation software and disk tools handle this automatically.
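As a quick illustration (assuming a Linux system with GNU parted and a first partition on /dev/sda; the device name is a placeholder), alignment can be verified with:

    # report whether partition 1 of /dev/sda starts on an optimally aligned boundary
    parted /dev/sda align-check optimal 1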
Linux
The ext4, Btrfs, XFS, JFS, and F2FS file systems include support for the discard (TRIM or UNMAP) function. As of November 2013, ext4 could be recommended as a safe option. F2FS is a modern file system optimized for flash-based storage, and from a technical perspective is a very good option, but was still in an experimental stage at the time.
Kernel support for the TRIM operation was introduced in version 2.6.33 of the Linux kernel mainline, released on February 24, 2010. To make use of it, a file system must be mounted using the discard parameter. Linux swap partitions perform discard operations by default when the underlying drive supports TRIM, with the possibility to turn them off, or to select between one-time and continuous discard operations. Support for queued TRIM, a SATA 3.1 feature that allows TRIM commands to be issued without disrupting the command queues, was introduced in Linux kernel 3.12, released on November 2, 2013.
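A minimal sketch of such a mount configuration (assuming an ext4 root file system on /dev/sda1; the device and mount point are illustrative):

    # /etc/fstab entry enabling continuous TRIM via the discard mount option
    /dev/sda1  /  ext4  defaults,discard  0  1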
An alternative to the kernel-level TRIM operation is to use a user-space utility called fstrim, which goes through all of the unused blocks in a file system and dispatches TRIM commands for those areas. The fstrim utility is usually run by cron as a scheduled task. As of November 2013, it was used by the Ubuntu Linux distribution, in which it was enabled only for Intel and Samsung solid-state drives for reliability reasons; vendor verification can be disabled by editing the file /etc/cron.weekly/fstrim using instructions contained within the file itself.
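A minimal example of this batched approach (assuming the file system mounted at / supports discard):

    # trim all unused blocks in the root file system, reporting how much was discarded
    fstrim -v /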
Since 2010, standard Linux drive utilities have taken care of appropriate partition alignment by default.
Linux performance considerations
During installation, Linux distributions usually do not configure the installed system to use TRIM, and thus the /etc/fstab file requires manual modification. This is because of the notion that the current Linux TRIM implementation might not be optimal; it has been proven to cause performance degradation instead of improvement under certain circumstances. As of January 2014, Linux sent an individual TRIM command to each sector, instead of a vectorized list defining a TRIM range as recommended by the TRIM specification. This deficiency has existed for years and there were no known plans to eliminate it.
For performance reasons, it is recommended to switch the I/O scheduler from the default CFQ (Completely Fair Queuing) to NOOP or Deadline. CFQ was designed for traditional magnetic media and seek optimization, so many of its I/O scheduling efforts are wasted when used with SSDs. As part of their designs, SSDs offer much bigger levels of parallelism for I/O operations, so it is preferable to leave scheduling decisions to their internal logic - especially for high-end SSDs.
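As an illustrative sketch (assuming the SSD appears as /dev/sda; the change lasts only until reboot):

    # show the available I/O schedulers, with the active one in brackets
    cat /sys/block/sda/queue/scheduler
    # switch this device to the noop scheduler
    echo noop > /sys/block/sda/queue/scheduler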
A scalable block layer for high-performance SSD storage, known as blk-multiqueue or blk-mq and developed primarily by Fusion-io engineers, was merged into the Linux kernel mainline in kernel version 3.13, released on January 19, 2014. It leverages the performance offered by SSDs and NVM Express by allowing much higher I/O submission rates. With this new design of the Linux kernel block layer, internal queues are split into two levels (per-CPU and hardware-submission queues), thus removing bottlenecks and allowing much higher levels of I/O parallelization. As of version 4.0 of the Linux kernel, released on April 12, 2015, the VirtIO block driver, the SCSI layer (which is used by Serial ATA drivers), the device mapper framework, the loop device driver, the unsorted block images (UBI) driver (which implements an erase block management layer for flash memory devices) and the RBD driver (which exports Ceph RADOS objects as block devices) have been modified to actually use this new interface; other drivers will be ported in the following releases.
OS X
Versions of OS X since 10.6.8 (Snow Leopard) support TRIM, but only when used with Apple-purchased SSDs. TRIM is not automatically enabled for third-party drives, although it can be enabled by using third-party utilities such as Trim Enabler. The status of TRIM can be checked in the System Information application or in the system_profiler command-line tool.
OS X 10.11 (El Capitan) and 10.10.4 (Yosemite) include sudo trimforce enable as a Terminal command that enables TRIM on non-Apple SSDs. There is also a technique to enable TRIM in versions earlier than 10.6.8, although it remains uncertain whether TRIM is actually utilized properly in those cases.
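For example, the TRIM state can also be read from the command line (output details may vary between OS X releases):

    # print SATA device details, including a "TRIM Support:" line for each drive
    system_profiler SPSerialATADataType | grep "TRIM Support"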
Microsoft Windows
Versions of Microsoft Windows prior to Windows 7 did not take any special measures to support solid-state drives. Starting from Windows 7, the standard NTFS file system provides TRIM support (other file systems on Windows do not support TRIM).
By default, Windows 7 and newer versions execute TRIM commands automatically if the device is detected to be a solid-state drive. To change this behavior, the value DisableDeleteNotification in the Registry key HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem can be set to 1 to prevent the mass storage driver from issuing TRIM commands. This can be useful in situations where data recovery is preferred over wear leveling (in most cases, TRIM irreversibly resets all freed space).
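Windows also exposes this setting through the built-in fsutil tool, run from an elevated command prompt (a queried value of 0 means TRIM is enabled):

    rem query whether delete notifications (TRIM) are currently enabled
    fsutil behavior query DisableDeleteNotify
    rem disable TRIM by setting DisableDeleteNotification to 1
    fsutil behavior set DisableDeleteNotify 1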
Windows implements the TRIM command for more than just file-delete operations. The TRIM operation is fully integrated with partition- and volume-level commands such as format and delete, with file system commands relating to truncate and compression, and with the System Restore (also known as Volume Snapshot) feature.
Windows 7 and later
Windows 7 and later versions have native support for SSDs. The operating system detects the presence of an SSD and optimizes operation accordingly. For SSD devices, Windows disables SuperFetch and ReadyBoost, and boot-time and application prefetching operations. Despite the initial statement by Steven Sinofsky before the release of Windows 7, however, defragmentation is not disabled, even though its behavior on SSDs differs. One reason is the low performance of the Volume Shadow Copy Service on fragmented SSDs. The second reason is to avoid reaching the practical maximum number of file fragments that a volume can handle; if this maximum were reached, subsequent attempts to write to the drive would fail with an error message.
Windows 7 also includes support for the TRIM command, to reduce garbage collection of data which the operating system has already determined is no longer valid. Without support for TRIM, the SSD would be unaware that this data is invalid and would unnecessarily continue to rewrite it during garbage collection, causing further wear on the SSD. It is beneficial to make some changes that prevent SSDs from being treated more like HDDs, for example disabling defragmentation, not filling them to more than about 75% of capacity, not storing frequently written-to files such as log and temporary files on them if a hard drive is available, and enabling the TRIM process.
Windows Vista
Windows Vista generally expects hard disk drives rather than SSDs. Windows Vista includes ReadyBoost to exploit the characteristics of USB-connected flash devices, but for SSDs it only improves the default partition alignment to prevent read-modify-write operations that reduce the speed of SSDs. Most SSDs are typically split into 4 KB sectors, while most systems are based on 512-byte sectors with their default partition setups unaligned to the 4 KB boundaries. Proper alignment does not help the SSD's endurance over the life of the drive; however, some Vista operations, if not disabled, can shorten the life of the SSD.
Drive defragmentation should be disabled, because the location of the file components on an SSD does not significantly impact its performance, while moving the files to make them contiguous using the Windows Defrag routine will cause unnecessary write wear on the limited number of P/E cycles of the SSD. The SuperFetch feature will not materially improve system performance and causes additional overhead in the system and SSD, although it does not cause wear. Windows Vista does not send the TRIM command to solid-state drives, but some third-party utilities such as SSD Doctor will periodically scan the drive and TRIM the appropriate entries.
ZFS
Solaris, as of version 10 Update 6 (released in October 2008), and recent versions of OpenSolaris, Solaris Express Community Edition, Illumos, Linux with ZFS on Linux, and FreeBSD can all use SSDs as a performance booster for ZFS. A low-latency SSD can be used for the ZFS Intent Log (ZIL), where it is named the SLOG. This is used every time a synchronous write to the drive occurs. An SSD (not necessarily low-latency) may also be used for the level 2 Adaptive Replacement Cache (L2ARC), which is used to cache data for reading. When used either alone or in combination, large increases in performance are generally seen.
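A minimal sketch of adding such devices (assuming an existing pool named tank and SSDs appearing as /dev/ada1 and /dev/ada2; all names are illustrative):

    # dedicate a low-latency SSD to the ZFS Intent Log (the SLOG)
    zpool add tank log /dev/ada1
    # dedicate another SSD as a level 2 ARC read cache
    zpool add tank cache /dev/ada2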
FreeBSD
ZFS for FreeBSD introduced support for TRIM on September 23, 2012. The code builds a map of regions of data that were freed; on every write, the code consults the map and eventually removes ranges that were freed before but are now overwritten. There is a low-priority thread that TRIMs ranges when the time comes.
The Unix File System (UFS) also supports the TRIM command.
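On FreeBSD, the UFS TRIM flag can be toggled with tunefs on an unmounted file system (assuming it lives on /dev/ada0p2; the device name is a placeholder):

    # enable the TRIM flag on a UFS file system
    tunefs -t enable /dev/ada0p2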
Swap partition
- According to former Microsoft Windows division president Steven Sinofsky, "there are few files better than the pagefile to place on an SSD". According to collected telemetry data, Microsoft had found pagefile.sys to be an ideal match for SSD storage.
- Linux swap partitions perform TRIM operations by default when the underlying block device supports TRIM, with the possibility to turn them off, or to select between one-time and continuous TRIM operations (see the example after this list).
- If an operating system does not support using TRIM on discrete swap partitions, it might be possible to use swap files inside an ordinary file system instead. For example, OS X does not support swap partitions; it only swaps to files within a file system, so it can use TRIM when, for example, swap files are deleted.
- DragonFly BSD allows SSD-configured swap to also be used as file system cache. This can be used to boost performance on both desktop and server workloads. The bcache, dm-cache, and Flashcache projects provide a similar concept for the Linux kernel.
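A minimal sketch of the Linux swap case referenced above (assuming /dev/sda2 is the swap partition; the device name is illustrative):

    # /etc/fstab swap entry with continuous discard enabled
    /dev/sda2  none  swap  sw,discard  0  0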
Standardization organizations
The following are standardization organizations and bodies that work to create standards for solid-state drives (and other computer storage devices). The table below also includes organizations which promote the use of solid-state drives. This is not necessarily an exhaustive list.
Commercialization
Availability
Solid-state drive technology has been marketed to the military and niche industrial markets since the mid-1990s.
Along with the emerging enterprise market, SSDs have appeared in ultra-mobile PCs and a few lightweight laptop systems, adding significantly to the price of the laptop, depending on the capacity, form factor and transfer speeds. For low-end applications, a USB flash drive may be obtainable for anywhere from $10 to $100 or so, depending on capacity and speed; alternatively, a CompactFlash card may be paired with a CF-to-IDE or CF-to-SATA converter at a similar cost. Either of these requires that write-cycle endurance issues be managed, either by refraining from storing frequently written files on the drive or by using a flash file system. Standard CompactFlash cards usually have write speeds of 7 to 15 MB/s, while the more expensive cards claim speeds of up to 60 MB/s.
One of the first mainstream releases of SSDs was the XO Laptop, built as part of the One Laptop Per Child project. Mass production of these computers, built for children in developing countries, began in December 2007. These machines use 1,024 MiB of SLC NAND flash as primary storage, which is considered more suitable for the harsher-than-normal conditions in which they are expected to be used. Dell began shipping ultra-portable laptops with SanDisk SSDs on April 26, 2007. Asus released the Eee PC subnotebook on October 16, 2007, with 2, 4 or 8 gigabytes of flash memory. On January 31, 2008, Apple released the MacBook Air, a thin laptop with an optional 64 GB SSD. The Apple Store cost was $999 more for this option, as compared with that of an 80 GB 4200 RPM hard disk drive. Another option, the Lenovo ThinkPad X300 with a 64 gigabyte SSD, was announced by Lenovo in February 2008. On August 26, 2008, Lenovo released the ThinkPad X301 with a 128 GB SSD option, which added approximately US$200.
In 2008, low-end netbooks appeared with SSDs. In 2009, SSDs began to appear in laptops.
On January 14, 2008, EMC Corporation (EMC) became the first enterprise storage vendor to ship flash-based SSDs in its product portfolio, when it announced it had selected STEC, Inc.'s Zeus-IOPS SSDs for its Symmetrix DMX systems. In 2008, Sun released the Sun Storage 7000 Unified Storage Systems (codenamed Amber Road), which use both solid-state drives and conventional hard drives to take advantage of the speed offered by SSDs and the economy and capacity offered by conventional HDDs.
Dell began offering optional 256 GB solid-state drives on select notebook models in January 2009. In May 2009, Toshiba launched a laptop with a 512 GB SSD.
Since October 2010, Apple's MacBook Air line has used a solid-state drive as standard. As of December 2010, the OCZ RevoDrive X2 PCIe SSD was available in capacities from 100 GB to 960 GB, delivering sequential speeds of over 740 MB/s and small-file random writes of up to 120,000 IOPS. In November 2010, Fusion-io released its highest-performing SSD drive, named ioDrive Octal, utilizing a PCI Express x16 Gen 2.0 interface with 5.12 TB of storage space, a read speed of 6.0 GB/s, a write speed of 4.4 GB/s and a low latency of 30 microseconds. It has 1.19 million 512-byte read IOPS and 1.18 million 512-byte write IOPS.
In 2011, computers based on Intel's Ultrabook specification became available. This specification dictates that Ultrabooks use an SSD. These are consumer-level devices (unlike many previous flash offerings aimed at enterprise users), and represent the first widely available consumer computers using SSDs aside from the MacBook Air. At CES 2012, OCZ Technology demonstrated the R4 CloudServ PCIe SSDs, capable of reaching transfer speeds of 6.5 GB/s and 1.4 million IOPS. Also announced was the Z-Drive R5, available in capacities up to 12 TB and capable of reaching transfer speeds of 7.2 GB/s and 2.52 million IOPS using PCI Express x16 Gen 3.0.
In December 2013, Samsung introduced and launched the industry's first 1 TB mSATA SSD. In August 2015, Samsung announced a 16 TB SSD, at the time the world's highest-capacity single storage device of any type.
Quality and performance
In general, the performance of any particular device can vary significantly under different operating conditions. For example, the number of parallel threads accessing the storage device, the I/O block size, and the amount of free space remaining can all dramatically change the performance (i.e., the transfer rates) of the device.
SSD technology has been developing rapidly. Most of the performance measurements used on disk drives with rotating media are also used on SSDs. Performance of flash-based SSDs is difficult to benchmark because of the wide range of possible conditions. In a test performed in 2010 by Xssist, using IOmeter with 4 kB random 70% read/30% write at a queue depth of 4, the IOPS delivered by the Intel X25-E 64 GB G1 started around 10,000 IOPS, dropped sharply after 8 minutes to 4,000 IOPS, and continued to decrease gradually over the next 42 minutes. IOPS varied between 3,000 and 4,000 from around 50 minutes onwards for the rest of the 8+ hour test run.
Write amplification is the main reason for the change in performance of an SSD over time. Designers of enterprise-grade drives try to avoid this performance variation by increasing over-provisioning, and by employing wear-leveling algorithms that move data only when the drives are not heavily utilized.
Sales
SSD shipments were 11 million units in 2009, 17.3 million units in 2011 for a total of US$5 billion, and 39 million units in 2012, and were expected to rise to 83 million units in 2013, to 201.4 million units in 2016 and to 227 million units in 2017.
Revenues for the SSD market (including low-cost PC solutions) worldwide totaled $585 million in 2008, up more than 100% from $259 million in 2007.
See also
- Solid-state storage
- List of hard disk drive manufacturers
- RAID
Further reading
- "Solid-state revolution: deep on how SSDs really work". Lee Hutchinson. Ars Technica. June 4, 2012.
- Mai Zheng, Joseph Tucek, Feng Qin, Mark Lillibridge, "Understanding the Robustness of SSDs under Power Fault", FAST '13
- Cheng Li, Philip Shilane, Fred Douglis, Hyong Shim, Stephen Smaldone, Grant Wallace, "Nitro: A Capacity-Optimized SSD Cache for Primary Storage", USENIX ATC '14
- Cheng Li, Philip Shilane, Fred Douglis, Grant Wallace, "Pannier: A Container-based Flash Cache for Compound Objects", ACM/IFIP/USENIX Middleware '15
External links
- Background and general
- The SSD Guide, StorageReview.com
- A guide to understanding Solid State Drive
- SSD versus laptop HDD and upgrade experience
- Understanding SSDs and New Drives from OCZ
- Charting the 30-year rise of the solid-state disk market
- Investigation: Is Your SSD More Reliable Than a Hard Drive? - a long-term SSD reliability review
- SSD return rates by manufacturer (2012), hardware.fr (in French) - 2012 update of a 2010 report, based on data from a leading French technology retailer
- Enterprise SSD Form Factor Version 1.0a, SSD Form Factor Working Group, December 12, 2012
- Other
- Ted Ts'o - Aligning filesystems to an SSD's erase block size
- JEDEC Continues SSD Standardization Efforts
- Linux & amp; NVM: System Challenges and Storage (PDF)
- Linux and SSD Optimization
- Understanding the Robustness of SSDs under Power Fault (USENIX FAST 2013, by Mai Zheng, Joseph Tucek, Feng Qin, and Mark Lillibridge)
- SSD vs. m.2, FrugalGaming, by James Heinfield