Micron recently released the 9300 series SSD, and the company is positioning it for AI workloads. If you are interested, you can read this post to get some related information.
Micron Puts SSD into AI Mix
Nowadays, there is more and more talk about the memory and architecture needed for artificial intelligence (AI) and machine learning workloads. Micron Technology has now introduced a high-performance, high-capacity solid-state drive that puts flash firmly in the mix.
This SSD is the recently released 9300 series drive. It uses the NVM Express (NVMe) protocol and is aimed at data-intensive applications, with 3.5 GB/s of throughput on both reads and writes.
Micron has released a new series of SSDs – the Micron 9300 SSD. It is designed to handle large enterprise workloads, delivering fast transfer speeds.
What Can Micron 9300 SSD Do for AI
Cliff Smith, Micron’s product line manager, put it this way:
“Latency is becoming much more important in the enterprise and cloud work space where the response time for the application is pretty important, so that your infrastructure can respond to more user requests on a given server storage platform”.
In general, that is the target market for the Micron 9300 series. Beyond performance, it has other selling points, such as consuming 28 percent less power than the company’s previous generation of NVMe SSDs. In addition, the drive’s capacity reaches as high as 15.36 TB, and its 32 NVMe namespaces allow the storage space to be used as efficiently as possible.
That performance and capacity put the 9300 drive in a position to meet the needs of AI and machine learning. Smith also said that the throughput and capacity let the SSD ingest large datasets quickly. When you load a dataset for a learning algorithm, you are essentially just writing, and the 9300 series drive can write that data quickly.
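As a rough back-of-the-envelope check (not a Micron benchmark, just arithmetic on the article's two headline figures), the following sketch estimates how long it would take to fill the drive once at its rated write speed:

```python
# Back-of-the-envelope ingest time using the article's headline figures:
# 15.36 TB capacity and 3.5 GB/s sequential write throughput.
capacity_tb = 15.36
write_gb_per_s = 3.5

capacity_gb = capacity_tb * 1000          # decimal TB -> GB, as vendors spec it
seconds = capacity_gb / write_gb_per_s    # time to write the whole drive once
minutes = seconds / 60

print(f"Filling {capacity_tb} TB at {write_gb_per_s} GB/s takes about {minutes:.0f} minutes")
```

In other words, even a full-capacity dataset can be staged onto the drive in a little over an hour of sustained sequential writing, which is the "ingest large datasets quickly" claim in concrete terms.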
Once the dataset is in, the learning algorithm takes over, reading and training constantly. Training is iterative, so the algorithm scans the dataset again and again, which makes the drive's read performance matter just as much as its write performance.
But Micron is still working to streamline the extract, transform and load (ETL) processes. That would make it possible to move vast amounts of information from data lakes onto faster SSDs and then into the GPU complex.
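The staged pipeline described above (data lake → SSD → GPU) can be sketched as follows. This is a hypothetical illustration, not Micron software: the function names, paths, and batch logic are all assumptions made up for the example.

```python
# Hypothetical sketch of a staged training pipeline: bulk-copy a dataset from
# slow bulk storage (the "data lake") onto a fast local NVMe SSD, then stream
# batches off the SSD toward the GPU. All names and paths are illustrative.
import shutil
from pathlib import Path

def stage_to_ssd(lake_path: Path, ssd_path: Path) -> Path:
    """ETL 'load' step: copy the dataset onto the fast local SSD in one
    sequential write burst (the part the 9300's write throughput speeds up)."""
    ssd_path.mkdir(parents=True, exist_ok=True)
    for f in lake_path.iterdir():
        shutil.copy(f, ssd_path / f.name)
    return ssd_path

def load_batches(ssd_path: Path, batch_size: int = 4):
    """Training-loop read side: repeatedly stream files off the SSD in
    fixed-size batches, as a data loader feeding a GPU would."""
    files = sorted(ssd_path.iterdir())
    for i in range(0, len(files), batch_size):
        yield [f.read_bytes() for f in files[i:i + batch_size]]
```

The design point the article is making maps onto the two functions: today `stage_to_ssd` finishes before `load_batches` starts (sequential), while the parallel future would overlap the two so the GPU starts training before staging completes.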
That will speed up the learning and the generation of a model that can be put into production for inference. There is one disadvantage, though: with these processes running in parallel, data scientists will no longer be able to take long coffee breaks.
The Current Situation of AI Workloads
Today, machine learning runs sequentially: the ETL process caches the dataset on an SSD and then feeds it to the GPU. However, the company expects this to be done in parallel in the near future.
In some cases, the high capacity can be a drawback, because rebuilding the drive after a failure may take a long time.
It will still take several years before autonomous driving reaches the roadways. In the meantime, there are plenty of AI workloads that need to be addressed by various memory and storage technologies, including SSDs, both in the data center and at the edge in scenarios where immediate inference is needed.