Once completed, Meta expects its new AI Research SuperCluster (RSC) to be among the fastest AI supercomputers in the world.
The supercomputer is currently under construction in stages. The RSC currently includes 6,080 Nvidia A100 GPUs, coupled with 175 petabytes of mass storage, 46 petabytes of cache storage, and 10 petabytes of networked file system storage. Each GPU communicates with the others over a 200 Gb/s InfiniBand HDR network.
When finished, the RSC will have 16,000 GPUs and a data infrastructure capable of serving one exabyte of training data, outperforming its current state by roughly 2.5 times. Meta anticipates reaching the final stage in July.
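As a sanity check on the stated figure, the jump from 6,080 to 16,000 GPUs can be worked out with simple arithmetic. This assumes performance scales roughly linearly with GPU count, which ignores networking and scheduling overheads:

```python
# Back-of-the-envelope check of Meta's stated ~2.5x scale-up.
# Assumes performance scales roughly linearly with GPU count.
current_gpus = 6_080
final_gpus = 16_000

scale_up = final_gpus / current_gpus
print(f"GPU count scale-up: {scale_up:.2f}x")  # ~2.63x, in line with "roughly 2.5 times"
```

The result, about 2.63x, is consistent with the "roughly 2.5 times" claim once real-world scaling losses are accounted for.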
Natural language processing (NLP)
Meta created the RSC to train massive AI models in natural language processing (NLP), and to search and train models with trillions of samples. It will also aid in the development of AI capable of operating across hundreds of languages, analyzing media, and powering augmented reality capabilities. Furthermore, the company claims that developing more advanced AI for speech and vision processing will help it better identify harmful content.
Meta aims to use the RSC in the future to develop entirely new AI models for workloads such as real-time translation and collaboration, and ultimately to build a richer metaverse.
According to Meta, the new system is already 20 times faster than its current Nvidia V100 clusters on computer vision workflows, and three times faster than its previous research clusters at training large-scale NLP models.
The company's release also addressed privacy in broad terms. According to Meta, the entire data pipeline is encrypted, from the storage systems all the way to the GPUs, and all data undergoes a privacy review to ensure it has been properly anonymized. Furthermore, because the data is decrypted only in memory, even a physical security compromise would not result in data leakage.
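The pattern described (data encrypted at rest, decrypted only in memory just before use) can be sketched as follows. This is an illustrative example, not Meta's actual implementation; a toy one-time-pad XOR stands in for a real cipher such as AES-GCM so the snippet runs with the standard library alone, and the `load_training_batch` helper is hypothetical:

```python
# Illustrative sketch of an encrypted data pipeline: training data is
# stored as ciphertext and decrypted only in memory, just before use.
# NOT Meta's implementation -- a toy one-time-pad XOR stands in for a
# real cipher (e.g. AES-GCM) so the example runs with the stdlib only.
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """XOR each byte of data with the key (toy cipher; key must match length)."""
    return bytes(d ^ k for d, k in zip(data, key))

sample = b"training sample bytes"
key = secrets.token_bytes(len(sample))   # in practice, held by a key-management service

# What the storage tier holds: ciphertext only.
stored_blob = xor_bytes(sample, key)

def load_training_batch(blob: bytes, key: bytes) -> bytes:
    """Decrypt a stored blob in memory, immediately before use.

    Plaintext never touches disk; a blob exfiltrated from storage
    is useless without the key.
    """
    return xor_bytes(blob, key)

batch = load_training_batch(stored_blob, key)
```

The key property is that plaintext exists only transiently in memory: an attacker who obtains the stored blob, but not the key, learns nothing.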