Datacenters, the infrastructure used to store, manage, and process data for enterprises and organizations, are growing continuously as digitization and information technologies scale up. Datacenters date back to the 1940s and have evolved continuously over the decades to handle rising volumes of data. As trends such as Big Data and Data Analytics, Artificial Intelligence, Machine Learning, the Internet of Things (IoT), and the Industrial Internet of Things (IIoT) emerge, enterprises, governments, and industrial organizations across all sectors show heightened interest in leveraging digitization and data monetization for growth. According to Cisco, Internet Protocol (IP) data traffic is estimated to grow threefold, from 6.8 Zettabytes in 2016 to 20.6 Zettabytes by 2020. Cisco also projects that datacenter workloads and compute instances will grow 2.3-fold by 2020, and that organizations will switch to public cloud infrastructures to manage their IT operations and analytics needs. This growth is clearly driven by rising numbers of consumers demanding high-definition (HD) video and audio streaming, large file sharing, faster computing for applications such as financial transactions and stock trading, and real-time data analytics for self-driving cars, Industry 4.0, crime mitigation, and so on. The list of use cases goes on.
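As a rough sanity check, the Cisco figures above imply an annual growth rate of roughly 32 percent. A minimal calculation (the Zettabyte values come from the forecast quoted above; the helper function is ours, for illustration only):

```python
# Annualized growth rate implied by IP traffic rising from
# 6.8 ZB (2016) to 20.6 ZB (2020), per the Cisco forecast cited above.
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate over the given number of years."""
    return (end / start) ** (1 / years) - 1

growth = cagr(6.8, 20.6, 2020 - 2016)
print(f"Implied CAGR: {growth:.1%}")  # roughly 32% per year
```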
All these applications demand that datacenters scale up and upgrade to deliver high performance. However, setting up such hyperscale datacenters carries substantial cost; Facebook, for instance, announced a $1 billion investment in a new datacenter to be established in Singapore. To justify the cost and deliver high performance efficiently, cost-effectively, and securely, datacenters need advances across their component ecosystem, with particular emphasis on semiconductors. Hence, this article focuses on identifying the different semiconductor devices used in a datacenter, along with an overview of the advances under way to support their development.
Semiconductors have emerged as a primary component of datacenter success, spanning devices such as central processing units (CPUs), graphics processing units (GPUs), memory, and chips for network infrastructure and power management.
Big Data, Data Analytics, and the Internet of Things have together produced a data explosion. A Big Data pipeline encompasses collecting data at end points through field nodes such as sensors and micro-electro-mechanical systems (MEMS), then transmitting it over network chips and network infrastructure to be processed in datacenters. The challenge in handling and processing such data stems not only from its huge volume, but also from its unorganized structure, which calls for high computing power to separate the right data from noise, organize it, and produce meaningful business insights, all within a short time frame. Traditional computing designs such as the von Neumann architecture, which separates memory from processing, struggle to keep up with these workloads. Computing chips are therefore seeing new developmental approaches, including changes to architecture and materials and a focus on accelerators and application-specific integrated circuits (ASICs). In the short term, however, field-programmable gate arrays (FPGAs) and GPUs, including general-purpose GPUs (GPGPUs), are expected to remain the preferred alternatives.
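The pattern that makes GPUs and FPGAs attractive for such workloads is data parallelism: the same operation applied independently to many chunks of data at once. A toy sketch of that pattern, here spread across CPU processes with the Python standard library (the noise threshold and sample readings are made up for the example):

```python
# Data-parallel filtering: the same noise filter applied independently
# to each chunk, the access pattern accelerators exploit at scale.
from concurrent.futures import ProcessPoolExecutor

def filter_chunk(chunk: list, threshold: float = 0.5) -> list:
    """Keep only the readings above the (hypothetical) noise threshold."""
    return [x for x in chunk if x > threshold]

def main() -> None:
    # Made-up sensor readings, split into chunks for parallel processing.
    chunks = [[0.1, 0.9, 0.4], [0.7, 0.2, 0.8], [0.6, 0.3, 0.95]]
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(filter_chunk, chunks))
    print(results)  # [[0.9], [0.7, 0.8], [0.6, 0.95]]

if __name__ == "__main__":
    main()
```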
Memory chips are the key devices for storing and managing data in datacenters. Because their performance is crucial to the success of datacenter operations, there is strong demand to improve on the current double data rate (DDR) class of memories. Memory devices are expected not only to handle huge volumes of data at high bandwidth, but to do so at optimized cost and low power consumption. In the near term, DDR4 buffer chips and DDR5 are expected to serve datacenters, while next-generation memory devices such as high bandwidth memory (HBM) and graphics DDR (GDDR) are expected to be adopted over time.
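The bandwidth gap between these memory classes follows from simple arithmetic: peak theoretical bandwidth is the transfer rate multiplied by the bus width. The DDR4-3200 and HBM2 parameters below are typical published figures used for illustration; they are not drawn from this article:

```python
# Peak theoretical bandwidth = transfer rate (MT/s) * bus width (bytes).
def peak_bandwidth_gbs(transfers_mt_s: float, bus_width_bits: int) -> float:
    """Peak bandwidth in GB/s from mega-transfers/s and bus width in bits."""
    return transfers_mt_s * 1e6 * (bus_width_bits / 8) / 1e9

ddr4 = peak_bandwidth_gbs(3200, 64)    # one DDR4-3200 channel, 64-bit bus
hbm2 = peak_bandwidth_gbs(2000, 1024)  # one HBM2 stack, 1024-bit interface
print(f"DDR4-3200 channel: {ddr4:.1f} GB/s")  # 25.6 GB/s
print(f"HBM2 stack:        {hbm2:.1f} GB/s")  # 256.0 GB/s
```

The order-of-magnitude difference comes almost entirely from the much wider interface that stacked memory makes practical.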
While the industry looks to advance memory classes for datacenters, an alternative approach is to push computing and memory handling out to the edge of the network; this, however, requires an improved class of memory. NVDIMM-P, a hybrid memory technology based on the dual in-line memory module (DIMM), is among the storage-class memories expected to deliver such benefits. An NVDIMM is a persistent, non-volatile memory device that also sits physically close to the processor, allowing large data volumes to be processed faster. The industry therefore hopes to use NVDIMMs in IoT devices to avoid or divert a huge influx of traffic into the network.
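What makes persistent memory different from storage is its access model: software reads and writes it with load/store-style operations on a memory mapping rather than through read()/write() I/O calls. A minimal sketch of that model, using an ordinary file to stand in for the persistent device (on real NVDIMM hardware, a DAX-mapped file would bypass the page cache entirely; the file name here is made up):

```python
# Sketch of the load/store access model persistent memory exposes.
import mmap
import os

PATH = "nvdimm_demo.bin"  # hypothetical file standing in for the device

with open(PATH, "wb") as f:
    f.write(b"\x00" * 4096)  # reserve one page of "persistent" space

with open(PATH, "r+b") as f:
    with mmap.mmap(f.fileno(), 4096) as mem:
        mem[0:5] = b"hello"   # store directly into the mapping
        mem.flush()           # push the store to the backing medium

with open(PATH, "rb") as f:
    print(f.read(5))  # b'hello' survives the unmap, as it would a power cycle
os.remove(PATH)
```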
The semiconductor industry is striving to keep Moore's law on track for applications such as datacenters, but it is proving economically and technically hard to scale beyond 10 nanometer (nm) structures. As transistor scaling slows, advances are also being explored in packaging, including interconnects, to deliver the benefits end-user applications such as datacenters require. Interconnect technologies are widely used in datacenter applications to place two or more different chips on a single interconnect substrate, such as a silicon interposer, which establishes the electrical connections between them. Simply put, interconnects enable heterogeneous chips to be combined into 2.5D and 3D integrated circuits (ICs) before being placed on printed circuit boards (PCBs), significantly improving processing speed.
Research efforts are also focused on interposer materials such as the glass interposer, which could deliver high input/output (I/O) connection density at a much lower cost.
In a nutshell, datacenter growth is driving a wide range of development activity across semiconductor devices and packaging technologies. With demand for data increasing rapidly, semiconductors will continue to advance and reach new milestones.