The advent of the Data Processing Unit or the I/O Processing Unit, or whatever you want to call it, was driven as much by economics as it was by architectural necessity.
Two facts have combined to make the DPU probable: chips are pressing up against reticle limits, and handling network and storage functions on the CPU is quite expensive compared to offloading them. The need to better secure server workloads, especially in multitenant environments, made the DPU inevitable. And now, the economics of that offload make the DPU not just palatable, but desirable.
There is a reason why Amazon Web Services invented its Nitro DPUs, why Google partnered with Intel to create the “Mount Evans” IPU, why AMD bought both Xilinx and Pensando (each of which has a DPU play), and why Nvidia bought Mellanox Technologies. The DPU, which is becoming the control point in the network and increasingly the gatekeeper to compute and storage, is at the center of the system architectures of these hyperscalers and of the IT vendors who want to propagate DPUs to the masses.
We have a lot of DPU theory and some hyperscale DPU practice, but as we have complained about in the past, we don’t have a lot of data that shows the cost/benefit analysis of DPUs in action. Nvidia heard our complaints and has put together some analysis using its BlueField …