    Intel® Data Center Blocks for Enterprise AI Inferencing

    Intel® Data Center Blocks for Enterprise AI Inferencing provide purpose-built infrastructure for low-latency, high-throughput inference performed on a CPU rather than a separate accelerator card. They give you a jumpstart on deploying efficient AI inferencing algorithms on a solution composed of validated Intel® architecture building blocks that you can innovate on and take to market. To do so, the solution uses Intel® Deep Learning Boost (Intel® DL Boost), a feature of 2nd Generation Intel® Xeon® Scalable processors that accelerates AI inferencing by performing in a single instruction calculations that previously required multiple instructions.

     Asset Overview