HPE relies on Qualcomm chips for its Edgeline EL8000 server

On 08/09/2022, by Andy Patrizio, IDG NS (adapted by Jean Elyan), Infrastructure, 540 words

The Cloud AI 100 chip powers the HPE Edgeline EL8000 edge system, which delivers compute, storage, and management in a single device.

Later this month, Hewlett Packard Enterprise (HPE) will ship what may well be the first server designed specifically for AI inference in machine learning. Machine learning has two stages: training and inference. Training uses powerful GPUs from Nvidia or AMD, or other high-performance chips, to teach the AI system what to look for, such as recognizing images. Inference then determines whether a new input matches the trained model. A GPU is overkill for that task; a much less powerful processor often suffices.

While the EL8000 is built around an Intel Xeon Scalable processor, it also hosts Qualcomm's Cloud AI 100 chips, which are tailored to artificial intelligence workloads at the edge. The Cloud AI 100 features up to 16 AI cores and supports the FP16, INT8, INT16, and FP32 data formats, all used for inference. These are not repurposed Arm processors but entirely new SoCs, designed specifically for inference.
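
To make the two stages concrete, here is a minimal Python sketch using PyTorch (the tiny model and random data are illustrative placeholders, not tied to HPE or Qualcomm hardware): a network is trained in full-precision FP32, then quantized to INT8, one of the integer formats the Cloud AI 100 supports, for lightweight inference.

```python
# Minimal sketch of the two machine-learning stages: FP32 training,
# then INT8 inference via post-training dynamic quantization.
# The tiny model and random tensors are illustrative placeholders.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10))

# --- Training stage: full-precision FP32, typically on a powerful GPU ---
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()
for _ in range(100):
    x = torch.randn(32, 64)             # stand-in for real training data
    y = torch.randint(0, 10, (32,))     # stand-in labels
    optimizer.zero_grad()
    loss_fn(model(x), y).backward()
    optimizer.step()

# --- Inference stage: weights quantized to INT8 for a low-power device ---
model.eval()
int8_model = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)
with torch.no_grad():                   # no gradients needed at inference
    prediction = int8_model(torch.randn(1, 64)).argmax(dim=1)
```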

Inference workloads are often large in scale and typically require low latency and high throughput to deliver real-time results. The 5U (8.75-inch) chassis holds up to four independent blades (1U ProLiant e910 and e920), linked by switches integrated into the dual-redundant chassis. Its smaller sibling, the HPE Edgeline EL8000t, is a 2U system that supports two independent blades (2U ProLiant e910 and e920).

Two form factors for Qualcomm's Cloud AI 100 chip

In addition to its performance, the Cloud AI 100 chip consumes little power. It comes in two form factors: a PCI Express card or dual M.2 chips mounted on the motherboard. The PCIe card has a 75-watt thermal envelope, while the M.2 units draw 15 or 25 watts; by comparison, a typical CPU consumes more than 200 watts and a GPU more than 400 watts. Qualcomm says the Cloud AI 100 supports the major industry-standard model formats, including ONNX, TensorFlow, PyTorch, and Caffe: pre-built models can be imported, then compiled and optimized for deployment. Qualcomm provides tools for porting and preparing models, including support for custom operations.
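
As an illustration of that import-and-deploy workflow, the sketch below exports a trained PyTorch model to ONNX, one of the interchange formats named above. The accelerator-specific compile-and-optimize step is only noted in a comment, since the article does not describe Qualcomm's exact tooling; the model and file names are placeholders.

```python
# Sketch: export a trained PyTorch model to ONNX so a vendor toolchain
# (hypothetically, Qualcomm's Cloud AI 100 SDK) can compile and optimize
# it for deployment. Model and file names are illustrative placeholders.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10))
model.eval()

dummy_input = torch.randn(1, 64)        # fixes the input shape for export
torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",                       # portable, industry-standard format
    input_names=["input"],
    output_names=["logits"],
)
# From here, the accelerator's SDK would take model.onnx and compile an
# optimized binary for the target; the exact commands are vendor-specific.
```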

According to Qualcomm, the Cloud AI 100 chip targets manufacturing and industrial settings, as well as other sectors with AI needs at the edge, notably computer vision and natural language processing (NLP). In computer vision, that covers quality control and assurance in manufacturing, object detection and video surveillance, and loss prevention and detection. In NLP, it covers programming code generation, intelligent-assistant operations, and language translation. Edgeline servers will be available for purchase or rental through HPE GreenLake later this month.
