
NVIDIA Sets AI Inference Records, Introduces A30 and A10 GPUs for Enterprise Servers

    NVIDIA AI Platform Smashes Every MLPerf Category, from Data Center to Edge

    SANTA CLARA, Calif., April 21, 2021 (GLOBE NEWSWIRE) -- NVIDIA today announced that its AI inference platform, newly expanded with NVIDIA A30 and A10 GPUs for mainstream servers, has achieved record-setting performance across every category on the latest release of MLPerf.

    MLPerf is the industry’s established benchmark for measuring AI performance across a range of workloads spanning computer vision, medical imaging, recommender systems, speech recognition and natural language processing.

    Debuting on MLPerf, NVIDIA A30 and A10 GPUs combine high performance with low power consumption to provide enterprises with mainstream options for a broad range of AI inference, training, graphics and traditional enterprise compute workloads. Cisco, Dell Technologies, Hewlett Packard Enterprise, Inspur and Lenovo are expected to integrate the GPUs into their highest volume servers starting this summer.

NVIDIA achieved these results by taking advantage of the full breadth of the NVIDIA AI platform ― encompassing a wide range of GPUs and AI software, including TensorRT and NVIDIA Triton Inference Server ― which is deployed by leading enterprises such as Microsoft, Pinterest, Postmates, T-Mobile, USPS and WeChat.

    “As AI continues to transform every industry, MLPerf is becoming an even more important tool for companies to make informed decisions on their IT infrastructure investments,” said Ian Buck, general manager and vice president of Accelerated Computing at NVIDIA. “Now, with every major OEM submitting MLPerf results, NVIDIA and our partners are focusing not only on delivering world-leading performance for AI, but on democratizing AI with a coming wave of enterprise servers powered by our new A30 and A10 GPUs.”

    MLPerf Results
    NVIDIA is the only company to submit results for every test in the data center and edge categories, delivering top performance results across all MLPerf workloads.

Several submissions also use Triton Inference Server, which reduces the complexity of deploying AI in applications by supporting models from all major frameworks, running on GPUs as well as CPUs, and optimizing for different query types, including batch, real-time and streaming. With comparable configurations, Triton submissions achieved performance close to that of the most optimized GPU and CPU implementations.
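As a rough illustration of the workflow described above, the following is a minimal sketch of a client-side inference request against a running Triton Inference Server, using the tritonclient Python package. The server address, model name ("resnet50") and tensor names ("input", "output") are assumptions chosen for the example, not details from the announcement; an actual model's configuration would define its own names and shapes.

    # Minimal sketch of a Triton HTTP inference request.
    # Assumes a Triton server is reachable at localhost:8000 and serves a
    # hypothetical model "resnet50" with input tensor "input" and output "output".
    import numpy as np
    import tritonclient.http as httpclient

    client = httpclient.InferenceServerClient(url="localhost:8000")

    # Build a single-image FP32 batch; the shape is an assumption for this example model.
    data = np.random.rand(1, 3, 224, 224).astype(np.float32)
    inp = httpclient.InferInput("input", list(data.shape), "FP32")
    inp.set_data_from_numpy(data)

    out = httpclient.InferRequestedOutput("output")

    # The same client call works whether the model runs on a GPU or CPU backend;
    # batching and scheduling are handled server-side by Triton.
    result = client.infer(model_name="resnet50", inputs=[inp], outputs=[out])
    print(result.as_numpy("output").shape)

The point of the sketch is that the client code stays the same regardless of the framework the model was trained in or the hardware it is served on, which is the deployment simplification the paragraph above refers to.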
