
NVIDIA’s New Ampere Data Center GPU in Full Production

    New NVIDIA A100 GPU Boosts AI Training and Inference up to 20x;
    NVIDIA’s First Elastic, Multi-Instance GPU Unifies Data Analytics, Training and Inference;
    Adopted by World’s Top Cloud Providers and Server Makers

    SANTA CLARA, Calif., May 14, 2020 (GLOBE NEWSWIRE) -- NVIDIA today announced that the first GPU based on the NVIDIA Ampere architecture, the NVIDIA A100, is in full production and shipping to customers worldwide.


    The A100 draws on design breakthroughs in the NVIDIA Ampere architecture — offering the company’s largest leap in performance to date within its eight generations of GPUs — to unify AI training and inference and boost performance by up to 20x over its predecessors. A universal workload accelerator, the A100 is also built for data analytics, scientific computing and cloud graphics.

    “The powerful trends of cloud computing and AI are driving a tectonic shift in data center designs so that what was once a sea of CPU-only servers is now GPU-accelerated computing,” said Jensen Huang, founder and CEO of NVIDIA. “NVIDIA A100 GPU is a 20x AI performance leap and an end-to-end machine learning accelerator — from data analytics to training to inference. For the first time, scale-up and scale-out workloads can be accelerated on one platform. NVIDIA A100 will simultaneously boost throughput and drive down the cost of data centers.”

    New elastic computing technologies built into A100 make it possible to bring right-sized computing power to every job. A multi-instance GPU capability allows each A100 GPU to be partitioned into as many as seven independent instances for inferencing tasks, while third-generation NVIDIA NVLink interconnect technology allows multiple A100 GPUs to operate as one giant GPU for ever larger training tasks.

    The world’s leading cloud service providers and systems builders that expect to incorporate A100 GPUs into their offerings include: Alibaba Cloud, Amazon Web Services (AWS), Atos, Baidu Cloud, Cisco, Dell Technologies, Fujitsu, GIGABYTE, Google Cloud, H3C, Hewlett Packard Enterprise (HPE), Inspur, Lenovo, Microsoft Azure, Oracle, Quanta/QCT, Supermicro and Tencent Cloud.

    Immediate Adoption Worldwide
    Among the first to tap into the power of NVIDIA A100 GPUs is Microsoft, which will take advantage of their performance and scalability.

    “Microsoft trained Turing Natural Language Generation, the largest language model in the world, at scale using the current generation of NVIDIA GPUs,” said Mikhail Parakhin, corporate vice president, Microsoft Corp. “Azure will enable training of dramatically bigger AI models using NVIDIA’s new generation of A100 GPUs to push the state of the art on language, speech, vision and multi-modality.”


