
NVIDIA TensorRT 3 Dramatically Accelerates AI Inference for Hyperscale Data Centers

    BEIJING, CHINA--(Marketwired - Sep 25, 2017) - GTC China - NVIDIA (NASDAQ: NVDA) today unveiled new NVIDIA® TensorRT 3 AI inference software that sharply boosts the performance and slashes the cost of inferencing from the cloud to edge devices, including self-driving cars and robots.

    The combination of TensorRT 3 with NVIDIA GPUs delivers ultra-fast and efficient inferencing across all frameworks for AI-enabled services -- such as image and speech recognition, natural language processing, visual search and personalized recommendations. TensorRT and NVIDIA Tesla® GPU accelerators are up to 40 times faster than CPUs(1) at one-tenth the cost of CPU-based solutions.(2)


"Internet companies are racing to infuse AI into services used by billions of people. As a result, AI inference workloads are growing exponentially," said NVIDIA founder and CEO Jensen Huang. "NVIDIA TensorRT is the world's first programmable inference accelerator. With CUDA programmability, TensorRT will be able to accelerate the growing diversity and complexity of deep neural networks. And with TensorRT's dramatic speed-up, service providers can affordably deploy these compute-intensive AI workloads."

    More than 1,200 companies have already begun using NVIDIA's inference platform across a wide spectrum of industries to discover new insights from data and deploy intelligent services to businesses and consumers. Among them are Amazon, Microsoft, Facebook and Google; as well as leading Chinese enterprise companies like Alibaba, Baidu, JD.com, iFLYTEK, Hikvision, Tencent and WeChat.

    "NVIDIA's AI platform, using TensorRT software on Tesla GPUs, is an outstanding technology at the forefront of enabling SAP's growing requirements for inferencing," said Juergen Mueller, chief information officer at SAP. "TensorRT and NVIDIA GPUs make real-time service delivery possible, with maximum machine learning performance and versatility to meet our customers' needs."

    "JD.com relies on NVIDIA GPUs and software for inferencing in our data centers," said Andy Chen, senior director of AI and Big Data at JD. "Using NVIDIA's TensorRT on Tesla GPUs, we can simultaneously inference 1,000 HD video streams in real time, with 20 times fewer servers. NVIDIA's deep learning platform provides outstanding performance and efficiency for JD."

TensorRT 3 is a high-performance optimizing compiler and runtime engine for production deployment of AI applications. It can rapidly optimize, validate and deploy trained neural networks for inference to hyperscale data centers and to embedded or automotive GPU platforms.
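The optimize-validate-deploy workflow described above can be sketched with the TensorRT Python API. This is an illustrative sketch, not NVIDIA's reference code: it follows the later ONNX-based API (TensorRT 8.x) rather than the framework importers that shipped with TensorRT 3, it requires an NVIDIA GPU and the `tensorrt` package, and the file names `model.onnx` and `model.engine` are placeholders.

```python
# Sketch: compile a trained network into an optimized TensorRT inference engine.
# Assumes the TensorRT Python package (8.x-era API) and an ONNX model file;
# "model.onnx" / "model.engine" are placeholder names.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)

# Import the trained network (here via the ONNX parser).
parser = trt.OnnxParser(network, logger)
with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("ONNX parse failed")

# Configure the optimizer: reduced precision trades accuracy headroom
# for the kind of throughput gains the press release describes.
config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)

# Compile (optimize + validate) the network into a deployable engine,
# then serialize it for the target data-center or embedded runtime.
serialized_engine = builder.build_serialized_network(network, config)
with open("model.engine", "wb") as f:
    f.write(serialized_engine)
```

The serialized engine is hardware-specific: it is tuned for the GPU it was built on, which is why the same compile step is repeated per deployment target (data center, embedded, automotive).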

Written by Marketwired