
NVIDIA Doubles Performance for Deep Learning Training

    LILLE, FRANCE--(Marketwired - Jul 7, 2015) - ICML -- NVIDIA today announced updates to its GPU-accelerated deep learning software that will double deep learning training performance.

    The new software will empower data scientists and researchers to supercharge their deep learning projects and product development work by creating more accurate neural networks through faster model training and more sophisticated model design.


    The NVIDIA® DIGITS™ Deep Learning GPU Training System version 2 (DIGITS 2) and NVIDIA CUDA® Deep Neural Network library version 3 (cuDNN 3) provide significant performance enhancements and new capabilities.

    For data scientists, DIGITS 2 now delivers automatic scaling of neural network training across multiple high-performance GPUs. This can double the speed of deep neural network training for image classification compared to a single GPU.
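Automatic multi-GPU scaling of this kind is typically implemented as synchronous data parallelism: each GPU holds a replica of the network, processes its own slice of every batch, and the per-device gradients are averaged before the weight update. A minimal NumPy sketch of that idea, using a linear model in place of a neural network (the function and parameter names are illustrative, not DIGITS APIs):

```python
import numpy as np

def data_parallel_step(weights, batch_x, batch_y, num_gpus, lr=0.1):
    """One synchronous data-parallel SGD step for a linear model y = x @ w.

    Each simulated 'GPU' computes a gradient on its own slice of the
    batch; the gradients are then averaged, mimicking the all-reduce
    in synchronous multi-GPU training.
    """
    x_shards = np.array_split(batch_x, num_gpus)
    y_shards = np.array_split(batch_y, num_gpus)
    grads = []
    for xs, ys in zip(x_shards, y_shards):
        err = xs @ weights - ys             # per-device forward pass
        grads.append(xs.T @ err / len(xs))  # per-device gradient
    mean_grad = np.mean(grads, axis=0)      # gradient all-reduce (average)
    return weights - lr * mean_grad

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
x = rng.normal(size=(64, 2))
y = x @ true_w
w = np.zeros(2)
for _ in range(200):
    w = data_parallel_step(w, x, y, num_gpus=4)
print(np.round(w, 3))
```

With equally sized shards the averaged gradient equals the full-batch gradient, so the parallel run converges to the same solution as a single-device run; the gain is that each device only touches a quarter of the batch.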

    For deep learning researchers, cuDNN 3 features optimized data storage in GPU memory for the training of larger, more sophisticated neural networks. cuDNN 3 also provides higher performance than cuDNN 2, enabling researchers to train neural networks up to two times faster on a single GPU.
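The "optimized data storage" most plausibly refers to reduced-precision (16-bit floating point) storage of network data, which halves the memory each tensor occupies and so lets larger networks fit on a GPU. A rough NumPy illustration of the memory effect, with layer dimensions chosen only as an example of an AlexNet-era convolutional layer:

```python
import numpy as np

# Weights for a hypothetical convolutional layer: 384 filters,
# each 3x3 over 256 input channels (sizes are illustrative).
weights_fp32 = np.zeros((384, 256, 3, 3), dtype=np.float32)
weights_fp16 = weights_fp32.astype(np.float16)  # half-precision storage

print(weights_fp32.nbytes // 1024, "KiB in FP32")
print(weights_fp16.nbytes // 1024, "KiB in FP16")  # exactly half
```

Halving the storage per tensor roughly doubles the model size (or batch size) that fits in a fixed amount of GPU memory, which is what enables the "larger, more sophisticated" networks the release describes.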

    The new cuDNN 3 library is expected to be integrated into forthcoming versions of the deep learning frameworks Caffe, Minerva, Theano and Torch, which are widely used to train deep neural networks.

    "High-performance GPUs are the foundational technology powering deep learning research and product development at universities and major web-service companies," said Ian Buck, vice president of Accelerated Computing at NVIDIA. "We're working closely with data scientists, framework developers and the deep learning community to apply the most powerful GPU technologies and push the bounds of what's possible."

    DIGITS 2 - Up to 2x Faster Training with Automatic Multi-GPU Scaling
    DIGITS 2 is the first all-in-one graphical system that guides users through the process of designing, training and validating deep neural networks for image classification.

The new automatic multi-GPU scaling capability in DIGITS 2 maximizes the available GPU resources by automatically distributing the deep learning training workload across all of the GPUs in the system. Using DIGITS 2, NVIDIA engineers trained the well-known AlexNet neural network model more than two times faster on four NVIDIA Maxwell™ architecture-based GPUs, compared to a single GPU.1 Initial results from early customers demonstrate even better gains.
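The figures above imply a parallel efficiency of roughly 50 percent or better: a more-than-2x speedup on four GPUs. Speedup and efficiency follow directly from the timings; the hours used below are hypothetical numbers consistent with the claim, not measurements from the release:

```python
def speedup(t_single, t_multi):
    """Factor by which multi-GPU training beats single-GPU training."""
    return t_single / t_multi

def parallel_efficiency(t_single, t_multi, num_gpus):
    """Speedup divided by device count; 1.0 would be perfect scaling."""
    return speedup(t_single, t_multi) / num_gpus

# Hypothetical: a job taking 20 hours on one GPU finishes in 9.5 on four.
print(speedup(20.0, 9.5))                 # more than 2x
print(parallel_efficiency(20.0, 9.5, 4))  # better than 0.5
```

Efficiency below 1.0 is expected: gradient exchange between GPUs adds communication overhead that grows with the number of devices.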




