
    ServiceNow, Hugging Face, and NVIDIA Release New Open-Access LLMs to Help Developers Tap Generative AI to Build Enterprise Applications

    “Since every software ecosystem has a proprietary programming language, code LLMs can drive breakthroughs in efficiency and innovation in every industry,” said Jonathan Cohen, vice president of applied research at NVIDIA. “NVIDIA’s collaboration with ServiceNow and Hugging Face introduces secure, responsibly developed models, and supports broader access to accountable generative AI that we hope will benefit the global community.”

    Fine-Tuning Advances Capabilities with Business-Specific Data


    StarCoder2 models share a state-of-the-art architecture and carefully curated data sources from BigCode that prioritize transparency and open governance to enable responsible innovation at scale.

    The foundation of StarCoder2 is a new code dataset called The Stack v2, which is more than 7x larger than The Stack v1. In addition to the expanded dataset, new training techniques help the model understand low-resource programming languages (such as COBOL), mathematics, and program source code discussions.

    StarCoder2 advances the potential of future AI-driven coding applications, including text-to-code and text-to-workflow capabilities. With broader, deeper programming training, it provides repository context, enabling accurate, context-aware predictions. These advancements serve seasoned software engineers and citizen developers alike, accelerating business value and digital transformation.
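
    As an illustration of the text-to-code scenario, the following minimal sketch prompts a StarCoder2 checkpoint for a code completion using the Hugging Face transformers library; the model id (bigcode/starcoder2-3b), the prompt, and the generation settings are assumptions chosen for demonstration rather than details from this announcement.

        # Minimal text-to-code sketch: complete a Python function with a StarCoder2
        # checkpoint via Hugging Face transformers (model id and prompt are assumed).
        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer

        model_id = "bigcode/starcoder2-3b"  # assumed Hub id for the 3B model
        tokenizer = AutoTokenizer.from_pretrained(model_id)
        model = AutoModelForCausalLM.from_pretrained(
            model_id,
            torch_dtype=torch.bfloat16,  # reduce memory use on supported GPUs
            device_map="auto",
        )

        prompt = "def fibonacci(n: int) -> int:\n"
        inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
        outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
        print(tokenizer.decode(outputs[0], skip_special_tokens=True))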

    Users can fine-tune the open-access models with industry or organization-specific data using open-source tools such as NVIDIA NeMo or Hugging Face TRL.
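
    A minimal fine-tuning sketch using Hugging Face TRL might look like the following; the model id, dataset path, and hyperparameters are illustrative assumptions, and argument names can vary slightly across TRL versions.

        # Sketch: supervised fine-tuning of a StarCoder2 checkpoint on in-house code
        # with TRL's SFTTrainer. Dataset path, model id, and settings are assumptions.
        from datasets import load_dataset
        from trl import SFTConfig, SFTTrainer

        # Assumed JSON Lines file with a "text" column holding code samples.
        dataset = load_dataset("json", data_files="internal_code.jsonl", split="train")

        config = SFTConfig(
            output_dir="starcoder2-custom",
            max_seq_length=2048,
            per_device_train_batch_size=1,
            gradient_accumulation_steps=8,
            num_train_epochs=1,
        )

        trainer = SFTTrainer(
            model="bigcode/starcoder2-3b",  # assumed Hub id; TRL loads it by name
            args=config,
            train_dataset=dataset,
        )
        trainer.train()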

    Organizations have already fine-tuned the foundational StarCoder model to create specialized task-specific capabilities for their businesses.

    ServiceNow’s text-to-code Now LLM was purpose-built on a specialized version of the 15 billion-parameter StarCoder LLM, fine-tuned and trained for ServiceNow workflow patterns, use cases, and processes. Hugging Face also used the model to create its StarChat assistant.

    BigCode Fosters Open Scientific Collaboration in AI

    BigCode represents an open scientific collaboration jointly led by Hugging Face and ServiceNow. Its mission centers on the responsible development of LLMs for code.

    The BigCode community actively participated in the technical aspects of the StarCoder2 project through working groups and task forces, leveraging ServiceNow’s Fast LLM framework to train the 3 billion-parameter model, Hugging Face’s nanotron framework for the 7 billion-parameter model, and the end-to-end NVIDIA NeMo cloud-native framework and NVIDIA TensorRT-LLM software to train and optimize the 15 billion-parameter model.


