NVIDIA has launched its AI foundry service on Microsoft Azure, combining NVIDIA AI Foundation Models, the NeMo framework and tools, and DGX Cloud AI supercomputing services into an end-to-end solution that lets businesses and startups create custom generative AI models.
- NVIDIA AI Foundry service combines NVIDIA AI Foundation Models, NVIDIA NeMo framework and tools, and NVIDIA DGX Cloud AI supercomputing services to provide an end-to-end solution for creating custom generative AI models.
- The service has been built on Microsoft Azure to allow enterprises worldwide to connect their custom models with Microsoft's cloud services.
- NVIDIA AI Enterprise software has been integrated into Azure Machine Learning, bringing NVIDIA's platform of secure, stable, and supported AI and data science software to Azure's enterprise-grade AI service.
NVIDIA has launched its AI foundry service, aiming to enhance the development and fine-tuning of custom generative AI applications for businesses and startups using Microsoft Azure. The service combines NVIDIA AI Foundation Models, the NVIDIA NeMo framework and tools, and NVIDIA DGX Cloud AI supercomputing services to provide an end-to-end solution for creating custom generative AI models. These models can be deployed with NVIDIA AI Enterprise software to power various generative AI applications such as intelligent search, summarisation, and content generation. Industry leaders including SAP SE, Amdocs, and Getty Images are already utilising the service to build custom models.
Jensen Huang, founder and CEO of NVIDIA, said that enterprises require custom models that are tailored to their own data in order to perform specialised tasks. He added that the AI foundry service combines NVIDIA’s generative AI model technologies, LLM training expertise, and large-scale AI factory. The service has been built on Microsoft Azure to allow enterprises worldwide to connect their custom models with Microsoft’s cloud services.
Satya Nadella, chairman and CEO of Microsoft, highlighted the partnership between Microsoft and NVIDIA, stating that they are collaborating across all layers of the Copilot stack to innovate for the new age of AI. He emphasised that by offering NVIDIA’s generative AI foundry service on Microsoft Azure, they are providing enterprises and startups with new capabilities to build and deploy AI applications on the cloud.
NVIDIA’s AI foundry service can be used across various industries, including enterprise software, telecommunications, and media, to customise models for generative AI-powered applications. Once ready for deployment, enterprises can use a technique called retrieval-augmented generation (RAG) to connect their models with their enterprise data and gain new insights.
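To make the RAG pattern concrete, here is a minimal, self-contained sketch of the general technique in Python. The toy embedding function, the in-memory document list, and the prompt format are illustrative placeholders rather than NVIDIA's tooling; a production deployment would use a real embedding model, a vector database, and the custom LLM served from DGX Cloud or Azure.

```python
# Minimal sketch of retrieval-augmented generation (RAG): retrieve the most relevant
# enterprise documents for a query, then hand them to an LLM as grounding context.
# The embedding and generation steps are stand-ins for real components.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Toy embedding: bag-of-characters vector. Replace with a real embedding model."""
    vec = np.zeros(256)
    for ch in text.lower():
        vec[ord(ch) % 256] += 1.0
    return vec / (np.linalg.norm(vec) + 1e-9)

# A stand-in for the enterprise document store and its vector index.
documents = [
    "Q3 revenue grew 12% driven by cloud subscriptions.",
    "The support backlog was reduced by automating ticket triage.",
    "New data-residency rules apply to EU customer records from January.",
]
doc_vectors = np.stack([embed(d) for d in documents])

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query (cosine similarity)."""
    scores = doc_vectors @ embed(query)
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

def build_prompt(query: str) -> str:
    """Augment the user question with retrieved context; in practice this prompt
    is sent to the deployed custom LLM."""
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How did cloud revenue perform last quarter?"))
```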
SAP plans to leverage the service and optimised RAG workflow with NVIDIA DGX Cloud and NVIDIA AI Enterprise software running on Azure to customise and deploy Joule, its new natural language generative AI copilot. Christian Klein, CEO of SAP SE, said that Joule draws on SAP’s unique position at the intersection of business and technology and builds on their approach to Business AI. He added that in partnership with NVIDIA, Joule can help customers unlock the potential of generative AI by automating tasks and delivering more intelligent, personalised experiences.
Amdocs, a leading provider of software and services to communications and media companies, is optimising models for the Amdocs amAIz framework to accelerate the adoption of generative AI applications and services for telecommunications companies worldwide. Shuky Sheffer, president and CEO at Amdocs, highlighted the immense potential of generative AI technology for service providers to transform customer engagement. He stated that leveraging NVIDIA’s and Microsoft’s technology will bring new GenAI-powered applications to customers faster while ensuring enterprise-grade security, reliability, and performance.
Customers using the NVIDIA AI foundry service can choose from several NVIDIA AI Foundation Models, including the newly introduced family of NVIDIA Nemotron-3 8B models hosted in the Azure AI model catalog; the models can also be accessed through the NVIDIA NGC catalog. The Nemotron-3 8B family includes versions tuned for different use cases and offers multilingual capabilities for building custom enterprise generative AI applications.
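As a rough illustration of how an application might consume one of these catalog models once it is deployed as a hosted endpoint, the sketch below sends a prompt over REST. The endpoint URL, API-key environment variables, and the request and response field names are assumptions made for illustration; the actual contract is defined by the specific deployment in the Azure AI model catalog.

```python
# Hypothetical call to a catalog model deployed as a hosted inference endpoint.
# The URL, key, and JSON schema below are illustrative placeholders.
import os
import requests

ENDPOINT_URL = os.environ["MODEL_ENDPOINT_URL"]  # assumed env var with the scoring URL
API_KEY = os.environ["MODEL_ENDPOINT_KEY"]       # assumed env var with the access key

def generate(prompt: str, max_tokens: int = 128) -> str:
    """Send a prompt to the deployed model and return the generated text."""
    response = requests.post(
        ENDPOINT_URL,
        headers={"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"},
        json={"prompt": prompt, "max_tokens": max_tokens},  # field names are assumptions
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["text"]  # response field name is an assumption

print(generate("Summarise our returns policy for a customer email."))
```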
Furthermore, NVIDIA DGX Cloud AI supercomputing is now available on Azure Marketplace. It provides instances that customers can rent, scaling to thousands of NVIDIA Tensor Core GPUs, and comes with NVIDIA AI Enterprise software, including NeMo, to accelerate LLM customisation. The addition of DGX Cloud to the Azure Marketplace allows Azure customers to use their existing Microsoft Azure Consumption Commitment credits to speed up model development with NVIDIA AI supercomputing and software.
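The sketch below illustrates the general idea behind this kind of LLM customisation: adapting a pretrained model to domain text with parameter-efficient fine-tuning (LoRA). It uses the open-source Hugging Face transformers, datasets, and peft libraries with a tiny stand-in model purely so the example runs anywhere; it is not the NeMo framework's own API, and the toy "enterprise" dataset is invented.

```python
# Generic parameter-efficient fine-tuning sketch (LoRA) on a small stand-in model.
# A real customisation run would target a much larger model on GPU infrastructure.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from peft import LoraConfig, TaskType, get_peft_model

base = "distilgpt2"  # tiny stand-in; an enterprise run would use an 8B-class model
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# Wrap the base model with low-rank adapters so only a small set of weights is trained.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16, task_type=TaskType.CAUSAL_LM))
model.print_trainable_parameters()

# Invented domain text standing in for proprietary business data.
texts = [
    "Ticket: VPN drops hourly. Resolution: update the client to version 5.2.",
    "Ticket: invoice totals are wrong. Resolution: reapply the tax-table patch.",
]
dataset = Dataset.from_dict({"text": texts}).map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=64),
    batched=True, remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lora-out", num_train_epochs=1,
                           per_device_train_batch_size=1, remove_unused_columns=False),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```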
NVIDIA AI Enterprise software has been integrated into Azure Machine Learning, bringing NVIDIA’s platform of secure, stable, and supported AI and data science software to Azure’s enterprise-grade AI service. Additionally, NVIDIA AI Enterprise is available on Azure Marketplace, offering businesses worldwide a range of options for production-ready AI development and deployment of custom generative AI applications.
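For context, this is roughly what submitting a GPU training job to Azure Machine Learning looks like with the v2 Python SDK. The subscription, workspace, compute cluster, and environment names are placeholders, and the reference to an NVIDIA AI Enterprise-based environment is an assumption; the actual names come from a given Azure setup.

```python
# Hedged sketch: submit a GPU training job to Azure Machine Learning (SDK v2).
from azure.ai.ml import MLClient, command
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",      # placeholder
    resource_group_name="<resource-group>",   # placeholder
    workspace_name="<workspace>",             # placeholder
)

job = command(
    code="./src",                                    # local folder containing train.py
    command="python train.py --epochs 1",
    environment="<nvidia-ai-enterprise-env>@latest", # assumed environment name
    compute="<gpu-cluster>",                         # e.g. an A100-backed cluster
    display_name="custom-genai-finetune",
)

returned_job = ml_client.jobs.create_or_update(job)
print(returned_job.studio_url)  # link to monitor the run in Azure ML studio
```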
Background Information
About Microsoft:
Microsoft, founded by Bill Gates and Paul Allen in 1975 and headquartered in Redmond, Washington, USA, is a technology giant known for its wide range of software products, including the Windows operating system, Office productivity suite, and cloud services like Azure. Microsoft also manufactures hardware, such as the Surface line of laptops and tablets, Xbox gaming consoles, and accessories.
About NVIDIA:
NVIDIA has firmly established itself as a leader in the realm of client computing, continuously pushing the boundaries of innovation in graphics and AI technologies. With a deep commitment to enhancing user experiences, NVIDIA's client computing business focuses on delivering solutions that power everything from gaming and creative workloads to enterprise applications. Known for its GeForce graphics cards, the company has redefined high-performance gaming, setting industry standards for realistic visuals, fluid frame rates, and immersive experiences. Complementing its gaming expertise, NVIDIA's Quadro and NVIDIA RTX graphics cards cater to professionals in design, content creation, and scientific fields, enabling real-time ray tracing and AI-driven workflows that elevate productivity and creativity to unprecedented heights. By seamlessly integrating graphics, AI, and software, NVIDIA continues to shape the landscape of client computing, fostering innovation and immersive interactions in a rapidly evolving digital world.
Technology Explained
Foundry: A foundry is a dedicated manufacturing facility focused on producing semiconductor components like integrated circuits (ICs) for external clients. These foundries are pivotal in the semiconductor industry, providing diverse manufacturing processes and technologies to create chips based on designs from fabless semiconductor firms or other customers. This setup empowers companies to concentrate on innovative design without needing substantial investments in manufacturing infrastructure. Some well-known foundries include TSMC (Taiwan Semiconductor Manufacturing Company), Samsung Foundry, GlobalFoundries, and UMC (United Microelectronics Corporation).
LLM: A Large Language Model (LLM) is a highly advanced artificial intelligence system, often based on complex architectures like GPT-3.5, designed to comprehend and produce human-like text on a massive scale. LLMs possess exceptional capabilities in various natural language understanding and generation tasks, including answering questions, generating creative content, and delivering context-aware responses to textual inputs. These models undergo extensive training on vast datasets to grasp the nuances of language, making them invaluable tools for applications like chatbots, content generation, and language translation.
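As a minimal, runnable illustration of the idea, the snippet below asks a small, openly available model to continue a prompt via the Hugging Face transformers library; production LLMs are far larger, but the prompt-in, text-out interface is the same in spirit. The specific model choice here is arbitrary.

```python
# Tiny text-generation example: prompt a small open model and print its continuation.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")
result = generator("A large language model can help a support team by",
                   max_new_tokens=40, do_sample=True)
print(result[0]["generated_text"])
```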