Red Hat Summit: Nvidia Partnership, OpenShift on Oracle Cloud
We bring you all of the week's top announcements from Red Hat Summit, and spoiler – it’s all about AI.
One of the biggest announcements at Red Hat Summit so far is a new partnership with Run:ai. Red Hat is teaming with the AI optimization and orchestration specialist to bring Run:ai’s resource allocation capabilities to Red Hat OpenShift AI.
“By streamlining AI operations and optimizing the underlying infrastructure, this collaboration enables enterprises to get the most out of AI resources,” said Red Hat.
The company noted that while needed for AI, GPUs can be costly, especially when being used across distributed training jobs and inferencing. Red Hat and Run:ai are working on GPU resource optimization with Run:ai’s certified OpenShift Operator on Red Hat OpenShift AI. This helps users scale and optimize wherever their AI workloads are located.
Run:ai’s certified OpenShift Operator is available now.
Red Hat is collaborating with AMD to help customers build, deploy and manage artificial intelligence (AI) workloads.
As part of the collaboration, Red Hat and AMD will enable AMD GPU Operators on Red Hat OpenShift AI to provide the processing power and performance needed for AI workloads across the hybrid cloud.
Red Hat said AMD GPUs on Red Hat OpenShift AI can make it easier for customers to access, deploy and benefit from a validated GPU Operator. This in turn allows them to streamline AI workflows and bridge existing and potential gaps in GPU supply chains.
The AMD GPU Operator is now available in development preview for users to test and install on Red Hat OpenShift clusters.
Similarly, Red Hat is collaborating with Intel to power enterprise AI usage on Red Hat OpenShift AI.
The two will facilitate the delivery of end-to-end AI solutions on Intel AI products. This includes Intel Gaudi AI accelerators, Intel Xeon processors, Intel Core Ultra and Core processors, and Intel Arc GPUs. Red Hat maintained this makes model development and training, model serving, management and monitoring more seamless across a hybrid cloud infrastructure.
In the data center, Intel Gaudi AI accelerators and Xeon processors with Intel Advanced Matrix Extensions (AMX) cater to a set of AI use cases. These include generative AI (gen AI) training, fine-tuning, retrieval augmented generation (RAG), inferencing, as well as confidential AI to protect data in use with Intel Trust Domain Extensions (TDX).
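Whether a given host actually exposes AMX can be checked from the CPU feature flags the Linux kernel reports (`amx_tile`, `amx_int8` and `amx_bf16` in `/proc/cpuinfo`). A minimal sketch of that check:

```python
# Check /proc/cpuinfo for Intel AMX support; Linux reports the
# amx_tile / amx_int8 / amx_bf16 feature flags on capable Xeon CPUs.
AMX_FLAGS = {"amx_tile", "amx_int8", "amx_bf16"}

def amx_features(cpuinfo_text: str) -> set:
    """Return the AMX-related flags present in a /proc/cpuinfo dump."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            present = set(line.split(":", 1)[1].split())
            return AMX_FLAGS & present
    return set()

def host_supports_amx() -> bool:
    """True if the local Linux host advertises all AMX flags."""
    try:
        with open("/proc/cpuinfo") as f:
            return amx_features(f.read()) == AMX_FLAGS
    except OSError:
        return False  # not Linux, or cpuinfo unavailable
```

Frameworks that target AMX typically perform a check like this at startup before selecting an accelerated code path.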
At the edge, Intel supports local execution of large language models (LLMs) on platforms based on Intel Core Ultra, Xeon processors, and Intel Arc GPUs. Red Hat collaborated with Intel to certify its hardware solutions on Red Hat OpenShift AI to ensure interoperability and enable comprehensive AI capabilities.
In addition, Intel is offering open source software, “ready to go out of the box”, which can be integrated with Red Hat OpenShift AI. Supported software includes Intel Tiber Edge Platform, including Intel OpenVINO; Intel Tiber AI Studio; and oneAPI AI Tools. Qualified customers can request access to Intel Gaudi AI accelerators through Intel Tiber Developer Cloud.
Red Hat also announced upcoming integration support for Nvidia NIM microservices on Red Hat OpenShift AI to enable optimized inferencing for AI models.
As part of this latest collaboration, Nvidia will enable NIM interoperability with KServe, an open source, Kubernetes-based project for scalable AI use cases and a core upstream project for Red Hat OpenShift AI. This, said Red Hat, will help fuel continuous interoperability for Nvidia NIM microservices within future iterations of Red Hat OpenShift AI.
The integration aims to help enterprises increase productivity with gen AI capabilities like expanding customer service with virtual assistants, case summarization for IT tickets, and accelerating business operations with domain-specific copilots.
Elsewhere, the company is looking to improve productivity and efficiency through gen AI by expanding Red Hat Lightspeed across its core platforms.
Red Hat OpenShift Lightspeed and Red Hat Enterprise Linux Lightspeed will offer natural language processing (NLP) capabilities designed to make Red Hat’s enterprise-grade Linux and cloud-native application platforms easier to use for novices and more efficient for experienced professionals.
First introduced in Red Hat Ansible Automation Platform, Lightspeed is designed to help combat industry-wide skills gaps and complexity in enterprise IT as hybrid cloud adoption grows. With Lightspeed, users can contextually apply Red Hat’s knowledge in using open source technologies in mission-critical environments.
Keeping on the theme of AI, Red Hat is introducing generative AI to the open source community project, Konveyor.
Red Hat has been working with Konveyor, which provides the foundational technologies behind the updated versions of its migration toolkit for applications. Now, Red Hat is working to integrate LLMs into Konveyor to improve the economics of re-platforming and refactoring applications to Kubernetes and cloud-native technologies.
It said this latest effort aims to help enterprises achieve their modernization goals. Gen AI models like IBM watsonx Code Assistant, alongside existing knowledge based on additional AI models, can help streamline migrations as a more cost-effective solution when added to the development workflow.
At the same time, Red Hat announced partner support for image mode for Red Hat Enterprise Linux. This is a new deployment method that places containers at the core of the enterprise operating system. With image mode, both hardware and software partners have an easier, streamlined pathway for testing and deploying Red Hat Enterprise Linux-certified applications, said the vendor.
Image mode can help partners cut through increasingly complex and time-sensitive computing environments. With containers as the platform’s building blocks, individual components of certified applications can be updated via singular containers, rather than monolithic updates or traditional patches.
Image mode makes the core operating system manageable like any other container application workflow. This means that the same tools partners already use in containerized CI/CD pipelines, content scanning or management can be used in updating their Red Hat certified applications.
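In practice, image mode means the operating system itself is built from a Containerfile, just like an application image. A rough sketch, assuming the `rhel9/rhel-bootc` base image from Red Hat's registry (the application package and registry names below are placeholders):

```dockerfile
# Build a bootable RHEL image with the same workflow as an app container.
FROM registry.redhat.io/rhel9/rhel-bootc:latest

# Layer in a vendor's certified application and its config;
# "myapp" is a placeholder package name for illustration.
RUN dnf -y install myapp && dnf clean all
COPY myapp.conf /etc/myapp/myapp.conf

# Standard container tooling then builds and publishes the OS image:
#   podman build -t registry.example.com/myos:1.0 .
#   podman push registry.example.com/myos:1.0
```

Because the result is an ordinary OCI image, the same scanning, signing and CI/CD pipelines partners run against application containers apply unchanged.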
Vendors can push specific, targeted updates to customers depending on their applications or appliance. For edge deployments, partners can get image-based updates and rollback to unlock faster time to value while bolstering security and reliability.
Image mode for Red Hat Enterprise Linux is under evaluation by leading ISVs, OEMs and hardware vendors. In addition, all Red Hat Enterprise Linux certified hardware is supported by image mode.
Red Hat is also enabling developers to build, test and run gen AI-powered applications in containers using Podman AI Lab, a dedicated extension for Podman Desktop. Red Hat said this offers developers the convenience, simplicity and cost efficiency of their local developer experience, while keeping data on-premises, private and more secure.
Podman AI Lab features a recipe catalog with sample applications that “give developers a jump start” on some of the more common use cases for LLMs. These include:
Chatbots that simulate human conversation. These use AI to comprehend user inquiries and offer suitable responses. These capabilities are often used to augment applications that provide self-service customer support or virtual personal assistance.
Text summarizers, which provide capabilities across applications and industries to deliver information management. Developers can build applications to assist with things like content creation and curation, research, news aggregation, social media monitoring, and language learning.
Code generators, which empower developers to concentrate on higher-level design and problem solving by automating repetitive tasks like project setup and API integration, or to produce code templates.
Object detection helps identify and locate objects within digital images or video frames. It is a component in various applications, including autonomous vehicles, retail inventory management, precision agriculture, and sports broadcasting.
Audio-to-text transcription involves the process of automatically transcribing spoken language into written text.
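The chatbot recipe above boils down to a client talking to a locally served model. A minimal sketch, assuming the local inference server exposes an OpenAI-compatible `/v1/chat/completions` endpoint on port 8000 (the endpoint and port here are assumptions for illustration, not confirmed Podman AI Lab specifics):

```python
import json
import urllib.request

def build_chat_request(user_message: str,
                       url: str = "http://localhost:8000/v1/chat/completions"):
    """Build an HTTP request for an OpenAI-compatible chat endpoint.

    The URL is a placeholder; point it at wherever the local
    model server is actually listening.
    """
    payload = {
        "messages": [
            {"role": "system", "content": "You are a helpful support assistant."},
            {"role": "user", "content": user_message},
        ],
        "max_tokens": 256,
    }
    data = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        url, data=data, headers={"Content-Type": "application/json"}
    )

def ask(user_message: str) -> str:
    """Send the chat request and return the assistant's reply text."""
    with urllib.request.urlopen(build_chat_request(user_message)) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Because the model runs locally in a container, the conversation data never leaves the developer's machine.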
Red Hat is introducing automated "policy as code," a new capability coming to future versions of Red Hat Ansible Automation Platform. The aim, it said, is to enforce greater operational consistency across hybrid cloud estates.
Policy as code automates IT compliance, making it possible to build policies into processes themselves and enforce requirements across distributed teams. By making policy enforcement a fundamental part of IT actions, organizations can better align with internal or external requirements, from traditional data center operations to multiple public clouds to AI workloads.
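Red Hat has not yet published the mechanics of the feature, but the general policy-as-code pattern is straightforward: express each requirement as a machine-checkable rule and evaluate the rules before an automated action proceeds. A generic sketch of the idea (the rule names and config fields are invented for illustration):

```python
# Generic policy-as-code sketch: each policy is a named, machine-checkable
# rule evaluated against a host/config description before automation runs.
POLICIES = {
    "ssh_root_login_disabled": lambda cfg: cfg.get("permit_root_login") == "no",
    "tls_minimum_1_2": lambda cfg: cfg.get("tls_min_version", 0) >= 1.2,
    "audit_logging_enabled": lambda cfg: cfg.get("auditd") is True,
}

def evaluate(config: dict) -> list:
    """Return the names of policies the config violates (empty = compliant)."""
    return [name for name, rule in POLICIES.items() if not rule(config)]

def enforce(config: dict) -> None:
    """Gate an automation run on compliance, failing fast on any violation."""
    violations = evaluate(config)
    if violations:
        raise RuntimeError(f"policy violations: {', '.join(violations)}")
```

Because the rules live in code, they can be versioned, reviewed and applied identically across every environment the automation touches.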
Said Red Hat: “Applying compliance directives to mission-critical systems as they become AI-centric is vital, given that they are most typically impacted by compliance mandates that dictate system security, performance and auditability.”
Finally, Red Hat announced the general availability of Red Hat OpenShift on Oracle Cloud Infrastructure (OCI) on virtual machines (VMs).
The new offering builds on the collaboration between Red Hat and Oracle that began with the certification of Red Hat Enterprise Linux for OCI bare metal and Oracle VMware Cloud workloads.
Customers can now extend their Red Hat OpenShift ecosystem to include installations on OCI, managed from their Red Hat portal. Customers can choose from multiple installation methods, including the Red Hat OpenShift Assisted Installer, the command line and an agent-based installer, which enables installation in air-gapped environments. Oracle is providing Container Storage Interface (CSI) software that enables OCI storage integration with Red Hat OpenShift, and Cloud Controller Manager (CCM) software that enables API interoperation between OCI and the Red Hat OpenShift platform.
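The agent-based installation path is driven by a declarative `install-config.yaml`. The sketch below is a hedged illustration only: the cluster name, domain and CIDR are placeholders, and the external-platform stanza shown for handing cloud integration to Oracle's CCM/CSI components should be verified against current Red Hat documentation.

```yaml
# Sketch of an install-config.yaml for an agent-based OpenShift install
# on OCI. All values are placeholders for illustration.
apiVersion: v1
metadata:
  name: demo-cluster          # placeholder cluster name
baseDomain: example.com       # placeholder DNS domain
platform:
  external:
    platformName: oci         # delegates cloud integration to Oracle's CCM/CSI
networking:
  machineNetwork:
    - cidr: 10.0.0.0/16
controlPlane:
  name: master
  replicas: 3
compute:
  - name: worker
    replicas: 3
pullSecret: '...'             # obtained from the Red Hat portal
```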
RED HAT SUMMIT 2024 — Coinciding with its annual Summit this week in Denver, Red Hat is revealing a slew of product updates alongside numerous new and expanded industry collaborations.
They include partnerships with chipmakers Intel, AMD and Nvidia, as well as artificial intelligence (AI) optimization and orchestration specialist, Run:ai. Indeed, every announcement that Red Hat has made is geared around partners helping customers harness the power of AI.
Earlier this year, Red Hat debuted a new, invitation-only Partner Practice Accelerator Program. A driver for the new program is the need for organizations to bridge capabilities between the data center and the cloud, as well as the edge, and to embrace AI and automation.
Stay tuned to Channel Futures all week as we're on the ground in Denver for Red Hat Summit 2024.