Red Hat AI learning hub
Explore learning materials and tools designed to help you use Red Hat® OpenShift® AI and Red Hat® Enterprise Linux® AI, organized by the tasks you need to accomplish.
This content is curated by Red Hat experts, but it may not be tested on every supported configuration.
Featured resources
Red Hat AI Foundations
No-cost training covering AI essentials and effective use of Red Hat AI.
Try Developer Sandbox for Red Hat OpenShift AI
Start a no-cost trial, which includes instant access to your own minimal developer cluster, hosted and managed by Red Hat.
Try Red Hat Enterprise Linux AI
Get a trial version of RHEL AI, our foundation model platform to develop, train, test, and run Granite family LLMs for enterprise applications.
Get to know Red Hat AI
Red Hat OpenShift AI Technical Overview (AI067)
An introduction to operationalizing AI/ML with Red Hat OpenShift AI.
Red Hat Enterprise Linux AI Technical Overview (AI096)
Explore how RHEL AI supports common challenges in adopting AI, such as cost, data transparency, and deployment complexity.
InstructLab on RHEL AI
Explore InstructLab on RHEL AI, including resources for developers.
OpenShift AI tutorial - Fraud detection example
A step-by-step guide for using OpenShift AI to train an example model in JupyterLab, deploy the model, and refine the model by using automated pipelines.
RHEL AI: Try out LLMs the easy way
Learn how RHEL AI provides a security-focused, low-cost environment for experimenting with LLMs.
Foundation model platform for generative AI
Get started quickly with gen AI and deliver results with a trusted, security-focused Red Hat Enterprise Linux platform.
Red Hat OpenShift AI
Learn about the artificial intelligence platform that runs on top of Red Hat OpenShift in this developer-focused product page.
AI solutions from Red Hat
Explore Red Hat’s AI solutions.
Customize models
What is InstructLab and how do you use it? 9 quick videos to help you get started
Watch nine short videos that introduce InstructLab and walk you through getting started.
How to train an LLM using InstructLab
Join Red Hat’s Senior Director of Technical Marketing, Grant Shipley, as he demonstrates how to train an LLM with InstructLab.
RAG vs. Fine Tuning
Join Cedric Clyburn as he explores the differences between retrieval-augmented generation (RAG) and fine-tuning, and when to use each to enhance LLMs.
Serve models
Serving models
Test and integrate trained models into intelligent applications with Red Hat OpenShift AI Self-Managed.
Install Red Hat Device Edge on NVIDIA Jetson Orin and IGX Orin
Install Red Hat Device Edge on an NVIDIA® Jetson Orin™/NVIDIA IGX Orin™ Developer Kit and explore new features brought by rpm-ostree.
Fine-tuning and serving an open source foundation model with Red Hat OpenShift AI
Learn how to fine-tune and deploy a 137M-parameter GPT-2 model from Hugging Face on the WikiText dataset using Red Hat OpenShift AI.
Build and scale applications
Introduction to Python Programming and to Red Hat OpenShift AI (AI252)
An introduction to Python programming and to creating and managing AI/ML workloads with Red Hat OpenShift AI.
Creating Machine Learning Models with Python and Red Hat OpenShift AI (AI253)
An introduction to machine learning concepts with Python and how to use Red Hat OpenShift AI to train ML models.
Developing and deploying AI/ML apps on RHOAI (AI267)
An introduction to developing and deploying AI/ML applications on Red Hat OpenShift AI.
Working in your data science IDE
Prepare your data science integrated development environment (IDE) for developing machine learning models.
How to get started with large language models and Node.js
Learn how to access an LLM using Node.js and LangChain.js, and explore LangChain.js APIs that simplify common requirements.
Getting started with Podman AI Lab
Take a tour of Podman AI Lab and walk through each of its capabilities. Discover how to integrate generative AI in new and existing applications.
From Podman AI Lab to OpenShift AI
Learn how to go from a chatbot recipe in the Podman AI Lab extension to a RAG chatbot deployed on OpenShift and OpenShift AI.
Configure hardware accelerators
Hardware accelerators
Instructions for installing and configuring GPU Operators on OpenShift.
Enabling accelerators
Learn how to enable a variety of hardware accelerators for use in OpenShift AI.
Working with accelerators
Use accelerators to optimize the performance of your end-to-end data science workflows in OpenShift AI.
How to make the most of your GPUs (part 1 - time-slicing)
Explore different strategies supported by the NVIDIA GPU operator to oversubscribe available GPU resources.
How to make the most of your GPUs (part 2 - Multi-instance GPU)
Learn about dividing GPUs into isolated and static instances for concurrent usage by different applications.
How to Enable Hardware Accelerators on OpenShift (part 1)
Read about using hardware accelerators with Red Hat OpenShift.
How to enable Hardware Accelerators on OpenShift, SRO Building Blocks (part 2)
Read part 2 of this blog on using hardware accelerators with OpenShift.
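The time-slicing strategy covered in part 1 above is configured through the NVIDIA GPU Operator's device plugin. As a rough sketch only (the ConfigMap name, namespace, and replica count below are illustrative; check the operator documentation for your version), a config of this shape tells the device plugin to advertise each physical GPU as several schedulable nvidia.com/gpu resources:

```yaml
# Illustrative time-slicing config for the NVIDIA GPU Operator device plugin.
apiVersion: v1
kind: ConfigMap
metadata:
  name: time-slicing-config        # example name
  namespace: nvidia-gpu-operator   # adjust to your operator namespace
data:
  any-profile-name: |-
    version: v1
    sharing:
      timeSlicing:
        resources:
          - name: nvidia.com/gpu
            replicas: 4            # each GPU appears as 4 allocatable GPUs
```

The operator's ClusterPolicy then references this ConfigMap via its device plugin configuration, after which pods can request the oversubscribed nvidia.com/gpu resources as usual.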
Automate workflows
Automating end-to-end machine learning and model training tasks
Learn how to solve key data science challenges with OpenShift AI pipelines.
Working with data science pipelines
Enhance your data science projects on OpenShift AI by building portable ML workflows with data science pipelines.
Integrate a private AI coding assistant into your CDE using Ollama, Continue, and OpenShift Dev Spaces
Streamline your cloud development workflow by deploying and integrating a private AI assistant that addresses privacy, security, and intellectual property concerns.
Manage and configure storage
Working with data in an S3-compatible object store
Learn how to work with data stored in an S3-compatible object store from your workbench.
Managing OpenShift AI
Manage OpenShift AI users and groups, dashboard interface and applications, deployment resources, accelerators, distributed workloads, and data backup.
Configuring cluster storage
Add cluster storage to a project and connect it to that project’s workbench.
Managing resources
Manage custom workbench images, cluster PVC size, user groups, and Jupyter notebook servers.
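The S3-compatible object store workflow described above usually starts by reading connection settings from the workbench environment. The sketch below assumes the environment variable names typically injected by an OpenShift AI data connection (AWS_S3_ENDPOINT, AWS_ACCESS_KEY_ID, and so on); verify them against your own setup, and treat the default endpoint as a placeholder.

```python
import os

def s3_config_from_env(env=None):
    """Collect S3 connection settings from the workbench environment.

    The variable names are those a data connection commonly injects;
    adjust them if your workbench uses different names.
    """
    env = os.environ if env is None else env
    return {
        "endpoint_url": env.get("AWS_S3_ENDPOINT", "http://minio:9000"),
        "aws_access_key_id": env.get("AWS_ACCESS_KEY_ID", ""),
        "aws_secret_access_key": env.get("AWS_SECRET_ACCESS_KEY", ""),
        "region_name": env.get("AWS_DEFAULT_REGION", "us-east-1"),
    }

def list_bucket(bucket):
    """Return the object keys in a bucket of an S3-compatible store."""
    import boto3  # commonly available in OpenShift AI workbench images
    s3 = boto3.client("s3", **s3_config_from_env())
    response = s3.list_objects_v2(Bucket=bucket)
    return [obj["Key"] for obj in response.get("Contents", [])]
```

From a notebook cell, `list_bucket("my-bucket")` would then enumerate the objects your data connection grants access to.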
Troubleshoot Red Hat AI
Troubleshoot with Red Hat support
Open a support case with us or connect with an expert.
Red Hat Enterprise Linux AI
Browse the release notes and full product documentation for RHEL AI.
Red Hat OpenShift AI
Browse the release notes and full product documentation for Red Hat OpenShift AI Self-Managed and Cloud Service.
docs.instructlab.ai
Browse InstructLab resources and documentation.
Common RHEL AI issues
Learn how to collect system information and resolve common errors in RHEL AI. (Sign-in required)
Troubleshooting common installation problems
Resolve problems when installing Red Hat OpenShift AI in a disconnected environment.