
Releases: ray-project/kuberay

v1.5.1

21 Nov 18:02
9e8a2c0

Highlights

This release adds support for Ray token authentication for RayCluster, RayJob, and RayService.

You can enable Ray token authentication using the following API:

apiVersion: ray.io/v1
kind: RayCluster
metadata:
  name: ray-cluster-with-auth
spec:
  rayVersion: '2.52.0'
  authOptions:
    mode: token

You must set spec.rayVersion to 2.52.0 or newer. See the full example at ray-cluster.auth.yaml.

Bug fixes

  • Fix a bug in the NewClusterWithIncrementalUpgrade strategy where the Active (old) cluster's Serve configuration cache was incorrectly updated to the new ServeConfigV2 during an upgrade. #4212 @ryanaoleary
  • Surface pod-level container failures to RayCluster status #4196 @spencer-p
  • Fix a bug where RayJob status is not updated if failure happens in Initializing phase #4191 @spencer-p
  • Fix a bug where RayCluster status was not always propagated to RayJob status #4192 @machichima

v1.5.0

04 Nov 14:58
21cf8cc

Highlights

Ray Label Selector API

Ray v2.49 introduced a label selector API. Correspondingly, KubeRay v1.5 now features a top-level API for defining Ray labels and resources. This new top-level API is the preferred method going forward, replacing the previous practice of setting labels and custom resources within rayStartParams.

The new API will be consumed by the Ray autoscaler, improving autoscaling decisions based on task and actor label selectors. Furthermore, labels configured through this API are mirrored directly into the Pods. This mirroring allows users to more seamlessly combine Ray label selectors with standard Kubernetes label selectors when managing and interacting with their Ray clusters.

You can use the new API in the following way:

apiVersion: ray.io/v1
kind: RayCluster
spec:
  ...
  headGroupSpec:
    rayStartParams: {}
    resources:
      Custom1: "1"
    labels:
      ray.io/zone: us-west-2a
      ray.io/region: us-west-2
  workerGroupSpecs:
  - replicas: 1
    rayStartParams: {}
    resources:
      Custom1: "1"
    labels:
      ray.io/zone: us-west-2a
      ray.io/region: us-west-2

RayJob Sidecar submission mode

The RayJob resource now supports a new value for spec.submissionMode called SidecarMode.
Sidecar mode directly addresses a key limitation of both K8sJobMode and HttpMode: the need for network connectivity from an external Pod or from the KubeRay operator to submit the job. With Sidecar mode, job submission is orchestrated by injecting a sidecar container into the Head Pod. This eliminates the need for an external client to handle the submission process and reduces job submission failures caused by network issues.

To use this feature, set spec.submissionMode to SidecarMode in your RayJob:

apiVersion: ray.io/v1
kind: RayJob
metadata:
  name: my-rayjob
spec:
  submissionMode: "SidecarMode"
  ...

Advanced deletion policies for RayJob

KubeRay now supports a more advanced and flexible API for expressing deletion policies within the RayJob specification. This new design moves beyond the single boolean field, spec.shutdownAfterJobFinishes, and allows users to define different cleanup strategies using configurable TTL values based on the Ray job's status.

This API unlocks new use cases that require specific resource retention after a job completes or fails. For example, users can now implement policies that:

  • Preserve only the Head Pod for a set duration after job failure to facilitate debugging.
  • Retain the entire Ray Cluster for a longer TTL after a successful run for post-analysis or data retrieval.

By linking specific TTLs to Ray job statuses (e.g., success, failure) and strategies (e.g. DeleteWorkers, DeleteCluster, DeleteSelf), users gain fine-grained control over resource cleanup and cost management.

Below is an example of how to use this new, flexible API structure:

apiVersion: ray.io/v1
kind: RayJob
metadata:
  name: rayjob-deletion-rules
spec:
  deletionStrategy:
    deletionRules:
    - policy: DeleteWorkers
      condition:
        jobStatus: FAILED
        ttlSeconds: 100
    - policy: DeleteCluster
      condition:
        jobStatus: FAILED
        ttlSeconds: 600
    - policy: DeleteCluster
      condition:
        jobStatus: SUCCEEDED
        ttlSeconds: 0

This feature is disabled by default and requires enabling the RayJobDeletionPolicy feature gate.
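
Feature gates are enabled on the KubeRay operator rather than on individual resources. As a minimal sketch, assuming the operator is deployed with the kuberay-operator Helm chart (verify the values key against your chart version), the gate could be enabled as shown below; the same pattern applies to the other feature gates mentioned in this release.

# values.yaml for the kuberay-operator Helm chart (sketch; confirm against your chart version)
featureGates:
- name: RayJobDeletionPolicy
  enabled: true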

Incremental upgrade support for RayService

KubeRay v1.5 introduces the capability to enable zero-downtime incremental upgrades for RayServices. This new feature improves the upgrade process by leveraging the Gateway API and Ray autoscaling to incrementally migrate user traffic from the existing Ray cluster to the newly upgraded one.

This approach is more efficient and reliable compared to the former mechanism. The previous method required creating the upgraded Ray cluster at its full capacity and then shifting all traffic at once, which could lead to disruptions and unnecessary resource usage. By contrast, the incremental approach gradually scales up the new cluster and migrates traffic in smaller, controlled steps, resulting in improved stability and resource utilization during upgrade.

To enable this feature, set the following fields in RayService:

apiVersion: ray.io/v1
kind: RayService
metadata:
  name: example-rayservice
spec:
  upgradeStrategy:
    type: "NewClusterWithIncrementalUpgrade"
    clusterUpgradeOptions:
      maxSurgePercent: 40 
      stepSizePercent: 5  
      intervalSeconds: 10
      gatewayClassName: "cluster-gateway"

This feature is disabled by default and requires enabling the RayServiceIncrementalUpgrade feature gate.

Improved multi-host support for RayCluster

Previous KubeRay versions supported multi-host worker groups via the numOfHosts API, but this capability lacked fundamental features required for managing multi-host accelerators. First, there was no logical grouping of worker Pods belonging to the same multi-host unit (or slice), so it was not possible to run operations like “replace all workers in this group”. In addition, there was no ordered indexing, which is often required for coordinating multi-host workers when using TPUs.

When using multi-host in KubeRay v1.5, KubeRay will automatically set the following labels for multi-host Ray workers:

labels:
  ray.io/worker-group-replica-name: tpu-group-af03de
  ray.io/worker-group-replica-index: 0
  ray.io/replica-host-index: 1

Below is a description of each label and its purpose:

  • ray.io/worker-group-replica-name: this label provides a unique identifier for each replica (i.e. host group or slice) in a worker group. The label enables KubeRay to rediscover all other Pods in the same group and apply group-level operations.
  • ray.io/worker-group-replica-index: this label is an ordered replica index in the worker group. This label is particularly important for cases like multi-slice TPUs, where each slice must be aware of its slice index.
  • ray.io/replica-host-index: this label is an ordered host index per replica (host group or slice).

These changes collectively enable reliable, production-level scaling and management of multi-host GPU workers or TPU slices.

This feature is disabled by default and requires enabling the RayMultiHostIndexing feature gate.

Breaking Changes

For RayCluster objects created by a RayJob, KubeRay will no longer attempt to recreate the Head Pod if it fails or is deleted after its initial successful provisioning. To retry failed jobs, use spec.backoffLimit, which causes KubeRay to provision a new RayCluster for each retry.
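
For reference, a minimal sketch of requesting retries on the RayJob (values here are illustrative):

apiVersion: ray.io/v1
kind: RayJob
metadata:
  name: my-rayjob
spec:
  backoffLimit: 2   # illustrative: retry up to 2 times; each retry provisions a fresh RayCluster
  ...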

CHANGELOG

  • [release-1.5] update version to v1.5.0 (#4177, @andrewsykim)
  • [CherryPick][Feature Enhancement] Set ordered replica index label to support mult… (#4171, @ryanaoleary)
  • [releasey-1.5] update version to v1.5.0-rc.1 (#4170, @andrewsykim)
  • [release-1.5] fix: dashboard build for kuberay 1.5.0 (#4169, @andrewsykim)
  • [release-1.5] update versions to v1.5.0-rc.0 (#4155, @andrewsykim)
  • [Bug] Sidecar mode shouldn't restart head pod when head pod is delete… (#4156, @rueian)
  • Bump Kubernetes dependencies to v0.34.x (#4147, @mbobrovskyi)
  • [Chore] Remove duplicate test-e2e-rayservice in Makefile (#4145, @seanlaii)
  • [Scheduler] Replace AddMetadataToPod with AddMetadataToChildResource across all schedulers (#4123, @win5923)
  • [Feature] Add initializing timeout for RayService (#4143, @seanlaii)
  • [RayService] Support Incremental Zero-Downtime Upgrades (#3166, @ryanaoleary)
  • Example RayCluster spec with Labels and label_selector API (#4136, @ryanaoleary)
  • [RayCluster] Fix for multi-host indexing worker creation (#4139, @chiayi)
  • Support uppercase default resource names for top-level Resources (#4137, @ryanaoleary)
  • [Bug] [KubeRay Dashboard] Misclassifies RayCluster type (#4135, @CheyuWu)
  • [RayCluster] Add multi-host indexing labels (#3998, @chiayi)
  • [Grafana] Use Range option instead of instant for RayCluster Provisioned Duration panel (#4062, @win5923)
  • [Feature] Separate controller namespace and CRD namespaces for KubeRay-Operator Dashboard (#4088, @400Ping)
  • Update grafana dashboards to ray 2.49.2 + add README instructions on how to update (#4111, @alanwguo)
  • fix: update broken and outdated links (#4129, @ErikJiang)
  • [Feature] Provide multi-arch images for apiserver and security proxy (#4131, @seanlaii)
  • test: add LastTransition to fix test (#4132, @machichima)
  • Add top-level Labels and Resources Structed fields to HeadGroupSpec and WorkerGroupSpec ([#4106](#4...

v1.5.0-rc.1

03 Nov 15:08
7ba3600

v1.5.0-rc.1 Pre-release
[releasey-1.5] update version to v1.5.0-rc.1 (#4170)

* update kuberay versions to v1.5.0-rc.1

Signed-off-by: Andrew Sy Kim <[email protected]>

* update generated helm docs

Signed-off-by: Andrew Sy Kim <[email protected]>

---------

Signed-off-by: Andrew Sy Kim <[email protected]>

v1.5.0-rc.0

30 Oct 03:00
88dd7f5

v1.5.0-rc.0 Pre-release
update versions to v1.5.0-rc.0 (#4155)

Signed-off-by: Andrew Sy Kim <[email protected]>

v1.4.2

16 Jul 23:41
34ea80e

Changelog

v1.4.1

07 Jul 20:19
3d138cf

Changelog

v1.4.0

21 Jun 16:12
279b9f0

Highlights

Enhanced Kubectl Plugin

KubeRay v1.4.0 introduces major improvements to the Kubectl Plugin:

  • Added a new scale command to scale worker groups in a RayCluster.
  • Extended the get command to support listing Ray nodes and worker groups.
  • Improved the create command:
    • Allows overriding default values in config files.
    • Supports additional fields such as Kubernetes labels and annotations, node selectors, ephemeral storage, ray start parameters, TPUs, autoscaler version, and more.

See Using the Kubectl Plugin (beta) for more details.

KubeRay Dashboard (alpha)

Starting from v1.4.0, you can use the open source dashboard UI for KubeRay. This component is still experimental and not considered ready for production, but feedback is welcome.

KubeRay dashboard is a web-based UI that allows you to view and manage KubeRay resources running on your Kubernetes cluster. It's different from the Ray dashboard, which is a part of the Ray cluster itself. The KubeRay dashboard provides a centralized view of all KubeRay resources.

See Use KubeRay dashboard (experimental) for more information. (The link will be replaced with the documentation website link after the PR is merged.)

Integration with kubernetes-sigs/scheduler-plugins

Starting with v1.4.0, KubeRay integrates with an additional scheduler, kubernetes-sigs/scheduler-plugins, to support gang scheduling for RayCluster resources. Currently, only single-scheduler mode is supported.

See KubeRay integration with scheduler plugins for details.

KubeRay APIServer V2 (alpha)

The new APIServer v2 provides an HTTP proxy interface compatible with the Kubernetes API. It enables users to manage Ray resources using standard Kubernetes clients.

Key features:

  • Full compatibility with Kubernetes OpenAPI Spec and CRDs.
  • Available as a Go library for building custom proxies with pluggable HTTP middleware.

APIServer v1 is now in maintenance mode and will no longer receive new features. v2 is still in alpha. Contributions and feedback are encouraged.

Service Level Indicator (SLI) Metrics

KubeRay now includes SLI metrics to help monitor the state and performance of KubeRay resources.

See KubeRay Metrics Reference for details.

Breaking Changes

Default to Non-Login Bash Shell

Prior to v1.4.0, KubeRay ran most commands using a login shell. Starting from v1.4.0, the default shell is a non-login Bash shell. You can temporarily revert to the login shell behavior using the ENABLE_LOGIN_SHELL environment variable, but using a login shell is not recommended, and this environment variable will be removed in a future release. (#3679)

If you encounter any issues with the new default behavior, please report them in #3822 instead of opening new issues.

Resource Name Changes and Length Validation

Before v1.4.0, KubeRay silently truncated resource names that were too long to fit Kubernetes' 63-character limit. Starting from v1.4.0, resource names are no longer implicitly truncated. Instead, KubeRay emits an invalid spec event if a name is too long. (#3083)

We also shortened some resource names to make the length limit easier to satisfy. The following changes were made:

  • The suffix of the headless service for RayCluster changed from headless-worker-svc to headless. (#3101)
  • The suffix of the RayCluster name changed from -raycluster-xxxxx to -xxxxx. (#3102)
  • The suffix of the head Pod for RayCluster changed from -head-xxxxx to -head. (#3028)

Updated Autoscaler v2 configuration

Starting from v1.4.0, autoscaler v2 is now configured using:

spec:
  autoscalerOptions:
    version: v2

You should not use the old RAY_enable_autoscaler_v2 environment variable.

See Autoscaler v2 Configuration for guidance.
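
For context, a minimal sketch of where this sits in a RayCluster manifest, assuming in-tree autoscaling is enabled on the cluster (see the linked guidance for the authoritative example):

apiVersion: ray.io/v1
kind: RayCluster
metadata:
  name: raycluster-autoscaler-v2
spec:
  rayVersion: '2.46.0'            # illustrative; use a Ray version that supports autoscaler v2
  enableInTreeAutoscaling: true
  autoscalerOptions:
    version: v2
  ...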

Changelog


v1.3.2

03 Apr 02:05
66e4132

Bug fixes

  • [RayJob] Use --no-wait for job submission to avoid carrying the error return code to the log tailing (#3216)
  • [kubectl-plugin] kubectl ray job submit: provide entrypoint to preserve compatibility with v1.2.2 (#3186)

Improvements

  • [kubectl-plugin] Add head/worker node selector option (#3228)
  • [kubectl-plugin] add node selector option for kubectl plugin create worker group (#3235)

Changelog

  • [RayJob][Fix] Use --no-wait for job submission to avoid carrying the error return code to the log tailing (#3216)
  • kubectl ray job submit: provide entrypoint (#3186)
  • [kubectl-plugin] Add head/worker node selector option (#3228)
  • add node selector option for kubectl plugin create worker group (#3235)
  • [Chore][CI] Limit the release-image-build github workflow to only take tag as input (#3117)
  • [CI] Remove create tag step from release (#3249)

v1.3.1

18 Mar 12:24
4d7e43c

Highlights

This release includes a Go dependency update to resolve an incompatibility issue when using newer versions of k8s.io/component-base.

Changelog

Changes required make a build after update of component-base (#3163, mszadkow)

v1.3.0

19 Feb 00:39
8ba2b33

Highlights

RayCluster Conditions API

The RayCluster conditions API is graduating to Beta status in v1.3. The new API provides more details about the RayCluster’s observable state that were not possible to express in the old API. The following conditions are supported for v1.3: AllPodRunningAndReadyFirstTime, RayClusterPodsProvisioning, HeadPodNotFound and HeadPodRunningAndReady. We will be adding more conditions in future releases.
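
For illustration, these conditions follow the standard Kubernetes condition shape in the RayCluster status. A hypothetical, abbreviated status snippet (timestamps and reasons are made up) might look like:

status:
  conditions:
  - type: HeadPodRunningAndReady
    status: "True"
    lastTransitionTime: "2025-02-19T00:00:00Z"   # illustrative value
    reason: HeadPodRunningAndReady               # illustrative value
  - type: AllPodRunningAndReadyFirstTime
    status: "True"
    lastTransitionTime: "2025-02-19T00:00:00Z"   # illustrative value
    reason: AllPodRunningAndReadyFirstTime       # illustrative value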

Ray Kubectl Plugin

The Ray Kubectl Plugin is graduating to Beta status. The following commands are supported with KubeRay v1.3:

  • kubectl ray logs <cluster-name>: download Ray logs to a local directory
  • kubectl ray session <cluster-name>: initiate port-forwarding session to the Ray head
  • kubectl ray create <cluster>: create a Ray cluster
  • kubectl ray job submit: create a RayJob and submit a job using a local working directory

See the Ray Kubectl Plugin docs for more details.

RayJob Stability Improvements

Several improvements have been made to enhance the stability of long-running RayJobs. In particular, when using submissionMode=K8sJobMode, job submissions will no longer fail due to duplicate submission IDs. Now, if a submission ID already exists, the logs of the existing job are retrieved instead.

RayService API Improvements

RayService strives to deliver zero-downtime serving. When changes in the RayService spec cannot be applied in place, it attempts to migrate traffic to a new RayCluster in the background. However, users might not always have sufficient resources for a new RayCluster. Beginning with KubeRay 1.3, users can customize this behavior using the new UpgradeStrategy option within the RayServiceSpec.
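
A minimal sketch of the new option, using the NewCluster and None values described in the Features list below (confirm field names against the RayService API reference):

apiVersion: ray.io/v1
kind: RayService
metadata:
  name: example-rayservice
spec:
  upgradeStrategy:
    type: None   # keep the existing RayCluster; use NewCluster for zero-downtime upgrades
  ...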

Previously, the serviceStatus field in RayService was inconsistent and did not accurately represent the actual state. Starting with KubeRay v1.3.0, we have introduced two conditions, Ready and UpgradeInProgress, to RayService. Following the approach taken with RayCluster, we have decided to deprecate serviceStatus. In the future, serviceStatus will be removed, and conditions will serve as the definitive source of truth. For now, serviceStatus remains available but is limited to two possible values: "Running" or an empty string.

GCS Fault Tolerance API Improvements

The new GcsFaultToleranceOptions field in the RayCluster now provides a streamlined way for users to enable GCS Fault Tolerance on a RayCluster. This eliminates the previous need to distribute related settings across Pod annotations, container environment variables, and the RayStartParams. Furthermore, users can now specify their Redis username in the newly introduced field (requires Ray 2.4.1 or later). To see the impact of this change on a YAML configuration, please refer to the example manifest.
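
As a hedged sketch of the shape of this API, with field and Secret names assumed from the description above (refer to the linked example manifest for the authoritative YAML):

apiVersion: ray.io/v1
kind: RayCluster
metadata:
  name: raycluster-gcs-ft
spec:
  gcsFaultToleranceOptions:
    redisAddress: "redis:6379"          # assumed external Redis endpoint
    redisUsername:                      # username support requires a newer Ray release (see note above)
      valueFrom:
        secretKeyRef:
          name: redis-user-pass         # hypothetical Secret name
          key: username
    redisPassword:
      valueFrom:
        secretKeyRef:
          name: redis-user-pass
          key: password
  ...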

Breaking Changes

RayService API

Starting from KubeRay v1.3.0, we have removed all possible values of RayService.Status.ServiceStatus except Running, so the only valid values for ServiceStatus are Running and empty. If ServiceStatus is Running, it means that RayService is ready to serve requests. In other words, ServiceStatus is equivalent to the Ready condition. It is strongly recommended to use the Ready condition instead of ServiceStatus going forward.

Features

  • RayCluster Conditions API is graduating to Beta status. The feature gate RayClusterStatusConditions is now enabled by default.
  • New events were added for RayCluster, RayJob and RayService for improved observability
  • Various improvements to Ray autoscaler v2
  • Introduce a new API in RayService spec.upgradeStrategy. The upgrade strategy type can be set to NewCluster or None to modify the behavior of zero-downtime upgrades for RayService.
  • Add RayCluster controller expectations to mitigate stale informer caches
  • RayJob now supports submission mode InteractiveMode. Use this submission mode when you want to submit jobs from a local working directory on your laptop.
  • RayJob now supports the spec.deletionPolicy API; this feature requires the RayJobDeletionPolicy feature gate to be enabled. Initial deletion policies are DeleteCluster, DeleteWorkers, DeleteSelf, and DeleteNone.
  • KubeRay now detects TPU and Neuron Core resources and passes them as custom resources in the ray start parameters
  • Introduce RayClusterSuspending and RayClusterSuspended conditions
  • Container CPU requests are now used for Ray --num-cpus if CPU limits are not specified
  • Various example manifests for using TPU v6 with KubeRay
  • Add ManagedBy field in RayJob and RayCluster. This is required for MultiKueue support.
  • Add support for kubectl ray create cluster command
  • Add support for kubectl ray create workergroup command

Guides & Tutorials

Changelog
