
feat: add kubernetes step executor and DAG defaults#1886

Merged
yottahmd merged 10 commits into main from k8s-step-type
Apr 1, 2026

Conversation

Collaborator

yottahmd commented Mar 31, 2026

Summary

  • add a Kubernetes step executor with type: k8s / type: kubernetes, including Job creation, pod log streaming, exit-code handling, config decoding, and direct runtime tests
  • add DAG-level kubernetes: defaults with explicit step opt-in and predictable merge behavior, and harden the executor around kubeconfig fallback, cleanup, invalid quantities, and completion-state handling
  • align the embedded DAG schema and schema navigation output with the implemented Kubernetes config surface so CLI/UI schema consumers document the new feature accurately

Why

  • Dagu could not run steps directly as Kubernetes Jobs, so Kubernetes-based execution had to be wrapped outside the workflow engine
  • shared Kubernetes settings like namespace, context, service account, and resources were repetitive without a DAG-level defaults block
  • the executor and schema needed to be production-safe and internally consistent before this functionality is exposed broadly

Testing

  • go test ./internal/runtime/builtin/kubernetes
  • go test ./internal/runtime/...
  • go test ./internal/core/...
  • go test ./internal/agent/schema ./internal/cmn/schema -count=1
  • go test ./internal/core/spec -count=1

Summary by CodeRabbit

  • New Features

    • Added native Kubernetes executor support to run workflow steps as Kubernetes Jobs with configurable container settings, resource requirements, tolerations, volumes, and environment variables.
    • Introduced DAG-level Kubernetes defaults that are inherited by all Kubernetes-type steps.
  • Dependencies

    • Updated Docker client library to Moby for enhanced compatibility.

Add a new "kubernetes" (alias "k8s") step type that runs workflow steps
as Kubernetes Jobs in a local or remote cluster. Supports namespace,
resources (CPU/memory requests/limits), env vars, volumes, volume mounts,
service accounts, node selectors, tolerations, image pull secrets,
and configurable cleanup policy.
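
For reference, a DAG exercising these features might look roughly like the following. This is a hypothetical sketch: field names follow the snake_case schema fragments quoted in the review (e.g. `value_from`, `secret_key_ref`, `cleanup_policy`), and the image name and step layout are invented for illustration.

```yaml
kubernetes:                 # DAG-level defaults, inherited by opted-in steps
  namespace: batch
  service_account: dagu-runner
  resources:
    requests: {cpu: 100m, memory: 128Mi}
    limits: {cpu: "1", memory: 512Mi}

steps:
  - name: migrate
    type: k8s               # alias for "kubernetes"; opts this step in
    command: ./migrate --all
    config:
      image: ghcr.io/example/migrator:1.4   # hypothetical image
      env:
        - name: DB_URL
          value_from:
            secret_key_ref: {name: db-creds, key: url}
      cleanup_policy: keep  # keep the finished Job for debugging
```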

coderabbitai Bot commented Mar 31, 2026

Important

Review skipped

Auto incremental reviews are disabled on this repository.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 954480dc-a764-4423-a9d8-934c87c67f09

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.

📝 Walkthrough


This PR introduces Kubernetes as a new executor type for the Dagu workflow system. It adds Kubernetes job execution support via a new internal/runtime/builtin/kubernetes package with client, configuration, and executor implementations. The PR also updates dependencies from github.com/docker/docker to github.com/moby/moby, extends the DAG schema to support Kubernetes-level defaults, integrates Kubernetes config parsing into the spec transformation pipeline, and includes comprehensive tests for Kubernetes executor behavior and configuration inheritance.

Changes

Cohort / File(s) Summary
Dependency Management
go.mod
Removed github.com/docker/docker v28.4.0+incompatible; added github.com/moby/moby/api v1.54.0, github.com/moby/moby/client v0.3.0; added Kubernetes direct requirements k8s.io/api v0.35.3, k8s.io/apimachinery v0.35.3, k8s.io/client-go v0.35.3 and multiple indirect Kubernetes packages; migrated github.com/docker/go-connections v0.6.0 to indirect; bumped go.yaml.in/yaml/v2 to v2.4.3.
Schema & Core Type Definitions
internal/cmn/schema/dag.schema.json, internal/core/dag.go, internal/core/spec/dag.go
Added DAG-level kubernetes defaults configuration field referencing new schema definitions; added KubernetesConfig type to DAG struct; registered Kubernetes transformer in spec pipeline; introduced comprehensive schema definitions for Kubernetes volumes, env vars, resources, tolerations, and executor config aggregates.
Kubernetes Configuration & Validation
internal/core/spec/kubernetes.go, internal/runtime/builtin/kubernetes/config.go
Implemented Kubernetes spec building logic with validation via schema type kubernetes_defaults; added configuration merging and deep-clone utilities for Kubernetes config maps; introduced Config struct with 40+ fields for Kubernetes job specification (image, env, volumes, tolerations, resource limits, cleanup policy, etc.) with LoadConfigFromMap validation and Kubernetes API type conversions.
Kubernetes Executor Runtime
internal/runtime/builtin/kubernetes/client.go, internal/runtime/builtin/kubernetes/executor.go, internal/runtime/builtin/builtin.go
Implemented Kubernetes Job client with REST config building, job creation/deletion, pod polling, log streaming, and exit-code retrieval; built executor implementing executor.Executor interface with Run/Kill lifecycle, cleanup policy handling, context cancellation, and error reporting; registered kubernetes and k8s executor names via builtin module initialization.
Step-Level Kubernetes Merging
internal/core/spec/step.go, internal/core/spec/loader.go
Extended buildStepExecutor to merge DAG-level Kubernetes defaults into step executor config; added custom merge strategy in mergeTransformer for core.KubernetesConfig type supporting deep merge and empty-config clearing semantics.
Docker Client Migration
internal/intg/ct_test.go, internal/intg/sftp_test.go, internal/intg/ssh_test.go, internal/runtime/builtin/docker/client.go, internal/runtime/builtin/docker/client_test.go, internal/runtime/builtin/docker/config.go, internal/runtime/builtin/docker/parser.go, internal/runtime/builtin/docker/registry_auth.go, internal/runtime/builtin/docker/registry_auth_test.go
Replaced github.com/docker/docker imports with github.com/moby/moby equivalents; updated API option/result types across container create/start/stop/remove/inspect/exec flows; migrated port types from nat.* to network.* with explicit IP address parsing; updated auth encoding from registry.EncodeAuthConfig to authconfig.Encode; refactored image pull to use specs.Platform instead of string platform values.
Kubernetes Schema & Step Validation Tests
internal/cmn/schema/dag_schema_test.go, internal/agent/schema/output_test.go, internal/core/spec/step_test.go
Added DAG schema validation tests for Kubernetes executor type and volume definitions; added schema repo-copy match verification; extended step executor tests to validate Kubernetes as command-only executor rejecting multiple commands and shell configuration; updated expected navigation output to include kubernetes segment.
Kubernetes Config & Executor Tests
internal/core/spec/kubernetes_test.go, internal/runtime/builtin/kubernetes/config_test.go, internal/runtime/builtin/kubernetes/client_test.go, internal/runtime/builtin/kubernetes/executor_test.go
Comprehensive test suites validating Kubernetes spec inheritance from DAG root to steps, base config merging, empty-config clearing, schema validation, configuration parsing/validation, Kubernetes REST config building, job lifecycle (creation, pod polling, log streaming, completion, cleanup), and executor run/kill behavior with context cancellation and cleanup semantics.
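
The "deep merge and empty-config clearing semantics" described in the Step-Level Kubernetes Merging cohort can be sketched as follows. This is a hypothetical helper operating on plain maps, not the PR's actual `mergeTransformer`: step values override DAG defaults, nested maps merge recursively, and an explicitly empty step config clears the inherited defaults.

```go
package main

import "fmt"

// mergeConfig merges step-level overrides onto DAG-level defaults
// (hypothetical sketch of the semantics described in the walkthrough).
func mergeConfig(defaults, step map[string]any) map[string]any {
	if step != nil && len(step) == 0 {
		return map[string]any{} // empty-config clearing semantics
	}
	out := make(map[string]any, len(defaults)+len(step))
	for k, v := range defaults {
		out[k] = v
	}
	for k, v := range step {
		if sv, ok := v.(map[string]any); ok {
			if dv, ok := out[k].(map[string]any); ok {
				out[k] = mergeConfig(dv, sv) // recurse into nested maps
				continue
			}
		}
		out[k] = v // scalar or type-mismatched value: step wins
	}
	return out
}

func main() {
	defaults := map[string]any{
		"namespace": "jobs",
		"resources": map[string]any{"requests": map[string]any{"cpu": "100m"}},
	}
	step := map[string]any{
		"image":     "alpine:3.20",
		"resources": map[string]any{"limits": map[string]any{"memory": "256Mi"}},
	}
	merged := mergeConfig(defaults, step)
	fmt.Println(merged["namespace"], merged["image"]) // jobs alpine:3.20
	res := merged["resources"].(map[string]any)
	fmt.Println(len(res)) // 2: requests (inherited) + limits (step)
}
```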

Sequence Diagram(s)

sequenceDiagram
    participant DAGSpec as DAG Spec<br/>(YAML)
    participant Transformer as Spec Transformer<br/>(buildKubernetes)
    participant Merger as Step Merger<br/>(mergeKubernetesExecutorConfig)
    participant Step as Step Executor<br/>(Kubernetes)
    participant KubeClient as Kubernetes Client
    participant KubeAPI as Kubernetes API<br/>(etcd)

    DAGSpec->>Transformer: kubernetes: {image: "...", namespace: "..."}
    Transformer->>Transformer: Validate config<br/>(schema: kubernetes_defaults)
    Transformer->>Transformer: Clone & store in DAG.Kubernetes
    
    DAGSpec->>Step: type: "kubernetes"<br/>executor_config: {...}
    Step->>Merger: Merge DAG.Kubernetes + step config
    Merger->>Merger: Deep merge with step overrides
    Merger->>Step: Merged config
    
    Step->>KubeClient: CreateJob(ctx, stepName, command)
    KubeClient->>KubeClient: Build Job manifest<br/>(image, env, volumes, resources)
    KubeClient->>KubeAPI: Create Job
    KubeAPI-->>KubeClient: Job created
    
    Step->>KubeClient: WaitForPod(ctx)
    KubeClient->>KubeAPI: Poll pod status<br/>(label selector: job-name)
    KubeAPI-->>KubeClient: Pod Running
    KubeClient-->>Step: Pod name
    
    Step->>KubeClient: StreamLogs(ctx, podName)
    KubeClient->>KubeAPI: Stream logs<br/>(step container)
    KubeAPI-->>KubeClient: Log lines
    KubeClient-->>Step: Logs written to stdout
    
    Step->>KubeClient: WaitForCompletion(ctx)
    KubeClient->>KubeAPI: Poll Job conditions
    KubeAPI-->>KubeClient: Job.Completed = True
    KubeClient-->>Step: Success
    
    Step->>KubeClient: DeleteJob(ctx)<br/>(cleanup_policy)
    KubeClient->>KubeAPI: Delete Job<br/>(background propagation)
    KubeAPI-->>KubeClient: Job deleted

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes


🚥 Pre-merge checks | ✅ 2 passed | ❌ 1 failed

❌ Failed checks (1 warning)

  • Docstring Coverage — ⚠️ Warning: docstring coverage is 23.46%, which is below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (2 passed)

  • Description Check — ✅ Passed: check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check — ✅ Passed: the title 'feat: add kubernetes step executor and DAG defaults' directly and clearly summarizes the main change: introducing a Kubernetes step executor with DAG-level defaults support.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.




coderabbitai Bot left a comment


Actionable comments posted: 8

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
internal/runtime/builtin/docker/client.go (2)

151-162: ⚠️ Potential issue | 🟠 Major

Don't clear ctID for stopped named containers unless you also handle the existing container.

If inspectContainer finds the named container but it is stopped, this branch drops the ID and later falls into ContainerCreate with the same name. Docker still reserves that name, so the "will be created" path turns into a runtime conflict instead of a predictable restart/remove/fail-fast path.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/runtime/builtin/docker/client.go` around lines 151 - 162, The code
clears ctID when a named container is found but not running, which later causes
ContainerCreate to fail due to Docker name reservation; instead, keep ctID =
info.ID for stopped containers and implement explicit handling (e.g., call a
restart or remove flow) when isContainerRunning(...) returns false before
attempting ContainerCreate. Update the logic around inspectContainer,
isContainerRunning, and ctID so stopped containers are either restarted
(dockerClient.ContainerStart/Restart) or removed (ContainerRemove) based on the
desired behavior, and only call ContainerCreate when no existing container ID is
present.

843-867: ⚠️ Potential issue | 🟠 Major

Fix TTY exec stream handling: avoid stdcopy for raw TTY streams.

When TTY is enabled in ExecOptions (line 47), the ExecAttach stream is raw, not multiplexed. The current code always uses stdcopy.StdCopy (line 879), which only works for multiplexed frames. This will cause output loss or framing errors when TTY is enabled.

Branch the copy logic based on the TTY setting: use io.Copy for TTY streams and stdcopy.StdCopy for multiplexed streams.

Suggested fix
 	go func() {
-		if _, err := stdcopy.StdCopy(stdout, stderr, resp.Reader); err != nil {
-			logger.Error(ctx, "Docker executor: stdcopy", tag.Error(err))
+		var copyErr error
+		if c.cfg.ExecOptions.TTY {
+			_, copyErr = io.Copy(stdout, resp.Reader)
+		} else {
+			_, copyErr = stdcopy.StdCopy(stdout, stderr, resp.Reader)
+		}
+		if copyErr != nil {
+			logger.Error(ctx, "Docker executor: copy exec output", tag.Error(copyErr))
 		}
 		wg.Done()
 	}()
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/runtime/builtin/docker/client.go` around lines 843 - 867, The
ExecAttach stream handling must branch on the TTY flag: after calling
cli.ExecAttach (using execCreateResp.ID) check the effective TTY (execOpts.TTY
or c.cfg.ExecOptions.TTY) and if TTY is true treat the attached stream as a raw
stream and use io.Copy to copy resp.Reader to the destination writer(s); if TTY
is false use stdcopy.StdCopy to demultiplex stdout/stderr from resp.Reader into
separate writers. Update the code around ExecAttach/resp handling to close the
response when done and choose io.Copy for raw TTY streams and stdcopy.StdCopy
for multiplexed streams to avoid frame loss.
🧹 Nitpick comments (1)
internal/core/dag.go (1)

227-230: Clone should deep-copy the new Kubernetes map field.

Line 230 introduces a mutable map on DAG; (*DAG).Clone() currently performs a shallow copy for this field, which can cause shared-state side effects across clones.

♻️ Suggested update in (*DAG).Clone()
 func (d *DAG) Clone() *DAG {
 	//nolint:govet // intentional copy; sync.Once is immediately reset below
 	clone := *d
 	// Reset sync.Once so LoadDotEnv can be called on the clone
 	clone.dotenvOnce = sync.Once{}
 	if d.PresolvedBuildEnv != nil {
 		clone.PresolvedBuildEnv = maps.Clone(d.PresolvedBuildEnv)
 	}
+	if d.Kubernetes != nil {
+		clone.Kubernetes = maps.Clone(d.Kubernetes)
+	}
 	return &clone
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/core/dag.go` around lines 227 - 230, The Clone method on DAG must
deep-copy the new mutable Kubernetes map to avoid shared-state: update
(*DAG).Clone() to produce an independent copy of the KubernetesConfig held in
the DAG.Kubernetes field (either by calling a
KubernetesConfig.Clone()/DeepCopy() helper or by explicitly allocating a new map
and copying each key/value), so the cloned DAG gets its own Kubernetes map
instance rather than a shallow reference to the original; modify DAG.Clone() to
perform this deep-copy for Kubernetes before returning the cloned DAG.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@internal/cmn/schema/dag.schema.json`:
- Around line 3682-3692: The schema currently enforces "image" as required in
the kubernetesExecutorConfig fragment (the allOf that references
kubernetesCommonConfig and required:["image"]), which rejects step-level config
fragments that rely on DAG-level defaults; remove "image" from the immediate
required list in kubernetesExecutorConfig and instead enforce that the final
merged config contains image after DAG defaults are applied. Update the
validation path that uses kubernetesExecutorConfig so it validates step config
only after merging DAG defaults (or add a separate post-merge check) to assert
presence of image in the merged object rather than on the raw step fragment.
- Around line 3267-3335: The schema currently allows invalid Kubernetes env
entries; update the definitions to enforce valid combinations: in
kubernetesEnvVar add "required": ["name"] and a oneOf ensuring exactly one of
"value" or "value_from" is present (use oneOf with presence checks for "value"
vs "value_from"); in kubernetesEnvVarSource add a oneOf requiring exactly one of
"secret_key_ref", "config_map_key_ref", or "field_ref"; in
kubernetesEnvFromSource add a oneOf requiring at least one of "config_map_ref"
or "secret_ref" (and keep kubernetesEnvFromRef required:["name"]) so env_from
entries with only "prefix" are rejected. Use the existing symbols
kubernetesEnvVar, kubernetesEnvVarSource, kubernetesEnvFromRef, and
kubernetesEnvFromSource to locate where to add these required/oneOf constraints.

In `@internal/runtime/builtin/kubernetes/client.go`:
- Around line 283-285: WaitForPod currently treats corev1.PodFailed as a
terminal success and podPriority ranks Failed above Pending, causing a failed
retry to be selected while a newer retry is still pending; update the WaitForPod
switch (function WaitForPod) to only return when pod.Status.Phase is
corev1.PodRunning or corev1.PodSucceeded (do not return on PodFailed), and
adjust the podPriority comparator (the podPriority function/logic referenced
around the other block at 338-345) to rank Pending higher than Failed (or
exclude Failed from top selection) so pending restart attempts are preferred
over older failed pods when choosing which pod to collect logs/exit code from.
Ensure both places (the switch and podPriority logic) are updated consistently.
- Around line 71-82: The current logic in the kubeConfig.ClientConfig() error
branch falls back to rest.InClusterConfig() even when an explicit KUBECONFIG was
provided; update the error handling so that if explicitKubeconfig is true (or
explicitContext is true when the user explicitly set a context) you return the
original kubeconfig error immediately (e.g., return nil, fmt.Errorf("kubeconfig
error: %w", err)) and only attempt rest.InClusterConfig() when neither
explicitKubeconfig nor explicitContext is set and no explicit kubeconfig files
exist (use hasAnyKubeconfigFile(loadingRules.GetLoadingPrecedence()) to decide);
adjust the branch around kubeConfig.ClientConfig(), rest.InClusterConfig(),
explicitKubeconfig, and explicitContext accordingly.

In `@internal/runtime/builtin/kubernetes/config.go`:
- Around line 520-528: The schema entries in config.go currently register env,
env_from, resources, tolerations, volumes, and volume_mounts as opaque
object/array nodes, which prevents CLI/UI clients from validating or
autocompleting nested Kubernetes fields; update the jsonschema map (the block
that defines "env", "env_from", "resources", "node_selector", "tolerations",
"labels", "annotations", "volumes", "volume_mounts") to replace those generic
entries with full nested jsonschema.Schema definitions: define "env" as array of
objects with properties like name, value, valueFrom (and nested
secretKeyRef/configMapKeyRef), "env_from" as array of objects with
configMapRef/secretRef, "resources" as object with properties limits and
requests (each an object of string quantities), "tolerations" as array of
objects with key, operator, value, effect, tolerationSeconds, "volumes" as array
of objects supporting secret, configMap, emptyDir, persistentVolumeClaim,
hostPath shapes, and "volume_mounts" as array of objects with name, mountPath,
subPath, readOnly; keep original descriptions and types but provide these nested
properties so schema consumers can validate and autocomplete the
runtime-supported fields.
- Around line 493-498: The parseQuantity function currently only validates
syntax; after calling resource.ParseQuantity(value) add a check for negative
values (if qty.Sign() < 0) and return an error immediately so negatives fail
during config decoding. Use the same error-pattern as other negative checks in
this file: either return fmt.Errorf("%s: negative quantity %q: %w", field,
value, ErrNegativeQuantity) and add a new sentinel ErrNegativeQuantity (matching
the ErrNegativeActiveDeadline/ErrNegativeBackoffLimit style) or reuse an
appropriate existing ErrNegative* sentinel if present; update parseQuantity to
return resource.Quantity{}, that error when qty is negative.

In `@internal/runtime/builtin/kubernetes/executor.go`:
- Around line 280-285: validateStep currently allows multiple entries in
step.Commands while buildCommand only uses step.Commands[0], causing extra
commands to be silently dropped; update validateStep (and any related validation
for the kubernetes executor) to detect when len(step.Commands) > 1 and return a
clear error (e.g., "kubernetes executor does not support multiple command
entries; provide a single combined command or use a script"), or alternatively
modify buildCommand to combine all entries into a single shell invocation (e.g.,
join with " && " or wrap in "sh -c") so all commands run; reference validateStep
and buildCommand when making the change.
- Around line 135-139: The failure path after client.WaitForPod currently always
calls e.cleanup(ctx, true) which deletes unschedulable pods even when the job's
cleanup_policy is set to keep; change this to respect the job's cleanup policy
by not forcing deletion on scheduling errors: check the job/config cleanup
policy (e.g. cleanup_policy or e.cleanupPolicy) and either skip calling
e.cleanup or call e.cleanup(ctx, false) only when the policy allows deletion,
reserving force=true for explicit cancel/kill paths; update the code around the
WaitForPod error handling and the e.cleanup(ctx, true) call accordingly.


ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 2e3537be-7095-4c2a-b26d-2bfdf19bc061

📥 Commits

Reviewing files that changed from the base of the PR and between eaee029 and 5f837d4.

⛔ Files ignored due to path filters (1)
  • go.sum is excluded by !**/*.sum
📒 Files selected for processing (27)
  • go.mod
  • internal/agent/schema/output_test.go
  • internal/cmn/schema/dag.schema.json
  • internal/cmn/schema/dag_schema_test.go
  • internal/core/dag.go
  • internal/core/spec/dag.go
  • internal/core/spec/kubernetes.go
  • internal/core/spec/kubernetes_test.go
  • internal/core/spec/loader.go
  • internal/core/spec/step.go
  • internal/core/spec/step_test.go
  • internal/intg/ct_test.go
  • internal/intg/sftp_test.go
  • internal/intg/ssh_test.go
  • internal/runtime/builtin/builtin.go
  • internal/runtime/builtin/docker/client.go
  • internal/runtime/builtin/docker/client_test.go
  • internal/runtime/builtin/docker/config.go
  • internal/runtime/builtin/docker/parser.go
  • internal/runtime/builtin/docker/registry_auth.go
  • internal/runtime/builtin/docker/registry_auth_test.go
  • internal/runtime/builtin/kubernetes/client.go
  • internal/runtime/builtin/kubernetes/client_test.go
  • internal/runtime/builtin/kubernetes/config.go
  • internal/runtime/builtin/kubernetes/config_test.go
  • internal/runtime/builtin/kubernetes/executor.go
  • internal/runtime/builtin/kubernetes/executor_test.go

Comment on lines +3267 to +3335
"kubernetesEnvVarSource": {
"type": "object",
"additionalProperties": false,
"properties": {
"secret_key_ref": {
"$ref": "#/definitions/kubernetesKeySelector",
"description": "Read the value from a specific key in a Secret."
},
"config_map_key_ref": {
"$ref": "#/definitions/kubernetesKeySelector",
"description": "Read the value from a specific key in a ConfigMap."
},
"field_ref": {
"$ref": "#/definitions/kubernetesFieldRef",
"description": "Read the value from a Pod field."
}
},
"description": "Source configuration for an environment variable value."
},
"kubernetesEnvVar": {
"type": "object",
"additionalProperties": false,
"properties": {
"name": {
"type": "string",
"description": "Environment variable name."
},
"value": {
"type": "string",
"description": "Literal environment variable value."
},
"value_from": {
"$ref": "#/definitions/kubernetesEnvVarSource",
"description": "Dynamic source for the environment variable value."
}
},
"description": "A Kubernetes environment variable entry."
},
"kubernetesEnvFromRef": {
"type": "object",
"additionalProperties": false,
"required": ["name"],
"properties": {
"name": {
"type": "string",
"description": "Name of the Secret or ConfigMap to import."
}
},
"description": "Reference to a Secret or ConfigMap for env_from."
},
"kubernetesEnvFromSource": {
"type": "object",
"additionalProperties": false,
"properties": {
"config_map_ref": {
"$ref": "#/definitions/kubernetesEnvFromRef",
"description": "Import all keys from a ConfigMap."
},
"secret_ref": {
"$ref": "#/definitions/kubernetesEnvFromRef",
"description": "Import all keys from a Secret."
},
"prefix": {
"type": "string",
"description": "Optional prefix applied to imported environment variable names."
}
},
"description": "Import a Secret or ConfigMap as environment variables."
},

⚠️ Potential issue | 🟠 Major

Reject impossible Kubernetes env specs at schema time.

Examples like env: [{value: "x"}], env: [{name: "X", value: "a", value_from: {...}}], or env_from: [{prefix: "APP_"}] currently pass this schema even though they do not describe a valid Kubernetes env definition. That pushes basic validation out to Job creation time. Add required and oneOf constraints here so schema consumers fail these combinations early.

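
The requested "exactly one of value / value_from" rule, expressed here as runtime validation for illustration only (the actual fix belongs in the JSON schema via required/oneOf; the struct and helper below are hypothetical):

```go
package main

import (
	"errors"
	"fmt"
)

// envVar mirrors the schema fields above as a plain struct for illustration.
type envVar struct {
	Name      string
	Value     string
	ValueFrom map[string]any
}

// validateEnvVar sketches the constraint the comment asks the schema to
// enforce: a name is required, and exactly one of value or value_from is set.
func validateEnvVar(e envVar) error {
	if e.Name == "" {
		return errors.New("env entry requires a name")
	}
	hasValue := e.Value != ""
	hasValueFrom := len(e.ValueFrom) > 0
	if hasValue == hasValueFrom { // both set, or neither set
		return errors.New("env entry must set exactly one of value or value_from")
	}
	return nil
}

func main() {
	fmt.Println(validateEnvVar(envVar{Name: "X", Value: "a"}))     // <nil>
	fmt.Println(validateEnvVar(envVar{Value: "x"}) != nil)         // true: missing name
	fmt.Println(validateEnvVar(envVar{Name: "X"}) != nil)          // true: no value source
}
```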

Comment on lines +3682 to +3692
"kubernetesExecutorConfig": {
"allOf": [
{
"$ref": "#/definitions/kubernetesCommonConfig"
},
{
"type": "object",
"required": ["image"]
}
],
"description": "Kubernetes executor configuration for running a workflow step as a Kubernetes Job."

⚠️ Potential issue | 🟠 Major

Allow step overrides when image comes from DAG defaults.

kubernetesExecutorConfig makes image mandatory whenever a step has a config block. That means a DAG with root kubernetes.image works only until a step overrides some other Kubernetes field; type: k8s plus config: { cleanup_policy: keep } is rejected before merge even though the final merged config is valid. This required-image check needs to happen after DAG defaults are merged, not on the raw step fragment.

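
A post-merge check along the lines suggested above could look roughly like this (hypothetical helper; the PR's actual validation path may differ):

```go
package main

import (
	"errors"
	"fmt"
)

// validateMerged requires "image" only on the final merged config, after
// DAG-level defaults have been applied, rather than on the raw step fragment.
func validateMerged(merged map[string]any) error {
	img, ok := merged["image"].(string)
	if !ok || img == "" {
		return errors.New(`kubernetes executor: merged config is missing required field "image"`)
	}
	return nil
}

func main() {
	// A step fragment like {cleanup_policy: keep} lacks image on its own,
	// but passes once the DAG default has supplied it.
	merged := map[string]any{"image": "alpine:3.20", "cleanup_policy": "keep"}
	fmt.Println(validateMerged(merged)) // <nil>
	fmt.Println(validateMerged(map[string]any{"cleanup_policy": "keep"}) != nil) // true
}
```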

Comment on lines +71 to +82
	kubeConfig := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(loadingRules, overrides)
	restCfg, err := kubeConfig.ClientConfig()
	if err != nil {
		if explicitKubeconfig || explicitContext || hasAnyKubeconfigFile(loadingRules.GetLoadingPrecedence()) {
			return nil, fmt.Errorf("kubeconfig error: %w", err)
		}

		restCfg, inClusterErr := rest.InClusterConfig()
		if inClusterErr != nil {
			return nil, fmt.Errorf("kubeconfig error: %w; in-cluster error: %w", err, inClusterErr)
		}
		return restCfg, nil

⚠️ Potential issue | 🟠 Major

Don't fail open to in-cluster config when KUBECONFIG is explicitly set but broken.

If KUBECONFIG points to a missing/invalid file and this code is running inside Kubernetes, this branch silently ignores that explicit user choice and targets the in-cluster cluster instead.

Suggested fix
 func buildRESTConfig(cfg *Config) (*rest.Config, error) {
 	loadingRules := clientcmd.NewDefaultClientConfigLoadingRules()
+	envKubeconfigSet := strings.TrimSpace(os.Getenv(clientcmd.RecommendedConfigPathEnvVar)) != ""
 	explicitKubeconfig := cfg.Kubeconfig != ""
 	if explicitKubeconfig {
 		loadingRules.ExplicitPath = cfg.Kubeconfig
 	}
@@
 	restCfg, err := kubeConfig.ClientConfig()
 	if err != nil {
-		if explicitKubeconfig || explicitContext || hasAnyKubeconfigFile(loadingRules.GetLoadingPrecedence()) {
+		if explicitKubeconfig || explicitContext || envKubeconfigSet || hasAnyKubeconfigFile(loadingRules.GetLoadingPrecedence()) {
 			return nil, fmt.Errorf("kubeconfig error: %w", err)
 		}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/runtime/builtin/kubernetes/client.go` around lines 71 - 82, The
current logic in the kubeConfig.ClientConfig() error branch falls back to
rest.InClusterConfig() even when an explicit KUBECONFIG was provided; update the
error handling so that if explicitKubeconfig is true (or explicitContext is true
when the user explicitly set a context) you return the original kubeconfig error
immediately (e.g., return nil, fmt.Errorf("kubeconfig error: %w", err)) and only
attempt rest.InClusterConfig() when neither explicitKubeconfig nor
explicitContext is set and no explicit kubeconfig files exist (use
hasAnyKubeconfigFile(loadingRules.GetLoadingPrecedence()) to decide); adjust the
branch around kubeConfig.ClientConfig(), rest.InClusterConfig(),
explicitKubeconfig, and explicitContext accordingly.

Comment on lines +283 to +285
switch pod.Status.Phase {
case corev1.PodRunning, corev1.PodSucceeded, corev1.PodFailed:
return pod.Name, nil

⚠️ Potential issue | 🟠 Major

Retrying Jobs can select the wrong pod here.

WaitForPod returns as soon as a pod reaches PodFailed, and podPriority ranks Failed above Pending. With backoff_limit > 0, the first failed attempt can win selection while the retry pod is still pending, so downstream log/exit-code collection can report failure even if a later retry succeeds.

Also applies to: 338-345

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/runtime/builtin/kubernetes/client.go` around lines 283 - 285,
WaitForPod currently treats corev1.PodFailed as a terminal success and
podPriority ranks Failed above Pending, causing a failed retry to be selected
while a newer retry is still pending; update the WaitForPod switch (function
WaitForPod) to only return when pod.Status.Phase is corev1.PodRunning or
corev1.PodSucceeded (do not return on PodFailed), and adjust the podPriority
comparator (the podPriority function/logic referenced around the other block at
338-345) to rank Pending higher than Failed (or exclude Failed from top
selection) so pending restart attempts are preferred over older failed pods when
choosing which pod to collect logs/exit code from. Ensure both places (the
switch and podPriority logic) are updated consistently.
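
The ordering this comment recommends can be sketched with phases as plain strings. `podPriority` and `pickPod` here are illustrative, not Dagu's actual implementation; the key property is that Pending outranks Failed, so a fresh retry attempt is preferred over an older failed pod:

```go
package main

import "fmt"

// podPriority ranks pod phases for selection on retrying Jobs: prefer pods
// that can still produce a final result. Pending outranks Failed so a new
// retry attempt wins over an earlier failed pod. (Illustrative sketch.)
func podPriority(phase string) int {
	switch phase {
	case "Succeeded":
		return 4
	case "Running":
		return 3
	case "Pending":
		return 2
	case "Failed":
		return 1
	default:
		return 0
	}
}

// pickPod selects the phase with the highest priority from a Job's pods.
func pickPod(phases []string) string {
	best, bestScore := "", -1
	for _, p := range phases {
		if s := podPriority(p); s > bestScore {
			best, bestScore = p, s
		}
	}
	return best
}

func main() {
	// First attempt failed; the retry pod is still pending: prefer the retry.
	fmt.Println(pickPod([]string{"Failed", "Pending"}))
}
```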

Comment on lines +493 to +498
func parseQuantity(field, value string) (resource.Quantity, error) {
qty, err := resource.ParseQuantity(value)
if err != nil {
return resource.Quantity{}, fmt.Errorf("%s: invalid quantity %q: %w", field, value, err)
}
return qty, nil

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🌐 Web query:

Does k8s.io/apimachinery/pkg/api/resource.ParseQuantity accept negative values such as -100m or -1Gi? What is the idiomatic way to reject negative resource.Quantity values in Go?

💡 Result:

Yes, k8s.io/apimachinery/pkg/api/resource.ParseQuantity accepts negative values such as -100m or -1Gi. The parsing regex ^([+-]?[0-9.]+)([eEinumkKMGTP]*[-+]?[0-9]*)$ explicitly allows a leading + or - sign, and the source code handles the sign by setting positive = false for - and applying result = -result or amount.Neg(amount). The Quantity.Sign method returns -1 for negative values, and Neg supports negation. Official docs confirm signed numbers are supported in the serialization format. The idiomatic way to reject negative resource.Quantity values in Go is to check whether q.Sign() < 0 after parsing, or to use k8s.io/kubernetes/pkg/apis/core/v1/validation.ValidateNonnegativeQuantity(q, fldPath), the standard validation function used throughout Kubernetes to ensure quantities are non-negative (e.g., in pod resource validation).




Reject negative quantities after parsing.

resource.ParseQuantity accepts negative values like -100m or -1Gi, so parseQuantity currently validates syntax only. Negative CPU/memory in resources.requests/resources.limits and negative empty_dir.size_limit pass validation and fail later at Job creation instead of during config decoding. This is inconsistent with the existing pattern of early negative-value checks in the file (see ErrNegativeActiveDeadline, ErrNegativeBackoffLimit, ErrNegativeTTLAfterFinished).

Suggested fix
 func parseQuantity(field, value string) (resource.Quantity, error) {
 	qty, err := resource.ParseQuantity(value)
 	if err != nil {
 		return resource.Quantity{}, fmt.Errorf("%s: invalid quantity %q: %w", field, value, err)
 	}
+	if qty.Sign() < 0 {
+		return resource.Quantity{}, fmt.Errorf("%s: quantity must be >= 0", field)
+	}
 	return qty, nil
 }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
func parseQuantity(field, value string) (resource.Quantity, error) {
	qty, err := resource.ParseQuantity(value)
	if err != nil {
		return resource.Quantity{}, fmt.Errorf("%s: invalid quantity %q: %w", field, value, err)
	}
	if qty.Sign() < 0 {
		return resource.Quantity{}, fmt.Errorf("%s: quantity must be >= 0", field)
	}
	return qty, nil
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/runtime/builtin/kubernetes/config.go` around lines 493 - 498, The
parseQuantity function currently only validates syntax; after calling
resource.ParseQuantity(value) add a check for negative values (if qty.Sign() <
0) and return an error immediately so negatives fail during config decoding. Use
the same error-pattern as other negative checks in this file: either return
fmt.Errorf("%s: negative quantity %q: %w", field, value, ErrNegativeQuantity)
and add a new sentinel ErrNegativeQuantity (matching the
ErrNegativeActiveDeadline/ErrNegativeBackoffLimit style) or reuse an appropriate
existing ErrNegative* sentinel if present; update parseQuantity to return
resource.Quantity{}, that error when qty is negative.

Comment on lines +520 to +528
"env": {Type: "array", Items: &jsonschema.Schema{Type: "object"}, Description: "Environment variables"},
"env_from": {Type: "array", Items: &jsonschema.Schema{Type: "object"}, Description: "Environment variable sources"},
"resources": {Type: "object", Description: "CPU/memory requests and limits"},
"node_selector": {Type: "object", Description: "Node selector constraints"},
"tolerations": {Type: "array", Items: &jsonschema.Schema{Type: "object"}, Description: "Pod tolerations"},
"labels": {Type: "object", Description: "Labels for Job and Pod"},
"annotations": {Type: "object", Description: "Annotations for Job and Pod"},
"volumes": {Type: "array", Items: &jsonschema.Schema{Type: "object"}, Description: "Pod volumes"},
"volume_mounts": {Type: "array", Items: &jsonschema.Schema{Type: "object"}, Description: "Container volume mounts"},

⚠️ Potential issue | 🟠 Major

The registered schema still hides the nested Kubernetes surface.

env, env_from, resources, tolerations, volumes, and volume_mounts are all exposed as opaque object / array{object} nodes here. CLI/UI schema consumers still cannot validate or autocomplete the nested fields that the runtime actually supports.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/runtime/builtin/kubernetes/config.go` around lines 520 - 528, The
schema entries in config.go currently register env, env_from, resources,
tolerations, volumes, and volume_mounts as opaque object/array nodes, which
prevents CLI/UI clients from validating or autocompleting nested Kubernetes
fields; update the jsonschema map (the block that defines "env", "env_from",
"resources", "node_selector", "tolerations", "labels", "annotations", "volumes",
"volume_mounts") to replace those generic entries with full nested
jsonschema.Schema definitions: define "env" as array of objects with properties
like name, value, valueFrom (and nested secretKeyRef/configMapKeyRef),
"env_from" as array of objects with configMapRef/secretRef, "resources" as
object with properties limits and requests (each an object of string
quantities), "tolerations" as array of objects with key, operator, value,
effect, tolerationSeconds, "volumes" as array of objects supporting secret,
configMap, emptyDir, persistentVolumeClaim, hostPath shapes, and "volume_mounts"
as array of objects with name, mountPath, subPath, readOnly; keep original
descriptions and types but provide these nested properties so schema consumers
can validate and autocomplete the runtime-supported fields.

Comment on lines +135 to +139
podName, err := client.WaitForPod(ctx)
if err != nil {
logger.Error(ctx, "Kubernetes executor: pod scheduling failed", slog.Any("error", err))
e.cleanup(ctx, true)
return fmt.Errorf("pod scheduling failed: %w", err)

⚠️ Potential issue | 🟠 Major

Respect cleanup_policy: keep when pod scheduling fails.

This path always calls e.cleanup(ctx, true), so unschedulable Jobs are deleted even when the user explicitly asked to keep them for debugging. That makes one of the hardest failure modes disappear immediately. Reserve force=true for cancellation/kill paths and let normal WaitForPod failures follow the configured cleanup policy.

Suggested change
 	podName, err := client.WaitForPod(ctx)
 	if err != nil {
 		logger.Error(ctx, "Kubernetes executor: pod scheduling failed", slog.Any("error", err))
-		e.cleanup(ctx, true)
-		return fmt.Errorf("pod scheduling failed: %w", err)
+		if ctx.Err() != nil {
+			e.cleanup(ctx, true)
+			return ctx.Err()
+		}
+		e.cleanup(ctx, false)
+		return fmt.Errorf("pod scheduling failed: %w", err)
 	}
📝 Committable suggestion


Suggested change
	podName, err := client.WaitForPod(ctx)
	if err != nil {
		logger.Error(ctx, "Kubernetes executor: pod scheduling failed", slog.Any("error", err))
		if ctx.Err() != nil {
			e.cleanup(ctx, true)
			return ctx.Err()
		}
		e.cleanup(ctx, false)
		return fmt.Errorf("pod scheduling failed: %w", err)
	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/runtime/builtin/kubernetes/executor.go` around lines 135 - 139, The
failure path after client.WaitForPod currently always calls e.cleanup(ctx, true)
which deletes unschedulable pods even when the job's cleanup_policy is set to
keep; change this to respect the job's cleanup policy by not forcing deletion on
scheduling errors: check the job/config cleanup policy (e.g. cleanup_policy or
e.cleanupPolicy) and either skip calling e.cleanup or call e.cleanup(ctx, false)
only when the policy allows deletion, reserving force=true for explicit
cancel/kill paths; update the code around the WaitForPod error handling and the
e.cleanup(ctx, true) call accordingly.

Comment on lines +280 to +285
func validateStep(step core.Step) error {
if len(step.ExecutorConfig.Config) == 0 {
return fmt.Errorf("kubernetes executor requires config with at least 'image' field")
}
_, err := LoadConfigFromMap(step.ExecutorConfig.Config)
return err

⚠️ Potential issue | 🟠 Major

Don't silently drop additional command entries.

buildCommand only forwards step.Commands[0], and validateStep does not reject multiple entries. A Kubernetes step using command: ["echo one", "echo two"] will execute only the first command and still report success. Either reject multi-entry steps for this executor or wrap all entries into a single shell command before creating the Job.

Suggested guardrail
 func validateStep(step core.Step) error {
 	if len(step.ExecutorConfig.Config) == 0 {
 		return fmt.Errorf("kubernetes executor requires config with at least 'image' field")
 	}
+	if len(step.Commands) > 1 {
+		return fmt.Errorf("kubernetes executor supports only a single command entry; wrap multiple commands in a shell command or script")
+	}
 	_, err := LoadConfigFromMap(step.ExecutorConfig.Config)
 	return err
 }

Also applies to: 288-299

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/runtime/builtin/kubernetes/executor.go` around lines 280 - 285,
validateStep currently allows multiple entries in step.Commands while
buildCommand only uses step.Commands[0], causing extra commands to be silently
dropped; update validateStep (and any related validation for the kubernetes
executor) to detect when len(step.Commands) > 1 and return a clear error (e.g.,
"kubernetes executor does not support multiple command entries; provide a single
combined command or use a script"), or alternatively modify buildCommand to
combine all entries into a single shell invocation (e.g., join with " && " or
wrap in "sh -c") so all commands run; reference validateStep and buildCommand
when making the change.
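
The wrap-into-a-shell alternative can be sketched as follows. `joinCommands` is a hypothetical helper, and joining with `&&` (fail fast) rather than `;` (run all) is a design choice the executor would need to commit to:

```go
package main

import (
	"fmt"
	"strings"
)

// joinCommands wraps multiple step command entries into a single shell
// invocation so no entry is silently dropped. A single entry passes through
// unchanged. (Illustrative sketch, not Dagu's actual buildCommand.)
func joinCommands(commands []string) []string {
	if len(commands) <= 1 {
		return commands
	}
	// "&&" stops at the first failing command, matching sequential-step semantics.
	return []string{"sh", "-c", strings.Join(commands, " && ")}
}

func main() {
	// Becomes: sh -c "echo one && echo two"
	fmt.Println(joinCommands([]string{"echo one", "echo two"}))
}
```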

@yottahmd yottahmd merged commit bed0c6f into main Apr 1, 2026
7 checks passed
@yottahmd yottahmd deleted the k8s-step-type branch April 1, 2026 03:52
@codecov

codecov Bot commented Apr 1, 2026

Codecov Report

❌ Patch coverage is 63.33333% with 572 lines in your changes missing coverage. Please review.
✅ Project coverage is 68.73%. Comparing base (9115b82) to head (f0801b4).
⚠️ Report is 8 commits behind head on main.

Files with missing lines Patch % Lines
internal/runtime/builtin/kubernetes/config.go 63.95% 262 Missing and 62 partials ⚠️
internal/runtime/builtin/kubernetes/client.go 52.72% 115 Missing and 24 partials ⚠️
internal/runtime/builtin/kubernetes/executor.go 69.28% 34 Missing and 13 partials ⚠️
internal/runtime/builtin/docker/client.go 65.43% 22 Missing and 6 partials ⚠️
internal/core/spec/kubernetes.go 74.62% 14 Missing and 3 partials ⚠️
internal/core/dag.go 62.06% 10 Missing and 1 partial ⚠️
internal/runtime/builtin/docker/parser.go 66.66% 2 Missing and 2 partials ⚠️
internal/core/spec/loader.go 85.71% 1 Missing and 1 partial ⚠️
Additional details and impacted files

Impacted file tree graph

@@            Coverage Diff             @@
##             main    #1886      +/-   ##
==========================================
- Coverage   68.92%   68.73%   -0.19%     
==========================================
  Files         452      456       +4     
  Lines       56092    57581    +1489     
==========================================
+ Hits        38661    39581     +920     
- Misses      13934    14390     +456     
- Partials     3497     3610     +113     
Files with missing lines Coverage Δ
internal/core/spec/dag.go 84.91% <ø> (ø)
internal/core/spec/step.go 79.59% <100.00%> (+0.25%) ⬆️
internal/runtime/builtin/docker/config.go 92.35% <100.00%> (ø)
internal/runtime/builtin/docker/registry_auth.go 77.90% <100.00%> (+0.25%) ⬆️
internal/core/spec/loader.go 76.13% <85.71%> (+0.33%) ⬆️
internal/runtime/builtin/docker/parser.go 89.79% <66.66%> (-3.76%) ⬇️
internal/core/dag.go 88.55% <62.06%> (-3.72%) ⬇️
internal/core/spec/kubernetes.go 74.62% <74.62%> (ø)
internal/runtime/builtin/docker/client.go 57.08% <65.43%> (-0.64%) ⬇️
internal/runtime/builtin/kubernetes/executor.go 69.28% <69.28%> (ø)
... and 2 more

... and 17 files with indirect coverage changes


Continue to review full report in Codecov by Sentry.

Legend
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update d613621...f0801b4. Read the comment docs.

