
feat: implement output.renderer.converters support#49

Merged
jdx merged 3 commits into main from feat/output-renderer-converters
Mar 24, 2026

Conversation

@jdx
Owner

@jdx jdx commented Mar 24, 2026

Summary

pklr previously skipped pkl's output block entirely, which meant converter functions were never applied. This caused consumers like hk to need post-processing workarounds (pklr_normalize_step_or_group) to normalize the JSON output.

This PR implements output.renderer.converters support:

  • Track which pkl class each object was instantiated from via ObjectSource.type_name
  • Parse the output.renderer.converters block from the AST and evaluate converter lambdas
  • Add Evaluator::apply_converters() which recursively walks the value tree and applies matching converter functions
  • Call apply_converters() in eval_to_json_with_options() before serializing to JSON

This enables pkl files like hk's Config.pkl to use converters for:

  • Type discrimination (_type tags for serde tagged enums)
  • Value coercion (string → array, bool → string)
  • Custom serialization (Regex objects)

Without this, hk needed ~100 lines of Rust to replicate what pkl's converters do natively.
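
To make the feature concrete, a minimal pkl module using converters might look like the sketch below. The class and converter body are illustrative only, not taken from hk's actual Config.pkl; the `[ClassName] = (x) -> expr` key/lambda shape is the structure this PR parses.

```pkl
// Hypothetical module: the converter injects a `_type` tag so serde
// can deserialize the object as a tagged enum downstream.
class Step {
  run: String
}

step = new Step { run = "cargo test" }

output {
  renderer = new JsonRenderer {
    converters {
      [Step] = (it) -> Map("_type", "Step", "run", it.run)
    }
  }
}
```

With this PR, evaluating such a module to JSON applies the lambda to every `Step` instance before serialization, instead of emitting the raw object.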

Test plan

  • All 229 existing tests pass unchanged
  • 5 new tests covering converter functionality:
    • converter_injects_type_tag — basic _type injection
    • converter_coerces_values — string→array and bool→string coercion
    • converter_multiple_types — Group + Step converters with nesting
    • converter_no_converters_is_noop — no converters = unchanged output
    • converter_output_not_in_result — output block excluded from JSON

This comment was generated by Claude Code.

pklr previously skipped pkl's `output` block entirely, which meant
converter functions (used to inject _type tags, coerce types, etc.)
were never applied. This caused consumers like hk to need post-processing
workarounds to normalize the JSON output.

Changes:
- Add `type_name: Option<String>` to ObjectSource to track which pkl
  class an object was instantiated from
- Tag objects with their class name when evaluating `new ClassName { ... }`
- Parse the `output.renderer.converters` block from the AST and evaluate
  converter lambdas
- Add `apply_converters()` to Evaluator, which recursively walks the
  value tree and applies matching converter functions
- Call `apply_converters()` in `eval_to_json_with_options()` before
  serializing to JSON

This enables pkl files like hk's Config.pkl to use converters for
type discrimination (_type tags), value coercion (string→array,
bool→string), and custom serialization without requiring downstream
workarounds.

Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
@gemini-code-assist

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces comprehensive support for Pkl's output.renderer.converters feature within pklr. By integrating this functionality, pklr can now correctly interpret and apply transformations defined in Pkl modules' output blocks. This enhancement allows for native handling of complex serialization requirements, such as injecting type tags, coercing values between different types, and custom object serialization, directly within the Pkl evaluation process. The change significantly reduces the need for external post-processing steps, simplifying the workflow for consumers of pklr's JSON output.

Highlights

  • output.renderer.converters support: Implemented the Pkl output.renderer.converters feature, which was previously skipped by pklr.
  • Enhanced Pkl serialization: Enables Pkl files to natively use converters for tasks like type discrimination (_type tags), value coercion (e.g., string to array, boolean to string), and custom serialization (e.g., Regex objects).
  • Reduced post-processing: Eliminates the need for external post-processing workarounds (like pklr_normalize_step_or_group in hk) by integrating converter logic directly into the evaluation process.


@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

This pull request introduces a new feature for Pkl's output.renderer.converters, enabling custom transformations of evaluated objects based on their type. Key changes include adding a converters field to the Evaluator struct, storing type_name in ObjectSource during object creation and amendment, and implementing logic to extract and recursively apply these converter lambdas. New test cases validate this functionality. Feedback from the review suggests improving the readability of the converter extraction logic by flattening nested loops and optimizing type name matching to avoid unnecessary string allocations for better performance.

Comment thread src/eval.rs Outdated
Comment on lines +2045 to +2085
```rust
let Entry::Property(renderer_prop) = entry else {
    continue;
};
if renderer_prop.name != "renderer" {
    continue;
}
let Some(renderer_body) = &renderer_prop.body else {
    continue;
};
// Find `converters { ... }` inside renderer
for rentry in renderer_body {
    let Entry::Property(converters_prop) = rentry else {
        continue;
    };
    if converters_prop.name != "converters" {
        continue;
    }
    let Some(converters_body) = &converters_prop.body else {
        continue;
    };
    // Each converter is a DynProperty: [ClassName] = (x) -> expr
    for centry in converters_body {
        if let Entry::DynProperty(key_expr, val_expr) = centry {
            // Extract the class name from the key expression
            let class_name = match key_expr {
                Expr::Ident(name) => name.clone(),
                Expr::Field(_, name) => name.clone(),
                Expr::String(s) => s.clone(),
                _ => continue,
            };
            // Evaluate the lambda value
            if let Ok(lambda) = self.eval_expr(val_expr, scope, depth).await
                && matches!(lambda, Value::Lambda(..))
            {
                self.converters.push((class_name, lambda));
            }
        }
    }
}
```


medium

The nested loops for finding output.renderer.converters are quite deep, which can make the code harder to read and maintain. You could flatten this structure by using iterators and methods like find_map. This would make the logic for locating the converters block more linear and easier to follow.

```rust
// Find `renderer { ... }` inside output
let Some(renderer_body) = output_body.iter().find_map(|entry| {
    if let Entry::Property(p) = entry && p.name == "renderer" {
        p.body.as_ref()
    } else {
        None
    }
}) else { return; };

// Find `converters { ... }` inside renderer
let Some(converters_body) = renderer_body.iter().find_map(|entry| {
    if let Entry::Property(p) = entry && p.name == "converters" {
        p.body.as_ref()
    } else {
        None
    }
}) else { return; };

// Each converter is a DynProperty: [ClassName] = (x) -> expr
for centry in converters_body {
    if let Entry::DynProperty(key_expr, val_expr) = centry {
        // Extract the class name from the key expression
        let class_name = match key_expr {
            Expr::Ident(name) => name.clone(),
            Expr::Field(_, name) => name.clone(),
            Expr::String(s) => s.clone(),
            _ => continue,
        };
        // Evaluate the lambda value
        if let Ok(lambda) = self.eval_expr(val_expr, scope, depth).await {
            if matches!(lambda, Value::Lambda(..)) {
                self.converters.push((class_name, lambda));
            }
        }
    }
}
```
Comment thread src/eval.rs
Comment on lines +2115 to +2117
```rust
if matches && let Value::Lambda(params, body, captured) = lambda {
    let mut call_scope = Scope::default();
    for (k, v) in captured {
```


medium

The current type name matching logic uses format! inside a loop, which can lead to unnecessary string allocations and impact performance, especially with many objects and converters. You can optimize this by using ends_with on string slices and checking the boundary character, avoiding allocations altogether.

```rust
let matches = if conv_name == tn {
    true
} else if tn.len() > conv_name.len() && tn.ends_with(conv_name) {
    // Check for `.<conv_name>` suffix
    tn.as_bytes().get(tn.len() - conv_name.len() - 1) == Some(&b'.')
} else if conv_name.len() > tn.len() && conv_name.ends_with(tn) {
    // Check for `.<tn>` suffix
    conv_name.as_bytes().get(conv_name.len() - tn.len() - 1) == Some(&b'.')
} else {
    false
};
```

@greptile-apps

greptile-apps Bot commented Mar 24, 2026

Greptile Summary

This PR implements output.renderer.converters support in pklr, enabling pkl files to define type-specific transformation lambdas that are applied to the evaluated value tree before JSON serialization — eliminating the need for post-processing workarounds in downstream consumers like hk.

Key changes:

  • src/value.rs: Adds type_name: Option<String> to ObjectSource to record which pkl class each object was instantiated from.
  • src/eval.rs: Tags Value::Object instances with their class name during instantiation; adds extract_converters_from_ast (walks the output.renderer.converters AST at depth 0 to collect lambdas) and apply_converters / apply_converters_recursive (recursively walks the value tree, invoking matching lambdas).
  • src/lib.rs: Calls apply_converters in eval_to_json_with_options after evaluation, before to_json().
  • tests/pkl_features.rs: Adds 5 new tests covering type-tag injection, value coercion, multi-type converters, no-op behaviour, and output-block exclusion.

Issues found:

  • self.converters is never cleared between evaluations: Evaluator is designed for reuse (it exposes an import cache), but converters pushed during one eval_source call persist into subsequent calls. A second invocation will accumulate both sets of converters, causing incorrect transform behaviour. The lib.rs top-level functions are safe because they always construct a fresh Evaluator, but direct users of the Evaluator API are affected.
  • extract_converters_from_ast carries a #[async_recursion(?Send)] attribute even though the function does not recurse — it is harmless but misleading.
  • The third branch of the type-name matching (conv_name.ends_with(&format!(".{tn}"))) could cause unexpected converter matches when short, common type names coincidentally suffix an unrelated fully-qualified converter key.

Confidence Score: 3/5

  • The PR is safe for the library-level API but has a latent state-accumulation bug in the Evaluator public API that should be fixed before merging.
  • The feature is well-designed and the five new tests all exercise the happy path correctly. However, self.converters is never cleared, meaning users who reuse an Evaluator instance across multiple eval_source calls will see converters from prior evaluations incorrectly applied to later ones. Since Evaluator is a public, reusable type (it has an import cache for exactly this purpose), this is a real bug rather than a theoretical one. The lib.rs entry points are unaffected because they always create a fresh instance, but the misuse is easy and the failure mode is silent (wrong output rather than a panic or error).
  • src/eval.rs — specifically the missing self.converters.clear() reset and the #[async_recursion] annotation on extract_converters_from_ast.

Important Files Changed

| Filename | Overview |
| --- | --- |
| `src/eval.rs` | Adds converter extraction (`extract_converters_from_ast`) and application (`apply_converters` / `apply_converters_recursive`), tags objects with `type_name` during instantiation, and skips the `output` block in serialization. Key issue: `self.converters` is never cleared, causing stale converter accumulation across multiple `eval_source` calls on the same `Evaluator` instance. |
| `src/value.rs` | Adds `type_name: Option<String>` field to `ObjectSource` to track which pkl class an object was instantiated from. Minimal, clean change with no issues. |
| `src/lib.rs` | Wires `apply_converters` into `eval_to_json_with_options` after evaluation. Always creates a fresh `Evaluator`, so the stale-converter bug in `eval.rs` does not affect this code path. |
| `tests/pkl_features.rs` | Adds 5 focused tests covering the converter pipeline: type tag injection, value coercion, multi-type converters, no-op behaviour, and output block exclusion. Tests are well-structured and cover the happy path thoroughly. |

Sequence Diagram

```mermaid
sequenceDiagram
    participant Caller
    participant lib as lib.rs (eval_to_json_with_options)
    participant Evaluator
    participant eval_module
    participant extract_converters as extract_converters_from_ast
    participant apply_converters as apply_converters_recursive

    Caller->>lib: eval_to_json_with_options(path, options)
    lib->>Evaluator: new()
    lib->>Evaluator: eval_source(source, path)
    Evaluator->>eval_module: eval_module(module, path, depth=0)
    loop For each top-level property
        alt prop.name == "output" && depth == 0
            eval_module->>extract_converters: extract_converters_from_ast(prop, scope, depth)
            extract_converters->>extract_converters: walk output→renderer→converters entries
            extract_converters-->>Evaluator: push (class_name, lambda) to self.converters
            eval_module->>eval_module: continue (skip output from result)
        else Normal property
            eval_module->>eval_module: eval_property → tag Value::Object with type_name
        end
    end
    eval_module-->>lib: Value (with type_name tags)
    lib->>Evaluator: apply_converters(value)
    Evaluator->>apply_converters: apply_converters_recursive(value, converters)
    loop Recursively walk value tree
        alt Object with matching type_name
            apply_converters->>Evaluator: eval_expr(lambda_body, call_scope, 0)
            Evaluator-->>apply_converters: converted Value
            apply_converters->>apply_converters: recurse on result
        else Object without match / List
            apply_converters->>apply_converters: recurse into children
        end
    end
    apply_converters-->>lib: transformed Value
    lib->>lib: value.to_json()
    lib-->>Caller: serde_json::Value
```


Reviews (1): Last reviewed commit: "[autofix.ci] apply automated fixes"

Comment thread src/eval.rs
Comment on lines 44 to 46
```rust
http_rewrites: Vec::new(),
converters: Vec::new(),
}
```


P2 Stale converters across multiple eval_source calls

self.converters is never cleared before a new evaluation begins. Since Evaluator exposes an import cache designed for reuse, a caller can reasonably invoke eval_source more than once on the same instance. Each call will append to self.converters rather than replace them, meaning converters extracted from a previous file bleed into the next.

```rust
// Example misuse that silently produces wrong output:
let mut ev = Evaluator::new();
let _ = ev.eval_source(file_a_src, file_a_path).await?; // populates converters from file A
let val = ev.eval_source(file_b_src, file_b_path).await?;
let val = ev.apply_converters(val).await?; // applies converters from BOTH A and B
```

The fix is to clear the list before the top-level evaluation begins. For example, in eval_source before calling eval_module, or at the start of eval_module when depth == 0:

```rust
pub async fn eval_source(&mut self, source: &str, path: &Path) -> Result<Value> {
    self.converters.clear(); // reset per-file state
    // ... rest of function
```


Comment thread src/eval.rs Outdated
```rust
/// Looks for the structure: `output { renderer { converters { [Type] = (x) -> ... } } }`.
/// Converter keys are type identifiers (e.g., `[Regex]`), which we preserve as
/// strings for matching against `ObjectSource.type_name`.
#[async_recursion(?Send)]
```


P2 Unnecessary #[async_recursion] on non-recursive function

extract_converters_from_ast never calls itself, so the #[async_recursion(?Send)] attribute is not needed here. The macro is designed specifically to handle the infinite-type problem that arises when an async fn calls itself directly; applying it to a non-recursive function compiles fine but is misleading to readers of the code.

Suggested change:

```diff
-#[async_recursion(?Send)]
 async fn extract_converters_from_ast(
```


Comment thread src/eval.rs Outdated
Comment on lines +2112 to +2114
```rust
let matches = conv_name == tn
    || tn.ends_with(&format!(".{conv_name}"))
    || conv_name.ends_with(&format!(".{tn}"));
```


P2 Potential false-positive match for short type names

The third branch of the matching logic:

```rust
|| conv_name.ends_with(&format!(".{tn}"))
```

is intended to handle cases like conv_name = "Config.Step" matching a type name tn = "Step". However, if a pkl module defines a class whose simple name (tn) happens to be a suffix of a different, unrelated converter name (after a .), that converter would also fire. For instance, a converter keyed on "Foo.Bar" would match any object whose type_name is "Bar", even if it comes from a completely unrelated class. This is only a concern in codebases with deeply-qualified converter names that share short suffixes with unrelated types, but it is worth documenting the assumption or adding a stricter check (e.g., requiring both sides to share a module prefix).


- Clear converters in eval_source to prevent stale state across calls
- Flatten nested loops in extract_converters_from_ast using find_map
- Remove unnecessary #[async_recursion] from non-recursive function
- Replace format!-based type name matching with allocation-free
  byte-level dot boundary checks

Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
@jdx jdx force-pushed the feat/output-renderer-converters branch from 2c48a2f to 7ee68cf Compare March 24, 2026 21:10
@jdx jdx merged commit c599225 into main Mar 24, 2026
6 checks passed
@jdx jdx deleted the feat/output-renderer-converters branch March 24, 2026 21:24
@jdx jdx mentioned this pull request Mar 24, 2026
jdx added a commit that referenced this pull request Mar 24, 2026
## 🤖 New release

* `pklr`: 0.2.2 -> 0.3.0 (⚠ API breaking changes)

### ⚠ `pklr` breaking changes

```text
--- failure constructible_struct_adds_field: externally-constructible struct adds field ---

Description:
A pub struct constructible with a struct literal has a new pub field. Existing struct literals must be updated to include the new field.
        ref: https://doc.rust-lang.org/reference/expressions/struct-expr.html
       impl: https://github.com/obi1kenobi/cargo-semver-checks/tree/v0.46.0/src/lints/constructible_struct_adds_field.ron

Failed in:
  field ObjectSource.type_name in /tmp/.tmpJNeWZq/pklr/src/value.rs:20
```

<details><summary><i><b>Changelog</b></i></summary><p>

<blockquote>

## [0.3.0](v0.2.2...v0.3.0) - 2026-03-24

### Added

- implement output.renderer.converters support
([#49](#49))
</blockquote>


</p></details>

---
This PR was generated with
[release-plz](https://github.com/release-plz/release-plz/).
jdx added a commit that referenced this pull request Mar 25, 2026
## Summary

Fixes the `missing field '_type'` error in hk's CI ([PR #769](jdx/hk#769)).

PR #49 added `output.renderer.converters` support, but only extracted
converters from the *current* module's body entries. When a file
`amends` or `extends` a base module that defines converters (like hk's
`hk.pkl` amending `Config.pkl`), those converters were never extracted.

- Extract converters from the base module's AST during amends processing
- Extract converters from the base module's AST during extends processing
- Added `converter_inherited_from_amends_base` test with two files on disk

## Test plan

- [x] New test: `converter_inherited_from_amends_base`: creates Base.pkl with converters + child.pkl that amends it, verifies converters are applied
- [x] All 235 existing tests pass

*This comment was generated by Claude Code.*

---------

Co-authored-by: Claude Opus 4.6 (1M context) <[email protected]>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>