
⚡️ Speed up method CustomPDFPageInterpreter._patch_current_chars_with_render_mode by 7% #34

Merged
KRRT7 merged 1 commit into codeflash/optimize-CustomPDFPageInterpreter._patch_current_chars_with_render_mode-mm3h21a8 from
codeflash/optimize-CustomPDFPageInterpreter._patch_current_chars_with_render_mode-mm3lbz82
Feb 26, 2026

Conversation


@codeflash-ai codeflash-ai bot commented Feb 26, 2026

📄 7% (0.07x) speedup for CustomPDFPageInterpreter._patch_current_chars_with_render_mode in unstructured/partition/pdf_image/pdfminer_utils.py

⏱️ Runtime: 181 microseconds → 168 microseconds (best of 250 runs)

📝 Explanation and details

Runtime improvement (primary): The optimized function runs ~7% faster overall (181 μs → 168 μs). That runtime reduction is the reason this change was accepted.

What changed (concrete optimizations)

  • Single-step cur_item lookup: replaced "hasattr(self.device, 'cur_item') and self.device.cur_item" with cur_item = getattr(self.device, 'cur_item', None) and an early return. This reduces two attribute lookups into one and short-circuits the common no-cur-item case.
  • Single getattr for _objs: replaced the conditional expression cur_item._objs if hasattr(cur_item, "_objs") else [] with objs = getattr(cur_item, "_objs", []). That avoids an extra hasattr call and repeated attribute access.
  • Avoid repeated len/index work: replaced index-based loop for i in range(start, len(objs)): obj = objs[i] with direct iteration over the sub-list objs[start:] (for obj in objs[start:]). Also added an explicit if start < len(objs): guard so we don't build a slice when there's nothing to process.
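
Putting the three bullets above together, the optimized method can be sketched roughly as follows. This is a reconstruction from the description, not the exact source of pdfminer_utils.py; a plain Char class stands in for pdfminer's LTChar so the sketch runs standalone:

```python
from types import SimpleNamespace


class Char:
    """Stand-in for pdfminer's LTChar, so this sketch runs without pdfminer."""


class PatchSketch:
    """Rough reconstruction of the optimized method from the bullets above;
    the real CustomPDFPageInterpreter may differ in detail."""

    def __init__(self, device, textstate):
        self.device = device
        self.textstate = textstate
        self._patched_cur_item = None
        self._last_patched_idx = 0

    def _patch_current_chars_with_render_mode(self):
        # One getattr replaces the hasattr check plus a second fetch,
        # with an early return for the common no-cur_item case.
        cur_item = getattr(self.device, "cur_item", None)
        if not cur_item:
            return
        # Reset bookkeeping when the interpreter moves to a new container.
        if cur_item is not self._patched_cur_item:
            self._patched_cur_item = cur_item
            self._last_patched_idx = 0
        objs = getattr(cur_item, "_objs", [])
        start = self._last_patched_idx
        # Guard the slice so no-new-items calls allocate nothing.
        if start < len(objs):
            render = self.textstate.render
            for obj in objs[start:]:  # direct iteration, no per-item indexing
                if isinstance(obj, Char):
                    obj.rendermode = render
        self._last_patched_idx = len(objs)
```

The index-reset on a new cur_item mirrors the behavior exercised by test_resets_index_when_cur_item_changes below.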

Why these changes speed things up (mechanics)

  • Fewer Python-level attribute lookups: getattr replaces two operations (hasattr + attribute fetch) with one, and caching cur_item and objs removes repeated attribute access inside the hot path. Attribute lookup in Python is comparatively expensive, so reducing them reduces per-call overhead.
  • Reduced indexing overhead: using direct iteration avoids repeated integer arithmetic and two list index operations per iteration (compute index, __getitem__). That lowers per-iteration Python overhead inside the tight loop that checks and patches LTChar objects.
  • Early-return common negative case: when device.cur_item is missing/falsy the function now returns earlier with less work, which helps where many interpreter calls have no cur_item.
  • Guarding slice creation: creating objs[start:] can allocate a new list. The added start < len(objs) check prevents unnecessary allocations on the common empty/no-new-items case; when the slice is needed it's typically small (we only process newly appended items), so the copy cost is minimal compared to saved per-iteration overhead.
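
The first bullet can be measured in isolation. This is an illustrative micro-benchmark of the two lookup styles, not taken from the PR; absolute numbers depend on machine and CPython version, only the relative difference matters:

```python
import timeit
from types import SimpleNamespace

device = SimpleNamespace(cur_item=SimpleNamespace(_objs=[]))

def two_step():
    # Original pattern: hasattr check, then a second attribute fetch.
    if hasattr(device, "cur_item") and device.cur_item:
        return device.cur_item
    return None

def one_step():
    # Optimized pattern: a single getattr with a default.
    return getattr(device, "cur_item", None)

print("hasattr + fetch:", timeit.timeit(two_step, number=200_000))
print("single getattr: ", timeit.timeit(one_step, number=200_000))
```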

Why this matters in context (hot path)

  • This method is invoked from do_TJ and do_Tj (text painting operators) in the interpreter. Those are hot paths during PDF text processing: the method can be called many times per page. Small per-call savings accumulate, so a ~7% per-call runtime improvement is meaningful.
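
A hedged sketch of that hot-path wiring: the real CustomPDFPageInterpreter overrides pdfminer's do_TJ/do_Tj text-showing operators; here a minimal dummy base stands in for pdfminer's PDFPageInterpreter so the call order is visible:

```python
class BaseInterpreter:
    def __init__(self):
        self.calls = []

    def do_TJ(self, seq):
        self.calls.append(("do_TJ", seq))  # pdfminer would emit LTChar objects here


class PatchingInterpreter(BaseInterpreter):
    def _patch_current_chars_with_render_mode(self):
        self.calls.append(("patch",))  # real code patches newly emitted LTChars

    def do_TJ(self, seq):
        super().do_TJ(seq)                            # paint text as usual
        self._patch_current_chars_with_render_mode()  # then patch the new chars


interp = PatchingInterpreter()
interp.do_TJ(["(Hi)"])
print(interp.calls)  # [('do_TJ', ['(Hi)']), ('patch',)]
```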

Behavioral and workload impact (based on tests)

  • Regression tests show the biggest wins on heavy workloads: large-scale patching and repeated-appends (1000-item and many-iteration cases) observe sizable reductions (e.g., ~16% and ~26% in annotated tests). Those are precisely the cases where per-iteration overhead dominates.
  • Some tiny regressions in a few microbench tests (single small calls) are visible in the annotated tests (sub-microsecond differences or small percent slowdowns). These are minor and expected trade-offs for the net runtime improvement across typical workloads.
  • Memory trade-off: objs[start:] creates a shallow copy of the sub-list. In typical usage this sub-list is small (only newly appended items) so the allocation cost is small and outweighed by per-iteration savings. If you have pathological cases where you repeatedly slice very large tails, that could increase temporary memory pressure — but tests and typical interpreter usage show net benefit.
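
The shallow-copy behavior described in the last bullet is easy to see directly (a small illustration, not from the PR):

```python
# Slicing allocates a new list object, but the elements are shared,
# not duplicated — the cost is one small list, not a deep copy.
objs = [object() for _ in range(5)]
start = 3
tail = objs[start:]           # shallow copy of the "newly appended" tail
assert tail is not objs       # a distinct list was allocated
assert tail[0] is objs[start] # but its elements are the same objects
assert len(tail) == 2
```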

Correctness

  • The logic is preserved: render_mode is applied only to new LTChar objects and _last_patched_idx is updated the same way. Using getattr defaults keeps previous behavior for missing attributes.

Summary

  • Net effect: fewer attribute lookups, less indexing overhead, and an early-exit optimize the common and hot cases encountered during PDF text interpretation, producing the measured 7% runtime speedup. The small memory/alloc cost of slicing is guarded and, in practice, outweighed by the reduced CPU overhead on the hot path (do_TJ/do_Tj).

Correctness verification report:

Test Status
⚙️ Existing Unit Tests 🔘 None Found
🌀 Generated Regression Tests 234 Passed
⏪ Replay Tests 🔘 None Found
🔎 Concolic Coverage Tests 2 Passed
📊 Tests Coverage 100.0%
🌀 Generated Regression Tests
from pdfminer.layout import LTChar, LTComponent
from pdfminer.pdfdevice import PDFDevice
from pdfminer.pdfinterp import PDFGraphicState

from unstructured.partition.pdf_image.pdfminer_utils import CustomPDFPageInterpreter

# function to test
# The actual function to be tested is CustomPDFPageInterpreter._patch_current_chars_with_render_mode
# defined in unstructured.partition.pdf_image.pdfminer_utils


def _make_font_like_object():
    """
    Create a real instance of a pdfminer class (LTComponent) and attach the
    minimal attributes/methods that LTChar expects from a font object:
      - fontname attribute
      - is_vertical() method
      - get_descent() method
    We attach callables directly on the instance (allowed for real class instances).
    """
    font = LTComponent((0, 0, 1, 1))  # real class instance
    # minimal attributes used by LTChar
    font.fontname = "FakeFont"
    # is_vertical and get_descent are called without arguments inside LTChar,
    # so simple callables with no parameters suffice.
    font.is_vertical = lambda: False
    font.get_descent = lambda: 0.0
    return font


def _make_ltchar(text="x", graphicstate=None):
    """
    Construct a real LTChar using the real LTChar constructor and minimal,
    valid arguments. This uses a real 'font-like' object created by
    _make_font_like_object and a real PDFGraphicState instance.
    """
    matrix = (1, 0, 0, 1, 0, 0)  # identity matrix works for bbox transforms
    font = _make_font_like_object()
    fontsize = 12.0
    scaling = 1.0
    rise = 0.0
    textwidth = 1.0  # used to compute adv
    textdisp = 0.0  # horizontal case expects a number
    ncs = None  # color space - LTChar just stores it, no behavior required
    if graphicstate is None:
        graphicstate = PDFGraphicState()  # real object from pdfminer
    # Create and return a real LTChar instance
    return LTChar(
        matrix, font, fontsize, scaling, rise, text, textwidth, textdisp, ncs, graphicstate
    )


def test_patches_all_ltchars_basic():
    # Create a real PDFDevice instance and interpreter
    device = PDFDevice(None)  # real class, None resource manager is acceptable here
    interp = CustomPDFPageInterpreter(None, device)  # real interpreter instance

    # Initialize state so interp.textstate exists
    interp.init_state((1, 0, 0, 1, 0, 0))

    # Build a cur_item (use a real LTComponent) and attach a real _objs list
    cur_item = LTComponent((0, 0, 10, 10))
    # Mix LTChar and other objects; only LTChar instances should be patched
    c1 = _make_ltchar("a")
    other = LTComponent((0, 0, 1, 1))  # non-LTChar real object
    c2 = _make_ltchar("b")
    cur_item._objs = [c1, other, c2]

    # Attach cur_item to device (the interpreter expects device.cur_item)
    device.cur_item = cur_item

    # Set a render mode and call the function under test
    interp.textstate.render = 3
    interp._patch_current_chars_with_render_mode()  # 1.38μs -> 1.33μs (3.15% faster)


def test_only_new_items_patched_on_subsequent_calls():
    # Setup interpreter and device
    device = PDFDevice(None)
    interp = CustomPDFPageInterpreter(None, device)
    interp.init_state((1, 0, 0, 1, 0, 0))

    # Initial cur_item with two LTChar objects
    cur_item = LTComponent((0, 0, 10, 10))
    c1 = _make_ltchar("x1")
    c2 = _make_ltchar("x2")
    cur_item._objs = [c1, c2]
    device.cur_item = cur_item

    # Patch with render mode 1
    interp.textstate.render = 1
    interp._patch_current_chars_with_render_mode()  # 1.21μs -> 1.25μs (3.36% slower)

    # Append a new LTChar and change render mode to 5
    c3 = _make_ltchar("x3")
    cur_item._objs.append(c3)
    interp.textstate.render = 5
    interp._patch_current_chars_with_render_mode()  # 583ns -> 584ns (0.171% slower)


def test_no_cur_item_is_safe_and_does_nothing():
    # Interpreter with a device that has no cur_item (or cur_item is falsy)
    device = PDFDevice(None)
    interp = CustomPDFPageInterpreter(None, device)
    interp.init_state((1, 0, 0, 1, 0, 0))

    # Ensure device.cur_item is missing / None
    device.cur_item = None

    # This should not raise and should not create _patched_cur_item
    interp.textstate.render = 7
    interp._patch_current_chars_with_render_mode()  # 333ns -> 334ns (0.299% slower)


def test_cur_item_without__objs_gets_registered_but_no_patching():
    # If cur_item exists but has no _objs attribute, the function should handle it gracefully
    device = PDFDevice(None)
    interp = CustomPDFPageInterpreter(None, device)
    interp.init_state((1, 0, 0, 1, 0, 0))

    # cur_item is an LTComponent but we deliberately do NOT set _objs
    cur_item = LTComponent((0, 0, 2, 2))
    device.cur_item = cur_item

    # Call with some render mode; should create _patched_cur_item and set _last_patched_idx to 0
    interp.textstate.render = 2
    interp._patch_current_chars_with_render_mode()  # 958ns -> 708ns (35.3% faster)


def test_rendermode_can_be_any_value_including_none():
    # Test robust behavior when textstate.render is None or unusual values
    device = PDFDevice(None)
    interp = CustomPDFPageInterpreter(None, device)
    interp.init_state((1, 0, 0, 1, 0, 0))

    cur_item = LTComponent((0, 0, 5, 5))
    c1 = _make_ltchar("alpha")
    cur_item._objs = [c1]
    device.cur_item = cur_item

    # Set render to None and patch
    interp.textstate.render = None
    interp._patch_current_chars_with_render_mode()  # 1.12μs -> 1.12μs (0.000% faster)

    # Change render to a negative integer and ensure it is applied
    interp.textstate.render = -99
    c2 = _make_ltchar("beta")
    cur_item._objs.append(c2)
    interp._patch_current_chars_with_render_mode()  # 667ns -> 583ns (14.4% faster)


def test_large_scale_mixed_objs_performance_and_correctness():
    # Construct a large list of objects (up to 1000 as required) mixing LTChar and other objects.
    device = PDFDevice(None)
    interp = CustomPDFPageInterpreter(None, device)
    interp.init_state((1, 0, 0, 1, 0, 0))

    cur_item = LTComponent((0, 0, 100, 100))
    objs = []
    expected_chars = []
    total = 1000  # stress test size as required
    # Create every third element as an LTChar, others as LTComponent
    for i in range(total):
        if i % 3 == 0:
            ch = _make_ltchar(f"ch{i}")
            objs.append(ch)
            expected_chars.append(ch)
        else:
            objs.append(LTComponent((0, 0, 1 + i, 1 + i)))

    cur_item._objs = objs
    device.cur_item = cur_item

    # Patch with render mode 11
    interp.textstate.render = 11
    interp._patch_current_chars_with_render_mode()  # 50.0μs -> 43.0μs (16.3% faster)

    # All expected LTChar instances should have rendermode set to 11
    for ch in expected_chars:
        assert ch.rendermode == 11

    # Now append a chunk of new LTChar objects and ensure only those are patched on next call
    added = []
    for j in range(50):
        newch = _make_ltchar(f"new{j}")
        cur_item._objs.append(newch)
        added.append(newch)

    # Change render to a different value and patch again
    interp.textstate.render = 99
    interp._patch_current_chars_with_render_mode()  # 3.83μs -> 3.04μs (26.0% faster)

    # Previously-patched LTChar objects should still have the old value
    for ch in expected_chars:
        assert ch.rendermode == 11
    # Newly added ones should have the new value
    for ch in added:
        assert ch.rendermode == 99


# codeflash_output is used to check that the output of the original code is the same as that of the optimized code.
from pdfminer.layout import LTChar, LTItem
from pdfminer.pdfdevice import PDFDevice
from pdfminer.pdfinterp import PDFTextState

from unstructured.partition.pdf_image.pdfminer_utils import CustomPDFPageInterpreter


# function to test
def _make_uninitialized_ltchar():
    """
    Create a real LTChar instance without invoking its heavy __init__.
    We use LTChar.__new__ to obtain a genuine instance of the real class
    (satisfies isinstance checks) and then attach attributes as needed.
    This avoids needing to construct PDFFont/Matrix/etc. while still using
    the real LTChar type as required by the function under test.
    """
    ch = LTChar.__new__(LTChar)  # create real LTChar instance without __init__
    # ensure it can accept new attributes (it can, it's a normal Python object)
    return ch


def _make_interpreter_and_device():
    """
    Construct a CustomPDFPageInterpreter instance with a real PDFDevice.
    We pass None for rsrcmgr as the interpreter and the tested method do not use it.
    """
    device = PDFDevice(None)  # real PDFDevice instance (rsrcmgr not needed here)
    interp = CustomPDFPageInterpreter(None, device)  # real CustomPDFPageInterpreter instance
    # Ensure there is a textstate object which _patch_current_chars_with_render_mode uses.
    interp.textstate = PDFTextState()
    return interp, device


def test_patches_rendermode_on_ltchar_objects():
    # Arrange: create interpreter and device
    interp, device = _make_interpreter_and_device()

    # Prepare a cur_item (real LTItem) and populate its _objs with a mix of objects.
    cur_item = LTItem()  # real LTItem from pdfminer
    # Create two LTChar instances (real class instances) and one non-LTChar object
    ch1 = _make_uninitialized_ltchar()
    ch2 = _make_uninitialized_ltchar()
    non_char = "I am not an LTChar"
    cur_item._objs = [ch1, non_char, ch2]  # attach list of objects to cur_item

    # Attach cur_item to device (the interpreter reads device.cur_item)
    device.cur_item = cur_item

    # Set the interpreter's render mode to a known integer value.
    interp.textstate.render = 5

    # Act: call the method under test
    interp._patch_current_chars_with_render_mode()  # 1.33μs -> 1.33μs (0.000% faster)


def test_respects_last_patched_idx_and_patches_only_new_chars():
    # Arrange
    interp, device = _make_interpreter_and_device()
    cur_item = LTItem()
    # Create two LTChar-like instances
    first = _make_uninitialized_ltchar()
    cur_item._objs = [first]
    device.cur_item = cur_item

    # First patch: set render to 1 and call
    interp.textstate.render = 1
    interp._patch_current_chars_with_render_mode()  # 1.12μs -> 1.08μs (3.88% faster)
    # Save current _last_patched_idx after first call (should be 1)
    idx_after_first = getattr(interp, "_last_patched_idx")

    # Append a new LTChar and patch again with a different render mode
    second = _make_uninitialized_ltchar()
    cur_item._objs.append(second)
    interp.textstate.render = 2
    interp._patch_current_chars_with_render_mode()  # 583ns -> 625ns (6.72% slower)


def test_resets_index_when_cur_item_changes():
    # Arrange
    interp, device = _make_interpreter_and_device()

    # First cur_item with one LTChar patched to value 10
    item1 = LTItem()
    ch_a = _make_uninitialized_ltchar()
    item1._objs = [ch_a]
    device.cur_item = item1
    interp.textstate.render = 10
    interp._patch_current_chars_with_render_mode()  # 1.08μs -> 1.08μs (0.000% faster)

    # Now switch to a different cur_item with new LTChars and set render 20.
    item2 = LTItem()
    ch_b = _make_uninitialized_ltchar()
    ch_c = _make_uninitialized_ltchar()
    item2._objs = [ch_b, ch_c]
    device.cur_item = item2
    interp.textstate.render = 20
    # Act
    interp._patch_current_chars_with_render_mode()  # 792ns -> 708ns (11.9% faster)


def test_no_device_cur_item_or_none_does_nothing_and_raises_no_error():
    # Arrange: interpreter and device with no cur_item attribute set at all.
    interp, device = _make_interpreter_and_device()
    # Ensure device.cur_item is not present or is falsy.
    if hasattr(device, "cur_item"):
        delattr(device, "cur_item")  # remove attribute if present
    # Act / Assert: calling the method should not raise and simply return None.
    # (We just assert it completes successfully.)
    interp._patch_current_chars_with_render_mode()  # 291ns -> 333ns (12.6% slower)


def test_cur_item_without__objs_attribute_is_handled_gracefully():
    # Arrange
    interp, device = _make_interpreter_and_device()
    cur_item = LTItem()  # real LTItem but deliberately do NOT set _objs
    device.cur_item = cur_item
    interp.textstate.render = 42

    # Act / Assert: Should not raise even though _objs doesn't exist
    interp._patch_current_chars_with_render_mode()  # 917ns -> 667ns (37.5% faster)


def test_empty_objs_list_does_nothing():
    # Arrange
    interp, device = _make_interpreter_and_device()
    cur_item = LTItem()
    cur_item._objs = []  # empty list to exercise boundary condition
    device.cur_item = cur_item
    interp.textstate.render = 99

    # Act: should run without errors
    interp._patch_current_chars_with_render_mode()  # 917ns -> 708ns (29.5% faster)


def test_non_ltchar_objects_are_ignored_and_not_modified():
    # Arrange
    interp, device = _make_interpreter_and_device()
    cur_item = LTItem()
    # Put a variety of non-LTChar built-in types into the list
    obj_list = [123, 45.6, {"a": 1}, ["list"], (1, 2)]
    cur_item._objs = list(obj_list)
    device.cur_item = cur_item
    interp.textstate.render = 7

    # Act
    interp._patch_current_chars_with_render_mode()  # 1.29μs -> 1.29μs (0.000% faster)

    # Assert: none of the objects got a 'rendermode' attribute (they're built-ins)
    for obj in obj_list:
        assert not hasattr(obj, "rendermode")


def test_large_scale_patching_many_ltchars():
    # Arrange
    interp, device = _make_interpreter_and_device()
    cur_item = LTItem()

    # Build a large list with 1000 items alternating LTChar instances and ints.
    objs = []
    num_items = 1000
    expected_ltchars = []
    for i in range(num_items):
        if i % 2 == 0:
            ch = _make_uninitialized_ltchar()
            objs.append(ch)
            expected_ltchars.append(ch)
        else:
            objs.append(i)  # non-LTChar filler
    cur_item._objs = objs
    device.cur_item = cur_item

    # Act: set render mode and patch
    interp.textstate.render = 123
    interp._patch_current_chars_with_render_mode()  # 43.9μs -> 39.0μs (12.5% faster)

    # Assert: every LTChar in expected_ltchars had its rendermode set
    for ch in expected_ltchars:
        assert ch.rendermode == 123


def test_repeated_calls_many_iterations_only_patch_newly_appended_chars():
    # Arrange
    interp, device = _make_interpreter_and_device()
    cur_item = LTItem()
    cur_item._objs = []
    device.cur_item = cur_item

    # We'll perform 200 iterations where each iteration appends one LTChar and calls the method.
    iterations = 200
    current_render = 500
    interp.textstate.render = current_render

    created_chars = []
    for i in range(iterations):
        # Append a new LTChar
        ch = _make_uninitialized_ltchar()
        cur_item._objs.append(ch)
        created_chars.append(ch)
        # Change render mode every 10 iterations to ensure new ones get different rendermode values.
        if i % 10 == 0:
            interp.textstate.render = current_render
            current_render += 1
        # Call the method; it should only patch the newly appended char(s).
        interp._patch_current_chars_with_render_mode()  # 68.2μs -> 69.1μs (1.38% slower)

    # After all iterations, verify that each created char has a rendermode attribute set.
    # Because render mode changed occasionally, just check that rendermode is an integer and present.
    for ch in created_chars:
        assert isinstance(ch.rendermode, int)


# codeflash_output is used to check that the output of the original code is the same as that of the optimized code.
from pdfminer.converter import PDFPageAggregator
from pdfminer.pdfinterp import PDFResourceManager

from unstructured.partition.pdf_image.pdfminer_utils import CustomPDFPageInterpreter


🔎 Concolic Coverage Tests

def test_CustomPDFPageInterpreter__patch_current_chars_with_render_mode():
    CustomPDFPageInterpreter._patch_current_chars_with_render_mode(
        CustomPDFPageInterpreter(
            PDFResourceManager(caching=True),
            PDFPageAggregator(PDFResourceManager(caching=True), pageno=0, laparams=None),
        )
    )

To edit these changes, run git checkout codeflash/optimize-CustomPDFPageInterpreter._patch_current_chars_with_render_mode-mm3lbz82 and push your updates.


@codeflash-ai codeflash-ai bot requested a review from KRRT7 February 26, 2026 15:00
@codeflash-ai codeflash-ai bot added ⚡️ codeflash Optimization PR opened by Codeflash AI 🎯 Quality: High Optimization Quality according to Codeflash labels Feb 26, 2026
@codeflash-ai codeflash-ai bot closed this Feb 26, 2026

codeflash-ai bot commented Feb 26, 2026

This PR has been automatically closed because the original PR #33 by codeflash-ai[bot] was closed.

@codeflash-ai codeflash-ai bot deleted the codeflash/optimize-CustomPDFPageInterpreter._patch_current_chars_with_render_mode-mm3lbz82 branch February 26, 2026 15:05
@KRRT7 KRRT7 restored the codeflash/optimize-CustomPDFPageInterpreter._patch_current_chars_with_render_mode-mm3lbz82 branch February 26, 2026 15:12
@KRRT7 KRRT7 reopened this Feb 26, 2026
@KRRT7 KRRT7 merged commit 09302f9 into codeflash/optimize-CustomPDFPageInterpreter._patch_current_chars_with_render_mode-mm3h21a8 Feb 26, 2026
1 of 2 checks passed
@codeflash-ai codeflash-ai bot deleted the codeflash/optimize-CustomPDFPageInterpreter._patch_current_chars_with_render_mode-mm3lbz82 branch February 26, 2026 15:12