perf: Reduce overall flash usage by 30.7% by compressing built-in fonts #831
Conversation
Co-Authored-By: Claude Opus 4.6 <[email protected]>
This is obviously a slightly different approach to my per-glyph approach (#254), but I'm curious about on-device render performance vs master. In my branch, it basically added another 50% to the render time for a full page of text (moving from ~700 ms to ~1000 ms).
I'm quite curious how you measured these numbers without testing on-device, though. If I understand correctly, the heap-tracking function in your emulator doesn't return the real value. Also note that because the emulated firmware is compiled for the host arch (ARM64 on Mac), it will behave differently in terms of memory usage; keep in mind the ESP32-C3 uses 32-bit RISC-V. Btw, I'm interested in making an emulator too. A bit off-topic, but the direction of your emulator is good; I think it would be even cleaner if it could:
@daveallie I reckon if you'd followed through with your idea of adding a cache, the PR would have been much closer to being merged. However, the grouping approach significantly increases the compression ratio and, in relative terms when compiled and run on my laptop, modestly improves glyph retrieval time per page. Note I also tested LZ4, which decompresses much faster than miniz, but the cache seems to hide the difference in decompression speed. The benchmark was run over an entire English EPUB. These numbers obviously don't replace on-device testing, which I unfortunately can't do yet since I'm still waiting for my device to be shipped. 😢
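The grouping win is easy to illustrate with a toy experiment using Python's zlib on synthetic glyph-like data (the data and sizes here are made up for illustration, not the real font bitmaps): each tiny per-glyph DEFLATE stream pays its own header and checksum overhead, while one stream over a concatenated group amortises that cost and can exploit cross-glyph redundancy.

```python
import zlib

# Synthetic stand-in for glyph bitmaps: 64 glyphs of 48 bytes each,
# with the kind of row-level repetition typical of 1-bpp glyphs.
glyphs = [bytes((g * 7 + row // 3) % 251 for row in range(48)) for g in range(64)]

# Per-glyph DEFLATE: every tiny stream carries its own header/checksum.
per_glyph = sum(len(zlib.compress(g, 9)) for g in glyphs)

# Grouped DEFLATE: one stream over the whole concatenated group.
grouped = len(zlib.compress(b"".join(glyphs), 9))

print(per_glyph, grouped)  # grouped comes out substantially smaller
assert grouped < per_glyph
```

The exact ratio depends on the data, but the per-stream overhead alone guarantees grouping wins for small glyphs.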
@ngxson Claude seems to be able to infer what the heap allocations would be just by reading the code. I'd love to validate this -- maybe when my device arrives!
One misleading comment in the original PR is that the ~11 KB allocation for
Just note that I have seen instances where Claude outright fakes results to make them look more appealing. If any test results (or even test scripts) are produced by AI, please (1) take them with a grain of salt and (2) explicitly state that the results were generated by AI.
I've seen this too. It would be great if you could test this build out and see what the heap allocations are actually like. Obviously this PR should not be merged until someone does.
I'm sorry, but my job here is to review the code and optionally do analysis if things don't look right. The author of each PR should be the first person to verify the results before pushing the PR. This is especially important when the author gives any numbers in the PR description:
@ngxson I validated this claim myself, and it should be self-evident from the PR diff (100k line deletions × 16 bytes per line ≈ 1.6 MB). You're right to be skeptical of the other claims, which are not so self-evident (as was I). So, I snooped around in your repos and was able to run your hardware emulator to benchmark the results on this dev board, which Amazon next-day delivered (thanks for the idea, neat!). (PS: I can show you how to fix the IO issues you mentioned in the README if you like.) Note: the benchmarks below contain no SD card IO, so they should be representative of actual xteink performance.
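The back-of-the-envelope arithmetic above checks out (assuming one deleted source line per 16-byte glyph row):

```python
deleted_lines = 100_000   # approx. lines removed in the PR diff
bytes_per_line = 16       # one glyph-bitmap row emitted per source line
total = deleted_lines * bytes_per_line
print(total)              # 1_600_000 bytes, i.e. ~1.6 MB
assert total == 1_600_000
```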
@daveallie The page render time was basically unaffected by the font compression. The key is to minimise the number of decompressions needed to render a page. This is achieved with glyph grouping (which is more compression-efficient anyway) and with caching (i.e., don't decompress the same glyph twice within a page and/or across pages). The obvious downside is heap allocation: the heap cost for Bookerly 14pt is ~19 KB. This can be freed after each page render, but it means the next page starts with a cold cache (though this is no big deal, so potentially this should be removed before merging).

Benchmark summary (written by Claude from benchmark stats)

Font Compression Benchmark

On-device benchmark of group-based DEFLATE font compression on ESP32-C3 (RISC-V), run via the emulated environment.

Configurations
Render Time
With the cache enabled, compression adds no measurable overhead. Even starting cold on every page only costs +1.7%.

Memory Usage
The cached configuration uses ~19 KB more heap than baseline, from the decompressed group buffer (~7 KB) and the

Cache Effectiveness
English text fits almost entirely in one font group. The 4-slot LRU cache holds it for the entire page, yielding a near-perfect hit rate with only 1–3 decompressions needed per page.

Benchmark data:
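The hit-rate claim can be sketched with a minimal model of the group cache (4 slots, counter-based LRU; the names and access pattern here are illustrative, not the firmware's actual API):

```python
class GroupCache:
    """Toy 4-slot LRU keyed by group index; counts decompressions (misses)."""

    def __init__(self, slots=4):
        self.slots = slots
        self.cache = {}   # group_index -> last-used counter
        self.counter = 0
        self.decompressions = 0

    def get(self, group_index):
        self.counter += 1
        if group_index not in self.cache:
            self.decompressions += 1           # miss: inflate the group
            if len(self.cache) >= self.slots:  # evict least-recently-used slot
                lru = min(self.cache, key=self.cache.get)
                del self.cache[lru]
        self.cache[group_index] = self.counter  # refresh recency

# A page of English text hits one or two groups over and over.
cache = GroupCache()
for group in [0, 0, 1, 0, 0, 0, 1, 0] * 100:
    cache.get(group)
print(cache.decompressions)  # 2: each group inflated once, then always hit
```

With only a couple of groups in play, every lookup after the first two is a cache hit, which is why render time is dominated by drawing rather than inflation.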
576f8df to d609227 (Compare)
I've swapped out miniz for uzlib, which essentially reduces the heap allocation of the decompressor to 0. The remaining heap allocation should just be for the LRU cache, which is now freed between pages (because paying the cost of group decompression per page is fine).

Render Time
Render time is equivalent; the +1.4% median difference is within run-to-run variance.

Memory Usage
uzlib reclaims ~10–12 KB of heap, consistent with eliminating the ~11 KB
And on the PR description:
You've already given 2 different numbers. Sorry, but I will stop commenting on and reviewing this PR because there are too many inconsistencies in the reported results. I'll leave this for other maintainers to review.
Okay, I'll leave this here. See here for a detailed breakdown of the differences in flash size, render time, and heap allocation. If the maintainers would prefer, I can close this PR and re-open it so the history is clearer.
@coderabbitai review
✅ Actions performed: Review triggered.
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the CodeRabbit settings.
📝 Walkthrough

Adds grouped DEFLATE compression for built-in fonts, a new FontDecompressor with a 4-slot LRU cache and uzlib-based inflate, tools to emit and verify compressed font headers, renderer integration for decompression, and cache-clearing calls at page-render entry points.

Changes
Sequence Diagram

```mermaid
sequenceDiagram
    participant Renderer as GfxRenderer
    participant Decompressor as FontDecompressor
    participant uzlib as uzlib
    participant FontData as EpdFontData
    Renderer->>Renderer: getGlyphBitmap(fontData, glyph)
    alt fontData has Groups (compressed)
        Renderer->>Decompressor: getGroupIndex(fontData, glyphIndex)
        Decompressor-->>Renderer: groupIndex
        Renderer->>Decompressor: findInCache(fontData, groupIndex)
        alt cache hit
            Decompressor-->>Renderer: pointer to decompressed buffer
        else cache miss
            Decompressor->>Decompressor: findEvictionCandidate()
            Decompressor-->>Renderer: slot
            Decompressor->>FontData: read compressed group bytes
            Decompressor->>uzlib: uzlib_uncompress(compressed → dest)
            uzlib-->>Decompressor: decompressed bytes
            Decompressor->>Decompressor: store in cache slot (valid, lastUsed)
            Decompressor-->>Renderer: pointer to decompressed buffer
        end
        Renderer->>Renderer: compute glyph ptr from group offset and return bitmap
    else no groups (uncompressed)
        Renderer-->>Renderer: return bitmap from FontData->Bitmaps at glyph->dataOffset
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes

🚥 Pre-merge checks: ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Actionable comments posted: 7
Caution: Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
lib/EpdFont/scripts/fontconvert.py (1)
274-418: ⚠️ Potential issue | 🟡 Minor: Fix Ruff lint findings in the compression path.
- Line 307: `packed` is unused (B007) → rename to `_packed`.
- Lines 377/382: trailing semicolons after `print(...)` (E703).
- Lines 417–418: `f` prefixes without placeholders (F541).

🧹 Suggested lint fixes
```diff
- for i, (props, packed) in enumerate(all_glyphs):
+ for i, (props, _packed) in enumerate(all_glyphs):
@@
-    print ("};\n");
+    print("};\n")
@@
-    print ("};\n");
+    print("};\n")
@@
-    print(f"    nullptr,")
-    print(f"    0,")
+    print("    nullptr,")
+    print("    0,")
```
🤖 Fix all issues with AI agents
In `@lib/EpdFont/FontDecompressor.cpp`:
- Around line 99-105: The decompression success check is incomplete: after
calling uzlib_uncompress(&decomp) you must validate that the decompressor
actually wrote the expected number of bytes (compare decomp.dest against
dest_limit or expected_size) and optionally that input was fully consumed, and
treat any short output as failure; if the check fails, free outBuf, log the
groupIndex and actual dest value (using Serial.printf similar to the existing
log) and return false. Ensure you reference uzlib_uncompress, the decomp struct,
outBuf, dest_limit/expected_size and groupIndex when making these checks and
cleanup.
- Around line 115-133: The getBitmap function currently assumes compression
groups exist and calls getGroupIndex/decompressGroup even when
fontData->groupCount == 0 or fontData->groups == nullptr; add a fast-path at the
start of FontDecompressor::getBitmap that checks if fontData->groupCount == 0 or
fontData->groups == nullptr and in that case return the direct bitmap pointer
for the glyph (or nullptr if glyph->dataOffset is invalid), bypassing
cache/decompression; reference getBitmap, getGroupIndex, decompressGroup, and
EpdFontData->groupCount/groups when implementing the guard.
In `@lib/EpdFont/scripts/verify_compression.py`:
- Around line 142-150: The loop captures the three-tuple returned by
verify_font_file into (font_name, success, message) but never uses font_name,
causing an unused-variable lint warning; rename font_name to _font_name or _ to
indicate intentional unused, or include the font_name in the print/log output
(e.g., use font_name in the PASS/FAIL messages) so the variable is used; update
the for-loop assignment where verify_font_file is called and adjust any related
prints (still referencing verify_font_file, font_name, success, message).
In `@lib/GfxRenderer/GfxRenderer.cpp`:
- Around line 6-11: In getGlyphBitmap, when fontData->groups is non-null
(compressed font) but fontDecompressor is null you must not index into
fontData->bitmap with glyph->dataOffset; instead guard that case and fail fast
by returning nullptr (or an explicit error) to avoid returning a pointer into
compressed data; update the function GfxRenderer::getGlyphBitmap to check
(fontData->groups != nullptr && fontDecompressor == nullptr) and return nullptr
when true, leaving the existing path for decompressed fonts and the
fontDecompressor path intact so callers (rendering code) can handle the null
result safely.
In `@lib/uzlib/src/tinflate.c`:
- Around line 662-669: The ISIZE trailer is not validated: when handling
TINF_CHKSUM_CRC after reading the CRC with tinf_get_le_uint32(d) you must also
verify the 4-byte ISIZE against a running total of decompressed bytes. Add a
uint32/uint64 field to the decompressor state (e.g., d->total_out or
d->out_count), increment it wherever output bytes are written in the inflate
paths, then after val = tinf_get_le_uint32(d) compare (uint32) d->total_out to
val and return TINF_CHKSUM_ERROR on mismatch; update any initialization/reset
logic for the state and ensure the comparison uses 32-bit wrapping semantics to
match gzip ISIZE.
- Around line 482-491: The memcpy call that copies LZ77 back-reference data
(using variables d->curlen, d->dest, d->dest_limit and d->lzOff) can operate on
overlapping source/destination ranges; replace memcpy(d->dest, d->dest +
d->lzOff, to_copy) with memmove to guarantee correct behavior for overlapping
copies, and ensure <string.h> is included if memmove isn't already available in
this compilation unit.
- Around line 193-205: The pointer comparison in uzlib_get_byte is unsafe when
d->source_limit can be NULL; update the initial guard to check d->source_limit
before doing the relational comparison (i.e., only evaluate d->source <
d->source_limit when d->source_limit is non-NULL), so the function first tests
if (d->source_limit && d->source < d->source_limit) and otherwise falls back to
the d->readSource / d->eof branch (using the existing d->readSource callback and
d->eof logic); ensure the behavior matches the struct comment about streaming
mode when source_limit == NULL.
🧹 Nitpick comments (8)
docs/font-compression.md (1)
3-35: Add measurement context for the reported metrics.

The tables are useful, but they’re hard to reproduce without stating the target (device/emulator), build profile/toolchain, measurement command, date, and what the percentages are relative to. A short context section would prevent misinterpretation.
✍️ Suggested doc addition
```diff
 ## Flash & RAM
+
+### Measurement context
+- Target: <device/emulator>
+- Build profile/toolchain: <profile + version>
+- Command/script: <how sizes were measured>
+- Date/commit: <timestamp or commit SHA>
+- Percentages: relative to <configured app partition size>

 | | baseline | font-compression | Difference |
```

lib/EpdFont/EpdFontData.h (1)
18-45: Document or widen uint16_t limits on glyph and group counts.
`glyphCount`, `firstGlyphIndex`, and `groupCount` use `uint16_t`, which caps fonts at 65,535 glyphs per group/total. Current fonts reach only 877 glyphs, well below the limit. However, no assertions or documentation enforce this constraint. For future fonts, either document this limit explicitly or widen these fields to `uint32_t` to avoid overflow risks.

src/activities/reader/EpubReaderActivity.cpp (1)
648-650: Add a short rationale for per-page font-cache clears.

If the cache is intentionally page-scoped (heap/fragmentation control), a brief comment will prevent accidental removal later.
♻️ Suggested comment
```diff
-  const auto start = millis();
-  renderer.clearFontCache();
+  const auto start = millis();
+  // Clear per page to limit heap usage and avoid stale decompressed glyphs.
+  renderer.clearFontCache();
```

lib/uzlib/src/tinf.h (1)
1-3: Add a header guard / pragma once.

Even for a thin compatibility header, a guard prevents accidental multiple inclusion if it expands later.
♻️ Suggested update
```diff
+#pragma once
 /* Compatibility header for the original tinf lib/older versions of uzlib.
    Note: may be removed in the future, please migrate to uzlib.h. */
 #include "uzlib.h"
```

lib/uzlib/src/tinf_compat.h (1)
1-9: Add an include guard/pragma once for this new compatibility header.

Without a guard, repeated includes can re-define these macros and trigger warnings on some toolchains. Consider adding `#pragma once` (or a classic guard) at the top.

♻️ Proposed tweak
```diff
+#pragma once
+
 /* This header contains compatibility defines for the original tinf API
    and uzlib 2.x and below API. These defines are deprecated and going to
    be removed in the future, so applications should migrate to new uzlib
    API. */
```

lib/ZipFile/ZipFile.cpp (1)
9-29: Validate inflated size to catch corrupt ZIPs.

Line 21: `tinfl_decompress` can return DONE even if the produced byte count doesn’t match the expected `inflatedSize`. Checking `outBytes` (and optionally `inBytes`) avoids returning partially filled buffers.

✅ Proposed check
```diff
   const tinfl_status status = tinfl_decompress(inflator, inputBuf, &inBytes, nullptr, outputBuf, &outBytes,
                                                TINFL_FLAG_USING_NON_WRAPPING_OUTPUT_BUF);
   free(inflator);
-  if (status != TINFL_STATUS_DONE) {
+  if (status != TINFL_STATUS_DONE || outBytes != inflatedSize || inBytes != deflatedSize) {
     LOG_ERR("ZIP", "tinfl_decompress() failed with status %d", status);
     return false;
   }
```

lib/uzlib/src/defl_static.h (1)
1-37: Add an include guard/pragma once for this new header.

This prevents duplicate prototype declarations if included multiple times in a TU.
♻️ Proposed tweak
```diff
+#pragma once
+
 /*
  * Copyright (c) uzlib authors
  *
```

lib/EpdFont/FontDecompressor.h (1)
9-35: Make `FontDecompressor` non-copyable to avoid shallow pointer copies.

This class owns raw buffers in `CacheEntry`. Deleting copy/assign prevents accidental double-free or corruption.

♻️ Proposed safeguard
```diff
 class FontDecompressor {
  public:
+  FontDecompressor() = default;
+  FontDecompressor(const FontDecompressor&) = delete;
+  FontDecompressor& operator=(const FontDecompressor&) = delete;
   bool init();
   void deinit();
```
daveallie left a comment
Few things from CodeRabbit to address and just some cleanup of Serial.printf needed.
- GfxRenderer.cpp:6-16 -- Guard compressed fonts when no decompressor is set. Now logs an error and returns nullptr instead of indexing into compressed bitmap data.
- FontDecompressor.cpp:100 -- Validate that decompression produced the expected byte count, treating decomp.dest != decomp.dest_limit as failure.
- FontDecompressor.cpp:114-117 -- Guard for fonts without compression groups: fast path returning the direct bitmap pointer.
- verify_compression.py:144 -- Prefix unused font_name with _ to fix the RUF059 lint warning.
- FontDecompressor.cpp:3,84,101 -- Replaced #include <HardwareSerial.h> with #include <Logging.h> and swapped both Serial.printf(...) calls for LOG_ERR(...) macros.
- Deleted docs/font-compression-benchmark.md -- the same findings have been added to the PR description.
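The byte-count validation above is the generic "did inflate fill the whole destination buffer" check; a toy version with Python's zlib standing in for uzlib (the variable names are illustrative):

```python
import zlib

original = b"\x00\xff" * 512
compressed = zlib.compress(original)
expected_size = len(original)  # known ahead of time, as from the font header

out = zlib.decompress(compressed)
# Mirrors checking decomp.dest == decomp.dest_limit after uzlib_uncompress:
# a short write means corrupt or truncated input and must be treated as failure.
assert len(out) == expected_size
```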
Actionable comments posted: 2
🤖 Fix all issues with AI agents
Verify each finding against the current code and only fix it if needed.
In `@lib/EpdFont/FontDecompressor.cpp`:
- Around line 35-43: The getGroupIndex function currently returns 0 when no
matching group is found which can cause wrong group selection and OOB reads;
change getGroupIndex to return a sentinel (e.g., UINT16_MAX) when no group
matches, and update callers (notably getBitmap and the logic around the group
lookup used in the 114-135 range) to check for that sentinel and bail out early
(return an error/empty bitmap or safe default) rather than proceeding with group
0; reference getGroupIndex and getBitmap so you locate and add the defensive
check where the group index is used.
- Around line 14-33: Extract the duplicated cleanup logic in
FontDecompressor::deinit and FontDecompressor::clearCache into a private helper
(e.g., clearCacheEntries or freeCacheEntries) that iterates over cache, frees
entry.data, sets entry.data = nullptr and entry.valid = false; then have deinit
call that helper and clearCache call the helper and also reset accessCounter =
0. Update references to cache and accessCounter in those methods accordingly so
behavior stays identical and duplication is removed.
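The sentinel suggestion for the group lookup amounts to something like the following Python sketch (representing groups as (firstGlyphIndex, glyphCount) pairs is an assumption about the layout, made for illustration):

```python
NO_GROUP = 0xFFFF  # sentinel, mirrors the suggested UINT16_MAX

def get_group_index(groups, glyph_index):
    """groups: list of (first_glyph_index, glyph_count) tuples."""
    for i, (first, count) in enumerate(groups):
        if first <= glyph_index < first + count:
            return i
    return NO_GROUP  # no match: caller must bail out, not fall back to group 0

groups = [(0, 100), (100, 50)]
assert get_group_index(groups, 42) == 0
assert get_group_index(groups, 120) == 1
assert get_group_index(groups, 999) == NO_GROUP  # would have been 0 before
```

Returning group 0 for an unmatched glyph silently decompresses the wrong group and then indexes into it with an offset that belongs to another group, which is where the out-of-bounds read comes from.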
In `@lib/EpdFont/scripts/verify_compression.py`:
- Around line 81-92: The script currently proceeds when a Groups array is found
but parse_groups returns an empty list, causing silent false positives; add a
guard after calling parse_groups(groups_array_match.group(1)) to check if the
returned groups is empty and treat that as an error (return (font_name, False,
"no groups parsed") or similar). Specifically update the block using
groups_array_match, parse_groups, and the groups variable to fail fast when
groups is empty so downstream logic that expects group entries cannot continue.
In `@lib/GfxRenderer/GfxRenderer.cpp`:
- Around line 6-16: The getGlyphBitmap path detects compressed fonts
(fontData->groups != nullptr) and currently calls LOG_ERR("GFX", "Compressed
font but no FontDecompressor set") every glyph when fontDecompressor is null;
add a log-once guard (e.g. a static or GfxRenderer member boolean like
decompressorMissingLogged) checked and set inside GfxRenderer::getGlyphBitmap
before calling LOG_ERR so the error is emitted only once per process or renderer
instance, leaving the existing return nullptr behavior unchanged.
🧹 Nitpick comments (2)
🤖 Fix all nitpicks with AI agents
lib/EpdFont/FontDecompressor.cpp (1)
14-33: Consider deduplicating cache cleanup.
`deinit()` and `clearCache()` share identical cleanup logic. A small helper would reduce duplication and keep behavior consistent if you tweak it later.
lib/GfxRenderer/GfxRenderer.cpp (1)
6-16: Consider rate-limiting the “no decompressor” error.

If a compressed font is rendered without a decompressor, this logs once per glyph and can flood logs / slow rendering. A log-once guard keeps the error visible without spamming.
🔇 Log-once guard
```diff
   if (fontData->groups != nullptr) {
     if (!fontDecompressor) {
-      LOG_ERR("GFX", "Compressed font but no FontDecompressor set");
+      static bool logged = false;
+      if (!logged) {
+        LOG_ERR("GFX", "Compressed font but no FontDecompressor set");
+        logged = true;
+      }
       return nullptr;
     }
```
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- lib/EpdFont/FontDecompressor.cpp
- lib/EpdFont/scripts/verify_compression.py
- lib/GfxRenderer/GfxRenderer.cpp
🧰 Additional context used
🧬 Code graph analysis (2)
lib/GfxRenderer/GfxRenderer.cpp (3)
src/activities/boot_sleep/SleepActivity.cpp (4)
bitmap (57-57), bitmap (81-81), bitmap (96-96), bitmap (265-265)

src/components/themes/BaseTheme.cpp (1)

bitmap (313-313)

src/components/themes/lyra/LyraTheme.cpp (1)
bitmap(287-287)
lib/EpdFont/FontDecompressor.cpp (1)
lib/uzlib/src/tinflate.c (2)
uzlib_uncompress_init (552-562), uzlib_uncompress (565-626)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: build
🔇 Additional comments (10)
lib/EpdFont/scripts/verify_compression.py (4)
15-18: Looks good: concise hex extraction.
21-32: Clear group parsing logic.
35-48: Glyph parsing is straightforward.
127-158: CLI flow is clean and deterministic.

lib/EpdFont/FontDecompressor.cpp (4)
9-12: Init is clean and minimal.
45-52: Cache lookup is clear.
54-69: LRU eviction logic looks good.
71-112: Decompression path is solid.

lib/GfxRenderer/GfxRenderer.cpp (2)
806-824: Good consolidation of font data access.
971-978: LGTM for centralized bitmap retrieval.
🤖 Fix all issues with AI agents
Verify each finding against the current code and only fix it if needed.
In `@lib/EpdFont/FontDecompressor.cpp`:
- Around line 14-33: Both FontDecompressor::deinit and
FontDecompressor::clearCache duplicate the same loop that frees cache entries;
extract that logic into a private helper (e.g.,
FontDecompressor::freeAllEntries) which iterates over cache, frees entry.data
and sets entry.data = nullptr and entry.valid = false, then call
freeAllEntries() from both deinit() and clearCache(); ensure clearCache still
resets accessCounter = 0 after calling the helper.
In `@lib/EpdFont/scripts/verify_compression.py`:
- Around line 15-18: The parse_hex_array function's regex only matches '0x' with
exactly two hex digits so it will miss variants like '0X' or single-digit bytes
such as '0xA'; update the pattern used in parse_hex_array to accept either '0x'
or '0X' and 1–2 hex digits (e.g. use a pattern like r'0[xX]([0-9A-Fa-f]{1,2})'),
keep converting each capture with int(h, 16) and returning bytes(...) so
single-digit captures are parsed correctly into their numeric byte values.
🧹 Nitpick comments (2)
🤖 Fix all nitpicks with AI agents
lib/EpdFont/FontDecompressor.cpp (1)
14-33: Consider extracting shared cache-clearing logic.
`deinit()` and `clearCache()` share identical loops for freeing cache entries. You could extract a private helper (e.g., `freeAllEntries()`) to reduce duplication.

♻️ Proposed refactor
```diff
+void FontDecompressor::freeAllEntries() {
+  for (auto& entry : cache) {
+    if (entry.data) {
+      free(entry.data);
+      entry.data = nullptr;
+    }
+    entry.valid = false;
+  }
+}
+
 void FontDecompressor::deinit() {
-  for (auto& entry : cache) {
-    if (entry.data) {
-      free(entry.data);
-      entry.data = nullptr;
-    }
-    entry.valid = false;
-  }
+  freeAllEntries();
 }

 void FontDecompressor::clearCache() {
-  for (auto& entry : cache) {
-    if (entry.data) {
-      free(entry.data);
-      entry.data = nullptr;
-    }
-    entry.valid = false;
-  }
+  freeAllEntries();
   accessCounter = 0;
 }
```
lib/EpdFont/scripts/verify_compression.py (1)
15-18: Make hex parsing tolerant of 0X or single-digit bytes.

Optional robustness: if the generator ever emits `0X` or `0xA`, the current regex would drop bytes. Broadening the pattern avoids false negatives with minimal risk.

♻️ Suggested tweak
```diff
- hex_vals = re.findall(r'0x([0-9A-Fa-f]{2})', text)
+ hex_vals = re.findall(r'0[xX]([0-9A-Fa-f]{1,2})', text)
```
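The broadened pattern behaves as the nitpick describes; a quick self-contained check:

```python
import re

def parse_hex_array(text):
    # Broadened pattern: accepts 0x or 0X and 1-2 hex digits per byte.
    hex_vals = re.findall(r"0[xX]([0-9A-Fa-f]{1,2})", text)
    return bytes(int(h, 16) for h in hex_vals)

assert parse_hex_array("0x12, 0xA, 0XFF") == bytes([0x12, 0x0A, 0xFF])
# The old pattern silently drops the 0xA and 0XFF variants:
assert re.findall(r"0x([0-9A-Fa-f]{2})", "0x12, 0xA, 0XFF") == ["12"]
```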
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- lib/EpdFont/FontDecompressor.cpp
- lib/EpdFont/scripts/verify_compression.py
🧰 Additional context used
🧬 Code graph analysis (1)
lib/EpdFont/FontDecompressor.cpp (2)
lib/Epub/Epub/BookMetadataCache.cpp (1)
entry (358-358)

lib/uzlib/src/tinflate.c (2)

uzlib_uncompress_init (552-562), uzlib_uncompress (565-626)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: build
🔇 Additional comments (7)
lib/EpdFont/FontDecompressor.cpp (4)
9-12: LGTM!

Clean initialization of the decompression state.
35-69: LGTM!

The sentinel return for missing groups and the LRU eviction logic are both well-implemented. Linear search is appropriate for the small cache size.
71-112: LGTM!

The decompression logic is sound. The validation at line 100 correctly checks both the return status and that the full output buffer was filled. Memory management handles both success and failure paths properly.
114-140: The offset semantics in the C++ code are handled correctly by `fontconvert.py`.

When compression is enabled, `fontconvert.py` (lines 326-357) explicitly converts `dataOffset` from absolute offsets to within-group offsets. The comment on line 334 states: "Update glyph's dataOffset to be within-group offset". Each glyph's `dataOffset` is recalculated as `len(group_data)` (line 335), representing its position within the group's concatenated bitmap data, not the global bitmap.

This matches the C++ expectations:

- Uncompressed (line 116): `dataOffset` is absolute into `fontData->bitmap`
- Compressed (lines 129, 139): `dataOffset` is relative to `entry->data` (the decompressed group)

The implementation is correct.
lib/EpdFont/scripts/verify_compression.py (3)
21-48: Parsing helpers look solid.

Clear structure and direct mapping to the header fields; nice and readable.

51-126: Verification flow is thorough and well-guarded.

Good early exits and detailed error messages make failures actionable.

129-164: CLI flow and exit behavior look good.

Straightforward usage checks, summary, and proper non-zero exit on failures.
🤖 Fix all issues with AI agents
Verify each finding against the current code and only fix it if needed.
In `@lib/EpdFont/FontDecompressor.cpp`:
- Around line 110-136: In FontDecompressor::getBitmap add defensive bounds
checks for glyph->dataOffset before indexing into fontData->bitmap and
entry->data: ensure glyph->dataOffset is less than the corresponding buffer size
(for non-grouped case, compare against fontData->bitmap length; for
cached/decompressed case, compare against the decompressed entry->size or buffer
length in CacheEntry) and if out-of-range log an error (e.g., via LOG_ERR) and
return nullptr; adjust checks around the findInCache and decompressGroup paths
(functions findInCache, decompressGroup, and the CacheEntry structure) so you
never access entry->data[glyph->dataOffset] or
&fontData->bitmap[glyph->dataOffset] without validating the offset.
🧹 Nitpick comments (1)
🤖 Fix all nitpicks with AI agents
lib/EpdFont/FontDecompressor.cpp (1)
110-136: Consider bounds-checking `glyph->dataOffset` for defense-in-depth.

Lines 125 and 135 access `entry->data[glyph->dataOffset]` without verifying that `dataOffset` is within the decompressed buffer size. While the font generation tooling and `verify_compression.py` should ensure consistency, a bounds check would guard against corrupt or mismatched metadata.

Given this is a performance-sensitive path and the data is tooling-generated, this is optional.
🛡️ Optional defensive bounds check
```diff
   // Check cache
   CacheEntry* entry = findInCache(fontData, groupIndex);
   if (entry) {
     entry->lastUsed = ++accessCounter;
+    if (glyph->dataOffset >= entry->dataSize) {
+      LOG_ERR("FDC", "dataOffset %u out of bounds for group %u (size %u)", glyph->dataOffset, groupIndex, entry->dataSize);
+      return nullptr;
+    }
     return &entry->data[glyph->dataOffset];
   }

   // Cache miss - decompress
   entry = findEvictionCandidate();
   if (!decompressGroup(fontData, groupIndex, entry)) {
     return nullptr;
   }
   entry->lastUsed = ++accessCounter;
+  if (glyph->dataOffset >= entry->dataSize) {
+    LOG_ERR("FDC", "dataOffset %u out of bounds for group %u (size %u)", glyph->dataOffset, groupIndex, entry->dataSize);
+    return nullptr;
+  }
   return &entry->data[glyph->dataOffset];
 }
```
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- lib/EpdFont/FontDecompressor.cpp
- lib/EpdFont/FontDecompressor.h
🧰 Additional context used
🧬 Code graph analysis (2)
lib/EpdFont/FontDecompressor.cpp (1)
lib/uzlib/src/tinflate.c (2)
uzlib_uncompress_init (552-562), uzlib_uncompress (565-626)
lib/EpdFont/FontDecompressor.h (1)
lib/EpdFont/FontDecompressor.cpp (18)
init (9-12, 9-9), deinit (24-24), getBitmap (110-136, 110-110), clearCache (26-29, 26-26), freeAllEntries (14-22, 14-14), getGroupIndex (31-39, 31-31), findInCache (41-48, 41-41), findEvictionCandidate (50-65, 50-50), decompressGroup (67-108, 67-67)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: build
🔇 Additional comments (6)
lib/EpdFont/FontDecompressor.h (1)
1-42: LGTM! The header is well-structured with a clear public API and appropriate documentation about pointer lifetime. The 4-slot LRU cache design aligns with the PR's stated goals for balancing memory usage and cache hit rates.
lib/EpdFont/FontDecompressor.cpp (5)
1-12: LGTM! Clean initialization approach: `memset` ensures a known state for the `uzlib_uncomp` structure before use.
14-29: LGTM! Proper memory management with null-after-free pattern to prevent double-free issues.
31-39: LGTM! Linear search is appropriate for the small number of groups. The sentinel return value for not-found cases properly guards against invalid access.
41-65: LGTM! The cache lookup and LRU eviction logic is straightforward and correct for a 4-slot cache. Preferring invalid slots before evicting valid entries is the right approach.
67-108: LGTM! The decompression flow is well-structured with proper error handling and cleanup. The validation at line 96 correctly verifies both the return status and that the expected byte count was produced.
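The lookup and eviction policy described in these comments (linear search over a handful of slots, prefer invalid slots, otherwise evict the least-recently-used entry) can be modelled in a few lines of Python. `Slot`, `put`, and the field names below are illustrative assumptions, not the real `CacheEntry`/`FontDecompressor` API:

```python
# Illustrative model of a 4-slot LRU group cache with a monotonically
# increasing access counter; names are assumptions, not the firmware API.

CACHE_SLOTS = 4

class Slot:
    def __init__(self):
        self.valid = False        # does this slot hold a decompressed group?
        self.group_index = -1
        self.last_used = 0        # access stamp for LRU comparison
        self.data = b""

class GroupCache:
    def __init__(self):
        self.slots = [Slot() for _ in range(CACHE_SLOTS)]
        self.access_counter = 0

    def _touch(self, slot):
        self.access_counter += 1
        slot.last_used = self.access_counter

    def find(self, group_index):
        # Linear search is fine for only four slots.
        for slot in self.slots:
            if slot.valid and slot.group_index == group_index:
                self._touch(slot)
                return slot
        return None

    def eviction_candidate(self):
        # Prefer an unused (invalid) slot before evicting a valid entry.
        for slot in self.slots:
            if not slot.valid:
                return slot
        return min(self.slots, key=lambda s: s.last_used)

    def put(self, group_index, data):
        slot = self.eviction_candidate()
        slot.valid = True
        slot.group_index = group_index
        slot.data = data
        self._touch(slot)
        return slot

cache = GroupCache()
for g in range(4):
    cache.put(g, bytes([g]))
cache.find(0)            # touch group 0 so it is most recently used
cache.put(4, b"new")     # evicts group 1, the least recently used
```

With a stamp counter instead of list reordering, a hit costs one comparison per slot and one increment, which suits a hot rendering path.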
|
@CodeRabbit help |
|
@coderabbitai review |
✅ Actions performed. Review triggered.
|
Pull request overview
This PR introduces on-device DEFLATE decompression for built-in reader fonts and updates the font generation pipeline to emit grouped, compressed bitmap data, targeting a large firmware flash reduction.
Changes:
- Extend font data format to support compressed bitmap “groups” and regenerate reader fonts using a new `--compress` option in `fontconvert.py`.
- Add `FontDecompressor` (4-slot LRU) backed by `uzlib`, and update `GfxRenderer` to fetch glyph bitmaps via decompression when needed.
- Clear the decompression cache between page renders in reader activities to bound RAM usage.
Reviewed changes
Copilot reviewed 24 out of 73 changed files in this pull request and generated 8 comments.
Show a summary per file
| File | Description |
|---|---|
| src/main.cpp | Instantiates/initializes FontDecompressor and wires it into GfxRenderer. |
| src/activities/reader/TxtReaderActivity.cpp | Clears font decompression cache each page render. |
| src/activities/reader/EpubReaderActivity.cpp | Clears font decompression cache each page render. |
| lib/GfxRenderer/GfxRenderer.h | Adds decompressor hook and clearFontCache() API. |
| lib/GfxRenderer/GfxRenderer.cpp | Routes glyph bitmap access through getGlyphBitmap() (compressed/uncompressed). |
| lib/EpdFont/EpdFontData.h | Adds EpdFontGroup + groups/groupCount fields to EpdFontData. |
| lib/EpdFont/FontDecompressor.h | Declares FontDecompressor and its 4-slot LRU cache. |
| lib/EpdFont/FontDecompressor.cpp | Implements uzlib-backed group decompression + caching. |
| lib/EpdFont/scripts/fontconvert.py | Adds --compress and grouping + raw-DEFLATE compression output. |
| lib/EpdFont/scripts/verify_compression.py | Adds a script to validate compressed font headers. |
| lib/EpdFont/scripts/convert-builtin-fonts.sh | Enables compression for reader fonts and runs verification. |
| lib/EpdFont/builtinFonts/*.h | Updates generated font headers for new EpdFontData fields (and regenerates reader fonts with compression). |
| lib/uzlib/** | Adds the uzlib decompression library sources/config. |
| lib/uzlib/library.json | Declares uzlib as a PlatformIO library. |
| lib/ZipFile/ZipFile.cpp | Minor refactor/formatting around one-shot inflate helper usage. |
| bin/clang-format-fix | Excludes vendored lib/uzlib/ from clang-format script. |
- Add defensive bounds check for dataLength in addition to dataOffset
- Add cache clearing to init() to ensure a clean state when reinitializing
- Clear cache at end of page-render, not before
|
@daveallie @osteotek I think this is ready for a final look. |
…ts (crosspoint-reader#831)

## Summary

**What is the goal of this PR?**

Compress reader font bitmaps to reduce flash usage by 30.7%.

**What changes are included?**

- New `EpdFontGroup` struct and extended `EpdFontData` with `groups`/`groupCount` fields
- `--compress` flag in `fontconvert.py`: groups glyphs (ASCII base group + groups of 8) and compresses each with raw DEFLATE
- `FontDecompressor` class with 4-slot LRU cache for on-demand decompression during rendering
- `GfxRenderer` transparently routes bitmap access through `getGlyphBitmap()` (compressed or direct flash)
- Uses `uzlib` for decompression with minimal heap overhead
- 48 reader fonts (Bookerly, NotoSans 12-18pt, OpenDyslexic) regenerated with compression; 5 UI fonts unchanged
- Round-trip verification script (`verify_compression.py`) runs as part of font generation

## Additional Context

## Flash & RAM

| | baseline | font-compression | Difference |
|--|--|--|--|
| Flash (ELF) | 6,302,476 B (96.2%) | 4,365,022 B (66.6%) | -1,937,454 B (-30.7%) |
| firmware.bin | 6,468,192 B | 4,531,008 B | -1,937,184 B (-29.9%) |
| RAM | 101,700 B (31.0%) | 103,076 B (31.5%) | +1,376 B (+0.5%) |

## Script-Based Grouping (Cold Cache)

Comparison of uncompressed baseline vs script-based group compression (4-slot LRU cache, cleared each page). Glyphs are grouped by Unicode block (ASCII, Latin-1, Latin Extended-A, Combining Marks, Cyrillic, General Punctuation, etc.) instead of sequential groups of 8.

### Render Time

| | Baseline | Compressed (cold cache) | Difference |
|---|---|---|---|
| **Median** | 414.9 ms | 431.6 ms | +16.7 ms (+4.0%) |
| **Pages** | 37 | 37 | |

### Memory Usage

| | Baseline | Compressed (cold cache) | Difference |
|---|---|---|---|
| **Heap free (median)** | 187.0 KB | 176.3 KB | -10.7 KB |
| **Heap free (min)** | 186.0 KB | 166.5 KB | -19.5 KB |
| **Largest block (median)** | 148.0 KB | 128.0 KB | -20.0 KB |
| **Largest block (min)** | 148.0 KB | 120.0 KB | -28.0 KB |

### Cache Effectiveness

| | Misses/page | Hit rate |
|---|---|---|
| **Compressed (cold cache)** | 2.1 | 99.85% |

---

### AI Usage

While CrossPoint doesn't have restrictions on AI tools in contributing, please be transparent about their usage as it helps set the right context for reviewers.

Did you use AI tools to help write this code? **YES**

Implementation was done by Claude Code (Opus 4.6) based on a plan developed collaboratively. All generated font headers were verified with an automated round-trip decompression test. The firmware was compiled successfully but has not yet been tested on-device.

---

Co-authored-by: Claude Opus 4.6 <[email protected]>
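For reference, the raw-DEFLATE round trip that the verification step checks can be reproduced with Python's standard `zlib` module (`wbits=-15` selects a raw stream with no zlib header or trailer, which is what a raw-DEFLATE inflater expects). This is an illustrative check, not `verify_compression.py` itself:

```python
# Illustrative raw-DEFLATE round trip using Python's stdlib zlib.
# wbits=-15 means a raw DEFLATE stream: no zlib header or checksum.
import zlib

def compress_group(data: bytes) -> bytes:
    comp = zlib.compressobj(level=9, wbits=-15)
    return comp.compress(data) + comp.flush()

def decompress_group(blob: bytes, expected_size: int) -> bytes:
    out = zlib.decompress(blob, wbits=-15)
    if len(out) != expected_size:
        raise ValueError("decompressed size mismatch")
    return out

group = bytes(range(64)) * 8          # stand-in for one group's bitmap data
blob = compress_group(group)
assert decompress_group(blob, len(group)) == group
```

Checking both the decompression result and the exact output size mirrors the validation the C++ side performs after decompressing a group.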