Conversation
📝 Walkthrough
Trim behavior expanded to remove trailing spaces/newlines; selector processing now skips selectors containing unsupported characters.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks: ✅ 2 passed
Force-pushed cc45c27 to ac0171b
Actionable comments posted: 1
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
lib/Epub/Epub/css/CssParser.cpp
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: build
- GitHub Check: cppcheck
🔇 Additional comments (3)
lib/Epub/Epub/css/CssParser.cpp (3)
76-79: Trailing whitespace trimming is now robust.
Looping until the end prevents leftover trailing whitespace after normalization.
563-577: Comment-only clarification is helpful.
The TODOs make future class-combination support explicit without changing behavior.
598-599: Cache version bump aligns with format changes.
Invalidates older cache entries after selector filtering updates.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@lib/Epub/Epub/css/CssParser.cpp`:
- Around line 346-375: The selector filter currently skips only +, >, [, :, and
whitespace but still accepts unsupported selectors (e.g. those with '#', '~',
'*', '|') which get cached into rulesBySelector_ and waste MAX_RULES slots;
update the filtering logic (the block that checks the variable named key in
CssParser.cpp) to perform an allowlist check that only accepts the simple
patterns supported by the resolver (bare tag names, .class, or tag.class) — for
example: reject any selector containing '#', '~', '*', '|' or any characters
outside valid tag/class name characters, and skip inserting into
rulesBySelector_ when the selector does not match the allowlist so only valid
patterns consume cache entries and MAX_RULES.
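The allowlist the comment proposes could be sketched roughly as below. `isSupportedSelector` is an illustrative name, not the parser's actual API; it accepts only the three shapes the resolver supports and rejects anything else:

```cpp
#include <cctype>
#include <string_view>

// Allowlist sketch: accept only the shapes the resolver supports --
// bare tag ("p"), class-only (".intro"), or tag.class ("p.intro").
// Everything else ("#id", "svg|rect", "div > p", "*", pseudo-classes)
// is rejected before it can consume a rulesBySelector_ / MAX_RULES slot.
static bool isSupportedSelector(std::string_view key) {
    if (key.empty()) return false;
    bool seenDot = false;
    for (size_t i = 0; i < key.size(); ++i) {
        const char c = key[i];
        if (c == '.') {
            // At most one '.', and it must not be the last character.
            if (seenDot || i + 1 == key.size()) return false;
            seenDot = true;
        } else if (!std::isalnum(static_cast<unsigned char>(c)) &&
                   c != '-' && c != '_') {
            // Covers '#', '~', '*', '|', combinators, and whitespace.
            return false;
        }
    }
    return true;
}
```

Rejecting rather than enumerating bad characters means newly encountered unsupported syntax fails closed instead of leaking into the cache.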
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@lib/Epub/Epub/css/CssParser.cpp`:
- Around line 346-375: The selector filtering in CssParser.cpp currently skips
selectors containing '+', '>', '[', ':', and whitespace but still allows
unsupported patterns like '#', '~', '*', and '|' which waste cache slots; update
the filter around the variable/key checks in the relevant parsing routine (the
block checking key.find(...)) to also detect and continue on selectors
containing '#', '~', '*', and '|' so those selectors are ignored before
caching/resolving.
Actionable comments posted: 1
🧹 Nitpick comments (1)
lib/Epub/Epub/css/CssParser.cpp (1)
77-79: Inconsistency with `normalizedInto()` and redundant `'\n'` check.
The normalization loop (lines 64-73) already converts all CSS whitespace, including `'\n'`, to `' '`, making the `'\n'` check redundant here. Additionally, `normalizedInto()` at lines 100-102 uses a single `if` checking only for `' '`, creating inconsistent behavior between these two functions. Consider aligning both functions:
Suggested fix
```diff
 // Remove trailing space
-while (!result.empty() && (result.back() == ' ' || result.back() == '\n')) {
+while (!result.empty() && result.back() == ' ') {
   result.pop_back();
 }
```
And update `normalizedInto()` to match:
```cpp
while (!out.empty() && out.back() == ' ') {
  out.pop_back();
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/Epub/Epub/css/CssParser.cpp` around lines 77 - 79, The trailing-character trimming in CssParser::normalized() currently checks for both ' ' and '\n' even though earlier normalization converts all whitespace (including '\n') to spaces; make it consistent with normalizedInto() by removing the '\n' check and only trimming trailing spaces, and update normalizedInto() (the function named normalizedInto) to use the same single-character check (while (!out.empty() && out.back() == ' ') { out.pop_back(); }) so both functions behave identically when removing trailing whitespace.
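Assuming normalization has already mapped every CSS whitespace character to `' '`, both functions could share one trim helper. `trimTrailingSpaces` is a hypothetical name, not code from `CssParser.cpp`:

```cpp
#include <string>

// Single shared trim, assuming earlier normalization already converted
// tabs/newlines to ' '. Both normalized() and normalizedInto() could
// call this so their behavior cannot drift apart again.
static void trimTrailingSpaces(std::string& s) {
    while (!s.empty() && s.back() == ' ') {
        s.pop_back();
    }
}
```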
🔇 Additional comments (1)
lib/Epub/Epub/css/CssParser.cpp (1)
581-605: LGTM! The TODO comments appropriately document the known limitation around multi-class selector combinations. The current implementation correctly handles single-class lookups and defers the more complex `.class1.class2` matching to future work, as stated in the PR objectives.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@lib/Epub/Epub/css/CssParser.cpp`:
- Around line 346-394: The selector-filtering block in CssParser.cpp currently
skips selectors containing '+', '>', '[', ':', '#', '~', '*', and whitespace but
misses the namespace separator '|' so selectors like "svg|rect" are not
filtered; update the checks around the `key` variable (same block that tests
key.find('+') etc.) to also continue when key.find('|') !=
std::string_view::npos, following the same pattern and comment style as the
other excluded characters.
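The full skip list, including the namespace separator `'|'`, can also be expressed as a single character-set check rather than a chain of `key.find(...)` calls. This is an illustrative sketch, not the parser's actual code:

```cpp
#include <string_view>

// Skip-list sketch: true when the selector uses any construct the
// resolver cannot match -- sibling/child combinators ('+', '>'),
// attribute selectors ('['), pseudo-classes/elements (':'), id ('#'),
// general sibling ('~'), universal ('*'), namespace separator ('|'),
// or whitespace (descendant combinator).
static bool isUnsupportedSelector(std::string_view key) {
    return key.find_first_of("+>[:#~*| \t") != std::string_view::npos;
}
```

A caller would `continue` past the selector (skipping the `rulesBySelector_` insert) whenever this returns true.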
---
Nitpick comments:
In `@lib/Epub/Epub/css/CssParser.cpp`:
- Around line 77-79: The trailing-character trimming in CssParser::normalized()
currently checks for both ' ' and '\n' even though earlier normalization
converts all whitespace (including '\n') to spaces; make it consistent with
normalizedInto() by removing the '\n' check and only trimming trailing spaces,
and update normalizedInto() (the function named normalizedInto) to use the same
single-character check (while (!out.empty() && out.back() == ' ') {
out.pop_back(); }) so both functions behave identically when removing trailing
whitespace.
## Summary
* In a sample book I loaded, there were 900+ CSS rules, and loading the
  cache took up 180kB of memory
* Looking at the rules, a lot of them were completely useless, as we only
  ever look for 3 kinds of CSS rules:
* `tag`
* `tag.class1`
* `.class1`
* Stripping out CSS rules with descendant, nested, attribute, sibling, and
  pseudo-element selectors (which we never actually read from the cache)
  reduced the rule count down to 200
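The three supported shapes can all be resolved with a single split on the first `'.'`. `splitSelector` below is a hypothetical helper sketching that lookup, not code from `CssParser.cpp`:

```cpp
#include <string_view>
#include <utility>

// Split a supported selector into (tag, class); either part may be empty:
//   "p"       -> {"p", ""}       (match by tag)
//   ".intro"  -> {"", "intro"}   (match by class)
//   "p.intro" -> {"p", "intro"}  (match by tag and class)
static std::pair<std::string_view, std::string_view>
splitSelector(std::string_view key) {
    const size_t dot = key.find('.');
    if (dot == std::string_view::npos) {
        return {key, std::string_view{}};
    }
    return {key.substr(0, dot), key.substr(dot + 1)};
}
```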
## Additional Context
* I've left in `.class1.class2` rules for now; even though we technically
  can never match on them, they're likely to be addressed soonest out of
  all the CSS expansion work
* Because we don't ever delete the CSS cache, users will need to delete
the book cache through the menu in order to get this new logic
* A new PR should be put up to address this - tracked in #1015
---
### AI Usage
While CrossPoint doesn't have restrictions on AI tools in contributing,
please be transparent about their usage as it
helps set the right context for reviewers.
Did you use AI tools to help write this code? No