
Conversation

@Jakuje (Member) commented Sep 7, 2023

We got a report that file caching was rendering some cards useless, and disabling the file cache worked, so I dived into it deeper and figured out that the file cache does not honor the file offset at all.

The cards I have for testing store all the PKI objects inside one PKCS#15 file at different offsets. When file caching is enabled, sc_pkcs15_cache_file() just overwrites the previous chunk with a new one at the beginning of the cache file, creating a mangled file cache and hard-to-debug issues.

This attempts to honor the file offset in the cache: write at the same file offsets on disk and, if there is a discontinuity, fill the gap with zeroes.
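
For illustration, a minimal sketch of that idea in plain C (stdio only; cache_write_at_offset() is a hypothetical helper, not the actual sc_pkcs15_cache_file() change):

#include <stdio.h>

static int cache_write_at_offset(FILE *f, const unsigned char *data,
                                 size_t len, long offset)
{
    long end;

    /* Find the current end of the cache file. */
    if (fseek(f, 0, SEEK_END) != 0)
        return -1;
    end = ftell(f);
    if (end < 0)
        return -1;

    /* If the new chunk starts past the current end, zero-fill the gap. */
    while (end < offset) {
        if (fputc(0, f) == EOF)
            return -1;
        end++;
    }

    /* Write the chunk at the same offset it has on the card. */
    if (fseek(f, offset, SEEK_SET) != 0)
        return -1;
    return fwrite(data, 1, len, f) == len ? 0 : -1;
}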

Checklist
  • PKCS#11 module is tested
  • Windows minidriver is tested
  • macOS tokend is tested

@popovec (Member) commented Sep 8, 2023

I'm afraid it won't work.

Take a file with the following content: 00 11 22 33 44 55 66

  1. Read from offset 3: the cache now contains 00 00 00 33 44 55 66 (the gap is zero-filled).
  2. A subsequent read from offset 0 is served from the cache... which is wrong, because the zero padding is returned as if it were real data.

I looked at the code in src/libopensc/pkcs15.c, and it seems that the cache has other inconsistencies.

When reading from a file that is compressed, the decompressed result is inserted into the cache, which can cause another problem when combined with offsets. The cache should always correspond to the RAW file on the card.

@Jakuje (Member, Author) commented Sep 8, 2023

@popovec thank you for having a look! Indeed, you are right. This will work only for consecutive reads, which was probably the case for me. To work correctly, we would also need to store the valid ranges somewhere.
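
One possible shape for that valid-range bookkeeping, purely illustrative (struct cache_range and range_covers() are hypothetical names, not OpenSC API):

#include <stddef.h>

/* A sorted, non-overlapping list of byte ranges that actually hold
 * data read from the card. */
struct cache_range {
    size_t offset;            /* start of a valid region */
    size_t len;               /* length of the region */
    struct cache_range *next;
};

/* A cached read may only be served if [offset, offset+len) is fully
 * covered by one valid region; anything else must go to the card. */
static int range_covers(const struct cache_range *r, size_t offset, size_t len)
{
    for (; r != NULL; r = r->next)
        if (offset >= r->offset && offset + len <= r->offset + r->len)
            return 1;
    return 0;
}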

The point regarding the compression is also a good one. I think we are unlikely to get both compression and reads at different offsets on the same card, but it would be better to have this figured out.

@popovec (Member) commented Sep 8, 2023

For now, I suggest a workaround:

diff --git a/src/libopensc/pkcs15.c b/src/libopensc/pkcs15.c
index 4215b733a..6324fcc81 100644
--- a/src/libopensc/pkcs15.c
+++ b/src/libopensc/pkcs15.c
@@ -2611,7 +2611,7 @@ sc_pkcs15_read_file(struct sc_pkcs15_card *p15card, const struct sc_path *in_pat
 
                sc_file_free(file);
 
-               if (len && p15card->opts.use_file_cache
+               if (len && p15card->opts.use_file_cache && !offset
                    && ((p15card->opts.use_file_cache & SC_PKCS15_OPTS_CACHE_ALL_FILES) || !private_data)) {
                        sc_pkcs15_cache_file(p15card, in_path, data, len);
                }

And one more thing: it will probably be necessary to check how the caching of record-based files works for us.

@Jakuje (Member, Author) commented Sep 8, 2023

If I read your proposed workaround right, it is basically the same thing as disabling the file cache whenever we read from any offset, which is something I would like to avoid, as the file caching makes the card initialization in OpenSC significantly faster (around 0.15s instead of 1.6s in the case of the card I used). In this case, both of the times are acceptable, but in the long term I would rather resolve this than just work around it.

The decompression, as done right now, can not simply be moved after the caching, because the read itself indicates whether the file is compressed or not. So I am worried that the file caching as implemented now can not cover all the corner cases we would like to support, and we will have to complicate things.

One option would be to handle the offsets as different cache files. It would not break compatibility, but it would have the disadvantage that we could not use the cache for overlapping offsets (without some more logic). Really, though, most of the reads at this level are large reads without chunking at a specific offset, given either by the code or by "pointers" inside of other card files, so I think this should be a pretty safe thing to do, and it could handle the record-based files too, I think (but I am not sure if I have some card with record-based files).
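
As a rough illustration of the per-offset naming (the helper and the name format are hypothetical; the real OpenSC cache file naming differs):

#include <stdio.h>

/* Derive a distinct cache file name per offset, e.g. "3F00_5015_4401"
 * for offset 0 and "3F00_5015_4401-128" for offset 128. */
static int cache_name_for_offset(char *buf, size_t buflen,
                                 const char *base, size_t offset)
{
    int n = (offset == 0)
        ? snprintf(buf, buflen, "%s", base)
        : snprintf(buf, buflen, "%s-%zu", base, offset);

    return (n < 0 || (size_t)n >= buflen) ? -1 : 0;
}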

@popovec (Member) commented Sep 8, 2023

Perhaps, in the case of reading a file with an offset, it would be possible to do a full read of the file (or a partial one, up to offset+length) into the cache from offset 0, then repeat the operation, and the cache will be hit.
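
In code, the idea might look like this in-memory sketch (struct blob, read_with_cache(), and the stand-ins for card and cache are all hypothetical):

#include <stdlib.h>
#include <string.h>

struct blob { unsigned char *data; size_t len; };

/* On a cache miss for an offset read, first read the whole file from
 * the card into the cache, then serve every read (including this one)
 * from the cached copy. */
static int read_with_cache(const struct blob *card_file, struct blob *cache,
                           unsigned char *out, size_t offset, size_t len)
{
    if (cache->data == NULL) {
        cache->data = malloc(card_file->len);
        if (cache->data == NULL)
            return -1;
        memcpy(cache->data, card_file->data, card_file->len);
        cache->len = card_file->len;
    }

    if (offset + len > cache->len)
        return -1;
    memcpy(out, cache->data + offset, len);
    return (int)len;
}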

Look at epass2003: there, a 2F00 file is created as record-based (linear fixed, record length 64 bytes).

@Jakuje (Member, Author) commented Sep 8, 2023

Perhaps, in the case of reading a file with an offset, it would be possible to do a full read of the file (or a partial one, up to offset+length) into the cache from offset 0, then repeat the operation, and the cache will be hit.

This would work for normal files, but it would not work with the decompression (though the decompressed files are actually "virtual" files with the CECE suffix and should not need reading from offsets, if I understand it right) and probably not with record-based files. I will have to check the record-based files more carefully though, as I probably do not understand them well enough.

Pushed a second attempt, but it is untested for today, as I do not have the cards with me now.

@frankmorgner (Member) commented

Treating every possible offset as a new object for caching seems quite cumbersome and may lead to synchronization errors in the future. I would suggest simply not caching the file if an offset is specified (in-file updates or appends could be considered, if we have already read the data before the update).

Would it be possible to rewrite the actual process for reading the data, so that the full file can be read (and cached) first, before reading it with an offset?

@Jakuje (Member, Author) commented Sep 12, 2023

Thank you for having a look.

This is a CardOS 5.3 card, so to my understanding the file read offsets and lengths are driven by the content of the MF/DF files on the card, so I probably can not pre-cache that manually in advance. The cardos pkcs15 emulator handles only fixing the tokeninfo, if I see right.

One idea I have might be to fill the gaps with the data from the card when the reads are non-consecutive, but I would have to check how complicated that would be. I do not have the cards at hand now, so I will check tomorrow.

When the reads are not consecutive, avoid caching anything after the gaps.

Signed-off-by: Jakub Jelen <[email protected]>
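
A sketch of that "consecutive reads only" rule from the commit above (struct blob and cache_append_if_consecutive() are illustrative names, not the actual patch):

#include <stdlib.h>
#include <string.h>

struct blob { unsigned char *data; size_t len; };

/* Append a chunk to the cached image only when it starts exactly
 * where the cached data ends; a gap means caching stops there. */
static int cache_append_if_consecutive(struct blob *cache,
                                       const unsigned char *data,
                                       size_t len, size_t offset)
{
    unsigned char *p;

    if (offset != cache->len)
        return 0;               /* non-consecutive read: do not cache */

    p = realloc(cache->data, cache->len + len);
    if (p == NULL)
        return -1;
    memcpy(p + cache->len, data, len);
    cache->data = p;
    cache->len += len;
    return 1;
}
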
@Jakuje (Member, Author) commented Sep 13, 2023

Went back to the caching with offsets in the file. In my case, the offsets seem to be consecutive, so the current version works OK.

The open question is still about the record-based files: whether there is something we can do to accommodate them, or whether we should ignore them. If I read the code for reading files correctly, when we read from the cache we do not know what type of file it is (nor what record size is used, nothing), because this is retrieved from the select file call, which is skipped when the cache is hit. We do have this information while writing the file, though, so one more option would be to store the file information somewhere too.
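
For illustration, such file information could live in a small header in front of the cached data (a hypothetical layout, not an existing OpenSC format; a real implementation would serialize the fields explicitly instead of writing the struct raw):

#include <stdio.h>

/* Hypothetical cache metadata: the attributes that are otherwise lost
 * when the SELECT is skipped on a cache hit. */
struct cache_header {
    unsigned char  ef_structure;  /* transparent, linear fixed, ... */
    unsigned short record_length; /* record size for record-based EFs */
};

/* Write the metadata header followed by the cached file body.
 * (The struct is written raw here for brevity only.) */
static int cache_write_with_header(FILE *f, const struct cache_header *hdr,
                                   const unsigned char *data, size_t len)
{
    if (fwrite(hdr, sizeof(*hdr), 1, f) != 1)
        return -1;
    return fwrite(data, 1, len, f) == len ? 0 : -1;
}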

But I am not sure if this is something we need to resolve now. I think creating a separate issue to handle it later should be enough.

@frankmorgner (Member) left a comment


Indeed, for honoring the offset the change looks good, and we can track record-based files in a separate issue.

@Jakuje (Member, Author) commented Sep 14, 2023

If there are no other comments or concerns, I would like to get this merged in time for the next release.

I opened #2860 for the record based files cache handling.

@frankmorgner merged commit bff98ff into OpenSC:master Sep 15, 2023
@xhanulik mentioned this pull request Sep 22, 2023