Honor cache offsets when writing file cache #2858
Conversation
I'm afraid it won't work. I looked at a file with the following content:
When reading from a file that is compressed, the decompressed result is already inserted into the cache, which can cause another problem with the use of the offset. The cache should always correspond to the RAW file on the card.
@popovec thank you for having a look! Indeed, you are right. This will work only for consecutive reads, which was probably the case for me. To work correctly, we would also need to store the valid ranges somewhere. The point about compression is a good one too. I think we are not likely to get both compression and reads at different offsets on the same card, but it would be better to have this figured out.
For now, I suggest a workaround: And one more thing: it will probably be necessary to check how the caching of record-based files works for us.
If I read your proposed workaround right, it's basically the same thing as disabling the file cache whenever we read from any offset, which is something I would like to avoid: the file caching makes the card initialization in OpenSC significantly faster (about 0.15s instead of 1.6s in the case of the card I used). Both of these times are acceptable, but I would rather resolve this than just work around it in the long term.

The decompression as done right now cannot simply be moved after the caching, because the read itself indicates whether the file is compressed, so I am worried that the file caching as implemented now cannot cover all the corner cases we would like to support and we will have to complicate things.

One option would be to handle the offsets as different files. It would not break compatibility, but it would have the disadvantage that we could not use the cache for overlapping offsets (without some more logic). Still, most of the reads at this level are large reads without chunking, at a specific offset given either by the code or by "pointers" inside other card files, so I think this should be a pretty safe thing to do, and it could probably also handle the record-based files, I think (but I am not sure whether I have any card with record-based files).
Perhaps, in the case of reading a file with an offset, it would be possible to do a full (or partial, up to offset+length) read of the file into the cache from offset 0, then repeat the operation and the cache would be hit. Look at epass2003, where a 2F00 file is created as record-based (linear fixed, record length 64 bytes).
Force-pushed b298cb5 to 6d90cd0
This would work for normal files, but would not work with the decompression (though the decompressed files are actually "virtual" files with the CECE suffix and should not need reading from offsets, if I understand it right) and probably not with record-based files. I will have to look at the record-based files more closely though, as I probably do not understand them well enough. Pushed a second attempt, but it is untested for today as I do not have the cards with me now.
Treating every possible offset as a new object for caching seems quite cumbersome and may lead to synchronization errors in the future. I would suggest simply not caching the file if an offset is specified (in-file updates or appends could be considered, if we have already read the data before the update). Would it be possible to rewrite the actual process for reading the data so that the full file can be read (and cached) first, before reading it with an offset?
Thank you for having a look. This is a CardOS 5.3 card, so to my understanding the file read offsets and lengths are driven by the content of the MF/DF files on the card, so I probably cannot pre-cache that manually in advance. The CardOS PKCS#15 emulator handles only fixing the tokeninfo, if I read it correctly. One idea I have is to try to fill the gaps with data from the card when the reads are non-consecutive, but I would have to check how complicated that will be. I do not have the cards at hand now, so I will check tomorrow.
When the reads are not consecutive, avoid caching anything after the gaps.
Signed-off-by: Jakub Jelen <[email protected]>
Force-pushed 6d90cd0 to f9a5fc7
Reiterated back to caching with offsets in the file. In my case the offsets seem to be consecutive, so the current version works fine. The open question is still about the record-based files: whether there is something we can do to accommodate them, or whether to ignore them. If I read the file-reading code correctly, when we read from the cache we do not know what type of file it is (nor what record size is used, or anything else), because this is retrieved from the select file call, which is skipped when the cache is hit. We do have this information while writing the file, though, so one more option would be to store the file information somewhere too. But I am not sure this is something we need to resolve now; I think creating a separate issue to handle it later should be enough.
frankmorgner left a comment:
Indeed, for honoring the offset the change looks good, and we can track the record-based files in a separate issue.
If there are no other comments or concerns, I would like to get this merged into the next release. I opened #2860 for the record-based files cache handling.
We got a report that the file caching was rendering some cards useless, and that disabling the file cache worked, so I dived into it deeper and found out that the file cache does not honor the file offset at all.

The cards I have for testing store all the PKI objects inside one PKCS#15 file at different offsets, and when file caching is enabled, sc_pkcs15_cache_file() just overrides the previous chunk with a new one at the beginning of the file, creating a mangled file cache and hard-to-debug issues.

This attempts to honor the file offset in the cache: write at the same file offsets on disk and, if there is some discontinuity, fill it with zeroes.