pdf: Improve text with characters outside embedded font limits #30512
Base branch: text-overhaul
Conversation
With libraqm, string layout produces glyph indices, not character codes, and font features may even produce different glyphs for the same character code (e.g., by picking a different Stylistic Set). Thus we cannot rely on character codes as unique items within a font, and must move toward glyph indices everywhere.
Currently, we split text into single-byte chunks and multi-byte glyphs, then iterate over the single-byte chunks for output and then over the multi-byte glyphs for output. Instead, output the single-byte chunks as we finish them, then do the multi-byte glyphs at the end.
For a Type 3 font, its encoding is entirely defined by its `Encoding` dictionary (which we create), so there's no reason to use a specific encoding like `cp1252`. Instead, switch to Latin-1, which corresponds exactly to the first 256 character codes in Unicode, and can be mapped directly with `ord`.
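As a minimal illustration of that property (not matplotlib's actual code; `type3_char_code` is a hypothetical helper), the character code written for a Type 3 glyph is simply the Unicode code point:

```python
# Sketch only: with Latin-1, a Type 3 character code is the Unicode code
# point itself, so no codec table such as cp1252 is needed.
def type3_char_code(ch: str) -> int:
    code = ord(ch)
    assert code < 256, "a single Type 3 font can only address 256 codes"
    return code

assert type3_char_code("é") == "é".encode("latin-1")[0] == 0xE9
```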
By tracking both character codes and glyph indices, we can handle producing multiple font subsets if needed by a file format.
For character codes outside the embedded font limits (256 for type 3 and 65536 for type 42), we output them as XObjects instead of using text commands. But there is nothing in the PDF spec that requires any specific encoding like this. Since we now support subsetting all fonts before embedding, split each font into groups based on the maximum character code (e.g., 256-entry groups for type 3), then switch text strings to a different font subset and re-map character codes to it when necessary. This means all text is true text (albeit with some strange encoding), and we no longer need any XObjects for glyphs. For users of non-English text, this means it will become selectable and copyable again. Fixes matplotlib#21797
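A minimal sketch of the grouping described above, assuming a plain block-by-block split (the helper name is illustrative, not the PR's API):

```python
# Sketch only: pick a subset by integer-dividing the code point by the font
# type's limit; the code written into the text string is the remainder.
TYPE3_LIMIT = 256      # character codes addressable by one Type 3 font
TYPE42_LIMIT = 65536   # character codes addressable by one Type 42 font

def subset_and_code(ch: str, limit: int) -> tuple[int, int]:
    """Return (subset index, character code within that subset)."""
    cp = ord(ch)
    return cp // limit, cp % limit

# U+1F600 does not fit a single Type 42 font, so the text string would
# switch to subset 1 and emit code 0xF600 there.
assert subset_and_code("\U0001F600", TYPE42_LIMIT) == (1, 0xF600)
```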
For Type 3 fonts, add a `ToUnicode` mapping (which was added in PDF 1.2), and for Type 42 fonts, correct the Unicode encoding, which should be UTF-16BE, not UCS2.
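To see why UCS-2 is not enough (an illustrative sketch, not the PR's code): a non-BMP character needs a surrogate pair in the ToUnicode CMap, which UTF-16BE produces and UCS-2 cannot express:

```python
# Sketch only: ToUnicode entries should carry UTF-16BE bytes so non-BMP
# characters round-trip as surrogate pairs when text is copied.
def to_unicode_hex(ch: str) -> str:
    return ch.encode("utf-16-be").hex().upper()

assert to_unicode_hex("A") == "0041"               # BMP: same as UCS-2
assert to_unicode_hex("\U0001F600") == "D83DDE00"  # non-BMP: surrogate pair
```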
These characters are outside the BMP and should test subset splitting for type 42 output in PDF.
This is great and would also allow getting rid of `_get_pdf_charprocs`. I'll try to have a look at #30335 to start...

The first two commits (the loop merge and the Type3 encoding change) seem independent from the rest (even from the switch to glyph index tracking) and could be merged first via a separate PR? (I can probably approve them right away.)

I split the type3 encoding to #30520, but the loop merge has conflicts with the glyph index change.
PR summary
For character codes outside the embedded font limits (256 for type 3 and 65536 for type 42), we output them as XObjects instead of using text commands. But there is nothing in the PDF spec that requires any specific encoding like this. Since we now support subsetting all fonts before embedding, split each font into groups based on the maximum character code (e.g., 256-entry groups for type 3), then switch text strings to a different font subset and re-map character codes to it when necessary.

This means all text is true text (albeit with some strange encoding), and we no longer need any XObjects for glyphs. For users of non-English text, this means it will become selectable and copyable again.
There are 3 steps to achieve this change:
- `CharacterTracker` (a rough sketch follows below): this class takes care of splitting characters into subsets that fit the desired PDF font type limits.
- `ToUnicode`: write this dictionary for the subset font. We already did this for type 42 fonts, but the implementation was incorrect as it didn't correctly handle non-BMP characters. For type 3, support was added in PDF 1.2, but we produce 1.4; there is a fallback to the glyph names, but it is inconsistent and probably depends on the original font having the right names.

In the future, we may wish to extend the implementation in `CharacterTracker` to "compress" the character map it produces (i.e., if you use 255 characters, each from a different 256-sized block, with type 3 you get 255 fonts, but we could compress that to a single font). I tried to avoid hard-coding any assumptions that the mapping is block-by-block, but it is possible that something slipped through, so I do not want to spend too much time on that right now.
Formerly, with `multi_font_type3.pdf` (after adding the emoji to the test), copying the text in evince would produce:

[type 3 copied-text sample not shown]

and with `multi_font_type42.pdf`:

[type 42 copied-text sample not shown]

and now we get for both type 3 and 42:

[new copied-text sample not shown]
Note how in the third line for type 3:

- `<>` are inverted exclamation/question marks `¡¿`
- `\` is a curly opening double quote `“`
- `^`, underscore `_`, and tilde `~` are (circumflex, dot, tilde) accents/smaller glyphs `ˆ˙˜`
- `{}` are em-dash and curly quotes `—˝`
- `|` is en-dash `–`
Everything from the seventh to second-last line is missing in type 3 since it's outside of the 256 limit, and all the emoji are missing from type 42 since that's outside the 65536 limit.
This depends on #30335.