Add os.readinto API for reading data into a caller provided buffer #129205


Closed
1 of 6 tasks
cmaloney opened this issue Jan 22, 2025 · 15 comments
Labels
extension-modules (C modules in the Modules dir), type-feature (A feature request or enhancement)

Comments

@cmaloney
Contributor

cmaloney commented Jan 22, 2025

Feature or enhancement

Proposal:

Code reading data in pure Python tends to create a buffer variable, call os.read() (which returns a separate, newly allocated buffer of data), then copy/append that data onto the pre-allocated buffer [0]. That creates unnecessary extra buffer objects as well as unnecessary copies. Provide os.readinto for directly filling a Buffer Protocol object.

os.readinto should closely mirror _Py_read, which underlies os.read, in order to get the same retry behavior and well-tested cross-platform support.

Move simple cases that use os.read (ex. [0]) to the new API when it makes code simpler and more efficient. Potentially adding readinto to more readable/writeable file-like proxy objects, or to objects which transform the data (ex. Lib/_compression), is out of scope for this issue.
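A minimal sketch of the proposed call shape (illustrative only; assumes the signature mirrors os.read by taking the fd first and returning the number of bytes read):

import os

r, w = os.pipe()
os.write(w, b"hello")
buf = bytearray(1024)
n = os.readinto(r, buf)  # fills buf in place; returns bytes read (0 at EOF)
assert buf[:n] == b"hello"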

[0]

cpython/Lib/subprocess.py

Lines 1914 to 1921 in 298dda5

# Wait for exec to fail or succeed; possibly raising an
# exception (limited in size)
errpipe_data = bytearray()
while True:
    part = os.read(errpipe_read, 50000)
    errpipe_data += part
    if not part or len(errpipe_data) > 50000:
        break

cpython/Lib/multiprocessing/forkserver.py

def read_signed(fd):
    data = b''
    length = SIGNED_STRUCT.size
    while len(data) < length:
        s = os.read(fd, length - len(data))
        if not s:
            raise EOFError('unexpected EOF')
        data += s
    return SIGNED_STRUCT.unpack(data)[0]

cpython/Lib/_pyio.py

Lines 1695 to 1701 in 298dda5

def readinto(self, b):
    """Same as RawIOBase.readinto()."""
    m = memoryview(b).cast('B')
    data = self.read(len(m))
    n = len(data)
    m[:n] = data
    return n
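For comparison, a sketch of how a loop like read_signed above could look with the proposed os.readinto (my illustration, not code from the issue; assumes the same module context providing os and SIGNED_STRUCT):

def read_signed(fd):
    buf = bytearray(SIGNED_STRUCT.size)
    view = memoryview(buf)
    read = 0
    while read < len(buf):
        n = os.readinto(fd, view[read:])
        if n == 0:
            raise EOFError('unexpected EOF')
        read += n
    return SIGNED_STRUCT.unpack(buf)[0]

This fills one pre-sized buffer in place rather than allocating and concatenating a bytes object per read.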

os.read loops to migrate:

- Well contained os.read loops
- os.read loop interleaved with other code

Has this already been discussed elsewhere?

No response given

Links to previous discussion of this feature:

#129005 (comment)

Linked PRs

@cmaloney cmaloney added the type-feature A feature request or enhancement label Jan 22, 2025
@vstinner
Member

Do you want to work on a PR? If not, I can if you prefer.

@cmaloney
Contributor Author

Almost done with one adding os.readinto + tests :). Just trying to make sure I port/replicate the right os.read tests (but not all; _Py_read is used by both and handles most cases). It would definitely be nice to both work on + review the os.read -> os.readinto migrations as appropriate if you're up for that (there's quite a bit in multiprocessing, subprocess, ...).

cmaloney added a commit to cmaloney/cpython that referenced this issue Jan 23, 2025
Add a new OS api which will read data directly into a caller provided
writeable buffer protocol object.
@picnixz picnixz added the extension-modules C modules in the Modules dir label Jan 23, 2025
@bluetech

Just curious, how would you rewrite your first example using os.readinto?

@cmaloney
Contributor Author

@bluetech Ideally, to me, they'd move to file.read() / the io file-object stack, which would give the desired "read until EOF or max_bytes is reached efficiently" behavior. That is already efficient in a lot of cases, but can't currently be used everywhere: a mixture of some overhead that would need to be reduced to match performance (locks, buffer allocation, etc.) plus the relative age of different parts of the codebase (io came in Python 3.0; many other Python modules are older). I would like that case to be a BufferedIO with a 0-byte buffer doing a .read() with a "max_size" parameter, which needs a few new features on BufferedIO to handle the use case. Currently, though, it's open coded.

Migrating cases like that should happen, and I suspect what's simplest/cleanest will evolve with code review. My prototype has looked something like:

errpipe_data = bytearray(50_000)
bytes_read = 0
while bytes_read < 50_000:
    count = os.readinto(errpipe_read, memoryview(errpipe_data)[bytes_read:])
    if count == 0:
        break
    bytes_read += count
del errpipe_data[bytes_read:]  # Remove excess bytes

There are some behavior differences between that and the code as implemented today (today, after reading 49_999 bytes, a read could return another 50_000 bytes, resulting in 99_999 bytes in errpipe_data; my prototype never goes past 50_000 bytes).

Not sure if it's good / needed to include the -1 case (and that's probably something I should add a test around). Returning 0, the number of bytes read, or throwing an exception would definitely simplify call-site handling a bit...

I also like doing the same thing but with a match instead of the if.
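A sketch of that match-based variant, reusing the names from the prototype above (my illustration, not code from the thread):

while bytes_read < 50_000:
    match os.readinto(errpipe_read, memoryview(errpipe_data)[bytes_read:]):
        case 0:  # EOF
            break
        case count:
            bytes_read += count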

@bluetech

There are some behavior differences between that and the code as implemented today (today, after reading 49_999 bytes, a read could return another 50_000 bytes, resulting in 99_999 bytes in errpipe_data; my prototype never goes past 50_000 bytes).

Side note: it does look like the original code was meant to be len(errpipe_data) >= 50000 instead of >, but I suppose it doesn't matter much.

vstinner added a commit that referenced this issue Jan 26, 2025
…ded buffer (#129211)

Add a new OS API which will read data directly into a caller provided
writeable buffer protocol object.

Co-authored-by: Bénédikt Tran <10796600+picnixz@users.noreply.github.com>
Co-authored-by: Victor Stinner <vstinner@python.org>
vstinner added a commit to vstinner/cpython that referenced this issue Jan 26, 2025
* Use f-string
* Fix grammar: replace 'datas' with 'data' (and replace 'data' with
  'item').
* Remove unused variables: 'pid' and 'old_mask'.
@cmaloney
Contributor Author

cmaloney commented Jan 26, 2025

os.read loops to migrate

I added a list to the base issue of os.read loops I think could be migrated. Comment if there's one you want to claim, and/or just cc me (@cmaloney) in the PR (unfortunately I don't have access to the sub-task lists preview which the audit thread safety issue uses).

The loops interleaved with other code/classes may not be worth migrating / may be difficult to refactor without breaking existing code.

There are a number of places which build "IO like" objects with .read methods; I'm excluding those from this issue. Some cases are likely worth investigating independently, but there is a lot more variation and surrounding context needed to make changes there.

I think there's also some significant code golf around the "shortest way to use os.readinto"; my current variant has the issue of making one more readinto call than needed in the "got all bytes on the first call" case (which is the common fast path...).

vstinner added a commit that referenced this issue Jan 27, 2025
* Use f-string.
* Fix grammar: replace 'datas' with 'data' (and replace 'data' with
  'item').
* Remove unused variables: 'pid' and 'old_mask'.
* Factorize test_read() and test_readinto() code.

Co-authored-by: Cody Maloney <cmaloney@users.noreply.github.com>
cmaloney added a commit to cmaloney/cpython that referenced this issue Jan 31, 2025
Read into a pre-allocated fixed size buffer.

With the previous code, the buffer could actually grow to 100_000 bytes in two
reads (first read returns 50_000, second pass through the loop gets another
50_000), so this does change behavior. I think the fixed length of
50_000 was the intention, though. This buffer is used to pass exception
information from _fork_exec from the child to the parent process.
@gpshead
Member

gpshead commented Feb 2, 2025

The subprocess _execute_child one doesn't need changing, see rationale in the closed PR.

readinto on an existing bytearray has a caveat: instead of a malloc, it does a malloc + bzero, as pre-sized bytearrays don't have an undefined-value concept. So it goes beyond just mapping pages for a buffer to writing all of the pages, no matter how much data is actually being read. When you're rarely/never going to be reading a lot of data, or not reusing the buffer, that is unneeded extra work.
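The observable behavior behind that caveat: a pre-sized bytearray is defined to contain zeros, so its contents cannot be left undefined:

>>> bytearray(4)
bytearray(b'\x00\x00\x00\x00')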

@cmaloney
Contributor Author

cmaloney commented Feb 2, 2025

@gpshead I commented on the closed PR; neither bytearray nor bytes, in the code I've read, does a zeroing pass over its initial allocation. It just does a malloc, with bytearray adding the one caveat of writing a single null byte at the end of the array.

I'll definitely take this as a general note not to migrate loops unless there's a measurable / significant performance change.

@GalaxySnail
Contributor

What about adding an os.preadinto() for pread? Currently os.pread() doesn't support pre-allocated buffers either.
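For reference, a sketch of what code has to do today without such an API: call os.pread() and copy the result into the caller's buffer (preadinto here is a hypothetical name, for illustration):

import os

def preadinto(fd, buffer, offset):
    # Emulation: os.pread allocates a new bytes object, which we then copy in.
    data = os.pread(fd, len(buffer), offset)
    n = len(data)
    buffer[:n] = data
    return n

A real os.preadinto() could skip the intermediate bytes object entirely.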

@cmaloney
Contributor Author

cmaloney commented Feb 7, 2025

@GalaxySnail could you share the library / code you have that wants to be more efficient? I'm thinking through multiple options and want to make sure they would work here. We can always add more APIs, but each one is also more maintenance burden (more code to compile, tests to run on each build, etc.); if it's a big improvement for shipping code, it can be worth it. A separate, bigger-scale project I'm looking at is more general usage of background/async I/O APIs (io_uring and friends). One of the paths there would be making those available, and rather than encouraging one refactor now (os.pread -> os.preadinto) and potentially another later (os.preadinto -> more async), I'd rather make sure that os.pread -> more async will be a straightforward and good win.

@GalaxySnail
Contributor

Sadly I don't have any real-world code that wants to be more efficient. I'm not suggesting optimizing the stdlib with pread; it just feels a bit strange that we have os.readinto() but not os.preadinto(). If we had it, third-party code could easily use it. Compared to the implementation and tests for os.readv() and os.preadv(), it shouldn't be too difficult to implement and test os.preadinto(), especially since pread is not available on Windows.

A separate bigger scale project I'm looking at is more general usage of background/async I/O APIS (io_uring and friends)

That sounds exciting!

@cmaloney
Contributor Author

Rather than directly migrating loops, I have been experimenting with a BytesIO._readfrom(file, /, *, estimate=None, limit=None) that efficiently encapsulates the buffer resizing as well as the read loop, with common code for the "estimated size" and "size limit" features. It can be used to implement FileIO.readall by calling _readfrom with an estimate, with minimal perf change (this is in the PR). In general I think "if there is a read loop, it should be faster and simpler".

Draft PR: #130098

The _pyio implementation supports three "shapes" of IO objects: a file descriptor int, objects with a .readinto member, and objects with a .read member. If that looks like a reasonable approach, I'd likely introduce it as an internal method BytesIO._readfrom() and migrate cases over (with perf tests to make sure things don't regress).
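A rough pure-Python sketch of the file-descriptor shape of such a loop (my illustration of the idea under the signature above, not the draft PR's code):

import os

def _readfrom_fd(fd, *, estimate=None, limit=None):
    # Grow a bytearray geometrically, filling it in place with os.readinto.
    buf = bytearray(estimate or 8192)
    pos = 0
    while limit is None or pos < limit:
        if pos == len(buf):
            buf += b'\0' * len(buf)  # buffer full: double it
        end = len(buf) if limit is None else min(len(buf), limit)
        n = os.readinto(fd, memoryview(buf)[pos:end])
        if n == 0:  # EOF
            break
        pos += n
    del buf[pos:]  # trim the unread tail
    return bytes(buf)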

In the C implementation I included an optimization to avoid a heap allocation by using 1KB of stack space in the estimate=0 case instead. Not sure that's worth the complexity and cost (if it gets used, it needs an extra copy compared to just using a bytes object; and in a warmed-up interpreter a 1KB PyBytes feels likely to be quickly available/allocated).

The CPython codebase has a common pattern of building a list of I/O chunks, then "join"ing them together at the end of the loop. I think _readfrom makes a tradeoff in that case: as long as a resize infrequently copies (i.e., not lots of other heap objects being allocated), it should be faster than that single large copy at the end. I haven't run full performance numbers though.
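The chunk-list pattern being referred to, as a generic sketch (not a specific stdlib call site; assumes an open fd):

chunks = []
while True:
    chunk = os.read(fd, 8192)
    if not chunk:  # EOF
        break
    chunks.append(chunk)
data = b''.join(chunks)  # one large copy at the end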

For the broader ecosystem, a public API could potentially help the cases in Python libraries where there is an fd but users today avoid open() / the standard Python I/O stack and hand-write a read loop to fill a buffer. This issue already adds a couple of new public APIs though, and it feels likely that moving + perf testing individual migrations inside CPython might tweak the API a bit, so my leaning is to keep it an internal implementation detail for now.

Thoughts?
cc: @gpshead, @vstinner

@vstinner
Member

os.readinto() was added, so I suggest closing this issue.

Further optimization work should be done in new, separate issues.

@cmaloney
Contributor Author

I'm good with closing this, doing other optimizations as small separate projects where they make sense.

@vstinner
Member

Adding os.readinto() is a nice enhancement, thanks @cmaloney!
