torch.cuda.get_allocator_backend#

torch.cuda.get_allocator_backend()[source]#

Return a string describing the active allocator backend as set by the PYTORCH_CUDA_ALLOC_CONF environment variable. Currently available backends are native (PyTorch's native caching allocator) and cudaMallocAsync (CUDA's built-in asynchronous allocator).

Note

See Memory management for details on choosing the allocator backend.

Return type

str
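
A minimal sketch of querying the active backend, guarded so it also runs on machines without CUDA (the guard and the printed fallback are illustrative, not part of the API):

```python
import torch

if torch.cuda.is_available():
    # Returns "native" or "cudaMallocAsync", depending on
    # the PYTORCH_CUDA_ALLOC_CONF backend setting.
    backend = torch.cuda.get_allocator_backend()
    print(f"Active CUDA allocator backend: {backend}")
else:
    backend = None
    print("CUDA not available; no allocator backend to report.")
```

To select a backend, set the environment variable before the process starts (it is read once at CUDA initialization), e.g. PYTORCH_CUDA_ALLOC_CONF=backend:cudaMallocAsync.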