@@ -40,7 +40,7 @@
  * mapping's backing &drm_gem_object buffers.
  *
  * &drm_gem_object buffers maintain a list of &drm_gpuva objects representing
- * all existent GPU VA mappings using this &drm_gem_object as backing buffer.
+ * all existing GPU VA mappings using this &drm_gem_object as backing buffer.
  *
  * GPU VAs can be flagged as sparse, such that drivers may use GPU VAs to also
  * keep track of sparse PTEs in order to support Vulkan 'Sparse Resources'.

@@ -72,7 +72,7 @@
  * but it can also be a 'dummy' object, which can be allocated with
  * drm_gpuvm_resv_object_alloc().
  *
- * In order to connect a struct drm_gpuva its backing &drm_gem_object each
+ * In order to connect a struct drm_gpuva to its backing &drm_gem_object each
  * &drm_gem_object maintains a list of &drm_gpuvm_bo structures, and each
  * &drm_gpuvm_bo contains a list of &drm_gpuva structures.
  *

@@ -81,7 +81,7 @@
  * This is ensured by the API through drm_gpuvm_bo_obtain() and
  * drm_gpuvm_bo_obtain_prealloc() which first look into the corresponding
  * &drm_gem_object list of &drm_gpuvm_bos for an existing instance of this
- * particular combination. If not existent a new instance is created and linked
+ * particular combination. If not present, a new instance is created and linked
  * to the &drm_gem_object.
  *
  * &drm_gpuvm_bo structures, since unique for a given &drm_gpuvm, are also used

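The lookup-or-create behaviour described above boils down to the following pattern; a minimal sketch, assuming the GEM object's dma-resv lock is held as required and that the &drm_gpuva has already been allocated and inserted, with driver_link_gpuva() being a hypothetical name::

    int driver_link_gpuva(struct drm_gpuvm *gpuvm,
                          struct drm_gem_object *obj,
                          struct drm_gpuva *va)
    {
            struct drm_gpuvm_bo *vm_bo;

            /* Returns the existing instance for this (gpuvm, obj)
             * combination, or creates a new one and links it to @obj;
             * takes a reference either way.
             */
            vm_bo = drm_gpuvm_bo_obtain(gpuvm, obj);
            if (IS_ERR(vm_bo))
                    return PTR_ERR(vm_bo);

            /* Attach the mapping to the vm_bo's list of &drm_gpuva;
             * this takes its own vm_bo reference.
             */
            drm_gpuva_link(va, vm_bo);

            /* Drop the reference taken by drm_gpuvm_bo_obtain(). */
            drm_gpuvm_bo_put(vm_bo);

            return 0;
    }
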
@@ -108,7 +108,7 @@
  * sequence of operations to satisfy a given map or unmap request.
  *
  * Therefore the DRM GPU VA manager provides an algorithm implementing splitting
- * and merging of existent GPU VA mappings with the ones that are requested to
+ * and merging of existing GPU VA mappings with the ones that are requested to
  * be mapped or unmapped. This feature is required by the Vulkan API to
  * implement Vulkan 'Sparse Memory Bindings' - drivers UAPIs often refer to this
  * as VM BIND.

@@ -119,7 +119,7 @@
  * execute in order to integrate the new mapping cleanly into the current state
  * of the GPU VA space.
  *
- * Depending on how the new GPU VA mapping intersects with the existent mappings
+ * Depending on how the new GPU VA mapping intersects with the existing mappings
  * of the GPU VA space the &drm_gpuvm_ops callbacks contain an arbitrary amount
  * of unmap operations, a maximum of two remap operations and a single map
  * operation. The caller might receive no callback at all if no operation is

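For orientation, the callback wiring this refers to looks like the snippet below; a sketch mirroring the example shown later in this document, with the driver_* names being placeholders::

    static const struct drm_gpuvm_ops driver_gpuvm_ops = {
            .sm_step_map = driver_gpuva_map,
            .sm_step_remap = driver_gpuva_remap,
            .sm_step_unmap = driver_gpuva_unmap,
    };
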
@@ -139,16 +139,16 @@
  * one unmap operation and one or two map operations, such that drivers can
  * derive the page table update delta accordingly.
  *
- * Note that there can't be more than two existent mappings to split up, one at
+ * Note that there can't be more than two existing mappings to split up, one at
  * the beginning and one at the end of the new mapping, hence there is a
  * maximum of two remap operations.
  *
  * Analogous to drm_gpuvm_sm_map() drm_gpuvm_sm_unmap() uses &drm_gpuvm_ops to
  * call back into the driver in order to unmap a range of GPU VA space. The
- * logic behind this function is way simpler though: For all existent mappings
+ * logic behind this function is way simpler though: For all existing mappings
  * enclosed by the given range unmap operations are created. For mappings which
- * are only partically located within the given range, remap operations are
- * created such that those mappings are split up and re-mapped partically.
+ * are only partially located within the given range, remap operations are
+ * created such that those mappings are split up and re-mapped partially.
  *
  * As an alternative to drm_gpuvm_sm_map() and drm_gpuvm_sm_unmap(),
  * drm_gpuvm_sm_map_ops_create() and drm_gpuvm_sm_unmap_ops_create() can be used

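A hedged sketch of driving both paths, assuming the argument lists of the kernel revision this patch applies to; ctx stands in for driver private data passed through to the callbacks::

    /* Split and merge the requested mapping against the current state
     * of the GPU VA space; callbacks receive @ctx as their priv pointer.
     */
    err = drm_gpuvm_sm_map(gpuvm, ctx, req_addr, req_range,
                           obj, req_offset);

    /* Unmap all mappings enclosed by the given range; partially
     * enclosed mappings are split via remap steps.
     */
    err = drm_gpuvm_sm_unmap(gpuvm, ctx, req_addr, req_range);
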
@@ -168,7 +168,7 @@
  * provided helper functions drm_gpuva_map(), drm_gpuva_remap() and
  * drm_gpuva_unmap() instead.
  *
- * The following diagram depicts the basic relationships of existent GPU VA
+ * The following diagram depicts the basic relationships of existing GPU VA
  * mappings, a newly requested mapping and the resulting mappings as implemented
  * by drm_gpuvm_sm_map() - it doesn't cover any arbitrary combinations of these.
  *

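As a minimal sketch of such a callback, an unmap step using the drm_gpuva_unmap() helper; it assumes the &drm_gpuva objects were kmalloc'ed by the driver and that unlinking is valid in this context::

    static int driver_gpuva_unmap(struct drm_gpuva_op *op, void *priv)
    {
            struct drm_gpuva *va = op->unmap.va;

            /* Removes the &drm_gpuva from the GPU VA space tree. */
            drm_gpuva_unmap(&op->unmap);

            /* Removes the &drm_gpuva from its &drm_gpuvm_bo list. */
            drm_gpuva_unlink(va);
            kfree(va);

            return 0;
    }
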
@@ -218,7 +218,7 @@
  *
  *
  * 4) Existent mapping is a left aligned subset of the requested one, hence
- * replace the existent one.
+ * replace the existing one.
  *
  * ::
  *

@@ -236,9 +236,9 @@
  * and/or non-contiguous BO offset.
  *
  *
- * 5) Requested mapping's range is a left aligned subset of the existent one,
+ * 5) Requested mapping's range is a left aligned subset of the existing one,
  * but backed by a different BO. Hence, map the requested mapping and split
- * the existent one adjusting its BO offset.
+ * the existing one adjusting its BO offset.
  *
  * ::
  *

@@ -271,9 +271,9 @@
  * new: |-----|-----| (a.bo_offset=n, a'.bo_offset=n+1)
  *
  *
- * 7) Requested mapping's range is a right aligned subset of the existent one,
+ * 7) Requested mapping's range is a right aligned subset of the existing one,
  * but backed by a different BO. Hence, map the requested mapping and split
- * the existent one, without adjusting the BO offset.
+ * the existing one, without adjusting the BO offset.
  *
  * ::
  *

@@ -304,7 +304,7 @@
  *
  * 9) Existent mapping is overlapped at the end by the requested mapping backed
  * by a different BO. Hence, map the requested mapping and split up the
- * existent one, without adjusting the BO offset.
+ * existing one, without adjusting the BO offset.
  *
  * ::
  *

@@ -334,9 +334,9 @@
  * new: |-----|-----------| (a'.bo_offset=n, a.bo_offset=n+1)
  *
  *
- * 11) Requested mapping's range is a centered subset of the existent one
+ * 11) Requested mapping's range is a centered subset of the existing one
  * having a different backing BO. Hence, map the requested mapping and split
- * up the existent one in two mappings, adjusting the BO offset of the right
+ * up the existing one in two mappings, adjusting the BO offset of the right
  * one accordingly.
  *
  * ::

@@ -351,7 +351,7 @@
  * new: |-----|-----|-----| (a.bo_offset=n,b.bo_offset=m,a'.bo_offset=n+2)
  *
  *
- * 12) Requested mapping is a contiguous subset of the existent one. Split it
+ * 12) Requested mapping is a contiguous subset of the existing one. Split it
  * up, but indicate that the backing PTEs could be kept.
  *
  * ::

@@ -367,7 +367,7 @@
  *
  *
  * 13) Existent mapping is a right aligned subset of the requested one, hence
- * replace the existent one.
+ * replace the existing one.
  *
  * ::
  *

@@ -386,7 +386,7 @@
  *
  *
  * 14) Existent mapping is a centered subset of the requested one, hence
- * replace the existent one.
+ * replace the existing one.
  *
  * ::
  *

@@ -406,7 +406,7 @@
  *
  * 15) Existent mappings is overlapped at the beginning by the requested mapping
  * backed by a different BO. Hence, map the requested mapping and split up
- * the existent one, adjusting its BO offset accordingly.
+ * the existing one, adjusting its BO offset accordingly.
  *
  * ::
  *

@@ -469,8 +469,8 @@
  * make use of them.
  *
  * The below code is strictly limited to illustrate the generic usage pattern.
- * To maintain simplicitly, it doesn't make use of any abstractions for common
- * code, different (asyncronous) stages with fence signalling critical paths,
+ * To maintain simplicity, it doesn't make use of any abstractions for common
+ * code, different (asynchronous) stages with fence signalling critical paths,
  * any other helpers or error handling in terms of freeing memory and dropping
  * previously taken locks.
  *

@@ -479,7 +479,7 @@
  * // Allocates a new &drm_gpuva.
  * struct drm_gpuva * driver_gpuva_alloc(void);
  *
- * // Typically drivers would embedd the &drm_gpuvm and &drm_gpuva
+ * // Typically drivers would embed the &drm_gpuvm and &drm_gpuva
  * // structure in individual driver structures and lock the dma-resv with
  * // drm_exec or similar helpers.
  * int driver_mapping_create(struct drm_gpuvm *gpuvm,

@@ -582,7 +582,7 @@
  * .sm_step_unmap = driver_gpuva_unmap,
  * };
  *
- * // Typically drivers would embedd the &drm_gpuvm and &drm_gpuva
+ * // Typically drivers would embed the &drm_gpuvm and &drm_gpuva
  * // structure in individual driver structures and lock the dma-resv with
  * // drm_exec or similar helpers.
  * int driver_mapping_create(struct drm_gpuvm *gpuvm,

@@ -680,7 +680,7 @@
  *
  * This helper is here to provide lockless list iteration. Lockless as in, the
  * iterator releases the lock immediately after picking the first element from
- * the list, so list insertion deletion can happen concurrently.
+ * the list, so list insertion and deletion can happen concurrently.
  *
  * Elements popped from the original list are kept in a local list, so removal
  * and is_empty checks can still happen while we're iterating the list.

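The underlying pattern, sketched with deliberately generic names; struct item, orig, local, lock and process() are placeholders and not part of the GPUVM API::

    struct item *it;

    spin_lock(&lock);
    while ((it = list_first_entry_or_null(&orig, struct item, head))) {
            /* Park the element on a local list, so removal and is_empty
             * checks on the original list stay valid concurrently.
             */
            list_move_tail(&it->head, &local);
            spin_unlock(&lock);

            process(it); /* runs without the lock held */

            spin_lock(&lock);
    }
    spin_unlock(&lock);
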
@@ -1160,7 +1160,7 @@ drm_gpuvm_prepare_objects_locked(struct drm_gpuvm *gpuvm,
 }

 /**
- * drm_gpuvm_prepare_objects() - prepare all assoiciated BOs
+ * drm_gpuvm_prepare_objects() - prepare all associated BOs
  * @gpuvm: the &drm_gpuvm
  * @exec: the &drm_exec locking context
  * @num_fences: the amount of &dma_fences to reserve

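Typical usage inside a drm_exec retry loop; a sketch assuming interruptible waits, a single fence slot and a hypothetical driver_prepare_vm() wrapper::

    int driver_prepare_vm(struct drm_gpuvm *gpuvm)
    {
            struct drm_exec exec;
            int ret;

            drm_exec_init(&exec, DRM_EXEC_INTERRUPTIBLE_WAIT, 0);
            drm_exec_until_all_locked(&exec) {
                    /* Locks the dma-resv of every &drm_gem_object the VM
                     * has mappings of and reserves one fence slot each.
                     */
                    ret = drm_gpuvm_prepare_objects(gpuvm, &exec, 1);
                    drm_exec_retry_on_contention(&exec);
                    if (ret)
                            goto out;
            }

            /* ... use the locked objects, add fences ... */

    out:
            drm_exec_fini(&exec);
            return ret;
    }
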
@@ -1230,13 +1230,13 @@ drm_gpuvm_prepare_range(struct drm_gpuvm *gpuvm, struct drm_exec *exec,
 EXPORT_SYMBOL_GPL(drm_gpuvm_prepare_range);

 /**
- * drm_gpuvm_exec_lock() - lock all dma-resv of all assoiciated BOs
+ * drm_gpuvm_exec_lock() - lock all dma-resv of all associated BOs
  * @vm_exec: the &drm_gpuvm_exec wrapper
  *
  * Acquires all dma-resv locks of all &drm_gem_objects the given
  * &drm_gpuvm contains mappings of.
  *
- * Addionally, when calling this function with struct drm_gpuvm_exec::extra
+ * Additionally, when calling this function with struct drm_gpuvm_exec::extra
  * being set the driver receives the given @fn callback to lock additional
  * dma-resv in the context of the &drm_gpuvm_exec instance. Typically, drivers
  * would call drm_exec_prepare_obj() from within this callback.

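A sketch of the wrapper in use; driver_lock_extra() and driver_priv are hypothetical, and the extra callback may simply be left unset::

    struct drm_gpuvm_exec vm_exec = {
            .vm = gpuvm,
            .flags = DRM_EXEC_INTERRUPTIBLE_WAIT,
            .num_fences = 1,
            .extra.fn = driver_lock_extra,
            .extra.priv = driver_priv,
    };
    int ret;

    ret = drm_gpuvm_exec_lock(&vm_exec);
    if (ret)
            return ret;

    /* ... submit job, add fences ... */

    drm_gpuvm_exec_unlock(&vm_exec);
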
@@ -1293,7 +1293,7 @@ fn_lock_array(struct drm_gpuvm_exec *vm_exec)
 }

 /**
- * drm_gpuvm_exec_lock_array() - lock all dma-resv of all assoiciated BOs
+ * drm_gpuvm_exec_lock_array() - lock all dma-resv of all associated BOs
  * @vm_exec: the &drm_gpuvm_exec wrapper
  * @objs: additional &drm_gem_objects to lock
  * @num_objs: the number of additional &drm_gem_objects to lock

@@ -1588,7 +1588,7 @@ drm_gpuvm_bo_find(struct drm_gpuvm *gpuvm,
 EXPORT_SYMBOL_GPL(drm_gpuvm_bo_find);

 /**
- * drm_gpuvm_bo_obtain() - obtains and instance of the &drm_gpuvm_bo for the
+ * drm_gpuvm_bo_obtain() - obtains an instance of the &drm_gpuvm_bo for the
  * given &drm_gpuvm and &drm_gem_object
  * @gpuvm: The &drm_gpuvm the @obj is mapped in.
  * @obj: The &drm_gem_object being mapped in the @gpuvm.

@@ -1624,7 +1624,7 @@ drm_gpuvm_bo_obtain(struct drm_gpuvm *gpuvm,
 EXPORT_SYMBOL_GPL(drm_gpuvm_bo_obtain);

 /**
- * drm_gpuvm_bo_obtain_prealloc() - obtains and instance of the &drm_gpuvm_bo
+ * drm_gpuvm_bo_obtain_prealloc() - obtains an instance of the &drm_gpuvm_bo
  * for the given &drm_gpuvm and &drm_gem_object
  * @__vm_bo: A pre-allocated struct drm_gpuvm_bo.
  *

@@ -1688,7 +1688,7 @@ EXPORT_SYMBOL_GPL(drm_gpuvm_bo_extobj_add);
  * @vm_bo: the &drm_gpuvm_bo to add or remove
  * @evict: indicates whether the object is evicted
  *
- * Adds a &drm_gpuvm_bo to or removes it from the &drm_gpuvms evicted list.
+ * Adds a &drm_gpuvm_bo to or removes it from the &drm_gpuvm's evicted list.
  */
 void
 drm_gpuvm_bo_evict(struct drm_gpuvm_bo *vm_bo, bool evict)

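For instance, a driver's eviction path might flip this state for every VM mapping a given BO; a sketch, assuming the locking rules documented for drm_gpuvm_bo_evict() are satisfied::

    struct drm_gpuvm_bo *vm_bo;

    /* Mark all mappings of @obj as evicted, so the next job submission
     * revalidates them.
     */
    drm_gem_for_each_gpuvm_bo(vm_bo, obj)
            drm_gpuvm_bo_evict(vm_bo, true);
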
@@ -1790,7 +1790,7 @@ __drm_gpuva_remove(struct drm_gpuva *va)
 * drm_gpuva_remove() - remove a &drm_gpuva
 * @va: the &drm_gpuva to remove
 *
- * This removes the given &va from the underlaying tree.
+ * This removes the given &va from the underlying tree.
 *
 * It is safe to use this function using the safe versions of iterating the GPU
 * VA space, such as drm_gpuvm_for_each_va_safe() and

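For example, tearing down an entire VA space with the safe iterator; a sketch assuming driver-owned, separately allocated &drm_gpuva objects::

    struct drm_gpuva *va, *next;

    drm_gpuvm_for_each_va_safe(va, next, gpuvm) {
            drm_gpuva_remove(va);   /* drops @va from the VA space tree */
            kfree(va);              /* hypothetical; storage is driver-owned */
    }
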
@@ -2358,7 +2358,7 @@ EXPORT_SYMBOL_GPL(drm_gpuvm_sm_map);
  *
  * This function iterates the given range of the GPU VA space. It utilizes the
  * &drm_gpuvm_ops to call back into the driver providing the operations to
- * unmap and, if required, split existent mappings.
+ * unmap and, if required, split existing mappings.
  *
  * Drivers may use these callbacks to update the GPU VA space right away within
  * the callback. In case the driver decides to copy and store the operations for

@@ -2475,7 +2475,7 @@ static const struct drm_gpuvm_ops lock_ops = {
  * required without the earlier DRIVER_OP_MAP. This is safe because we've
  * already locked the GEM object in the earlier DRIVER_OP_MAP step.
  *
- * Returns: 0 on success or a negative error codec
+ * Returns: 0 on success or a negative error code
  */
 int
 drm_gpuvm_sm_map_exec_lock(struct drm_gpuvm *gpuvm,

@@ -2619,12 +2619,12 @@ static const struct drm_gpuvm_ops gpuvm_list_ops = {
  * @req_offset: the offset within the &drm_gem_object
  *
  * This function creates a list of operations to perform splitting and merging
- * of existent mapping(s) with the newly requested one.
+ * of existing mapping(s) with the newly requested one.
  *
  * The list can be iterated with &drm_gpuva_for_each_op and must be processed
  * in the given order. It can contain map, unmap and remap operations, but it
  * also can be empty if no operation is required, e.g. if the requested mapping
- * already exists is the exact same way.
+ * already exists in the exact same way.
  *
  * There can be an arbitrary amount of unmap operations, a maximum of two remap
  * operations and a single map operation. The latter one represents the original

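Putting it together, a hedged sketch of the ops-list flow; the signature matches this kernel revision, and the case bodies are left to the driver::

    struct drm_gpuva_ops *ops;
    struct drm_gpuva_op *op;

    ops = drm_gpuvm_sm_map_ops_create(gpuvm, req_addr, req_range,
                                      obj, req_offset);
    if (IS_ERR(ops))
            return PTR_ERR(ops);

    /* Process the operations in the given order. */
    drm_gpuva_for_each_op(op, ops) {
            switch (op->op) {
            case DRM_GPUVA_OP_MAP:
                    /* create the new mapping described by op->map */
                    break;
            case DRM_GPUVA_OP_REMAP:
                    /* split per op->remap.prev / op->remap.next */
                    break;
            case DRM_GPUVA_OP_UNMAP:
                    /* tear down op->unmap.va */
                    break;
            default:
                    break;
            }
    }

    drm_gpuva_ops_free(gpuvm, ops);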