Provided by: nvidia-cuda-dev_9.1.85-3ubuntu1_amd64

NAME

       Unified Addressing - unified addressing functions of the low-level CUDA driver API
       (cuda.h)

   Functions
       CUresult cuMemAdvise (CUdeviceptr devPtr, size_t count, CUmem_advise advice, CUdevice
           device)
           Advise about the usage of a given memory range.
       CUresult cuMemPrefetchAsync (CUdeviceptr devPtr, size_t count, CUdevice dstDevice,
           CUstream hStream)
           Prefetches memory to the specified destination device.
       CUresult cuMemRangeGetAttribute (void *data, size_t dataSize, CUmem_range_attribute
           attribute, CUdeviceptr devPtr, size_t count)
           Query an attribute of a given memory range.
       CUresult cuMemRangeGetAttributes (void **data, size_t *dataSizes, CUmem_range_attribute
           *attributes, size_t numAttributes, CUdeviceptr devPtr, size_t count)
           Query attributes of a given memory range.
       CUresult cuPointerGetAttribute (void *data, CUpointer_attribute attribute, CUdeviceptr
           ptr)
           Returns information about a pointer.
       CUresult cuPointerGetAttributes (unsigned int numAttributes, CUpointer_attribute
           *attributes, void **data, CUdeviceptr ptr)
           Returns information about a pointer.
       CUresult cuPointerSetAttribute (const void *value, CUpointer_attribute attribute,
           CUdeviceptr ptr)
           Set attributes on a previously allocated memory region.

Detailed Description

       This section describes the unified addressing functions of the low-level CUDA driver
       application programming interface.

Overview

       CUDA devices can share a unified address space with the host. For these devices there is
       no distinction between a device pointer and a host pointer -- the same pointer value may
       be used to access memory from the host program and from a kernel running on the device
       (with exceptions enumerated below).

Supported Platforms

       Whether or not a device supports unified addressing may be queried by calling
       cuDeviceGetAttribute() with the device attribute CU_DEVICE_ATTRIBUTE_UNIFIED_ADDRESSING.

       Unified addressing is automatically enabled in 64-bit processes.
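
       As a minimal illustrative sketch (assuming <cuda.h> is included; device 0 is an
       arbitrary choice and error checking is omitted), support can be queried as follows:

           int unified = 0;
           CUdevice dev;
           cuInit(0);
           cuDeviceGet(&dev, 0);
           cuDeviceGetAttribute(&unified, CU_DEVICE_ATTRIBUTE_UNIFIED_ADDRESSING, dev);
           /* unified is non-zero if device 0 shares a unified address space with the host */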

Looking Up Information from Pointer Values

       It is possible to look up information about the memory which backs a pointer value. For
       instance, one may want to know if a pointer points to host or device memory. As another
       example, in the case of device memory, one may want to know on which CUDA device the
        memory resides. These properties may be queried using the function cuPointerGetAttribute().

       Since pointers are unique, it is not necessary to specify information about the pointers
       passed to the various copy functions in the CUDA API. The function cuMemcpy() may be
       used to perform a copy between two pointers, ignoring whether they point to host or device
       memory (making cuMemcpyHtoD(), cuMemcpyDtoD(), and cuMemcpyDtoH() unnecessary for devices
       supporting unified addressing). For multidimensional copies, the memory type
       CU_MEMORYTYPE_UNIFIED may be used to specify that the CUDA driver should infer the
       location of the pointer from its value.
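
       For example, a sketch of such a query and a direction-agnostic copy might look as
       follows (dstPtr, srcPtr, and numBytes are placeholders for valid allocations and a
       size; error checking omitted):

           CUdeviceptr dstPtr, srcPtr;   /* placeholders: assumed to refer to valid memory */
           size_t numBytes;

           /* Classify a pointer: memType is set to CU_MEMORYTYPE_HOST or CU_MEMORYTYPE_DEVICE. */
           unsigned int memType = 0;
           cuPointerGetAttribute(&memType, CU_POINTER_ATTRIBUTE_MEMORY_TYPE, srcPtr);

           /* One call covers host-to-device, device-to-host, and device-to-device copies;
              the driver infers the location of each pointer from its value. */
           cuMemcpy(dstPtr, srcPtr, numBytes);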

Automatic Mapping of Host Allocated Host Memory

       All host memory allocated in all contexts using cuMemAllocHost() and cuMemHostAlloc() is
       always directly accessible from all contexts on all devices that support unified
       addressing. This is the case regardless of whether or not the flags
       CU_MEMHOSTALLOC_PORTABLE and CU_MEMHOSTALLOC_DEVICEMAP are specified.

       The pointer value through which allocated host memory may be accessed in kernels on all
       devices that support unified addressing is the same as the pointer value through which
       that memory is accessed on the host, so it is not necessary to call
       cuMemHostGetDevicePointer() to get the device pointer for these allocations.

       Note that this is not the case for memory allocated using the flag
       CU_MEMHOSTALLOC_WRITECOMBINED, as discussed below.
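
       A minimal sketch (assuming <cuda.h> and <stdint.h> are included, a context is current
       on a device that supports unified addressing, and numBytes is a placeholder; error
       checking omitted):

           void *h = NULL;
           cuMemAllocHost(&h, numBytes);                 /* page-locked host memory */
           CUdeviceptr d = (CUdeviceptr)(uintptr_t)h;    /* same value is valid on the device */
           /* d may be passed to kernels directly; cuMemHostGetDevicePointer() is unnecessary */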

Automatic Registration of Peer Memory

       Upon enabling direct access from a context that supports unified addressing to another
        peer context that supports unified addressing using cuCtxEnablePeerAccess(), all memory
       allocated in the peer context using cuMemAlloc() and cuMemAllocPitch() will immediately be
       accessible by the current context. The device pointer value through which any peer memory
       may be accessed in the current context is the same pointer value through which that memory
       may be accessed in the peer context.
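
       A sketch of the sequence (ctxB is a hypothetical peer context whose device reports
       peer access support via cuDeviceCanAccessPeer; error checking omitted):

           /* ctxA is current; make ctxB's allocations visible to it. */
           cuCtxEnablePeerAccess(ctxB, 0);
           /* Any CUdeviceptr obtained from cuMemAlloc() or cuMemAllocPitch() in ctxB can
              now be dereferenced by kernels running in ctxA, using the same pointer value. */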

Exceptions, Disjoint Addressing

       Not all memory may be accessed on devices through the same pointer value through which
       it is accessed on the host. The exceptions are host memory registered using
       cuMemHostRegister() and host memory allocated using the flag
       CU_MEMHOSTALLOC_WRITECOMBINED. For these exceptions, there exists a distinct host and
       device address for the memory. The device address is guaranteed to not overlap any valid
       host pointer range and is guaranteed to have the same value across all contexts that
       support unified addressing.

       This device address may be queried using cuMemHostGetDevicePointer() when a context using
       unified addressing is current. Either the host or the unified device pointer value may be
       used to refer to this memory through cuMemcpy() and similar functions using the
       CU_MEMORYTYPE_UNIFIED memory type.
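
       A sketch of querying the distinct device address for registered host memory (numBytes
       is a placeholder; error checking omitted):

           void *h = malloc(numBytes);
           cuMemHostRegister(h, numBytes, CU_MEMHOSTREGISTER_DEVICEMAP);
           CUdeviceptr d;
           cuMemHostGetDevicePointer(&d, h, 0);
           /* h and d refer to the same physical memory but hold different values; either
              may be used with cuMemcpy() and the CU_MEMORYTYPE_UNIFIED memory type. */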

Function Documentation

   CUresult cuMemAdvise (CUdeviceptr devPtr, size_t count, CUmem_advise advice, CUdevice device)
       Advise the Unified Memory subsystem about the usage pattern for the memory range starting
       at devPtr with a size of count bytes. The start address and end address of the memory
       range will be rounded down and rounded up respectively to be aligned to CPU page size
       before the advice is applied. The memory range must refer to managed memory allocated via
       cuMemAllocManaged or declared via __managed__ variables.

       The advice parameter can take the following values:

       • CU_MEM_ADVISE_SET_READ_MOSTLY: This implies that the data is mostly going to be read
         from and only occasionally written to. Any read accesses from any processor to this
         region will create a read-only copy of at least the accessed pages in that processor's
         memory. Additionally, if cuMemPrefetchAsync is called on this region, it will create a
         read-only copy of the data on the destination processor. If any processor writes to this
         region, all copies of the corresponding page will be invalidated except for the one
         where the write occurred. The device argument is ignored for this advice. Note that for
         a page to be read-duplicated, the accessing processor must either be the CPU or a GPU
         that has a non-zero value for the device attribute
         CU_DEVICE_ATTRIBUTE_CONCURRENT_MANAGED_ACCESS. Also, if a context is created on a device
         that does not have the device attribute CU_DEVICE_ATTRIBUTE_CONCURRENT_MANAGED_ACCESS
         set, then read-duplication will not occur until all such contexts are destroyed.

       • CU_MEM_ADVISE_UNSET_READ_MOSTLY: Undoes the effect of CU_MEM_ADVISE_SET_READ_MOSTLY and
         also prevents the Unified Memory driver from attempting heuristic read-duplication on
         the memory range. Any read-duplicated copies of the data will be collapsed into a single
         copy. The location for the collapsed copy will be the preferred location if the page has
         a preferred location and one of the read-duplicated copies was resident at that
         location. Otherwise, the location chosen is arbitrary.

       • CU_MEM_ADVISE_SET_PREFERRED_LOCATION: This advice sets the preferred location for the
         data to be the memory belonging to device. Passing in CU_DEVICE_CPU for device sets the
         preferred location as host memory. If device is a GPU, then it must have a non-zero
         value for the device attribute CU_DEVICE_ATTRIBUTE_CONCURRENT_MANAGED_ACCESS. Setting
         the preferred location does not cause data to migrate to that location immediately.
         Instead, it guides the migration policy when a fault occurs on that memory region. If
         the data is already in its preferred location and the faulting processor can establish a
         mapping without requiring the data to be migrated, then data migration will be avoided.
         On the other hand, if the data is not in its preferred location or if a direct mapping
         cannot be established, then it will be migrated to the processor accessing it. It is
         important to note that setting the preferred location does not prevent data prefetching
         done using cuMemPrefetchAsync. Having a preferred location can override the page thrash
         detection and resolution logic in the Unified Memory driver. Normally, if a page is
          detected to be constantly thrashing between, for example, host and device memory, the page
         may eventually be pinned to host memory by the Unified Memory driver. But if the
         preferred location is set as device memory, then the page will continue to thrash
         indefinitely. If CU_MEM_ADVISE_SET_READ_MOSTLY is also set on this memory region or any
         subset of it, then the policies associated with that advice will override the policies
         of this advice.

       • CU_MEM_ADVISE_UNSET_PREFERRED_LOCATION: Undoes the effect of
         CU_MEM_ADVISE_SET_PREFERRED_LOCATION and changes the preferred location to none.

       • CU_MEM_ADVISE_SET_ACCESSED_BY: This advice implies that the data will be accessed by
         device. Passing in CU_DEVICE_CPU for device will set the advice for the CPU. If device
         is a GPU, then the device attribute CU_DEVICE_ATTRIBUTE_CONCURRENT_MANAGED_ACCESS must
         be non-zero. This advice does not cause data migration and has no impact on the location
         of the data per se. Instead, it causes the data to always be mapped in the specified
         processor's page tables, as long as the location of the data permits a mapping to be
         established. If the data gets migrated for any reason, the mappings are updated
         accordingly. This advice is recommended in scenarios where data locality is not
         important, but avoiding faults is. Consider for example a system containing multiple
         GPUs with peer-to-peer access enabled, where the data located on one GPU is occasionally
         accessed by peer GPUs. In such scenarios, migrating data over to the other GPUs is not
         as important because the accesses are infrequent and the overhead of migration may be
         too high. But preventing faults can still help improve performance, and so having a
         mapping set up in advance is useful. Note that on CPU access of this data, the data may
         be migrated to host memory because the CPU typically cannot access device memory
         directly. Any GPU that had the CU_MEM_ADVISE_SET_ACCESSED_BY flag set for this data will
         now have its mapping updated to point to the page in host memory. If
         CU_MEM_ADVISE_SET_READ_MOSTLY is also set on this memory region or any subset of it,
         then the policies associated with that advice will override the policies of this advice.
         Additionally, if the preferred location of this memory region or any subset of it is
         also device, then the policies associated with CU_MEM_ADVISE_SET_PREFERRED_LOCATION will
         override the policies of this advice.

       • CU_MEM_ADVISE_UNSET_ACCESSED_BY: Undoes the effect of CU_MEM_ADVISE_SET_ACCESSED_BY. Any
         mappings to the data from device may be removed at any time causing accesses to result
         in non-fatal page faults.
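
        The following sketch illustrates a typical sequence (dev is a device obtained via
        cuDeviceGet whose CU_DEVICE_ATTRIBUTE_CONCURRENT_MANAGED_ACCESS attribute is non-zero;
        numBytes is a placeholder; error checking omitted):

            CUdeviceptr p;
            cuMemAllocManaged(&p, numBytes, CU_MEM_ATTACH_GLOBAL);
            /* Read-duplicate the range on each accessing processor. */
            cuMemAdvise(p, numBytes, CU_MEM_ADVISE_SET_READ_MOSTLY, dev); /* device ignored */
            /* Prefer device memory, and keep a CPU mapping to avoid faults on host reads. */
            cuMemAdvise(p, numBytes, CU_MEM_ADVISE_SET_PREFERRED_LOCATION, dev);
            cuMemAdvise(p, numBytes, CU_MEM_ADVISE_SET_ACCESSED_BY, CU_DEVICE_CPU);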

       Parameters:
           devPtr - Pointer to memory to set the advice for
           count - Size in bytes of the memory range
           advice - Advice to be applied for the specified memory range
           device - Device to apply the advice for

       Returns:
           CUDA_SUCCESS, CUDA_ERROR_INVALID_VALUE, CUDA_ERROR_INVALID_DEVICE

       Note:
           Note that this function may also return error codes from previous, asynchronous
           launches.

            This function exhibits asynchronous behavior for most use cases.

            This function uses standard default stream semantics.

       See also:
           cuMemcpy, cuMemcpyPeer, cuMemcpyAsync, cuMemcpy3DPeerAsync, cuMemPrefetchAsync,
           cudaMemAdvise

   CUresult cuMemPrefetchAsync (CUdeviceptr devPtr, size_t count, CUdevice dstDevice, CUstream
       hStream)
       Prefetches memory to the specified destination device. devPtr is the base device pointer
       of the memory to be prefetched and dstDevice is the destination device. count specifies
        the number of bytes to prefetch. hStream is the stream in which the operation is enqueued. The
       memory range must refer to managed memory allocated via cuMemAllocManaged or declared via
       __managed__ variables.

       Passing in CU_DEVICE_CPU for dstDevice will prefetch the data to host memory. If dstDevice
       is a GPU, then the device attribute CU_DEVICE_ATTRIBUTE_CONCURRENT_MANAGED_ACCESS must be
       non-zero. Additionally, hStream must be associated with a device that has a non-zero value
       for the device attribute CU_DEVICE_ATTRIBUTE_CONCURRENT_MANAGED_ACCESS.

       The start address and end address of the memory range will be rounded down and rounded up
       respectively to be aligned to CPU page size before the prefetch operation is enqueued in
       the stream.

       If no physical memory has been allocated for this region, then this memory region will be
       populated and mapped on the destination device. If there's insufficient memory to prefetch
       the desired region, the Unified Memory driver may evict pages from other cuMemAllocManaged
       allocations to host memory in order to make room. Device memory allocated using cuMemAlloc
       or cuArrayCreate will not be evicted.

       By default, any mappings to the previous location of the migrated pages are removed and
        mappings for the new location are only set up on dstDevice. The exact behavior, however, also
       depends on the settings applied to this memory range via cuMemAdvise as described below:

       If CU_MEM_ADVISE_SET_READ_MOSTLY was set on any subset of this memory range, then that
       subset will create a read-only copy of the pages on dstDevice.

       If CU_MEM_ADVISE_SET_PREFERRED_LOCATION was called on any subset of this memory range,
       then the pages will be migrated to dstDevice even if dstDevice is not the preferred
       location of any pages in the memory range.

       If CU_MEM_ADVISE_SET_ACCESSED_BY was called on any subset of this memory range, then
       mappings to those pages from all the appropriate processors are updated to refer to the
       new location if establishing such a mapping is possible. Otherwise, those mappings are
       cleared.

       Note that this API is not required for functionality and only serves to improve
       performance by allowing the application to migrate data to a suitable location before it
       is accessed. Memory accesses to this range are always coherent and are allowed even when
       the data is actively being migrated.

       Note that this function is asynchronous with respect to the host and all work on other
       devices.
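
        A sketch of prefetching a managed allocation to the GPU before kernels run and back to
        the host afterwards (dev and numBytes are placeholders; error checking omitted):

            CUdeviceptr p;
            CUstream s;
            cuStreamCreate(&s, CU_STREAM_DEFAULT);
            cuMemAllocManaged(&p, numBytes, CU_MEM_ATTACH_GLOBAL);
            cuMemPrefetchAsync(p, numBytes, dev, s);           /* migrate to the GPU */
            /* ... launch kernels on s that access p ... */
            cuMemPrefetchAsync(p, numBytes, CU_DEVICE_CPU, s); /* migrate back to the host */
            cuStreamSynchronize(s);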

       Parameters:
           devPtr - Pointer to be prefetched
           count - Size in bytes
           dstDevice - Destination device to prefetch to
           hStream - Stream to enqueue prefetch operation

       Returns:
           CUDA_SUCCESS, CUDA_ERROR_INVALID_VALUE, CUDA_ERROR_INVALID_DEVICE

       Note:
           Note that this function may also return error codes from previous, asynchronous
           launches.

            This function exhibits asynchronous behavior for most use cases.

            This function uses standard default stream semantics.

       See also:
           cuMemcpy, cuMemcpyPeer, cuMemcpyAsync, cuMemcpy3DPeerAsync, cuMemAdvise,
           cudaMemPrefetchAsync

   CUresult cuMemRangeGetAttribute (void * data, size_t dataSize, CUmem_range_attribute
       attribute, CUdeviceptr devPtr, size_t count)
       Query an attribute about the memory range starting at devPtr with a size of count bytes.
       The memory range must refer to managed memory allocated via cuMemAllocManaged or declared
       via __managed__ variables.

       The attribute parameter can take the following values:

       • CU_MEM_RANGE_ATTRIBUTE_READ_MOSTLY: If this attribute is specified, data will be
         interpreted as a 32-bit integer, and dataSize must be 4. The result returned will be 1
         if all pages in the given memory range have read-duplication enabled, or 0 otherwise.

       • CU_MEM_RANGE_ATTRIBUTE_PREFERRED_LOCATION: If this attribute is specified, data will be
         interpreted as a 32-bit integer, and dataSize must be 4. The result returned will be a
         GPU device id if all pages in the memory range have that GPU as their preferred
         location, or it will be CU_DEVICE_CPU if all pages in the memory range have the CPU as
         their preferred location, or it will be CU_DEVICE_INVALID if either all the pages don't
         have the same preferred location or some of the pages don't have a preferred location at
         all. Note that the actual location of the pages in the memory range at the time of the
         query may be different from the preferred location.

       • CU_MEM_RANGE_ATTRIBUTE_ACCESSED_BY: If this attribute is specified, data will be
         interpreted as an array of 32-bit integers, and dataSize must be a non-zero multiple of
         4. The result returned will be a list of device ids that had
         CU_MEM_ADVISE_SET_ACCESSED_BY set for that entire memory range. If any device does not
         have that advice set for the entire memory range, that device will not be included. If
         data is larger than the number of devices that have that advice set for that memory
          range, CU_DEVICE_INVALID will be returned in all the extra space provided. For example, if
         dataSize is 12 (i.e. data has 3 elements) and only device 0 has the advice set, then the
         result returned will be { 0, CU_DEVICE_INVALID, CU_DEVICE_INVALID }. If data is smaller
         than the number of devices that have that advice set, then only as many devices will be
         returned as can fit in the array. There is no guarantee on which specific devices will
         be returned, however.

       • CU_MEM_RANGE_ATTRIBUTE_LAST_PREFETCH_LOCATION: If this attribute is specified, data will
         be interpreted as a 32-bit integer, and dataSize must be 4. The result returned will be
         the last location to which all pages in the memory range were prefetched explicitly via
         cuMemPrefetchAsync. This will either be a GPU id or CU_DEVICE_CPU depending on whether
         the last location for prefetch was a GPU or the CPU respectively. If any page in the
         memory range was never explicitly prefetched or if all pages were not prefetched to the
         same location, CU_DEVICE_INVALID will be returned. Note that this simply returns the
          last location that the application requested to prefetch the memory range to. It gives no
         indication as to whether the prefetch operation to that location has completed or even
         begun.
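
        A sketch of two single-attribute queries on a managed range p of numBytes bytes (both
        placeholders; error checking omitted):

            int readMostly = 0;  /* interpreted as a 32-bit integer, so dataSize is 4 */
            cuMemRangeGetAttribute(&readMostly, 4,
                                   CU_MEM_RANGE_ATTRIBUTE_READ_MOSTLY, p, numBytes);

            int lastLoc = 0;     /* a GPU id, CU_DEVICE_CPU, or CU_DEVICE_INVALID */
            cuMemRangeGetAttribute(&lastLoc, 4,
                                   CU_MEM_RANGE_ATTRIBUTE_LAST_PREFETCH_LOCATION, p, numBytes);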

       Parameters:
            data - A pointer to a memory location where the result of the attribute query will
            be written.
            dataSize - The size of data, in bytes
           attribute - The attribute to query
           devPtr - Start of the range to query
           count - Size of the range to query

       Returns:
           CUDA_SUCCESS, CUDA_ERROR_INVALID_VALUE, CUDA_ERROR_INVALID_DEVICE

       Note:
           Note that this function may also return error codes from previous, asynchronous
           launches.

            This function exhibits asynchronous behavior for most use cases.

            This function uses standard default stream semantics.

       See also:
           cuMemRangeGetAttributes, cuMemPrefetchAsync, cuMemAdvise, cudaMemRangeGetAttribute

   CUresult cuMemRangeGetAttributes (void ** data, size_t * dataSizes, CUmem_range_attribute *
       attributes, size_t numAttributes, CUdeviceptr devPtr, size_t count)
       Query attributes of the memory range starting at devPtr with a size of count bytes. The
       memory range must refer to managed memory allocated via cuMemAllocManaged or declared via
       __managed__ variables. The attributes array will be interpreted to have numAttributes
       entries. The dataSizes array will also be interpreted to have numAttributes entries. The
       results of the query will be stored in data.

        The list of supported attributes is given below. Please refer to cuMemRangeGetAttribute
       for attribute descriptions and restrictions.

        • CU_MEM_RANGE_ATTRIBUTE_READ_MOSTLY

        • CU_MEM_RANGE_ATTRIBUTE_PREFERRED_LOCATION

        • CU_MEM_RANGE_ATTRIBUTE_ACCESSED_BY

        • CU_MEM_RANGE_ATTRIBUTE_LAST_PREFETCH_LOCATION
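
        A sketch of querying two attributes in one call on a managed range p of numBytes bytes
        (both placeholders; error checking omitted):

            CUmem_range_attribute attrs[2] = { CU_MEM_RANGE_ATTRIBUTE_READ_MOSTLY,
                                               CU_MEM_RANGE_ATTRIBUTE_PREFERRED_LOCATION };
            int readMostly = 0, preferredLoc = 0;
            void *results[2] = { &readMostly, &preferredLoc };
            size_t sizes[2]  = { 4, 4 };
            cuMemRangeGetAttributes(results, sizes, attrs, 2, p, numBytes);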

       Parameters:
            data - An array of pointers to memory locations where the result of each attribute
            query will be written.
           dataSizes - Array containing the sizes of each result
           attributes - An array of attributes to query (numAttributes and the number of
           attributes in this array should match)
           numAttributes - Number of attributes to query
           devPtr - Start of the range to query
           count - Size of the range to query

       Returns:
           CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_INVALID_CONTEXT,
           CUDA_ERROR_INVALID_VALUE, CUDA_ERROR_INVALID_DEVICE

       Note:
           Note that this function may also return error codes from previous, asynchronous
           launches.

       See also:
            cuMemRangeGetAttribute, cuMemAdvise, cuMemPrefetchAsync, cudaMemRangeGetAttributes

   CUresult cuPointerGetAttribute (void * data, CUpointer_attribute attribute, CUdeviceptr ptr)
       The supported attributes are:

       • CU_POINTER_ATTRIBUTE_CONTEXT:

       Returns in *data the CUcontext in which ptr was allocated or registered. The type of data
       must be CUcontext *.

       If ptr was not allocated by, mapped by, or registered with a CUcontext which uses unified
       virtual addressing then CUDA_ERROR_INVALID_VALUE is returned.

       • CU_POINTER_ATTRIBUTE_MEMORY_TYPE:

       Returns in *data the physical memory type of the memory that ptr addresses as a
       CUmemorytype enumerated value. The type of data must be unsigned int.

       If ptr addresses device memory then *data is set to CU_MEMORYTYPE_DEVICE. The particular
       CUdevice on which the memory resides is the CUdevice of the CUcontext returned by the
       CU_POINTER_ATTRIBUTE_CONTEXT attribute of ptr.

       If ptr addresses host memory then *data is set to CU_MEMORYTYPE_HOST.

       If ptr was not allocated by, mapped by, or registered with a CUcontext which uses unified
       virtual addressing then CUDA_ERROR_INVALID_VALUE is returned.

       If the current CUcontext does not support unified virtual addressing then
       CUDA_ERROR_INVALID_CONTEXT is returned.

       • CU_POINTER_ATTRIBUTE_DEVICE_POINTER:

       Returns in *data the device pointer value through which ptr may be accessed by kernels
       running in the current CUcontext. The type of data must be CUdeviceptr *.

       If there exists no device pointer value through which kernels running in the current
       CUcontext may access ptr then CUDA_ERROR_INVALID_VALUE is returned.

       If there is no current CUcontext then CUDA_ERROR_INVALID_CONTEXT is returned.

       Except in the exceptional disjoint addressing cases discussed below, the value returned in
       *data will equal the input value ptr.

       • CU_POINTER_ATTRIBUTE_HOST_POINTER:

        Returns in *data the host pointer value through which ptr may be accessed by the host
       program. The type of data must be void **. If there exists no host pointer value through
       which the host program may directly access ptr then CUDA_ERROR_INVALID_VALUE is returned.

       Except in the exceptional disjoint addressing cases discussed below, the value returned in
       *data will equal the input value ptr.

       • CU_POINTER_ATTRIBUTE_P2P_TOKENS:

       Returns in *data two tokens for use with the nv-p2p.h Linux kernel interface. data must be
       a struct of type CUDA_POINTER_ATTRIBUTE_P2P_TOKENS.

        ptr must be a pointer to memory obtained from cuMemAlloc(). Note that p2pToken and
       vaSpaceToken are only valid for the lifetime of the source allocation. A subsequent
       allocation at the same address may return completely different tokens. Querying this
       attribute has a side effect of setting the attribute CU_POINTER_ATTRIBUTE_SYNC_MEMOPS for
       the region of memory that ptr points to.

       • CU_POINTER_ATTRIBUTE_SYNC_MEMOPS:

        A boolean attribute which, when set, ensures that synchronous memory operations initiated
       on the region of memory that ptr points to will always synchronize. See further
       documentation in the section titled 'API synchronization behavior' to learn more about
       cases when synchronous memory operations can exhibit asynchronous behavior.

       • CU_POINTER_ATTRIBUTE_BUFFER_ID:

       Returns in *data a buffer ID which is guaranteed to be unique within the process. data
       must point to an unsigned long long.

       ptr must be a pointer to memory obtained from a CUDA memory allocation API. Every memory
       allocation from any of the CUDA memory allocation APIs will have a unique ID over a
       process lifetime. Subsequent allocations do not reuse IDs from previous freed allocations.
       IDs are only unique within a single process.

       • CU_POINTER_ATTRIBUTE_IS_MANAGED:

       Returns in *data a boolean that indicates whether the pointer points to managed memory or
       not.

       Note that for most allocations in the unified virtual address space the host and device
       pointer for accessing the allocation will be the same. The exceptions to this are

       • user memory registered using cuMemHostRegister

        • host memory allocated using cuMemHostAlloc with the CU_MEMHOSTALLOC_WRITECOMBINED flag

        For these types of allocation there will exist separate, disjoint host and device
        addresses for accessing the allocation. In particular:

       • The host address will correspond to an invalid unmapped device address (which will
         result in an exception if accessed from the device)

        • The device address will correspond to an invalid unmapped host address (which will
          result in an exception if accessed from the host).

        For these types of allocations, querying CU_POINTER_ATTRIBUTE_HOST_POINTER and
        CU_POINTER_ATTRIBUTE_DEVICE_POINTER may be used to retrieve the host and device
        addresses from either address.
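
        A sketch of two single-attribute queries (ptr is a placeholder for a pointer obtained
        from a CUDA memory allocation API):

            unsigned int memType = 0;
            if (cuPointerGetAttribute(&memType, CU_POINTER_ATTRIBUTE_MEMORY_TYPE, ptr) ==
                    CUDA_SUCCESS) {
                /* memType is CU_MEMORYTYPE_HOST or CU_MEMORYTYPE_DEVICE */
            }

            unsigned long long bufId = 0;  /* unique per allocation within the process */
            cuPointerGetAttribute(&bufId, CU_POINTER_ATTRIBUTE_BUFFER_ID, ptr);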

       Parameters:
           data - Returned pointer attribute value
           attribute - Pointer attribute to query
           ptr - Pointer

       Returns:
           CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED,
           CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_INVALID_VALUE, CUDA_ERROR_INVALID_DEVICE

       Note:
           Note that this function may also return error codes from previous, asynchronous
           launches.

       See also:
           cuPointerSetAttribute, cuMemAlloc, cuMemFree, cuMemAllocHost, cuMemFreeHost,
           cuMemHostAlloc, cuMemHostRegister, cuMemHostUnregister, cudaPointerGetAttributes

   CUresult cuPointerGetAttributes (unsigned int numAttributes, CUpointer_attribute * attributes,
       void ** data, CUdeviceptr ptr)
       The supported attributes are (refer to cuPointerGetAttribute for attribute descriptions
       and restrictions):

        • CU_POINTER_ATTRIBUTE_CONTEXT

        • CU_POINTER_ATTRIBUTE_MEMORY_TYPE

        • CU_POINTER_ATTRIBUTE_DEVICE_POINTER

        • CU_POINTER_ATTRIBUTE_HOST_POINTER

        • CU_POINTER_ATTRIBUTE_SYNC_MEMOPS

        • CU_POINTER_ATTRIBUTE_BUFFER_ID

        • CU_POINTER_ATTRIBUTE_IS_MANAGED
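
        A sketch of querying several attributes with one call (ptr is a placeholder; error
        checking omitted):

            CUpointer_attribute attrs[3] = { CU_POINTER_ATTRIBUTE_MEMORY_TYPE,
                                             CU_POINTER_ATTRIBUTE_DEVICE_POINTER,
                                             CU_POINTER_ATTRIBUTE_IS_MANAGED };
            unsigned int memType = 0, isManaged = 0;
            CUdeviceptr devPtr = 0;
            void *results[3] = { &memType, &devPtr, &isManaged };
            cuPointerGetAttributes(3, attrs, results, ptr);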

        Unlike cuPointerGetAttribute, this function will not return an error when the ptr
        encountered is not a valid CUDA pointer. Instead, the attributes are assigned default
        NULL values and CUDA_SUCCESS is returned.

        If ptr was not allocated by, mapped by, or registered with a CUcontext which uses UVA
        (Unified Virtual Addressing), CUDA_ERROR_INVALID_CONTEXT is returned.

        Parameters:
            numAttributes - Number of attributes to query
            attributes - An array of attributes to query (numAttributes and the number of
            attributes in this array should match)
            data - An array of pointers to memory locations where the result of each attribute
            query will be written.
            ptr - Pointer to query

       Returns:
           CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_INVALID_CONTEXT,
           CUDA_ERROR_INVALID_VALUE, CUDA_ERROR_INVALID_DEVICE

       Note:
           Note that this function may also return error codes from previous, asynchronous
           launches.

       See also:
           cuPointerGetAttribute, cuPointerSetAttribute, cudaPointerGetAttributes

   CUresult cuPointerSetAttribute (const void * value, CUpointer_attribute attribute, CUdeviceptr
       ptr)
       The supported attributes are:

       • CU_POINTER_ATTRIBUTE_SYNC_MEMOPS:

       A boolean attribute that can either be set (1) or unset (0). When set, the region of
       memory that ptr points to is guaranteed to always synchronize memory operations that are
       synchronous. If there are some previously initiated synchronous memory operations that are
       pending when this attribute is set, the function does not return until those memory
       operations are complete. See further documentation in the section titled 'API
       synchronization behavior' to learn more about cases when synchronous memory operations can
       exhibit asynchronous behavior. value will be considered as a pointer to an unsigned
       integer to which this attribute is to be set.
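
        A sketch of enabling the attribute on an allocation dptr (a placeholder; error
        checking omitted):

            unsigned int enable = 1;
            cuPointerSetAttribute(&enable, CU_POINTER_ATTRIBUTE_SYNC_MEMOPS, dptr);
            /* Synchronous memory operations on dptr's allocation now always synchronize. */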

       Parameters:
           value - Pointer to memory containing the value to be set
           attribute - Pointer attribute to set
           ptr - Pointer to a memory region allocated using CUDA memory allocation APIs

       Returns:
           CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED,
           CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_INVALID_VALUE, CUDA_ERROR_INVALID_DEVICE

       Note:
           Note that this function may also return error codes from previous, asynchronous
           launches.

       See also:
           cuPointerGetAttribute, cuPointerGetAttributes, cuMemAlloc, cuMemFree, cuMemAllocHost,
           cuMemFreeHost, cuMemHostAlloc, cuMemHostRegister, cuMemHostUnregister

Author

       Generated automatically by Doxygen from the source code.