Provided by: nvidia-cuda-dev_9.1.85-3ubuntu1_amd64

NAME

       Unified Addressing - unified addressing functions of the CUDA runtime API

   Functions
       cudaError_t cudaPointerGetAttributes (struct cudaPointerAttributes *attributes, const void
           *ptr)
           Returns attributes about a specified pointer.

Detailed Description

       Unified addressing functions of the CUDA runtime API (cuda_runtime_api.h).

       This section describes the unified addressing functions of the CUDA runtime application
       programming interface.

Overview

       CUDA devices can share a unified address space with the host. For these devices there is
       no distinction between a device pointer and a host pointer -- the same pointer value may
       be used to access memory from the host program and from a kernel running on the device
       (with exceptions enumerated below).

Supported Platforms

       Whether or not a device supports unified addressing may be queried by calling
       cudaGetDeviceProperties() with the device property cudaDeviceProp::unifiedAddressing.

       Unified addressing is automatically enabled in 64-bit processes.

       Unified addressing is not yet supported on Windows Vista or Windows 7 for devices that do
       not use the TCC driver model.
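
       For example, a minimal sketch of this query (assuming <cuda_runtime.h> is included and
       error checking is omitted):

            int dev = 0;                       /* device to query; chosen for illustration */
            struct cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, dev);
            if (prop.unifiedAddressing) {
                /* this device shares a unified address space with the host */
            }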

Looking Up Information from Pointer Values

       It is possible to look up information about the memory which backs a pointer value. For
       instance, one may want to know if a pointer points to host or device memory. As another
       example, in the case of device memory, one may want to know on which CUDA device the
       memory resides. These properties may be queried using the function
       cudaPointerGetAttributes().

       Since pointers are unique, it is not necessary to specify information about the pointers
       passed to cudaMemcpy() and other copy functions. The copy direction cudaMemcpyDefault
       may be used to specify that the CUDA runtime should infer the location of the pointer from
       its value.
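
       As an illustration, a minimal sketch using cudaMemcpyDefault (buffer names and sizes are
       hypothetical; error checking is omitted):

            float *h_buf, *d_buf;
            size_t bytes = 1024 * sizeof(float);
            cudaMallocHost((void **)&h_buf, bytes);             /* pinned host allocation */
            cudaMalloc((void **)&d_buf, bytes);                 /* device allocation */
            /* the runtime infers each copy direction from the pointer values */
            cudaMemcpy(d_buf, h_buf, bytes, cudaMemcpyDefault); /* host -> device */
            cudaMemcpy(h_buf, d_buf, bytes, cudaMemcpyDefault); /* device -> host */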

Automatic Mapping of Host Allocated Host Memory

       All host memory allocated through all devices using cudaMallocHost() and cudaHostAlloc()
       is always directly accessible from all devices that support unified addressing. This is
       the case regardless of whether or not the flags cudaHostAllocPortable and
       cudaHostAllocMapped are specified.

       The pointer value through which allocated host memory may be accessed in kernels on all
       devices that support unified addressing is the same as the pointer value through which
       that memory is accessed on the host. It is not necessary to call
       cudaHostGetDevicePointer() to get the device pointer for these allocations.

       Note that this is not the case for memory allocated using the flag
       cudaHostAllocWriteCombined, as discussed below.
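
       A minimal sketch of this behavior (hypothetical kernel and sizes; assumes a device that
       supports unified addressing and omits error checking):

            __global__ void scale(float *data, int n)
            {
                int i = blockIdx.x * blockDim.x + threadIdx.x;
                if (i < n) data[i] *= 2.0f;
            }

            /* in host code: */
            int n = 1024;
            float *buf;
            cudaMallocHost((void **)&buf, n * sizeof(float));
            /* the host pointer may be passed to the kernel directly; no call to
               cudaHostGetDevicePointer() is needed on such devices */
            scale<<<(n + 255) / 256, 256>>>(buf, n);
            cudaDeviceSynchronize();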

Direct Access of Peer Memory

       Upon enabling direct access from a device that supports unified addressing to another peer
       device that supports unified addressing using cudaDeviceEnablePeerAccess(), all memory
       allocated in the peer device using cudaMalloc() and cudaMallocPitch() will immediately be
       accessible by the current device. The device pointer value through which any peer's memory
       may be accessed in the current device is the same pointer value through which that memory
       may be accessed from the peer device.
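
       A minimal sketch (device numbers and sizes are illustrative; error checking is omitted):

            int canAccess = 0;
            cudaDeviceCanAccessPeer(&canAccess, 0, 1);  /* can device 0 access device 1? */
            if (canAccess) {
                float *peerBuf;
                size_t bytes = 1 << 20;
                cudaSetDevice(1);
                cudaMalloc((void **)&peerBuf, bytes);   /* allocated on device 1 */
                cudaSetDevice(0);
                cudaDeviceEnablePeerAccess(1, 0);       /* flags argument must be 0 */
                /* kernels on device 0 may now dereference peerBuf directly, using
                   the same pointer value as on device 1 */
            }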

Exceptions, Disjoint Addressing

       Not all memory may be accessed on devices through the same pointer value through which
       it is accessed on the host. The exceptions are host memory registered using
       cudaHostRegister() and host memory allocated using the flag cudaHostAllocWriteCombined.
       For these exceptions, there exists a distinct host and device address for the memory. The
       device address is guaranteed to not overlap any valid host pointer range and is guaranteed
       to have the same value across all devices that support unified addressing.

       This device address may be queried using cudaHostGetDevicePointer() when a device using
       unified addressing is current. Either the host or the unified device pointer value may be
       used to refer to this memory in cudaMemcpy() and similar functions using the
       cudaMemcpyDefault memory direction.
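
       A minimal sketch of the registered-memory case (buffer names hypothetical; assumes the
       usual headers and omits error checking):

            size_t bytes = 4096;
            void *hostBuf = malloc(bytes);
            void *devAlias = NULL;
            cudaHostRegister(hostBuf, bytes, cudaHostRegisterDefault);
            cudaHostGetDevicePointer(&devAlias, hostBuf, 0);  /* device-side alias */
            /* either hostBuf or devAlias may be passed to cudaMemcpy() with the
               cudaMemcpyDefault direction to refer to this memory */
            cudaHostUnregister(hostBuf);
            free(hostBuf);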

Function Documentation

   cudaError_t cudaPointerGetAttributes (struct cudaPointerAttributes * attributes, const void *
       ptr)
        Returns in *attributes the attributes of the pointer ptr. If the pointer was not
        allocated in, mapped by, or registered with a context supporting unified addressing,
        cudaErrorInvalidValue is returned.

       The cudaPointerAttributes structure is defined as:

           struct cudaPointerAttributes {
               enum cudaMemoryType memoryType;
               int device;
               void *devicePointer;
               void *hostPointer;
               int isManaged;
            };

       In this structure, the individual fields mean:

       • memoryType identifies the physical location of the memory associated with pointer ptr.
         It can be cudaMemoryTypeHost for host memory or cudaMemoryTypeDevice for device memory.

       • device is the device against which ptr was allocated. If ptr has memory type
         cudaMemoryTypeDevice then this identifies the device on which the memory referred to by
         ptr physically resides. If ptr has memory type cudaMemoryTypeHost then this identifies
         the device which was current when the allocation was made (and if that device is
         deinitialized then this allocation will vanish with that device's state).

       • devicePointer is the device pointer alias through which the memory referred to by ptr
         may be accessed on the current device. If the memory referred to by ptr cannot be
         accessed directly by the current device then this is NULL.

       • hostPointer is the host pointer alias through which the memory referred to by ptr may be
         accessed on the host. If the memory referred to by ptr cannot be accessed directly by
         the host then this is NULL.

        • isManaged indicates whether the pointer ptr points to managed memory.

       Parameters:
           attributes - Attributes for the specified pointer
           ptr - Pointer to get attributes for

       Returns:
           cudaSuccess, cudaErrorInvalidDevice, cudaErrorInvalidValue

       See also:
           cudaGetDeviceCount, cudaGetDevice, cudaSetDevice, cudaChooseDevice,
           cuPointerGetAttributes
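
        As an illustration, a minimal sketch of a call to this function (buffer name
        hypothetical; error checking reduced to the return-value test):

            float *d_buf;
            cudaMalloc((void **)&d_buf, 256 * sizeof(float));

            struct cudaPointerAttributes attr;
            if (cudaPointerGetAttributes(&attr, d_buf) == cudaSuccess) {
                /* attr.memoryType is cudaMemoryTypeDevice for this allocation;
                   attr.device identifies the device holding the memory;
                   attr.devicePointer and attr.hostPointer hold the aliases, or NULL
                   when the memory is not accessible from that side */
            }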

Author

       Generated automatically by Doxygen from the source code.