c++ - Can I use a single address space for the GPU, CPU and FPGA, similar to CUDA UVA?


If I'm developing with CUDA, I have the opportunity to use UVA (Unified Virtual Addressing): a single address space spanning CPU RAM and the GPU's RAM. This was not possible before and appeared in CUDA 4.0. As I understand it, it is provided by the NVIDIA CUDA driver. http://docs.nvidia.com/cuda/gpudirect-rdma/index.html#basics-of-uva-cuda-memory-management
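For context, here is a minimal sketch of what UVA looks like from the CUDA runtime side (buffer sizes are just illustrative): every allocation gets a unique address in one process-wide virtual address space, and cudaPointerGetAttributes can tell you which device (or the host) backs a given pointer.

    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        // One pinned host buffer and one device buffer; with UVA (CUDA 4.0+,
        // 64-bit OS, Fermi or newer) both receive unique addresses in a
        // single process-wide virtual address space.
        void *host_buf = nullptr;
        void *dev_buf  = nullptr;
        cudaHostAlloc(&host_buf, 1 << 20, cudaHostAllocDefault);
        cudaMalloc(&dev_buf, 1 << 20);

        // The driver can tell from the pointer value alone what backs it.
        cudaPointerAttributes attr;
        if (cudaPointerGetAttributes(&attr, dev_buf) == cudaSuccess)
            printf("dev_buf  %p belongs to device %d\n", dev_buf, attr.device);
        if (cudaPointerGetAttributes(&attr, host_buf) == cudaSuccess)
            printf("host_buf %p is pinned host memory\n", host_buf);

        cudaFree(dev_buf);
        cudaFreeHost(host_buf);
        return 0;
    }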

But what if I want to use a GPU and an FPGA in a single computer, both connected over PCI-Express 2.0 x16, and have them share a single address space? Is there a facility on the FPGA side similar to NVIDIA's UVA? Would I need to use such an FPGA "UVA", and would that FPGA "UVA" interfere with CUDA UVA?

Something like this? http://research.microsoft.com/pubs/172730/20120625%20ucaa2012_bittner_ruf_final.pdf


You've found the GPUDirect RDMA documentation, which seems to answer your question.

RDMA for GPUDirect is a feature introduced in Kepler-class GPUs and CUDA 5.0 that enables a direct path for communication between the GPU and a peer device using standard features of PCI Express.

With this, the host can initiate data movement between the GPU and a peer device. UVA means that each device gets a unique part of the virtual address space; it does not mean that an arbitrary device can access another device's memory. You still need to use the appropriate APIs.
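As a concrete illustration of "the host initiates the movement and the pointer value identifies the memory": with UVA you can pass cudaMemcpyDefault and let the runtime infer the copy direction from the addresses. This is a hedged sketch of the CPU/GPU side only; no FPGA is involved here.

    #include <cuda_runtime.h>

    int main() {
        const size_t n = 1 << 20;
        void *host_buf = nullptr;
        void *dev_buf  = nullptr;
        cudaHostAlloc(&host_buf, n, cudaHostAllocDefault);
        cudaMalloc(&dev_buf, n);

        // Under UVA the runtime deduces the direction of each copy from the
        // pointer values alone, so cudaMemcpyDefault works both ways.
        cudaMemcpy(dev_buf, host_buf, n, cudaMemcpyDefault);  // host -> device
        cudaMemcpy(host_buf, dev_buf, n, cudaMemcpyDefault);  // device -> host

        cudaFree(dev_buf);
        cudaFreeHost(host_buf);
        return 0;
    }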

Note that at the simplest level you are asking for a single unified virtual address space. The CPU and GPU sides are already unified (that is exactly what UVA is); whether the FPGA can join in is entirely dependent on how the FPGA and its driver work. RDMA is an optimisation for moving data directly between the two devices, but the documentation is useful for understanding UVA in general.
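If the FPGA driver cannot participate in GPUDirect RDMA, a common fallback that still benefits from UVA is a pinned host buffer used as a staging area: the GPU copies into it with the normal CUDA APIs, and the FPGA's DMA engine reads it through its own driver. The sketch below is an assumption-laden illustration; fpga_dma_read() is a hypothetical stand-in for whatever the real FPGA driver exposes (an ioctl, a vendor library call, etc.).

    #include <cstddef>
    #include <cuda_runtime.h>

    // Hypothetical stand-in for the FPGA driver's DMA call; a real driver
    // would provide its own equivalent. This stub does no hardware access.
    static void fpga_dma_read(const void *host_buf, size_t len) {
        (void)host_buf;
        (void)len;
    }

    int main() {
        const size_t n = 1 << 20;

        // Page-locked host memory: addressable by the GPU (UVA gives it a
        // unique address in the shared virtual space) and DMA-able by the
        // FPGA through its driver.
        void *staging = nullptr;
        cudaHostAlloc(&staging, n, cudaHostAllocDefault);

        void *dev_buf = nullptr;
        cudaMalloc(&dev_buf, n);

        // 1. The host initiates a copy of GPU results into the pinned
        //    staging buffer.
        cudaMemcpy(staging, dev_buf, n, cudaMemcpyDefault);

        // 2. The FPGA pulls the same bytes out of host RAM via its own
        //    DMA path.
        fpga_dma_read(staging, n);

        cudaFree(dev_buf);
        cudaFreeHost(staging);
        return 0;
    }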

