Monday, July 13, 2009

Large-scale Deep Unsupervised Learning using Graphics Processors


This paper describes GPU implementations of deep belief network (DBN) and sparse coding algorithms. The fine-grained parallelism offered by modern GPUs lets these implementations outperform CPU-based ones; the main bottleneck is I/O, i.e., transferring data between main memory and the memory on the graphics card. This fine-grained parallelism maps naturally onto data parallelism: the data are partitioned across blocks, and the work assigned to each block is further divided among its threads.
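To make that block/thread decomposition concrete, here is a minimal CUDA sketch (my own illustration, not the authors' code): the input vector is split into contiguous chunks, one per block, and the threads within each block stride over their chunk, here applying an elementwise logistic activation. The kernel name, grid/block sizes, and the choice of activation are arbitrary, purely for illustration.

// Minimal CUDA sketch of block/thread data parallelism (illustrative only,
// not the paper's code): each block owns a contiguous chunk of the input,
// and the threads within the block stride over that chunk.
#include <cuda_runtime.h>
#include <math.h>
#include <stdio.h>

__global__ void logistic_kernel(const float *x, float *y, int n)
{
    // Size of the chunk assigned to this block (rounded up).
    int chunk = (n + gridDim.x - 1) / gridDim.x;
    int start = blockIdx.x * chunk;
    int end   = min(start + chunk, n);

    // Threads in the block cooperatively cover the block's chunk.
    for (int i = start + threadIdx.x; i < end; i += blockDim.x)
        y[i] = 1.0f / (1.0f + expf(-x[i]));
}

int main(void)
{
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    float *h_x = (float *)malloc(bytes), *h_y = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) h_x[i] = (float)i / n - 0.5f;

    float *d_x, *d_y;
    cudaMalloc(&d_x, bytes);
    cudaMalloc(&d_y, bytes);

    // Host-to-device copy: this transfer is the I/O bottleneck noted above.
    cudaMemcpy(d_x, h_x, bytes, cudaMemcpyHostToDevice);

    logistic_kernel<<<128, 256>>>(d_x, d_y, n);
    cudaMemcpy(h_y, d_y, bytes, cudaMemcpyDeviceToHost);

    printf("y[0] = %f\n", h_y[0]);

    cudaFree(d_x); cudaFree(d_y);
    free(h_x); free(h_y);
    return 0;
}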

The authors said they would make their code available online, but I have not been able to find it.
