Both AMD and Intel have been developing and selling CPUs with GPUs built in. This integration makes systems cheaper and more power-efficient, since a discrete GPU is no longer needed, but according to researchers at North Carolina State University, current designs do not yet realize the combined chips' full potential.
CPUs and GPUs are suited to quite different work: GPUs excel at performing huge numbers of simple calculations in parallel, while CPUs are better at complex, branching problems. For math such as linear algebra, the GPU is far superior; for operations that require flexible data retrieval and decision-making, the CPU is the better choice.
Although the CPU and GPU cores now sit on the same die, current designs do not exploit that proximity. The researchers propose, and have tested, having the CPU assist the GPU by prefetching the data the GPU will need into the shared L3 cache, where it can be accessed much faster than if the GPU had to fetch it from off-chip memory. In their tests, this yielded an average speedup of 21.4%, reaching as high as 113%.
There is no word on when this might be incorporated into shipping processors, but one of the study's co-authors is from AMD, which also partly funded the work. The paper is to be presented on February 27 at the International Symposium on High Performance Computer Architecture.