Looking at someone's desk, you can usually tell which objects are used most and matter most by how close they sit to the person. In multicore CPUs, though, where data is stored in caches, information is not always placed closest to the core that needs it, because of how current hardware manages memory. Researchers at MIT are working on a software solution that optimizes the last-level cache shared by all cores, and they are showing some promising results.
For a human, electrical signals may seem fast, but to a CPU running at gigahertz frequencies, the extra time it takes to pull data from the other side of the chip can measurably impair performance. Ideally, data would be stored nearest to the core or cores that need it, but at present it is assigned arbitrarily to the memory banks that make up the cache. This is where Jigsaw, the software the MIT researchers developed, comes in. By analyzing how data is used by the cores of a CPU, it can work out the best place for data to be stored in the chip's last-level cache, whether that is nearest to a single core or in the center, where all cores can access it quickly.
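The placement decision described above can be sketched as a simple cost minimization: put each piece of data in the bank that minimizes total access latency, weighting each bank-to-core distance by how often that core touches the data. The function names and the linear chip layout below are illustrative assumptions, not MIT's actual algorithm.

```python
# Hypothetical sketch of the placement idea: pick the cache bank with the
# lowest total access cost, where cost = accesses-by-core * hops-to-core.
# This is an illustration, not the Jigsaw implementation.

def best_bank(access_counts, distances):
    """access_counts[c]: accesses by core c; distances[b][c]: hops from bank b to core c."""
    costs = [
        sum(n * d for n, d in zip(access_counts, row))
        for row in distances
    ]
    return costs.index(min(costs))

# Assumed layout: 4 cores in a line, 4 banks, bank b adjacent to core b.
distances = [[abs(b - c) for c in range(4)] for b in range(4)]

# Data used only by core 3 lands at the bank next to core 3.
print(best_bank([0, 0, 0, 10], distances))  # 3

# Data shared equally by all cores lands at a central bank.
print(best_bank([5, 5, 5, 5], distances))  # 1 (bank 2 ties; both are central)
```

With skewed access counts the chosen bank sits next to the dominant core; with uniform counts a central bank wins, matching the single-core versus shared-data cases the article describes.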
So far the researchers have tested it on simulated chips with 16 and 64 cores, yielding an average performance increase of 18% and power-consumption reductions of up to 72%. As this is only the first version of Jigsaw, the gains could grow further still, but naturally chip manufacturers will have to adopt the system before we see it in practice.