A newly developed memory management technique could enable software to run more efficiently on multi-core processors.
The technique 'parallelises' programs like word processors, web browsers, language parsers and graph algorithms, which typically utilise only one processor, or 'core', at a time.
Researcher Devesh D Tiwari said those programs could not traditionally be made to run in parallel, likening their operation to brewing a pot of coffee in sequential steps.
With his colleagues at North Carolina State University, Tiwari developed a way of isolating a program's memory management functions into a separate 'thread' from the rest of the workload.
He estimated that memory management functions, which involve preparing memory to hold data or freeing up memory that is no longer needed, typically comprise 30 percent of a program's workload.
Because the 'memory management thread' (MMT) could run on a different core than the rest of the program, the researchers achieved a 20 percent speed increase with open source software benchmarks.
"The computational thread notifies the memory-management thread, effectively telling it to allocate data storage and to notify the computational thread of where the storage space is located," Tiwari explained.
"By the same token, when the computational thread no longer needs certain data, it informs the memory-management thread that the relevant storage space can be freed."
Initially, the researchers observed delays in coordinating the MMT and the computational thread, due to the communication latency between the two cores.
They reduced the delay with a so-called 'speculative memory management' technique, where the MMT would predict memory allocation requests based on previous requests, and perform the tasks in advance.
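The article does not spell out the prediction policy, so the sketch below assumes the simplest one: the MMT guesses that upcoming requests will repeat the most recently seen size and keeps a few blocks of that size ready before they are asked for. The SpeculativePool name and the repeat-last-size heuristic are hypothetical, for illustration only.

```cpp
#include <cstddef>
#include <cstdlib>
#include <deque>

// Hypothetical speculative pool kept by the MMT.
struct SpeculativePool {
    std::size_t predicted_size = 0;  // last size requested, used as the prediction
    std::deque<void*> ready;         // blocks allocated ahead of demand

    // Called when the MMT has idle time: top up the pool of predicted blocks.
    void speculate(std::size_t count) {
        while (predicted_size != 0 && ready.size() < count)
            ready.push_back(std::malloc(predicted_size));
    }

    // Called on an allocation request: serve it from the pool if the guess was right.
    void* allocate(std::size_t size) {
        if (size == predicted_size && !ready.empty()) {
            void* p = ready.front();  // prediction correct: no malloc on the critical path
            ready.pop_front();
            return p;
        }
        predicted_size = size;        // mispredicted: update the guess for next time
        return std::malloc(size);
    }

    ~SpeculativePool() {              // discard any unused speculative blocks
        for (void* p : ready) std::free(p);
    }
};
```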
Another technique, called 'bulk memory management', was also being investigated; it involved performing multiple memory management tasks as a group to reduce communication latency between the computational thread and the MMT.
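Again the article gives only the outline, so the sketch below assumes one plausible reading: the computational thread buffers free requests locally and hands them to the MMT as a single batch, paying the inter-core communication cost once per batch rather than once per block. The batch size of 64 and the send_batch_to_mmt callback are illustrative assumptions.

```cpp
#include <cstddef>
#include <functional>
#include <vector>

// Hypothetical per-thread buffer that batches free requests before sending them to the MMT.
class BulkFreeBuffer {
public:
    explicit BulkFreeBuffer(std::function<void(std::vector<void*>)> send_batch_to_mmt,
                            std::size_t batch_size = 64)
        : send_(std::move(send_batch_to_mmt)), batch_size_(batch_size) {}

    // Record a block to free; only talk to the MMT once the batch is full.
    void deferred_free(void* p) {
        pending_.push_back(p);
        if (pending_.size() >= batch_size_) flush();
    }

    // Ship the whole batch to the MMT in one message.
    void flush() {
        if (!pending_.empty()) send_(std::move(pending_));
        pending_.clear();
    }

private:
    std::function<void(std::vector<void*>)> send_;  // delivers a batch to the MMT's queue
    std::size_t batch_size_;
    std::vector<void*> pending_;
};
```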
Tiwari said the technique could be applied to any software platform and required no additional hardware or supporting drivers beyond those that come by default with contemporary chip multiprocessors (CMPs) such as the Intel Core 2 Duo or Core 2 Quad.
"Our MMT approach is a generic approach and can be applied by software developers on any platform, if they wish," he told iTnews, adding that the technique had only been tested on Linux allocators so far, and not on proprietary operating systems such as Microsoft Windows.
"It can deliver performance transparently without requiring extra compiler hints or programmers' effort; in fact, MMT can also useful for high overhead tasks, [for example], security checking, profiling debugging, [and] tracing."
So far, Tiwari and his colleagues had no plans to commercialise their research, which was funded by the National Science Foundation.
"With CMPs becoming mainstream processors, it is important to design new techniques that can help speed up sequential applications," Tiwari told iTnews.
"MMT shows how fine-grained parallelism can be exploited on CMPs efficiently [and] lessons learned while designing MMT can also be useful to other researchers who are trying to exploit fine-grain parallelism on CMPs," he said.