Phys.org reports on an issue with processing priority queues in a world dominated by an ever-increasing number of cores. When processors (CPUs) add, remove, and read through these structures, each core caches the first item in the list so that it can be accessed and processed quickly. However, whenever any core changes the structure (adds or removes an item), the cached copy held by every other core is invalidated and must be re-fetched before that core can read or change the queue again. As you might imagine, when there are 4, 8, or even more (say, 40? 80?) cores all attempting to read through, add, change, or delete items in this structure, the constant invalidation causes a massive slowdown that essentially obliterates the performance gain that should be had from having many cores.
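To see why the head of the queue becomes a choke point, here is a minimal sketch (not from the article) of the conventional design: a heap-based priority queue guarded by a single lock. Every thread, no matter how many cores exist, must take the same lock and touch the same root element, so operations serialize on one shared spot. The class and method names are illustrative only.

```python
import heapq
import threading

class LockedPriorityQueue:
    """Conventional shared priority queue: one lock, one heap.

    Every push and pop goes through the same lock and the same
    root element (index 0 of the heap), which is exactly the
    shared state that all cores end up fighting over.
    """

    def __init__(self):
        self._heap = []
        self._lock = threading.Lock()

    def push(self, item):
        with self._lock:  # all threads contend on this one lock
            heapq.heappush(self._heap, item)

    def pop_min(self):
        with self._lock:  # and the minimum always lives at index 0
            return heapq.heappop(self._heap)

# Usage: items come back out in priority order, but only one
# thread at a time can make progress.
q = LockedPriorityQueue()
for x in [5, 1, 3]:
    q.push(x)
print(q.pop_min())
```

With one thread this is perfectly fine; the trouble described above only appears when many cores hammer that single lock and cache line at once.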
Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory may have found an answer. First they looked at using a different structure, a linked list. However, this too suffered from a similar issue: you still have to start at the first item and traverse the sequence to find the memory address you need. Instead they tried skip lists, which stack rows of linked lists on top of one another so that searches can skip ahead, making a linked list far more efficient to search (a “hierarchical linked list”). They then take it a step further by starting each core’s search lower down the hierarchy depending on how many processing cores are available, so that cores spread out across the structure rather than all piling onto the first item. The researchers point out that it is not a perfect solution, as there can still be a collision, when two cores land on and try to grab the same data item, but the chances of such a collision happening are low.
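The hierarchy described above can be sketched as a textbook skip list. This is my own illustrative implementation, not the researchers’ code: level 0 holds every item, each higher level skips roughly half the items below it, and the hypothetical `start_level` parameter stands in for the idea of entering the hierarchy lower down when many cores are active.

```python
import random

class Node:
    def __init__(self, key, level):
        self.key = key
        # forward[i] is the next node at level i
        self.forward = [None] * (level + 1)

class SkipList:
    """Sketch of a 'hierarchical linked list' (skip list).

    Level 0 is an ordinary sorted linked list containing every key;
    each node is also promoted to higher levels with probability 1/2,
    so upper levels act as express lanes for searching.
    """
    MAX_LEVEL = 8

    def __init__(self):
        self.head = Node(None, self.MAX_LEVEL)
        self.level = 0  # highest level currently in use

    def _random_level(self):
        lvl = 0
        while random.random() < 0.5 and lvl < self.MAX_LEVEL:
            lvl += 1
        return lvl

    def insert(self, key):
        # Walk down from the top level, remembering where we turned.
        update = [None] * (self.MAX_LEVEL + 1)
        node = self.head
        for i in range(self.level, -1, -1):
            while node.forward[i] and node.forward[i].key < key:
                node = node.forward[i]
            update[i] = node
        lvl = self._random_level()
        if lvl > self.level:
            for i in range(self.level + 1, lvl + 1):
                update[i] = self.head
            self.level = lvl
        new = Node(key, lvl)
        for i in range(lvl + 1):
            new.forward[i] = update[i].forward[i]
            update[i].forward[i] = new

    def search(self, key, start_level=None):
        # start_level is an illustrative knob: entering the hierarchy
        # lower down (e.g. when many cores are active) trades a longer
        # walk for less contention near the top of the structure.
        node = self.head
        top = self.level if start_level is None else min(start_level, self.level)
        for i in range(top, -1, -1):
            while node.forward[i] and node.forward[i].key < key:
                node = node.forward[i]
        node = node.forward[0]
        return node is not None and node.key == key

# Usage: searches work from any starting level, just with more steps
# when starting lower.
sl = SkipList()
for k in [3, 1, 7, 5]:
    sl.insert(k)
print(sl.search(5), sl.search(4))
```

Note this sketch is single-threaded; the researchers’ contribution is the concurrent version, where spreading cores across entry points keeps them from colliding at the head of the list.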