For some time now, manufacturers have been turning out multi-core processors, which offer improved performance without requiring higher clock speeds. Taking advantage of that performance through parallel programming is difficult, though, especially when trying to optimize how work is scheduled across cores. Researchers at MIT, however, have found that one optimization method may perform better than is generally thought, which could make parallel programming easier.
Parallel programming requires a programmer to properly split tasks between computational cores, and how long any one of those tasks takes affects how quickly the whole program can run. Lock-free algorithms guarantee that at least one core will complete its task within a certain amount of time; that is helpful, but in the worst case only that one core makes progress while the others stall, wasting potential. This is why many researchers work with wait-free algorithms, which guarantee that every core finishes its task within a fixed time, but these are much more difficult to develop. The MIT researchers found that while the worst case for lock-free algorithms is poor, their average-case performance is much better and even rivals that of wait-free algorithms. They discovered this by modeling a randomized scheduler that assigns chip resources to tasks probabilistically, rather than by priority.
This result suggests that programmers could save themselves some time by coding with lock-free algorithms, since the average performance gap between them and wait-free algorithms need not be that great. However, because lock-free algorithms still have a bad worst-case scenario, programmers should still weigh the target application before choosing one over the other.