The aluminum atoms have a predicted critical transition temperature (the energy/temperature below which pairs of electrons mimic each other's movements, known as Cooper pairs) of 100 K (Kelvin), or about -280 degrees Fahrenheit (F). That sounds pretty cold, but not when you compare it to the current high of 39 K (about -389°F), a difference of 61 K, or about 110°F. The researchers also believe this new material type is only the beginning and could lead to other materials with even higher transition energies and temperatures, potentially even to room-temperature superconductors.
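The temperatures above are easy to sanity-check yourself; a quick sketch of the Kelvin-to-Fahrenheit conversion used in this article:

```python
def kelvin_to_fahrenheit(k):
    """Convert a temperature from Kelvin to degrees Fahrenheit."""
    return (k - 273.15) * 9 / 5 + 32

# Predicted transition temperature of the new aluminum-based material.
print(round(kelvin_to_fahrenheit(100)))  # -280
# Current record-high transition temperature mentioned above.
print(round(kelvin_to_fahrenheit(39)))   # -389
# A gap of 61 K corresponds to 61 * 9/5 degrees Fahrenheit.
print(round(61 * 9 / 5))                 # 110
```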
Obviously this has yet to be experimentally proven: the researchers ran preliminary calculations that suggest a very good possibility of superconductivity at those energies, but they did not actually create an aluminum superconductor. In short, it looks good on paper but may not actually exist. It will still be interesting to watch what happens as experiments progress with this new material type.
Next Big Future reports on an advancement that could boost vehicle mileage by roughly 10 mpg (miles per gallon): cars that previously achieved 40 mpg could achieve 50 mpg, a 25% improvement in efficiency. The advancement comes from using a laser, instead of spark plugs, to ignite the fuel within the combustion cylinders. Lasers have…
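The relative gain follows directly from the two mileage figures; a one-liner to check the arithmetic:

```python
def efficiency_gain(old_mpg, new_mpg):
    """Return the fractional improvement in fuel efficiency."""
    return (new_mpg - old_mpg) / old_mpg

# Going from 40 mpg to 50 mpg is a 25% improvement.
print(f"{efficiency_gain(40, 50):.0%}")  # 25%
```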
Phys.org reports on an issue with processing priority queues in a world dominated by an ever-increasing number of cores. When a processor (CPU) core adds, removes, or reads through one of these structures, it caches the first item in the list so that it can be accessed quickly. The trouble is that every core keeps its own cached copy of that same head item, so whenever anything changes (an item is added or removed), the cached copies on all the other cores must be invalidated and re-read before the structure can be read or changed again. As you might imagine, when there are 4, 8, or even more (say, 40? 80?) cores all attempting to read through, add, change, or delete items in this structure, it can cause a massive slowdown that essentially obliterates the performance gain that should come from having many cores.
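The effect is easy to picture in software terms. Here is a minimal sketch (not the researchers' code) of many threads sharing one lock-protected binary heap: every push and pop funnels through the same lock, so the operations execute one at a time no matter how many cores are available, which is the software analogue of every core invalidating the others' cached view of the queue head.

```python
import heapq
import threading

heap = []                    # one shared priority queue (binary heap)
lock = threading.Lock()      # every operation serializes on this lock
popped = []

def worker(items):
    # Phase 1: push this thread's items onto the shared heap.
    for item in items:
        with lock:
            heapq.heappush(heap, item)
    # Phase 2: pop exactly as many items as we pushed.
    count = 0
    while count < len(items):
        with lock:           # all threads funnel through here, one at a time
            if heap:
                popped.append(heapq.heappop(heap))
                count += 1

threads = [
    threading.Thread(target=worker, args=(range(i * 100, (i + 1) * 100),))
    for i in range(4)
]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(popped))  # 400: every item pushed was popped exactly once
```

Even with four threads, the lock forces the 800 queue operations to run strictly one after another; adding more threads adds contention, not throughput.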
Researchers at MIT's Computer Science and Artificial Intelligence Laboratory may have found an answer. First they looked at using a different structure, a linked list. However, this too suffered from a similar issue: you must access the first item and then traverse the sequence to find the memory address you need. Instead they tried skip lists, which stack rows of linked lists on top of one another to make searching more efficient, a kind of "hierarchical linked list." They then take it a step further by starting each search lower down the hierarchy depending on how many processing cores are available. The researchers point out that it is not a perfect solution, as there can still be a collision (when a data item appears at more than one level of the hierarchy), but the chances of such a collision happening are small.
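To make the "hierarchical linked list" idea concrete, here is a minimal, illustrative skip list (a sketch of the classic data structure, not the CSAIL implementation): each node may appear in several stacked linked lists, and a search starts at the top level and drops down, skipping long runs of nodes.

```python
import random

class Node:
    def __init__(self, value, level):
        self.value = value
        self.forward = [None] * (level + 1)  # one "next" pointer per level

class SkipList:
    MAX_LEVEL = 8

    def __init__(self):
        self.head = Node(None, self.MAX_LEVEL)  # sentinel head spans all levels
        self.level = 0

    def _random_level(self):
        # Coin flips: each node is promoted to the next level with p = 1/2.
        lvl = 0
        while random.random() < 0.5 and lvl < self.MAX_LEVEL:
            lvl += 1
        return lvl

    def insert(self, value):
        update = [None] * (self.MAX_LEVEL + 1)
        node = self.head
        for i in range(self.level, -1, -1):       # descend from the top level
            while node.forward[i] and node.forward[i].value < value:
                node = node.forward[i]
            update[i] = node                      # last node before insert point
        lvl = self._random_level()
        if lvl > self.level:
            for i in range(self.level + 1, lvl + 1):
                update[i] = self.head
            self.level = lvl
        new = Node(value, lvl)
        for i in range(lvl + 1):                  # splice into each level
            new.forward[i] = update[i].forward[i]
            update[i].forward[i] = new

    def contains(self, value):
        node = self.head
        for i in range(self.level, -1, -1):       # skip ahead on high levels
            while node.forward[i] and node.forward[i].value < value:
                node = node.forward[i]
        node = node.forward[0]
        return node is not None and node.value == value

sl = SkipList()
for v in [5, 1, 9, 3]:
    sl.insert(v)
print(sl.contains(3), sl.contains(7))  # True False
```

The MIT approach builds on this shape; the twist described above is choosing where in the hierarchy each core begins its search.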
Phys.org reports on a new algorithm, developed by a doctoral student at École polytechnique fédérale de Lausanne (EPFL), that changes frequencies and bandwidth usage based on the type of data packets being sent and received. Many routers today default to channel 6 of the 2.4 GHz frequency band, which causes a build-up of WiFi traffic on that channel. The problem is that many other channels overlap and share much of the same spectrum. In fact, while there are 14 channels defined in the 2.4 GHz range, many countries ban the use of some of them. In the United States (US), channels 12 through 14 cannot be used, yet they are the ones with the greatest frequency gap between channels. In effect, because the frequency bands overlap, you can argue that there are really only three non-overlapping spaces (centered on channels 1, 6, and 11) in which to transmit data in the 2.4 GHz WiFi band.
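The overlap is simple to compute from the channel plan: channels 1 through 13 sit 5 MHz apart starting at 2412 MHz, channel 14 sits 12 MHz above channel 13, and a classic 802.11 signal is about 22 MHz wide. A short sketch:

```python
# Center frequencies (MHz) of the fourteen 2.4 GHz WiFi channels.
centers = {ch: 2407 + 5 * ch for ch in range(1, 14)}  # 1 -> 2412, 2 -> 2417, ...
centers[14] = 2484                                     # channel 14 is offset 12 MHz

WIDTH = 22  # MHz, the classic 802.11 DSSS signal width

def overlaps(a, b):
    """True if the two channels' 22 MHz-wide signals share any spectrum."""
    return abs(centers[a] - centers[b]) < WIDTH

# Channels 1, 6, and 11 are 25 MHz apart, so they never overlap;
# nearer neighbors such as 1 and 3 do.
print(overlaps(1, 6), overlaps(6, 11), overlaps(1, 3))  # False False True
```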
The graph above shows the frequency channels of the 2.4 GHz WiFi range and how they overlap. Most routers are set to channel 6 by default, and while they may change channels depending on availability, they generally pick one and stick with it. In addition, many routers will use up to 8 of these channels at the same time. The problem is that this rather small range fills up in areas where many routers are running, essentially causing a data traffic jam. The other problem is that because routers often stick with a set channel, other channels may actually sit open and unused.
The new algorithm determines the bandwidth requirements of the data being sent and received and selects an appropriate channel and width. It essentially removes the idea of "channels" and instead divvies up the available frequency range into "lanes," some of them specialized, similar to having a carpool or bike lane. As an example, if all you do is check your e-mail and browse a few websites, you don't need much bandwidth, so the algorithm would allocate a small slice of spectrum (say, within channels 1 and 2) for just website browsing and email. Video services such as Vimeo and YouTube, which require much more bandwidth, might get a large chunk of channels 6 through 10, and the remainder could be used for various other purposes such as websites with larger images, chat programs, and cell-phone updates. It spreads the load across the available bandwidth and specializes certain areas: low-bandwidth data such as web and email, cell-phone updates, and high-bandwidth video. The developer claims it could increase typical router throughput by up to seven times (7X).
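The "lanes" idea can be sketched as a tiny classifier. This is a toy illustration only: the lane boundaries and bandwidth thresholds below are assumptions made up for the example, not EPFL's actual algorithm.

```python
# Hypothetical lane table: (label, low channel, high channel).
# The channel ranges mirror the examples in the text above.
LANES = [
    ("low-bandwidth (email, web)", 1, 2),
    ("high-bandwidth (video)", 6, 10),
    ("everything else", 3, 5),
]

def assign_lane(kbps_needed):
    """Pick a lane for a flow based on its estimated bandwidth demand.

    The 500/2000 kbps thresholds are illustrative assumptions.
    """
    if kbps_needed < 500:
        return LANES[0]
    if kbps_needed > 2000:
        return LANES[1]
    return LANES[2]

print(assign_lane(100)[0])   # an email-sized flow gets the low-bandwidth lane
print(assign_lane(5000)[0])  # a video stream gets the high-bandwidth lane
```

Grouping similar flows into dedicated slices like this is what lets light traffic stop competing with heavy traffic for the same sliver of spectrum.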