Science & Technology News (1244)
Posted: September 26, 2016 11:33AM
For decades computational power has been increasing at roughly the rate predicted by Moore's Law, but that is going to come to an end as we hit the fundamental limits of the materials and technologies being used. Many new approaches to computing are being developed to get around those limits, and researchers at North Carolina State University have come up with a fairly novel one. This new method uses nonlinear circuits to exploit chaos, so that fewer circuits and transistors are needed to perform a task.
Modern computer chips are tightly packed with a great number of transistors and circuits, but typically many of them sit idle at any given moment. This is because some circuits are designed to perform specific tasks, so for other tasks they are not useful. The new solution is to create nonlinear circuits that contain a number of different patterns, with each pattern corresponding to a different function. By taking advantage of the system's natural chaos, the same circuit can be made to perform multiple functions, switching from one to another with each clock cycle. This means that potentially just hundreds of these circuits could match the performance of hundreds of thousands of traditional circuits.
As if the potential for tremendously greater performance in a smaller package were not enough, these nonlinear circuits are compatible with other digital devices and can be fabricated with modern technologies. The researchers are approaching commercial size, power, and ease of programming with their designs, so we may see more news on this in the coming months.
Source: North Carolina State University
Posted: September 22, 2016 01:38PM
It can be easy to forget just how important the materials used in a computer or other device are, as special properties are needed for these systems to work. As we approach the limits of some of the materials currently in use, new materials need to be discovered to continue developing ever faster and more efficient devices. One goal some researchers have had is to create a multiferroic material, and now researchers at Berkeley Lab and Cornell University have realized exactly that.
Multiferroic materials combine the properties of ferroelectric and ferromagnetic materials, both families of which are used in many technologies today, but in different ways. Ferromagnets are used in hard drives to store data as magnetic polarization, and also in sensors, while ferroelectric materials can easily flip polarization in response to an electric field and will hold their polarized states without power being supplied. Both sets of properties are valuable, and by combining them in one material, new kinds of low-power memory technologies could be created, as an electric or magnetic field could be used to change both the electric and magnetic properties of the material.
To achieve this, the researchers built their material from alternating monolayers of lutetium oxide and iron oxide, adding a second iron oxide layer at every tenth repeat of these single-layer pairs. The ferromagnetic atoms in this arrangement changed their alignment to follow the neighboring ferroelectric atoms when exposed to an electric field. This flipping was observed from 200 to 300 K, which spans from about -100 ºF to 80 ºF, meaning the material works at room temperature. The next step is to reduce the voltage needed for this rewriting from the 5 V the researchers used to half a volt, and eventually to produce a working multiferroic device.
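The temperature figures above are easy to check with the standard Kelvin-to-Fahrenheit conversion; a quick sketch:

```python
def kelvin_to_fahrenheit(kelvin):
    """Standard conversion: multiply by 9/5, then subtract 459.67."""
    return kelvin * 9.0 / 5.0 - 459.67

# The electric-field-driven flipping was observed between 200 K and 300 K:
for k in (200, 300):
    print(f"{k} K = {kelvin_to_fahrenheit(k):.1f} \u00b0F")
```

This gives -99.7 ºF and 80.3 ºF, matching the approximate range quoted above.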
Source: Berkeley Lab
Posted: September 21, 2016 11:53AM
Reading someone's emotions can be very difficult, as some people do not obviously emote and others may try to hide their emotions, but it can be valuable information. For example, when testing a new product or reviewing media, accurately reading a subject's emotions can inform you about what works and what does not. To that end, as well as some valuable healthcare applications, researchers at MIT have developed a means of reading and tracking emotions using wireless signals.
This is hardly the first time MIT researchers, including the specific researchers on this project, have put wireless signals to an unexpected purpose; previously they used them to detect people falling in a house. In both that work and this one, the key is measuring how the signals reflect off of a person's body and extracting information from the reflections. In this case, heart rate and breathing are tracked by analyzing the acceleration recorded in the signals reflected off of the person's front and back. By using acceleration, the heart rate and breathing can be distinguished, as your pulse is much faster than your breathing. From these measurements, it is possible to determine the subject's emotional state: whether they are happy, sad, angry, or excited.
When the researchers tested this system, named EQ-Radio, they found it has 70% accuracy at predicting emotions without any training and 87% accuracy after having learned the subject's emotions. Separate from monitoring emotions, this technology could also be used for tracking someone's heart rate with ECG accuracy without any on-body sensors, or for monitoring and diagnosing conditions such as depression and anxiety.
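EQ-Radio's actual heartbeat-segmentation algorithm is more involved, but the reason pulse and breathing can be separated at all is that they occupy well-separated frequency bands. A rough, illustrative sketch (the ~0.25 Hz breathing and ~1.2 Hz heart rates are assumed typical values, not figures from the MIT work):

```python
import math

def dft_power(signal, freq_hz, sample_rate):
    """Magnitude of the single-frequency DFT of `signal` at `freq_hz`."""
    n = len(signal)
    re = sum(s * math.cos(2 * math.pi * freq_hz * i / sample_rate)
             for i, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq_hz * i / sample_rate)
             for i, s in enumerate(signal))
    return math.hypot(re, im) / n

BREATH_HZ, HEART_HZ = 0.25, 1.2   # ~15 breaths/min, ~72 beats/min
RATE, DURATION = 20, 40           # 20 samples/s over a 40 s window

# A toy "reflected signal": chest motion from breathing dominates,
# with the much smaller heartbeat motion riding on top of it.
signal = [math.sin(2 * math.pi * BREATH_HZ * i / RATE)
          + 0.2 * math.sin(2 * math.pi * HEART_HZ * i / RATE)
          for i in range(RATE * DURATION)]

# Each component shows up cleanly at its own frequency.
breath_power = dft_power(signal, BREATH_HZ, RATE)  # ~0.5 (amplitude 1.0 / 2)
heart_power = dft_power(signal, HEART_HZ, RATE)    # ~0.1 (amplitude 0.2 / 2)
```

Because the two rates sit so far apart in frequency, each can be measured without the other getting in the way.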
Posted: September 13, 2016 09:35AM
The name Stephen Cabrinety might not be one you recognize, but there is a chance you have heard about something he did before his death in 1995. He began collecting various pieces of software in the 1980s, and the collection eventually grew to contain some 25,000 pieces of software and video games, many in their original packaging, along with many pieces of hardware from the time, including the appropriate readers and consoles. In 2009 the Stanford University Libraries obtained the collection and has been working with NIST to preserve it, only recently finishing the work, which included reading floppy disks and audio cassette tapes.
While the nostalgia factor is obvious, this preservation work was done for reasons other than enjoying classic games and using old software. Several institutions exist to collect and archive the different media cultures create, such as the Library of Congress, which keeps a copy of every published work. The concept goes back to ancient times with the Library of Alexandria, but there is no single repository for software. The National Software Reference Library (NSRL), created and maintained by NIST, comes close, though it has a rather different purpose, and now the complete Cabrinety collection has been added to it. Currently the programs cannot be loaded up and used, as the data has only just been added, but the Stanford team wants to build systems that will make this possible in the future.
For those who are curious, the NSRL is actually for forensic investigations. Every piece of software it archives has a digital fingerprint created for it, so when computers or other devices are taken as evidence, these fingerprints can be used to quickly filter out what information may be important. For example, after the disappearance of Malaysia Airlines flight MH370, the hashes for every flight simulator were requested so that the FBI could discover the flight paths the pilot had practiced.
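In practice, that filtering looks something like the sketch below: hash each seized file and discard anything already in the reference set, leaving only the files an investigator still needs to examine. (The NSRL publishes its reference data with SHA-1 hashes; the function names here are illustrative.)

```python
import hashlib

def sha1_of_file(path, chunk_size=1 << 20):
    """Hash a file in chunks so large evidence files never sit fully in memory."""
    h = hashlib.sha1()   # NSRL reference sets include SHA-1 hashes
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def filter_known(paths, known_hashes):
    """Return only the files whose hashes are NOT in the reference set --
    i.e. the files that may hold case-relevant information."""
    return [p for p in paths if sha1_of_file(p) not in known_hashes]
```

With a reference set of millions of hashes, this single membership test removes the bulk of stock operating-system and application files from consideration.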
Posted: September 9, 2016 01:46PM
While you might not see them, random numbers are used in many systems we benefit from every day, including the encryption systems that protect our online purchases and bank withdrawals. Modern random number generators are not perfectly random, though, while new generators that utilize quantum mechanics can be completely random. Now researchers have made a quantum random number generator small enough and fast enough to be usable in mobile devices, as reported in Optica.
The reason modern random number generators are not perfectly random is that the numbers are generated by algorithms or physical processes, and with enough information about the source, one can guess the numbers. Quantum mechanical phenomena, however, are immune to this, as the processes involved cannot be predicted regardless of how much information one may have. This is why researchers have been working to create quantum-based random number generators, but so far those that have been made are large and not very fast. This new generator, though, is based on a photonic integrated circuit (PIC) measuring just six millimeters by two millimeters and is able to operate at gigabit-per-second speeds. At that speed it can be used for real-time data encryption, protecting phone and video calls.
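The weakness of algorithmic generators is easy to demonstrate: anyone who learns the seed (or recovers the internal state) can reproduce the entire "random" output stream exactly. A minimal sketch using Python's non-cryptographic Mersenne Twister:

```python
import random

# An algorithmic generator is deterministic: with the seed in hand,
# an attacker can replay every number the victim's generator will emit.
attacker_known_seed = 1234

victim = random.Random(attacker_known_seed)
attacker = random.Random(attacker_known_seed)

victim_keystream = [victim.getrandbits(32) for _ in range(5)]
attacker_guess = [attacker.getrandbits(32) for _ in range(5)]

assert victim_keystream == attacker_guess  # perfect prediction
```

A quantum generator has no such internal state to steal; each output comes from a measurement that is unpredictable in principle, not merely hard to guess.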
By demonstrating that such a device can be made with PIC technology, the researchers will likely spur others to make even better versions. Eventually these devices could move into commercial products to provide better security, as well as into scientific equipment for simulating biological interactions and nuclear reactions, or for making stock market predictions.
Source: The Optical Society
Posted: September 8, 2016 12:18PM
Sustainable nuclear fusion has been a goal for a great many scientists around the world for decades, as the energy produced by a fusion-based power plant could potentially dwarf that from other sources. Obviously it is difficult to achieve, but strides are being made toward that bright future. Recently researchers at the University of Rochester have brought us closer to that future by creating the conditions needed to set a new record for laser-fusion.
There are a number of methods currently being investigated for triggering nuclear fusion, and the one the Rochester researchers used is called direct-drive fusion. This method uses a number of lasers all aimed at a small fuel pellet, heating and compressing it to the point of implosion. If enough energy is pumped into the fuel by the lasers, the nuclei within the pellet will fuse together and, ideally, release more energy than the lasers spent. Using the OMEGA laser at the University of Rochester's Laboratory for Laser Energetics, the researchers were able to create the conditions needed to produce five times the fusion yield of the previous record for similar laser-fusion experiments. This brings it in line with results from the much larger National Ignition Facility (NIF) at Lawrence Livermore National Laboratory, if the conditions are scaled up to match. The NIF actually uses a different method for causing fusion, called indirect-drive fusion: instead of the lasers being aimed directly at the fuel, the laser light is converted to X-rays using a special gold enclosure, and those X-rays are what pump energy into the fuel pellet.
While ignition, when more energy is produced than is used, has not yet been achieved, these researchers and those at Lawrence Livermore are making significant strides towards that goal. In this experiment, that included better aiming the 60 laser beams involved onto the millimeter-sized fuel pellet, improving the target's shell, and capturing images of the pellet as it implodes, for the purpose of improving future experiments.
Source: University of Rochester
Posted: August 24, 2016 01:58PM
As is now almost always the case with large AAA titles, the upcoming Battlefield 1 will have extra content added post-launch, and you can purchase it early with its Premium Pass. This pass will include four expansion packs and two-week early access to each. You will be able to play as two new armies: France, in the They Shall Not Pass expansion, and the Russian Empire in a different expansion. The expansions will also add 16 multiplayer maps, Operations and game modes, elite classes, 20 weapons, and vehicles. There will also be 14 Battlepacks with weapon skins delivered monthly from November 2016, and 14 unique dog tags distributed over the Premium Pass period.
You can get additional details and release dates from the Battlefield website. Battlefield 1 releases October 21 for PC, Xbox One, and PlayStation 4. It is possible to get early access by pre-ordering the Early Enlister Deluxe Edition, which will get you in on October 18 (there are some conditions that apply to this though), while Origin Access (PC) and EA Access (Xbox One) membership will let you in starting October 13.
Posted: August 15, 2016 11:25AM
Microscopes are an amazing tool and have been ever since the first one was created, but we have been running up against their limits. Because of the diffraction limit of light, optical microscopes cannot resolve objects smaller than 200 nm, which is the size of the smallest bacteria, but they can be given a helping hand. Researchers at Bangor University have created a new superlens that allows smaller objects and shapes to be seen, even the patterns on Blu-ray discs.
These superlenses are made of nanobeads, which can be found all around us, even if you are not aware of them. They are used in some paints and in sunscreen, and for the superlenses titanium dioxide (TiO2) beads are used. The reason for these specific beads is their high refractive index and the way a sphere of them breaks up a light beam. The nanobeads are deposited as a droplet containing millions of them onto the subject being put under the microscope, and the beads refract the light in a way that creates millions of individual beams. These light beams are what an optical microscope can capture and use to resolve details previously invisible.
Using these nanobead superlenses can increase the magnification of a microscope by a factor of five, which should be enough to reveal germs and viruses. Not too bad considering these nanobeads are actually fairly cheap and readily available.
Source: Bangor University
Posted: August 11, 2016 12:39PM
Windows can make a tremendous difference in a room by letting some natural light in, but there are times you want to cut down on the brightness. Obviously shades and blinds can be used to achieve this, but not all situations allow for such solutions. An alternative is to create windows that can shade themselves, such as those MIT researchers have developed.
Self-shading windows already exist and are actually used on Boeing's 787, so that a flip of a switch can cut down on the light coming in. The catch with these windows is that they take a few minutes to transition from clear to a dark green. The new MIT windows change much faster and can actually go opaque. Both the new windows and the 787 windows are electrochromic windows, which work by having an electrical current applied to them. This current negatively charges the windows, so positive ions have to move through the window to return electrical balance, and it is these moving ions that shade the windows. In the 787 windows, the ions move slowly, thus making the transition slow, while the MIT windows contain metal-organic-frameworks (MOFs) that are able to carry electrons and the positive ions at high speeds.
Two other advantages of the MIT windows are that they actually become opaque instead of just a dark shade, and that only the transition requires a current to be applied. Once the window is made clear or opaque, the current can be stopped and will not be needed again until one wants the window to transition back. This is obviously important as it cuts down on how much power these windows need to operate.
Posted: August 10, 2016 10:50AM
At the end of their lives, giant stars collapse under their own gravity, resulting in a massive burst of radiation and matter called a supernova. Without these extraordinary events, many heavy elements and isotopes would not be present in the Universe outside of stellar cores. Researchers from the Technical University of Munich have discovered the first time-resolved signal from a supernova on Earth, showing our planet has actually travelled through the remnants of a dead star.
The evidence comes in the form of the radioisotope Fe-60, which cannot be produced by any natural, terrestrial mechanism, so its discovery points to supernova material falling on Earth. This is not actually the first time such evidence has been found, but the previous discovery had poor temporal resolution, meaning we could not determine when the material arrived. This new discovery can be pinned down to starting 2.7 million years ago, peaking around 2.2 million years ago, and finally dying off about 1.7 million years ago. For approximately one million years, the Solar System passed through the debris of a supernova.
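What makes this dating possible at all is that Fe-60 decays slowly; with a half-life of roughly 2.6 million years (a commonly cited figure, treated here as an assumption), a substantial fraction of the deposited material still survives today. A quick sketch of the survival calculation:

```python
FE60_HALF_LIFE_MYR = 2.6   # assumed half-life of Fe-60, in millions of years

def surviving_fraction(age_myr, half_life_myr=FE60_HALF_LIFE_MYR):
    """Fraction of an Fe-60 deposit that has not yet decayed after `age_myr`."""
    return 0.5 ** (age_myr / half_life_myr)

for age in (1.7, 2.2, 2.7):   # the signal's end, peak, and start
    print(f"deposited {age} Myr ago: {surviving_fraction(age):.0%} remains")
```

Roughly half of even the oldest deposits remains, which is why the isotope can still be counted atom by atom in ocean sediments.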
This iron isotope was found within microfossils of iron-sequestering bacteria that lived in the ocean. After the bacteria died, sediment built up at a constant rate, preserving the temporal shape of the Fe-60 signal.
The likely source of the iron is a supernova from the Scorpius-Centaurus OB association. At 2.3 million years ago it was just 300 light years away, definitely close enough for us to pick something up from it. We are also within part of it called the Local Bubble, a largely matter-free cavity carved out when 15 to 20 supernovae pushed matter away some 10 to 15 million years ago.
Source: Technical University of Munich
Posted: August 9, 2016 01:03PM
For years now, organic light emitting diodes (OLEDs) have promised us more efficient and potentially cheaper displays that also offer better color reproduction and contrast. So far, though, OLED displays have been all but restricted to certain smartphones, with LCD screens beating them out at larger scales. Thanks to researchers at Harvard University, MIT, and Samsung, that could change in the future.
In any modern display, each pixel is made of smaller sub-pixels that emit red, green, and blue light. By varying how much light is emitted by each sub-pixel, any other color can be produced. The problem with OLEDs has been that the blue sub-pixels are often inefficient at producing blue light. To compensate, manufacturers use organometallic molecules that contain expensive transition metals. To remove these metals and thereby cut costs, the researchers created an advanced machine learning algorithm to analyze and model over one million candidate molecules. The best 2500 of these were then given to experimental collaborators, who assessed their potential via a web application.
In the end the team had hundreds of molecules that should perform as well as or better than the best metal-free OLEDs known of today. Being completely organic, OLEDs made with these molecules could potentially be cheaper and thus easier to produce at the large sizes of televisions. This approach can also be applied to find organic molecules for other applications, such as flow batteries, solar cells, and organic lasers.
Source: Harvard University
Posted: August 8, 2016 12:51PM
Quantum computers are not here yet, but not for lack of trying. Instead of relying on electronic bits that can represent 0 or 1, quantum computers use qubits that can be 0 and 1 at the same time, but the medium for these qubits is still being decided on. One promising candidate is to use ions as qubits, and now researchers at MIT have created a prototype chip that allows for better control over them.
At the core of how quantum computers work is the quantum mechanical phenomenon of superposition, which is when a particle exists in mutually exclusive states at the same time, such as spinning both clockwise and counterclockwise. These particles are the quantum bits, or qubits, and while there are several options for what exactly they are, ions are likely the best understood of the choices. The catch is that ions can require large and complicated equipment to work with. For starters, the ions have to be held in a trap, and while cage traps, with electrodes arranged like cage bars, work well, they are limited in size, and realizing quantum computers will require large numbers of qubits. To that end, the MIT researchers are instead working with surface traps, which have their electrodes covering a surface and the ions held slightly above them. In theory, surface traps can be extended indefinitely.
Another issue the MIT researchers have addressed is how to control the ions. In a surface trap the ions can be just five micrometers apart, so hitting just the one you want with a laser from an optical table is very difficult. The solution here was to put a layer of glass and a network of silicon nitride waveguides underneath the electrodes. Beneath holes in the electrodes are diffraction gratings within the waveguide, which direct the light up into the holes and focus it enough to hit single ions.
The next step for this work is to hopefully add light modulators to the diffraction gratings. This will make it possible to control how much light each ion qubit will receive, making it more efficient to program them.
Posted: August 3, 2016 01:21PM
Batteries are a critical component of a great deal of today's technologies, so there is an ever-growing need to improve upon them. Achieving that improvement is difficult, though, because we need to use materials with the correct chemical properties without sacrificing other important characteristics. Now, researchers at the University of California, Riverside have discovered that two candidates for anodes in future lithium-ion batteries can be combined to great effect.
Modern lithium-ion batteries rely on graphite for their anodes, because it performs the necessary chemistry and is resilient to the damage batteries can endure. Tin and silicon have also been suggested as possible anodes, because some of their properties are better than graphite, but others are worse. For example, silicon is somewhat fragile and repeated use can cause the anode to fracture, leading to a loss of performance and serious damage to the battery. What the researchers have discovered is that combining these two materials can create a new anode with the best of both worlds, essentially.
This new anode can hold three times the charge traditional graphite can, is stable over a great many charge-discharge cycles, and can even charge very quickly. It also can be produced with a simple manufacturing process, which can help keep prices down.
Posted: August 2, 2016 01:47PM
Graphene is a very interesting material with several curious characteristics, some of which come from the material being just one atom thick. Ever since its discovery, researchers have been working to better understand graphene and to make other atom-thick materials, which could have useful properties for electronics. Now researchers at the Moscow Institute of Physics and Technology, Rice University, and other institutions have crafted a theory to predict what it will take to produce graphene-like structures from salts.
Many salts, including common sodium chloride, have a cubic crystal structure and ionic bonds between the atoms. It has been predicted, and even observed in some salts, that once they are comprised of few enough layers, they will spontaneously transform into a graphene-like structure, a process called graphitization. These predictions and observations have so far been limited to certain materials, but by leveraging the power of computer simulations, the researchers have created a general theory of how graphitization occurs. It is now possible to predict the critical number of layers at which a salt made from one of the four alkali metals and a halogen will undergo graphitization. For sodium salts the number is 11 layers, and for lithium salts it is between 19 and 27.
If this theory is proven accurate experimentally, it could open up a new route to producing ultrathin films and these films could have desirable properties for electronics. The researchers are also going to investigate other compounds, to see if more materials will undergo graphitization, resulting in new and intriguing properties.
Posted: August 1, 2016 12:45PM
If you have had to pay an electric bill, especially a high one, you have probably asked yourself where all of the power is going, in the hope of cutting down. Answering this question is harder than you might think because of what is involved in actually getting the data, but researchers at MIT are working to change that. They have created a monitoring device about the size of a postage stamp that can be zip-tied onto a power cord, where it makes measurements from the electric and magnetic fields around it.
Wireless devices like this have been developed before, but they have had the cumbersome requirement of being carefully aligned on the cord. This MIT device solves that by having five offset sensors and software that selects the one getting the best signal. The data it captures can then be analyzed to determine how much power a device is using and whether there are any anomalies. During one of its tests it actually identified a wiring problem in a house that was putting a potentially dangerous voltage on a copper pipe. This analysis is not done remotely; all of the data can remain at the user's home, abating any privacy concerns as well as bandwidth issues, considering how much raw data is collected.
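The sensor-selection step can be sketched simply: score each sensor's latest window of samples and keep the strongest. (Peak-to-peak swing is a stand-in metric here; the actual selection criterion used by the MIT software is not described.)

```python
def best_sensor(readings):
    """Given one window of samples per sensor, pick the index of the sensor
    whose signal has the largest peak-to-peak swing."""
    def peak_to_peak(samples):
        return max(samples) - min(samples)
    return max(range(len(readings)), key=lambda i: peak_to_peak(readings[i]))

# Five offset sensors; sensor 2 happens to be best aligned with the field.
windows = [
    [0.01, -0.02, 0.01],   # weak
    [0.10, -0.09, 0.11],
    [0.80, -0.75, 0.82],   # strongest swing
    [0.30, -0.28, 0.31],
    [0.05, -0.04, 0.06],
]
assert best_sensor(windows) == 2
```

Because the choice is made in software, the user can zip-tie the device onto the cord at any angle and let the firmware find the usable channel.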
The hope for this device is that it will become cheap enough to produce, and the applications for analyzing its data common enough, that anyone will be able to optimize the power usage within their home. This could come from adjusting habits or replacing particularly inefficient appliances with better models.
Posted: July 29, 2016 01:57PM
For our never-ending drive for faster computers and connections to be satiated, at least temporarily, it will become necessary to turn to new methods of communication. Already we have seen optical systems deployed to accelerate the Internet and more, but bringing similar systems into our computers has been proving very difficult. Thanks to researchers at the University at Buffalo though, we are closer to this high-speed future than ever before.
Light travels far faster than the electrical currents within our computers, so using it instead of electrons to connect the various chips and components in our systems could give a sizeable boost. But the devices for producing optical signals are sometimes too large to fit into computer chips, and we are approaching the limit of how much information can be packed into an optical signal. The Buffalo researchers, however, have made an important advance by shrinking down vortex lasers enough to be compatible with computer chips.
Vortex lasers use light's orbital angular momentum to cause the light waves to twist in a corkscrew shape, with a vortex at the center. Multiple corkscrews can be fit together in the same area without crossing each other, so these lasers are able to transmit up to ten times more information than a more traditional laser. Obviously such a device could go a long way in enhancing the performance of our computers and networks, and comes at a good time as we approach some fundamental limits in current technologies.
Source: University at Buffalo
Posted: July 28, 2016 01:26PM
As simple as hydrogen may be, with one proton and one electron, it is a very unusual material that can express some interesting properties under the right conditions. One example is that solid hydrogen can enter a metallic state and act as a superconductor, but solid hydrogen requires very cold temperatures to form. Naturally researchers have been working on ways to warm things up, and those at the Carnegie Institution for Science have recently made a couple of important discoveries to that end.
While pure hydrogen will only solidify below a specific critical temperature, combining this element with one of the alkali metals below it on the periodic table, such as lithium, sodium, or potassium, could create hydrogen-rich compounds whose properties can be altered. These compounds could then be made to act as superconductors while still surviving at more practical temperatures. The researchers have now confirmed these predictions by creating a sodium/hydrogen material. Of course producing this material was far from easy, requiring temperatures around 2000 K (3100 ºF) and pressures between 300,000 and 400,000 times atmospheric pressure (30 to 40 gigapascals). Now that we have the ability to make the material, though, we have a good start for finding easier methods of producing it.
When analyzed, the compound was seen to contain two different so-called polyhydrides, one being NaH3 and the other NaH7. The latter molecule was especially interesting to see, because three of the hydrogen atoms in it lined up to form a one-dimensional chain. This arrangement was first predicted in 1972 and represents a new phase quite distinct from normal hydrogen.
Source: Carnegie Institution for Science
Posted: July 25, 2016 12:50PM
While one of the current trends for home entertainment is VR, at theaters we are still seeing 3D movies being projected and still having to put on special glasses to enjoy it. Thanks to some researchers at MIT though, the glasses might not be necessary in the future. They have developed a glasses-free 3D solution that could one day be scaled up for theaters.
Glasses-free 3D already exists, including in televisions that utilize parallax barriers to project specific images to each eye. The catch with this approach is the need for the viewer to stay a constant distance away, which is not viable for a movie theater. Another solution uses multiple projectors to cover the angular range of an audience, but its failing is the relatively low resolution of the images it produces. This new solution solves these problems by taking advantage of something central to movie theaters: seats. Movie-goers sit in specifically placed seats, so their range of movement is limited, which the MIT researchers exploit in their system by displaying only the necessary, narrow range of angles to each seat. This is accomplished using mirrors and lenses; the prototype uses 50 sets of them and is just larger than a pad of paper.
The design can be scaled up for use in movie theaters, billboards, and more, but for now the question remains whether it is financially feasible to go to those sizes.
Posted: July 21, 2016 02:08PM
When the topic of quantum mechanics comes up, most people likely envision particles and waves on the nearly infinitesimal stage of single atoms. While this is typically where one can look and find the counter-intuitive phenomena of quantum mechanics, the truth is the effects are not limited to that small stage. Researchers at MIT have analyzed data concerning neutrinos that travelled some 456 miles (735 km) and found they maintained a superposition of states throughout the trip.
Superposition is the very counter-intuitive phenomenon that allows a particle to exist in multiple mutually exclusive states at the same time; by analogy, a coin tossed in the air could rotate in both clockwise and counter-clockwise directions. In this case the coin is a particle known as a neutrino, a great many of which are made at a facility near Chicago, with some travelling to a detector in Soudan, Minnesota, 456 miles away. Neutrinos come in multiple flavors, and this is what the MIT researchers decided to look at with a modified form of the Leggett-Garg inequality. This inequality is used to determine whether a system is acting in a quantum mechanical or classical way, because correlations between measurements of the system will differ depending on the system's behavior. What the researchers found is that the distribution of neutrino flavors falls within the predicted range for a quantum system, which had almost no overlap with the range for a classical system.
This discovery represents the greatest distance over which quantum mechanics has ever been observed, and it is perhaps not surprising that it involves neutrinos. These particles very rarely interact with the environment and travel at relativistic speeds, near the speed of light. This causes time to dilate for them so much that, from their perspective, the trip lasts for a brief instant, further protecting their quantum state.
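The dilation at work here is the standard Lorentz factor from special relativity:

```latex
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}, \qquad \Delta\tau = \frac{\Delta t}{\gamma}
```

As a particle's speed $v$ approaches the speed of light $c$, the factor $\gamma$ grows without bound, so the proper time $\Delta\tau$ the neutrino experiences over the 735 km trip shrinks toward zero, leaving almost no time for anything to disturb its quantum state.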
Posted: July 20, 2016 01:26PM
Every day humanity generates more and more information that it wants to keep on various media, including hard disks and flash drives. To keep up with the demand for more storage, storage density has to increase, and now researchers at Delft University of Technology and the Kavli Institute of Nanoscience have successfully encoded bits onto single atoms. This allows 500 terabits to be stored in a square inch, which would be enough to contain every book humanity has written on a postage stamp.
In order to encode the data onto single atoms, the researchers turned to a scanning tunneling microscope, which has a sharp needle to probe atoms on a surface one at a time. This allowed them to precisely move chlorine atoms along a copper surface into one of two positions. When a hole is beneath an atom, the researchers read it as a 1, and when the hole is above the atom, it is read as a 0. Because the chlorine atoms are surrounded only by other chlorine atoms and these holes, this method is more stable for storing data than others that rely on loose atoms. Additionally, the researchers arranged the memory into blocks of eight bytes, and each block has a marker. These markers are similar to QR codes and contain information about the memory block, identifying whether the block is damaged for some reason.
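As a rough software picture of the scheme (the names here are illustrative, not the Delft control code), each memory cell is a pair of sites holding one chlorine atom and one hole, and the bit is read from which site the hole occupies:

```python
def encode(bits):
    # One cell per bit: a chlorine atom plus a hole in a two-site pair.
    # Hole beneath the atom reads as 1; hole above the atom reads as 0.
    return [("Cl", "hole") if b else ("hole", "Cl") for b in bits]

def decode(cells):
    # Recover each bit by checking which site the atom occupies.
    return [1 if above == "Cl" else 0 for above, below in cells]
```

In the real device this read-out is a scanning pass of the microscope tip, not a list lookup, but the mapping between positions and bits is the same.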
As you might expect with the manipulation of single atoms, this method is not ready for any commercial system. It requires a very clean vacuum environment and temperatures around that of liquid nitrogen. This is still an important step towards such a goal, but there is still a lot to do.
Source: Delft University of Technology
Posted: July 18, 2016 10:38AM
For millennia humanity has fantasized about the ability to become invisible, but only relatively recent advances in science are actually making it possible. Typically these efforts are focused on hiding from free space waves, but in many practical applications surface waves also need to be considered. To that end, researchers at Queen Mary University of London have found a way to hide a curved surface from surface waves.
This new cloak works by depositing a nanocomposite medium onto the subject surface. This medium consists of seven layers, making the material a graded-index nanocomposite, and the electric properties of each layer depend on its position. The result prevents incoming surface waves from noticing the curves on the surface, so they instead continue moving as though the surface were flat.
While this discovery will not directly lead to any ability to cloak a person, it could be used to enhance antennas, by allowing them to take on different shapes and sizes. It can also allow antennas to be attached in weird places and to a wide variety of materials. The potential for this discovery is greater than just better antennas though, as it could be applied to work with other phenomena that can also be described as waves.
Source: Queen Mary University of London
Posted: July 12, 2016 01:42PM
A vulnerability was discovered in the Tor network last year that could allow this anonymity-protecting system to be compromised. Fortunately, this year a new system called Riffle has been developed by researchers at MIT and EPFL that can guarantee messages sent on Riffle are safe, so long as at least one server has not been compromised by an attacker.
Like Tor, Riffle uses onion encryption, which wraps data in multiple layers of encryption; as a package moves through the network, each server removes a layer. In the end, only the final server knows the data's destination and only the first server knows where it came from. What Riffle adds to this, to further protect the messages, is making the network of servers a mixnet. In a mixnet, each server permutes the order of the messages it receives, so if they arrived A, B, C, they will leave C, B, A, or some other permutation, and every server a message passes through does this. This protects against a passive attacker that is just observing network traffic, but on its own does not protect against an attacker that has compromised a server.
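A toy sketch of those two ideas, using XOR as a stand-in for real onion encryption (every name here is illustrative, not Riffle's actual protocol): the sender wraps a message in one layer per server, and each server peels its own layer and shuffles the batch before forwarding:

```python
import os
import random

def xor(data, key):
    # Stand-in cipher for illustration only; real onion routing uses
    # public-key and symmetric encryption, not raw XOR.
    return bytes(b ^ k for b, k in zip(data, key))

def onion_wrap(message, server_keys):
    # The sender applies the last server's layer first, so the first
    # server's layer ends up outermost.
    for key in reversed(server_keys):
        message = xor(message, key)
    return message

def mixnet_route(batch, server_keys):
    # Each server peels its own layer, then permutes the batch so an
    # observer cannot match incoming messages to outgoing ones.
    for key in server_keys:
        batch = [xor(p, key) for p in batch]
        random.shuffle(batch)
    return batch
```

After the full route, the plaintexts come out intact but in a scrambled order, which is exactly the property the mixnet relies on.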
To solve that issue, the researchers employ what is called a verifiable shuffle. Because of the layered encryption used for each message, what arrives at and leaves a server looks completely different, but the server can be made to generate a mathematical proof that the messages it sent are valid manipulations of the ones it received. Instead of relying only on the messages a server received, though, Riffle has the initial message sent to every server in the mixnet, allowing each server to independently check for any tampering. Generating and checking proofs is not cheap, and could significantly slow down the entire network for even one message, so Riffle also uses what is called authenticated encryption. It is more efficient than the verifiable shuffle, but requires the sender and receiver to share a cryptographic key, so Riffle uses the verifiable shuffle technique to share a key between the user and every server, while authenticated encryption is used for the rest of the communication.
This combination of techniques means the system stays secure so long as at least one server in the entire network has not been compromised. That lone server can verify the authenticity of the messages and shuffle them so they cannot be tracked.
Posted: June 21, 2016 11:51AM
For several years now, processors have been gaining cores to improve performance, as more cores means more processes can run at the same time. Some software has also been written to utilize multiple cores at once, accelerating its operation, but writing such parallel programs is not a simple endeavor. To address this issue and improve the performance of parallel programs, researchers at MIT have developed a new chip design called Swarm.
Part of why it is difficult to make a program run on multiple cores in parallel is that it requires breaking the program up into multiple tasks and explicitly enforcing synchronization between these tasks, as they access shared data. What makes Swarm different is that it handles the synchronization itself, making the programming significantly easier. In fact, the researchers created six Swarm versions of common algorithms and, in general, the Swarm programs were a tenth the size, and would run three to 18 times faster. In one case the algorithm ran 75 times faster.
Swarm manages this synchronization on its own by time-stamping tasks based on their priority, which is something the programmer has to set. This allows the chip to automatically decide whether a task can write to the shared memory, protecting higher priority tasks from getting incorrect data. Currently this kind of synchronization has to be implemented by programmers, but now the chip will handle it and has circuitry built in for maintaining the queue of tasks.
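A minimal software model of the idea (a sketch under assumed names, not MIT's hardware): tasks carry programmer-assigned timestamps, and executing them in timestamp order means a higher-priority task can never observe a lower-priority task's write:

```python
import heapq

class SwarmModel:
    def __init__(self):
        self._queue = []  # min-heap keyed on timestamp

    def spawn(self, timestamp, addr, value):
        # Tasks may be created in any order; the heap keeps them prioritized.
        heapq.heappush(self._queue, (timestamp, addr, value))

    def run(self):
        memory = {}
        # Draining the heap in timestamp order enforces the synchronization
        # the programmer would otherwise have to write by hand.
        while self._queue:
            _ts, addr, value = heapq.heappop(self._queue)
            memory[addr] = value
        return memory
```

The real chip does this speculatively and in parallel, rolling back tasks that turn out to have read stale data, but the timestamp ordering is the core contract.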
Posted: June 20, 2016 09:03AM
Researchers at the University of California, Davis appear to have set some new records, as they have designed the first processor with 1,000 independent cores. It also has the highest clock rate of any processor designed at a university and is so efficient it can execute 115 billion instructions a second while dissipating only 0.7 W. That makes it over 100 times more efficient than the processor in a laptop.
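The efficiency figure can be sanity-checked with simple arithmetic on the numbers quoted above:

```python
# 115 billion instructions per second on 0.7 W of power.
instructions_per_second = 115e9
power_watts = 0.7

# One watt is one joule per second, so this is instructions per joule.
instructions_per_joule = instructions_per_second / power_watts
print(round(instructions_per_joule / 1e9))  # ~164 billion instructions per joule
```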
This processor consists of 621 million transistors and has an average maximum clock speed of 1.78 GHz. The clock speed is quoted as an average because the cores are independently clocked, allowing some to be shut down to conserve power. Each core can also run its own program separately from the others, which allows a larger application to be broken into smaller pieces, with each piece running in parallel. The cores can also transfer data directly to each other, instead of relying on a cache to store the data for other cores to access.
With 1000 cores, this processor is meant for processing large amounts of data in parallel, as is needed for scientific data operations and datacenter record processing, but the researchers have also built applications for wireless coding and decoding, video processing, and encryption. The chip was fabricated by IBM using its 32 nm CMOS process.
Source: University of California, Davis
Posted: June 17, 2016 01:38PM
There are a whole host of things that can be made by 3D printing, but what most people are printing does not really push a printer's limits. Interested in pushing the technology, researchers at MIT decided to experiment with printing hair and found it to be a rather interesting challenge, though not with the hardware. The software driving 3D printers would crash before it could finish, but the researchers developed a clever solution.
With current systems, 3D printing hair would require modelling every single strand in CAD, and then a slicer program would have to recreate each hair's contour as a mesh of triangles. From that, horizontal cross sections would need to be generated from the triangles, and those cross sections would then have to be made into bitmaps for the printer to work with. For an area the size of a postage stamp covered with 6,000 hairs, that process would take several hours to complete. The MIT researchers' solution was to develop a new piece of software that models the array of hair in a very different way. First they modelled a single hair as an elongated cone, so the hair's height, angle, and width could be manipulated by changing the pixels of the cone. To scale this up to thousands of hairs, the researchers created a color-mapping technique, where red, green, and blue represent the parameters height, width, and angle. So a circular patch of hair with taller strands at the rim would look like a red circle with darker hues at the rim. An algorithm can then take this color map and quickly generate a model for the 3D printer to work with.
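A sketch of that color mapping (parameter names and scales are assumptions for illustration, not the MIT software): each channel of a pixel, 0-255, scales one cone parameter:

```python
def pixel_to_hair(red, green, blue,
                  max_height_mm=5.0, max_width_mm=0.5, max_angle_deg=90.0):
    # Red encodes height, green encodes width, blue encodes angle,
    # matching the mapping described above. Maximum values are made up.
    return {
        "height_mm": red / 255 * max_height_mm,
        "width_mm": green / 255 * max_width_mm,
        "angle_deg": blue / 255 * max_angle_deg,
    }
```

Running this per pixel over a bitmap turns a plain image into a full description of thousands of strands, which is what makes the approach so much faster than meshing each hair.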
While this approach for 3D printing hair could be used to print wigs and such, the researchers envision it being used to create hair for the more useful tasks of sensing, adhesion, and actuation. Of course, now that this new technique exists for 3D printing, others could find some even more interesting applications.
Posted: May 18, 2016 11:15AM
For the thousands of years humanity has been combining metals into various alloys, we have accepted that there is a tradeoff between strength and ductility: making a metal stronger leaves it more susceptible to fracturing when deformed. This belief has also extended to high entropy alloys, but researchers at MIT borrowed a property of steel and overcame the tradeoff.
Steel is a tremendously ubiquitous alloy, thanks to its strength and the great many variants it actually represents. By altering the components that are mixed together to form steel, its properties can be tailored to fit specific needs. Many steels, and alloys in general, possess stable single-phase microstructures, meaning their atoms have one specific crystal structure they most prefer, but some advanced steels have metastable phases. This means the steel has multiple stable internal structures, so when it is under stress these phases can transition to more stable ones, making the alloy more resilient to fractures.
High entropy alloys, or HEAs, are alloys made from multiple metals in roughly equal proportions, unlike traditional alloys which are dominated by one material. In theory they could be very strong and light, but have so far been subject to the strength and ductility tradeoff. The MIT researchers decided to borrow from those advanced steel alloys and produce HEAs with metastable configurations. The result was an alloy made of iron, manganese, cobalt, and chromium that surpasses the best single-phase HEA.
This discovery should have great applications in the future, as this strategy can be used to create many other alloys with the best of both worlds.
Posted: May 10, 2016 11:54AM
In general, simplifying processes is important for making them more common, as fewer steps can speed things up and reduce costs. When chemistry is involved though, it can be very difficult to remove certain steps, and in some cases impossible. Luckily researchers at Berkeley Lab have succeeded in greatly simplifying the process of creating advanced biofuels.
What has been created here is a one-pot method for making these biofuels, which means the ingredients can be loaded into a chemical reactor and the output will be the desired end result, instead of an intermediary material. Normally, producing these biofuels requires ionic liquids, which are salts that are liquid at room temperature and work as solvents. These are needed to break apart the cellulose, hemicellulose, and lignin of the plant matter before enzymes can get at the resulting sugars to produce biofuels. The issue has been that the ionic liquids damage the enzymes, so it has been necessary to add a step to remove them. What the Berkeley researchers have done is mutate a gene in E. coli to make it resilient to ionic liquids, and also give it the ability to create similarly resilient enzymes for producing biofuels.
Part of what makes this discovery important is that this strain of E. coli can be used as the base for other technologies to create advanced biofuels, such as jet fuel or its precursors.
Source: Berkeley Lab
Posted: May 9, 2016 09:25AM
Security software like antiviruses and parental controls are meant to keep our computers safe, but researchers at Concordia University have found that some are actually making us more vulnerable. This is because of how the software sets up a TLS proxy to filter out unwanted content.
To keep a computer from visiting websites that are either dangerous or something a parent does not want a child to see, a piece of software can be installed that will monitor this traffic and block those sites. In some cases the software works by checking the domain name, but in other cases it establishes a proxy and checks the certificate for the website. As a browser will also check such certificates, the software has its own certificate to pass on to the browser, and this is where the vulnerability lies. Because the browser is left to assume the software's certificate is valid, anything signed with that certificate will get through, and the software itself can be vulnerable to other attacks. For example, if the certificate the software uses is pre-generated and static, then every user can be attacked the same way.
The researchers tested 14 pieces of software, and found each one reduced the TLS security of the system they were installed on. In one case, an antivirus left users open to attack because after its license expired it ceased to check certificates, and also stopped receiving updates, and one of the parental control applications left its pre-generated certificate on the computer, even after it was uninstalled. This means that any traffic that used that certificate would be seen as trusted by the computer.
The researchers have contacted and reported their findings to the software manufacturers, so hopefully these issues can be addressed. They also suggest new guidelines for handling TLS proxies be developed, so as to prevent vulnerabilities from being added when a user is trying to, or required to, secure their computer.
Source: Concordia University
Posted: May 6, 2016 12:05PM
Since its discovery, researchers have been finding more and more applications for graphene, a form of carbon. With its amazing mechanical and electrical properties, it is hardly surprising that so many have been found. Now researchers at Chalmers University of Technology have discovered an effective means of cooling electronics and other devices.
Graphene is an atom-thick sheet of carbon with many promising properties, including great electrical and thermal conductivity. These are both reasons why we may see it used in various devices, and the Chalmers researchers have found how to enhance its thermal properties. By adding functionalized molecules to graphene nanoflakes, the researchers were able to improve a graphene film's heat-transfer efficiency by 76%. This is because of how phonons, the quanta of heat, are constrained by the functional layer, and it could lead to a means of controlling the heat of electronic and optoelectronic systems. That control could in turn be used to better cool such devices, allowing for greater performance.
Posted: May 6, 2016 06:04AM
I am not sure if this was in any work of science fiction first, but it does leave me feeling like science fiction is becoming reality. Researchers at Carnegie Mellon University have developed a way to effectively turn the skin on your arm and hand into a touchscreen. This would allow someone to interface with a smartwatch, or other devices, without having to cover the small screen with their fingers.
Other 'skin to screen' tracking systems have been developed before, but those required special overlays, textiles, or combinations of projectors and cameras to work, which is hardly ideal. This system, called SkinTrack, uses pairs of electrodes and a ring worn on the finger. The ring emits a low-energy, high-frequency signal that propagates across the skin when the finger touches it, or is just close to it. The electrodes pick up this signal and can triangulate the finger's location from differences in the signal's phase. Different locations and movements can then be used as various inputs for a device, such as scrolling, highlighting buttons, and even hitting shortcuts.
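A toy one-dimensional version of the phase-difference idea (assumed numbers and names; the CMU system solves the harder two-dimensional problem): the phase offset between two electrodes maps to a path-length difference, which places the finger along the line between them:

```python
import math

def locate_finger(phase_a, phase_b, frequency_hz, wave_speed_m_s, gap_m):
    # Convert the phase difference between the two electrodes into a
    # path-length difference, then shift the estimate away from the
    # midpoint of the gap by half of it.
    wavelength = wave_speed_m_s / frequency_hz
    path_difference = (phase_a - phase_b) / (2 * math.pi) * wavelength
    return gap_m / 2 + path_difference / 2
```

Equal phases at both electrodes put the finger at the center of the gap; any offset slides the estimate toward the electrode the signal reached first.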
The researchers found the system could identify when the finger was touching with 99% accuracy and had an error of 7.6 mm for the location of the finger. Over time this precision can drop though, as the conductivity of the skin can change due to sweat and hydration, as well as body movements. Another issue is that the ring will lose power eventually. Still, this is pretty cool and could definitely lead to advances for smartwatches and other pieces of advanced technology.
Source: Carnegie Mellon University