Science & Technology News (1237)
Posted: August 15, 2016 11:25AM
Microscopes have been an amazing tool ever since the first one was created, but we have been running up against their limits. Because of the diffraction limit of light, optical microscopes cannot resolve objects smaller than about 200 nm, roughly the size of the smallest bacteria, but they can be given a helping hand. Researchers at Bangor University have created a new superlens that allows smaller objects and shapes to be seen, even the patterns on Blu-ray discs.
These superlenses are made of nanobeads, which are objects we can find all around us, even if you are not aware of them. They are used in some paints and in sunscreen, and for the superlenses titanium dioxide (TiO2) beads are used. These specific beads were chosen for their high refractive index and for how a sphere of them will break up a light beam. The nanobeads are deposited as a droplet containing millions of them onto the subject being put under a microscope, and these beads refract the light in a way that creates millions of individual beams. These light beams are what an optical microscope can capture and use to resolve details previously invisible.
Using these nanobead superlenses can increase the magnification of a microscope by a factor of five, which should be enough to reveal germs and viruses. Not too bad considering these nanobeads are actually fairly cheap and readily available.
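As a rough sketch of what that factor of five means, the classical resolution floor can be estimated from the Abbe diffraction limit; the wavelength and numerical aperture below are assumed typical values, not figures from the study.

```python
# Back-of-the-envelope: the Abbe diffraction limit d = wavelength / (2 * NA),
# and the reported 5x gain from the nanobead superlens.
def abbe_limit_nm(wavelength_nm, numerical_aperture):
    """Smallest resolvable feature size for a conventional optical microscope."""
    return wavelength_nm / (2 * numerical_aperture)

conventional = abbe_limit_nm(550, 1.4)   # green light, high-NA oil objective
with_superlens = conventional / 5        # factor-of-five improvement reported

print(f"conventional limit: {conventional:.0f} nm")   # ~196 nm
print(f"with superlens:     {with_superlens:.0f} nm") # ~39 nm
```

At roughly 40 nm, features the size of viruses fall within reach, consistent with the claim above.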
Source: Bangor University
Posted: August 11, 2016 12:39PM
Windows can make a tremendous difference in a room by letting some natural light in, but there are times you want to cut down on the brightness. Obviously shades and blinds can be used to achieve this, but not all situations allow for such solutions. An alternative is to create windows that can shade themselves, such as those MIT researchers have developed.
Self-shading windows already exist and are actually used on Boeing's 787, so that a flip of a switch can cut down on the light coming in. The catch with these windows is that they take a few minutes to transition from clear to a dark green. The new MIT windows change much faster and can actually go opaque. Both the new windows and the 787 windows are electrochromic windows, which work by having an electrical current applied to them. This current negatively charges the windows, so positive ions have to move through the window to return electrical balance, and it is these moving ions that shade the windows. In the 787 windows, the ions move slowly, thus making the transition slow, while the MIT windows contain metal-organic-frameworks (MOFs) that are able to carry electrons and the positive ions at high speeds.
Two other advantages of the MIT windows are that they actually become opaque instead of just a dark shade, and that a current is only required during the transition. Once the window is made clear or opaque, the current can be stopped and will not be needed again until one wants the window to transition back. This is obviously important as it cuts down on how much power these windows need to operate.
Posted: August 10, 2016 10:50AM
At the end of their lives, giant stars will collapse under their own gravity, resulting in a massive burst of radiation and matter called a supernova. Without these extraordinary events, many heavy elements and isotopes would not be present in the Universe outside of stellar cores. Researchers from the Technical University of Munich have discovered the first time-resolved signal from a supernova on Earth, showing our planet has actually travelled through the remnants of a dead star.
The evidence comes in the form of the radioisotope Fe-60, which cannot be produced by any natural, terrestrial mechanism, so its discovery points to supernova material falling on Earth. Actually this is not the first time such evidence has been found, but the previous discovery had poor temporal resolution, meaning we could not determine when it was from. This new discovery can be pinned down to starting 2.7 million years ago, peaking around 2.2 million years ago, and finally dying off about 1.7 million years ago. For approximately one million years, the Solar System passed through the debris of a supernova.
This iron isotope was found within microfossils of iron-sequestering bacteria that lived in the ocean. After the bacteria died, sediment built up at a constant rate, preserving the temporal shape of the Fe-60 signal.
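Because Fe-60 is radioactive, the abundance measured in the sediment today must be corrected for decay to recover what was originally deposited. A minimal sketch, assuming a half-life of roughly 2.6 million years (a literature value, not stated in the article):

```python
# Correct a measured Fe-60 abundance for radioactive decay to recover the
# amount originally deposited: N0 = N_measured * 2**(age / half_life).
FE60_HALF_LIFE_MYR = 2.6  # assumed literature value

def original_abundance(measured, age_myr, half_life_myr=FE60_HALF_LIFE_MYR):
    """Scale a present-day measurement back to the deposition-time amount."""
    return measured * 2 ** (age_myr / half_life_myr)

# Material deposited at the 2.2-million-year peak has lost almost half its Fe-60:
factor = original_abundance(1.0, 2.2)
print(f"decay-correction factor at 2.2 Myr: {factor:.2f}")  # ~1.80
```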
The likely source of the iron is a supernova from the Scorpius-Centaurus OB association. At 2.3 million years ago it was just 300 light years away, definitely close enough for us to pick something up from it. We are also within a part of it called the Local Bubble, which is a largely matter-free cavity carved out by 15 to 20 supernovae pushing matter away some 10 to 15 million years ago.
Source: Technical University of Munich
Posted: August 9, 2016 01:03PM
For years now, organic light emitting diodes (OLEDs) have promised us more efficient and potentially cheaper displays that also offer better color reproduction and contrast. So far though, OLED displays have been all but restricted to certain smartphones, with LCD screens beating them out at larger sizes. Thanks to researchers at Harvard University, MIT, and Samsung though, that could change in the future.
In any modern display, each pixel is made of smaller sub-pixels that emit red, green, and blue light. By varying how much light is emitted by each sub-pixel, any other color can be produced. The problem with OLEDs has been that the blue sub-pixels are often inefficient at producing blue light. To compensate for this, manufacturers instead use organometallic molecules that also contain expensive transition metals. To remove these metals and thereby cut costs, the researchers created an advanced machine learning algorithm to analyze and model over one million molecule candidates. The best 2500 of these were then given to experimental collaborators to consider their potential via a web application.
In the end the team had hundreds of molecules that should perform as well as or better than the best metal-free OLEDs known of today. Being completely organic, OLEDs made with these molecules could potentially be cheaper and thus easier to produce at the large sizes of televisions. This approach can also be applied to find organic molecules for other applications, such as flow batteries, solar cells, and organic lasers.
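The funnel described above, scoring a huge candidate pool with a model and passing only the best few along for human review, can be sketched as follows; the scoring function is a random stand-in, since the actual learned model is not described here.

```python
import heapq
import random

# Hypothetical sketch of the screening funnel: a surrogate model scores each
# candidate molecule, and only the top 2500 go on to expert review.
def predicted_efficiency(molecule_id):
    """Stand-in for the learned model's predicted blue-emission efficiency."""
    return random.random()

candidates = range(1_000_000)  # the pool of over one million molecule candidates
top_2500 = heapq.nlargest(2500, candidates, key=predicted_efficiency)
print(len(top_2500))  # 2500 molecules forwarded to collaborators
```

`heapq.nlargest` evaluates the model once per candidate while only ever keeping 2500 entries in memory, which is why this shape of pipeline scales to a million-molecule pool.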
Source: Harvard University
Posted: August 8, 2016 12:51PM
Quantum computers are not here yet, but not for lack of trying. Instead of relying on electronic bits that can represent 0 or 1, quantum computers use qubits that can be 0 and 1 at the same time, but the medium for these qubits is still being decided on. One promising candidate is to use ions as qubits, and now researchers at MIT have created a prototype chip that allows for better control over them.
At the core of how quantum computers work is the quantum mechanical phenomenon of superposition, which is when a particle exists in mutually exclusive states at the same time, such as spinning both clockwise and counterclockwise. These particles are the quantum bits, or qubits, and while there are several options for what exactly they are, ions are likely the best understood of the choices. The catch is that ions can require large and complicated equipment to work with. For starters the ions have to be held in a trap, and while cage traps, with electrodes arranged like cage bars, work well, they are limited in size, and realizing quantum computers will require large numbers of qubits. To that end, the MIT researchers are instead working with surface traps, which have their electrodes covering a surface and the ions held slightly above them. In theory surface traps can be extended indefinitely.
Another issue the MIT researchers have addressed is how to control the ions. In a surface trap the ions can be just five micrometers apart, so hitting just the one you want with a laser from an optical table is very difficult. The solution here was to put a layer of glass and a network of silicon nitride waveguides underneath the electrodes. Beneath holes in the electrodes are diffraction gratings within the waveguides, which direct the light up into the holes and focus it enough to hit single ions.
The next step for this work is to hopefully add light modulators to the diffraction gratings. This will make it possible to control how much light each ion qubit will receive, making it more efficient to program them.
Posted: August 3, 2016 01:21PM
Batteries are a critical component of a great deal of today's technologies, so there is an ever growing need to improve upon them. Achieving that improvement is difficult though, because we need to use materials with the correct chemical properties without sacrificing other important characteristics. Now, researchers at the University of California, Riverside have discovered that two candidates for anodes in future lithium-ion batteries can be combined to great effect.
Modern lithium-ion batteries rely on graphite for their anodes, because it performs the necessary chemistry and is resilient to the damage batteries can endure. Tin and silicon have also been suggested as possible anodes, because some of their properties are better than graphite's, but others are worse. For example, silicon is somewhat fragile and repeated use can cause the anode to fracture, leading to a loss of performance and serious damage to the battery. What the researchers have discovered is that combining these two materials can create a new anode with, essentially, the best of both worlds.
This new anode can hold three times the charge traditional graphite can, is stable over a great many charge-discharge cycles, and can even charge very quickly. It also can be produced with a simple manufacturing process, which can help keep prices down.
Posted: August 2, 2016 01:47PM
Graphene is a very interesting material that has several curious characteristics, and some of them come from the material being just one atom thick. Ever since its discovery, researchers have been working to better understand graphene and to make other atom-thick materials, which could have useful properties for electronics. Now researchers at the Moscow Institute of Physics and Technology, Rice University, and other institutions have crafted a theory to predict what it will take to produce graphene-like materials from salts.
Many salts, including common sodium chloride, have a cubic crystal structure and ionic bonds between the atoms. It has been predicted, and even observed in some salts, that once they are comprised of few enough layers, they will spontaneously transform into a graphene-like structure. This process is called graphitization. These predictions and observations have so far been limited to certain materials, but by leveraging the power of computer simulations, the researchers have created a general theory of how graphitization occurs. Now it is possible to predict the critical number of layers at which a salt of one of the four alkali metals and a halogen will undergo graphitization. For sodium salts the number is 11 layers, and for lithium salts it is between 19 and 27.
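Read as a simple decision rule, the predictions amount to a per-cation layer threshold. A toy sketch (the values are only the figures quoted above, and the lithium entry conservatively uses the low end of the 19 to 27 range):

```python
# The thresholds quoted above, captured as a small lookup. Only the sodium and
# lithium figures appear in the article; anything else returns None.
CRITICAL_LAYERS = {
    "Na": 11,  # sodium salts graphitize at 11 layers or fewer
    "Li": 19,  # lithium salts: conservatively the low end of 19-27
}

def will_graphitize(cation, layers):
    """Rough check against the predicted critical layer counts (a sketch)."""
    threshold = CRITICAL_LAYERS.get(cation)
    if threshold is None:
        return None  # no figure quoted in the article
    return layers <= threshold

print(will_graphitize("Na", 9))   # True: thin enough to transform
print(will_graphitize("Na", 30))  # False: stays cubic
```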
If this theory is proven accurate experimentally, it could open up a new route to producing ultrathin films and these films could have desirable properties for electronics. The researchers are also going to investigate other compounds, to see if more materials will undergo graphitization, resulting in new and intriguing properties.
Posted: August 1, 2016 12:45PM
If you have ever had to pay a high electric bill, you have probably asked yourself where all of the power is going, in hopes of cutting down. Answering that question is harder than you might think, because of what is involved in actually collecting the data, but researchers at MIT are working to change that. They have created a monitoring device about the size of a postage stamp that can be zip-tied onto a power cord and then takes measurements from the electric and magnetic fields around it.
Wireless devices like this have been developed before, but have had the cumbersome requirement of being carefully aligned on the cord. The MIT device solves that by having five offset sensors and software that selects the one getting the best signal. The data it captures can then be analyzed to determine how much power a device is using, and whether there are any anomalies. During one of its tests it actually identified a wiring problem in a house that was putting a potentially dangerous voltage on a copper pipe. This analysis is not done remotely; all of the data can remain at the user's home, abating any privacy concerns as well as bandwidth issues, considering how much raw data is collected.
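The five-sensor trick reduces, in software terms, to picking whichever reading is strongest. A minimal sketch with made-up field strengths:

```python
# Sketch of the sensor-selection idea: of five offset readings, use the one
# seeing the strongest signal. The sample values are invented.
def best_sensor(readings):
    """Return the index of the sensor with the largest signal amplitude."""
    return max(range(len(readings)), key=lambda i: abs(readings[i]))

samples = [0.02, 0.31, 1.27, 0.88, 0.05]  # hypothetical field strengths
print(best_sensor(samples))  # 2: the third sensor happens to be best aligned
```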
The hope for this device is that it will become cheap enough to produce, and applications for analyzing its data common enough, that anyone will be able to optimize the power usage within their home. This could come from adjusting habits or replacing particularly inefficient appliances with better models.
Posted: July 29, 2016 01:57PM
For our never-ending drive for faster computers and connections to be satiated, at least temporarily, it will become necessary to turn to new methods of communication. Already we have seen optical systems deployed to accelerate the Internet and more, but bringing similar systems into our computers has been proving very difficult. Thanks to researchers at the University at Buffalo though, we are closer to this high-speed future than ever before.
Light travels at a far faster speed than the electrical currents within our computers, so using it instead of electrons to connect the various chips and components in our systems could give a sizeable boost. But the devices for producing optical signals are sometimes too large to fit into computer chips, and we are approaching the limit of how much information can be packed into an optical signal. The Buffalo researchers, however, have made an important advance by shrinking down vortex lasers enough to be compatible with computer chips.
Vortex lasers use light's orbital angular momentum to cause the light waves to twist in a corkscrew shape, with a vortex at the center. Multiple corkscrews can be fit together in the same area without crossing each other, so these lasers are able to transmit up to ten times more information than a more traditional laser. Obviously such a device could go a long way in enhancing the performance of our computers and networks, and comes at a good time as we approach some fundamental limits in current technologies.
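The reason multiple corkscrews can share the same space is that orbital-angular-momentum modes with different twist numbers are mathematically orthogonal. A small numerical sketch of that orthogonality (a discretized inner product, not a model of the actual laser):

```python
import cmath

# OAM modes look like exp(i*l*theta) for integer twist number l. Modes with
# different l are orthogonal, so a receiver can separate their channels.
N = 360  # angular samples around the beam

def mode_overlap(l1, l2):
    """Inner product of two OAM modes; ~1 when l1 == l2, ~0 otherwise."""
    total = sum(cmath.exp(1j * (l1 - l2) * (2 * cmath.pi * k / N)) for k in range(N))
    return abs(total) / N

print(round(mode_overlap(3, 3), 3))  # 1.0: same mode
print(round(mode_overlap(3, 5), 3))  # 0.0: distinct modes do not interfere
```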
Source: University at Buffalo
Posted: July 28, 2016 01:26PM
As simple as hydrogen may be, one proton and one electron, it is a very unusual material that can express some interesting properties, under the right conditions. An example of this would be that solid hydrogen can enter a metallic state and act as a superconductor, but solid hydrogen requires very cold temperatures to form. Naturally researchers have been working on ways to warm things up, and those at the Carnegie Institution for Science have recently made a couple important discoveries to that end.
While pure hydrogen will still only solidify at a specific critical temperature, combining this element with one of the alkali metals below it, lithium, sodium, and potassium, could create hydrogen-rich compounds whose properties can be altered. These compounds could then be made to act as superconductors, while still surviving at more practical temperatures. The researchers have finally confirmed these predictions by creating a sodium/hydrogen material. Of course producing this material was far from easy, requiring temperatures around 2000 K (3100 °F) and pressures between 300,000 and 400,000 times atmospheric pressure (30 to 40 gigapascals). Now that we have the ability to make the material though, we have a good start for finding easier methods of producing it.
When analyzed, the compound was seen to contain two different so-called polyhydrides, one being NaH3 and the other NaH7. The latter molecule was especially interesting to see, because three of the hydrogen atoms in it lined up to form a one-dimensional chain. This was first predicted in 1972 and represents a new phase quite distinct from normal hydrogen.
Source: Carnegie Institution for Science
Posted: July 25, 2016 12:50PM
While one of the current trends for home entertainment is VR, at theaters we are still seeing 3D movies being projected and still having to put on special glasses to enjoy it. Thanks to some researchers at MIT though, the glasses might not be necessary in the future. They have developed a glasses-free 3D solution that could one day be scaled up for theaters.
Glasses-free 3D already exists, including in televisions that utilize parallax barriers to project specific images to each eye. The catch with this approach is the need for the viewer to stay a constant distance away, which is not viable for a movie theater. Another solution uses multiple projectors to cover the angular range of an audience, but its failing is the relatively low resolution of the images it produces. This new solution, however, solves these problems by taking advantage of something central to movie theaters: seats. Movie-goers sit in specifically placed seats, so their range of movement is limited, which the MIT researchers exploit in their system by only displaying the necessary, narrow range of angles to each seat. This is accomplished using mirrors and lenses; the prototype uses 50 sets of them and is just larger than a pad of paper.
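The geometry the system leans on is simple: a fixed seat subtends only a narrow, known angle at the screen. A sketch with invented dimensions:

```python
import math

# Rough sketch of the per-seat geometry: with fixed seat positions, each seat
# sits at a known, narrow viewing angle, so the display only needs to serve
# that slice. All dimensions here are made up.
def seat_angle_deg(seat_x_m, seat_distance_m):
    """Horizontal viewing angle of a seat relative to the screen center."""
    return math.degrees(math.atan2(seat_x_m, seat_distance_m))

row_distance = 10.0  # metres from the screen
for seat_x in (-2.0, 0.0, 2.0):
    print(f"{seat_angle_deg(seat_x, row_distance):+.1f} deg")
```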
The design can be scaled up, for use in movie theaters, billboards, and more, but for now the question remains of if it is financially feasible to go to those sizes.
Posted: July 21, 2016 02:08PM
When the topic of quantum mechanics comes up, most people likely envision particles and waves on the nearly infinitesimal stage of single atoms. While this is typically where one can look and find the counter-intuitive phenomena of quantum mechanics, the truth is the effects are not limited to that small stage. Researchers at MIT have analyzed data concerning neutrinos that travelled some 456 miles (735 km) and found they maintained a superposition of states throughout the trip.
Superposition is the very counter-intuitive phenomenon that allows a particle to exist in multiple mutually exclusive states at the same time. For example, a coin tossed in the air could rotate in both clockwise and counter-clockwise directions. In this case the coin is a particle known as a neutrino, and a great many of them are made at a facility near Chicago, and some of them travel to a detector in Soudan, Minnesota, 456 miles away. Neutrinos come in many flavors, and this is what the MIT researchers decided to look at with a modified form of the Leggett-Garg inequality. This inequality is used to determine if a system is acting in a quantum mechanical or classical way, because correlations between measurements of the system will differ, depending on the system's behavior. What the researchers found is that the distribution of neutrino flavors falls within the predicted range of a quantum system, which had almost no overlap with a classical system.
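In its simplest three-time form, the Leggett-Garg inequality bounds a particular combination of correlations at 1 for any classical system, while quantum mechanics can push it to 1.5. A sketch with illustrative numbers (not the experiment's actual correlations):

```python
# Leggett-Garg test: for measurements at three times, classical
# ("macrorealist") systems obey K = C12 + C23 - C13 <= 1, while a two-level
# quantum system can reach K = 1.5. The values below are illustrative only.
def leggett_garg_K(c12, c23, c13):
    """Combine two-time correlations into the Leggett-Garg quantity K."""
    return c12 + c23 - c13

classical_example = leggett_garg_K(0.8, 0.8, 0.7)  # 0.9: within the bound
quantum_example = leggett_garg_K(0.71, 0.71, 0.0)  # 1.42: hypothetical violation

print(classical_example <= 1)  # True
print(quantum_example > 1)     # True: no classical explanation fits
```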
This discovery represents the greatest distance over which quantum effects have ever been observed, and it is perhaps not surprising it involves neutrinos. These particles very rarely interact with the environment and travel at relativistic speeds, near the speed of light. This causes time to dilate for them so much that, from their perspective, the trip lasts only a brief instant, further protecting their quantum state.
Posted: July 20, 2016 01:26PM
Every day humanity generates more and more information that it wants to keep on various media, including hard disks and flash drives. To keep up with the demand for more storage, storage density has to increase, and now researchers at Delft University of Technology and the Kavli Institute of Nanoscience have successfully encoded bits onto single atoms. This allows for 500 terabits to be stored in a square inch, which would be enough to fit every book humanity has written onto a postage stamp.
In order to encode the data onto single atoms, the researchers turned to a scanning tunneling microscope, which has a sharp needle to probe atoms on a surface one at a time. This allowed them to precisely move chlorine atoms along a copper surface into one of two positions. When a hole sits beneath an atom, the researchers read it as a 1, and when the hole is above the atom, it is read as a 0. By surrounding the chlorine atoms with only other chlorine atoms and these holes, this method is more stable for storing data than others that rely on loose atoms. Additionally, the researchers arranged the memory into blocks of eight bytes, and each block has a marker. These markers are similar to QR codes and contain information about the memory block, identifying if the block is damaged for some reason.
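The encoding itself is just bits mapped to one of two atom/hole arrangements, grouped into eight-byte blocks. A toy sketch (the symbols are labels, not real STM coordinates):

```python
# Toy sketch of the encoding scheme: each bit becomes a chlorine atom with a
# hole below it (1) or above it (0), and bits are grouped into 8-byte blocks.
def encode(data: bytes, block_bytes=8):
    """Map each bit to an atom/hole arrangement and group into blocks."""
    bits = [(byte >> (7 - i)) & 1 for byte in data for i in range(8)]
    symbols = ["hole-below-atom" if b else "hole-above-atom" for b in bits]
    size = block_bytes * 8  # bit positions per block, as in the Delft layout
    return [symbols[i:i + size] for i in range(0, len(symbols), size)]

blocks = encode(b"hello world!")    # 12 bytes -> 96 atom positions
print(len(blocks), len(blocks[0]))  # 2 64: one full block plus a partial one
```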
As you might expect with the manipulation of single atoms, this method is not ready for any commercial system. It requires a very clean vacuum environment and temperatures around that of liquid nitrogen. This is still an important step towards such a goal, but there is still a lot to do.
Source: Delft University of Technology
Posted: July 18, 2016 10:38AM
For millennia humanity has fantasized about the ability to become invisible, but only relatively recent advances in science are actually making it possible. Typically these efforts are focused on hiding from free space waves, but in many practical applications surface waves also need to be considered. To that end, researchers at Queen Mary University of London have found a way to hide a curved surface from surface waves.
This new cloak works by depositing a nanocomposite medium onto the subject surface. This medium consists of seven layers, making the material a graded index nanocomposite, and the electric properties of each layer depend on its position. The result is to prevent incoming surface waves from noticing the curves on the surface, so they instead continue moving as though the surface were flat.
While this discovery will not directly lead to any ability to cloak a person, it could be used to enhance antennas, by allowing them to take on different shapes and sizes. It can also allow antennas to be attached in weird places and to a wide variety of materials. The potential for this discovery is greater than just better antennas though, as it could be applied to work with other phenomena that can also be described as waves.
Source: Queen Mary University of London
Posted: July 12, 2016 01:42PM
A vulnerability was discovered in the Tor network last year that could allow this anonymity-protecting system to be compromised. Fortunately, this year a new system called Riffle has been developed by researchers at MIT and EPFL that can guarantee messages sent on it are safe, as long as at least one server has not been compromised by an attacker.
Like Tor, Riffle uses onion encryption, which wraps data in multiple layers of encryption; as a package moves through the network, each server removes a layer. In the end, only the final server knows the data's final destination and only the first server knows where it came from. What Riffle adds to this, to further protect the messages, is making the network of servers a mixnet. A mixnet permutes the order of the messages it receives, so if they arrived A, B, C, they will leave C, B, A, or some other permutation, and each server a message passes through does this. This protects against a passive attacker that is just observing network traffic, but on its own does not protect against an attacker that has compromised a server.
To solve that issue, the researchers employ what is called a verifiable shuffle. Because of the layered encryption used for each message, what arrives at and leaves a server looks completely different, but the server can be made to generate a mathematical proof that the sent messages are valid manipulations of the originals. Instead of just going off the messages one server received though, Riffle has the initial message sent to every server in the mixnet, allowing each server to independently check for any tampering. Generating and checking proofs is not easy though, and could significantly slow down the entire network with just one message, so Riffle also uses what is called authenticated encryption. It is more efficient than the verifiable shuffle, but requires the sender and receiver to share a cryptographic key, so Riffle uses the verifiable shuffle technique to share a key between the user and every server, while authenticated encryption is used for the rest of the communication.
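A toy illustration of the two core ideas, onion layering and per-server shuffling; the "encryption" here is just labeled string wrappers, not real cryptography:

```python
import random

# Toy mixnet: wrap each message in one labeled layer per server, then have
# each server peel its layer and shuffle the batch before forwarding.
def wrap(message, servers):
    """Apply one onion layer per server, innermost layer for the last server."""
    for s in reversed(servers):
        message = f"enc[{s}]({message})"
    return message

def mixnet_pass(batch, server):
    """Peel this server's layer from every message and permute the order."""
    peeled = [m.replace(f"enc[{server}](", "", 1)[:-1] for m in batch]
    random.shuffle(peeled)  # the mixnet step: break the arrival ordering
    return peeled

servers = ["s1", "s2", "s3"]
batch = [wrap(m, servers) for m in ("A", "B", "C")]
for s in servers:
    batch = mixnet_pass(batch, s)
print(sorted(batch))  # ['A', 'B', 'C']: contents intact, order unlinkable
```

Real Riffle layers verifiable shuffles and authenticated encryption on top of this skeleton; the sketch only shows why peeling plus permuting hides which input became which output.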
This combination of techniques means the system stays secure so long as at least one server in the entire network has not been compromised. That lone server can still verify the authenticity of the messages and shuffle them so they cannot be tracked.
Posted: June 21, 2016 11:51AM
For several years now, processors have been having cores added to them to improve performance, as more cores means more processes can be run at the same time. Some software has also been written to utilize multiple cores at the same time, accelerating their operation, but making such parallel programs is not a simple endeavor. To address this issue and improve the performance of parallel programs, researchers at MIT have developed a new chip design called Swarm.
Part of why it is difficult to make a program run on multiple cores in parallel is that it requires breaking the program up into multiple tasks and explicitly enforcing synchronization between these tasks, as they access shared data. What makes Swarm different is that it handles the synchronization itself, making the programming significantly easier. In fact, the researchers created six Swarm versions of common algorithms and, in general, the Swarm programs were a tenth the size, and would run three to 18 times faster. In one case the algorithm ran 75 times faster.
Swarm manages this synchronization on its own by time-stamping tasks based on their priority, which is something the programmer has to set. This allows the chip to automatically decide whether a task can write to the shared memory, protecting higher priority tasks from getting incorrect data. Currently this kind of synchronization has to be implemented by programmers, but now the chip will handle it and has circuitry built in for maintaining the queue of tasks.
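The ordering rule at the heart of this, earliest timestamp wins, is essentially a priority queue. A minimal sketch (real Swarm does this speculatively in hardware; the task names are invented):

```python
import heapq

# Sketch of the ordering rule: tasks carry programmer-assigned priorities that
# act as timestamps, and the earliest one is always committed first.
task_queue = []

def submit(timestamp, name):
    """Enqueue a task with its programmer-assigned priority timestamp."""
    heapq.heappush(task_queue, (timestamp, name))

def run_next():
    """Commit the earliest-timestamped task; it wins access to shared state."""
    timestamp, name = heapq.heappop(task_queue)
    return name

submit(5, "update-neighbor")
submit(1, "visit-source")
submit(3, "relax-edge")
print(run_next())  # visit-source
```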
Posted: June 20, 2016 09:03AM
Researchers at the University of California, Davis appear to have set some new records, as they have designed the first processor with 1000 independent cores. It also has the highest clock-rate for a processor designed at a university and is so efficient it can run at 115 billion instructions a second, while only dissipating 0.7 W. That makes it over 100 times more efficient than the processor in a laptop.
This processor consists of 621 million transistors and has an average maximum clock speed of 1.78 GHz. Part of the reason it is an average clock speed is because the cores can be independently clocked, allowing some to be shut down to conserve power. Each core can also run its own program separately from the others, which allows a larger application to be broken into smaller pieces with each piece running in parallel. The cores can also transfer data directly to each other, instead of relying on a cache to store the data for other cores to access.
With 1000 cores, this processor is meant for processing large amounts of data in parallel, as is needed for scientific data operations and datacenter record processing, but the researchers have also built applications for wireless coding and decoding, video processing, and encryption. The chip was fabricated by IBM using its 32 nm CMOS process.
Source: University of California, Davis
Posted: June 17, 2016 01:38PM
There are a whole host of things that can be made by 3D printing, but what most people are printing does not really push a printer's limit. Interested in pushing the technology, researchers at MIT decided to experiment with printing hair, and found it to be a rather interesting challenge, but not with the hardware. The software driving 3D printers would crash before it could finish, but the researchers developed a clever solution.
With current systems, 3D printing hair would require modelling every single strand in CAD, and then a slicer program would have to recreate each hair's contour as a mesh of triangles. From that, horizontal cross sections would need to be generated from the triangles, and those cross sections would then have to be made into bitmaps for the printer to work with. For an area the size of a postage stamp covered with 6000 hairs, that process would take several hours to complete. The MIT researchers' solution was to develop a new piece of software that models the array of hair in a very different way. First they modelled a single hair as an elongated cone, so the hair's height, angle, and width could be manipulated by changing the pixels of the cone. To scale this up to thousands of hairs, the researchers created a color mapping technique, where red, green, and blue represent the parameters of height, width, and angle. So a circular patch of hair with taller strands at the rim would look like a red circle with darker hues at the rim. An algorithm can then take this color map and quickly generate a model for the 3D printer to work with.
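The color mapping can be sketched as a straightforward per-pixel conversion; the parameter ranges below are invented for illustration, not taken from the MIT software:

```python
# Sketch of the color-mapping trick: each pixel's red, green, and blue
# channels stand for one cone parameter (height, width, angle).
# The parameter ranges are invented for illustration.
def pixel_to_cone(r, g, b):
    """Map 0-255 channel values onto plausible hair-strand parameters."""
    height_mm = 0.5 + (r / 255) * 9.5  # 0.5 mm to 10 mm tall
    width_um = 50 + (g / 255) * 200    # 50 um to 250 um wide at the base
    angle_deg = (b / 255) * 90         # 0 (vertical) to 90 degrees of tilt
    return height_mm, width_um, angle_deg

# A pure-red pixel: tall strand, narrowest base, standing straight up.
print(pixel_to_cone(255, 0, 0))  # (10.0, 50.0, 0.0)
```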
While this approach for 3D printing hair could be used to print wigs and such, the researchers envision it being used to create hair for the more useful tasks of sensing, adhesion, and actuation. Of course, now that this new technique exists for 3D printing, others could find some even more interesting applications.
Posted: May 18, 2016 11:15AM
For the thousands of years humanity has been combining metals into various alloys, we have accepted that there is a tradeoff between strength and ductility: making a metal stronger renders it more susceptible to fracturing when deformed. This belief has also extended to high entropy alloys, but researchers at MIT have borrowed a property of steel and overcome the tradeoff.
Steel is a tremendously ubiquitous alloy, thanks to its strength and the great many alloys the name actually covers. By altering the components that are mixed together to form steel, its properties can be tailored to fit specific needs. Many steels, and alloys in general, possess stable single-phase microstructures, meaning their internal structure has one specific arrangement it prefers, but some advanced steels have metastable phases. This means the steel has multiple stable internal structures, so when it is under stress these phases can transition to more stable ones, making the alloy more resilient to fractures.
High entropy alloys, or HEAs, are alloys made from multiple metals in roughly equal proportions, unlike traditional alloys which are dominated by one material. In theory they could be very strong and light, but have so far been subject to the strength and ductility tradeoff. The MIT researchers decided to borrow from those advanced steel alloys and produce HEAs with metastable configurations. The result was an alloy made of iron, manganese, cobalt, and chromium that surpasses the best single-phase HEA.
This discovery should have great applications in the future, as this strategy can be used to create many other alloys with the best of both worlds.
Posted: May 10, 2016 11:54AM
In general, simplifying processes is important for making them more common, as fewer steps can speed things up and reduce costs. When chemistry is involved though, it can be very difficult to remove certain steps, and in some cases impossible. Luckily researchers at Berkeley Lab have succeeded in greatly simplifying the process of creating advanced biofuels.
What has been created here is a one-pot method for making these biofuels, meaning the ingredients can be loaded into a chemical reactor and the output will be the desired end product, instead of an intermediary material. Normally producing these biofuels requires ionic liquids, which are salts that are liquid at room temperature and work as solvents. These are needed to break apart the cellulose, hemicellulose, and lignin of the plant matter before enzymes can get at the resulting sugars to produce biofuels. The issue has been that the ionic liquids damage the enzymes, so an extra step has been necessary to remove them. What the Berkeley researchers have done is mutate a gene in E. coli to make it resilient to ionic liquids, and also give it the ability to create similarly resilient enzymes for making biofuels.
Part of what makes this discovery important is that this strain of E. coli can be used as the base for other technologies to create advanced biofuels, such as jet fuel or its precursors.
Source: Berkeley Lab
Posted: May 9, 2016 09:25AM
Security software such as antivirus and parental-control applications is meant to keep our computers safe, but researchers at Concordia University have found that some of it actually makes us more vulnerable. This is because of how the software sets up a TLS proxy to filter out unwanted content.
To keep a computer from visiting websites that are either dangerous or something a parent does not want a child to see, a piece of software can be installed to monitor this traffic and block those sites. In some cases the software works by checking the domain name, but in other cases it establishes a proxy and checks the certificate for the website. As a browser will also check such certificates, the software presents its own certificate to the browser, and this is where the vulnerability lies. Because the browser is left to assume the software's certificate is valid, anything signed by that certificate will get through, and the software itself can be vulnerable to other attacks. For example, if the certificate the software uses is pre-generated and static, then every user can be attacked the same way.
The researchers tested 14 pieces of software and found that each one reduced the TLS security of the system it was installed on. In one case, an antivirus left users open to attack because, after its license expired, it ceased checking certificates while also no longer receiving updates. One of the parental control applications left its pre-generated certificate on the computer even after it was uninstalled, meaning any traffic signed with that certificate would still be seen as trusted.
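The danger of a pre-generated, static root certificate can be illustrated with a short sketch. The fingerprint and certificate bytes below are invented placeholders, not any real vendor's certificate; the point is only that a shared root is identifiable (and exploitable) by its fingerprint on every machine.

```python
import hashlib

# Hypothetical fingerprint of a pre-generated root certificate that ships
# identically with every install of some interception product. The bytes
# here are placeholders, not a real vendor's certificate.
KNOWN_STATIC_ROOT_SHA256 = hashlib.sha256(b"example-static-root-der").hexdigest()

def is_static_interception_root(cert_der: bytes) -> bool:
    """Flag a certificate whose fingerprint matches a known shared root.

    If every install trusts the same root (and therefore the same private
    key), extracting that key once lets an attacker impersonate any site
    to any user of the product.
    """
    return hashlib.sha256(cert_der).hexdigest() == KNOWN_STATIC_ROOT_SHA256

# A certificate identical to the shared root is flagged...
assert is_static_interception_root(b"example-static-root-der")
# ...while a uniquely generated, per-install root is not.
assert not is_static_interception_root(b"unique-per-install-root-der")
```

This is also why a certificate left behind after uninstallation is so serious: the trust it grants outlives the software that was supposed to be doing the filtering.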
The researchers have contacted the software manufacturers and reported their findings, so hopefully these issues can be addressed. They also suggest new guidelines be developed for handling TLS proxies, so as to prevent vulnerabilities from being added when a user is trying to, or required to, secure their computer.
Source: Concordia University
Posted: May 6, 2016 12:05PM
Since its discovery, researchers have been finding more and more applications for graphene, a form of carbon. With its amazing mechanical and electrical properties, it is hardly surprising that so many have been found. Now researchers at Chalmers University of Technology have devised an effective means of cooling electronics and other devices.
Graphene is an atom-thick sheet of carbon with many promising properties, including great electrical and thermal conductivity. Both are reasons we may see it used in various devices, and the Chalmers researchers have found a way to enhance its thermal properties. By adding functionalized molecules to graphene nanoflakes, the researchers were able to improve a graphene film's heat transfer efficiency by 76%. This is because of how phonons, the quanta of heat, are constrained by the functional layer, and it could lead to a means of controlling the heat of electronic and optoelectronic systems. That control could in turn be used to better cool such devices, allowing for greater performance.
Posted: May 6, 2016 06:04AM
I am not sure if this was in any work of science fiction first, but it does leave me feeling like science fiction is becoming reality. Researchers at Carnegie Mellon University have developed a way to effectively turn the skin on your arm and hand into a touchscreen. This would allow someone to interface with a smartwatch, or other devices, without having to cover the small screen with their fingers.
Other 'skin to screen' tracking systems have been developed before, but those required special overlays, textiles, or combinations of projectors and cameras to work, which is hardly ideal. This system, called SkinTrack, uses pairs of electrodes and a ring worn on the finger. The ring emits a low-energy, high-frequency signal that propagates across the skin when the finger touches the skin, or is just close to it. The electrodes pick up this signal and can triangulate the finger's location from differences in the signal's phase. Different locations and movements can then be used as various inputs for a device, such as scrolling, highlighting buttons, and even hitting shortcuts.
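The phase-based localization can be sketched in one dimension. The frequency, propagation speed, and geometry below are assumed illustrative values, not the published SkinTrack parameters: a phase difference between two electrodes maps to a path-length difference, which in turn pins down the finger's position between them.

```python
import math

# Illustrative sketch (not the published SkinTrack implementation): recover a
# finger position on a 1-D segment between two electrodes from the phase
# difference of the ring's signal. These constants are assumed values.
FREQ_HZ = 80e6       # assumed ring signal frequency
SPEED_M_S = 1.5e8    # assumed propagation speed through tissue

def finger_position(phase_diff_rad: float, electrode_gap_m: float) -> float:
    """Localize from the phase difference between two electrodes.

    The phase difference gives a path-length difference
    delta_d = phase_diff / (2*pi) * wavelength; with electrodes at 0 and L,
    the finger sits at x = (delta_d + L) / 2.
    """
    wavelength = SPEED_M_S / FREQ_HZ
    delta_d = phase_diff_rad / (2 * math.pi) * wavelength
    return (delta_d + electrode_gap_m) / 2

# A zero phase difference places the finger midway between the electrodes.
assert abs(finger_position(0.0, 0.10) - 0.05) < 1e-9
```

The real system uses multiple electrode pairs, which is what allows full two-dimensional tracking across the arm rather than position along a single line.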
The researchers found the system could identify when the finger was touching with 99% accuracy and had an error of 7.6 mm for the location of the finger. Over time this precision can drop though, as the conductivity of the skin can change due to sweat and hydration, as well as body movements. Another issue is that the ring will lose power eventually. Still, this is pretty cool and could definitely lead to advances for smartwatches and other pieces of advanced technology.
Source: Carnegie Mellon University
Posted: April 29, 2016 11:08AM
As silicon-based electronics approach their theoretical limits, many new technologies are being developed and investigated to replace this long-lived standard. Among these are optics-based systems that use photons or plasmons for transmitting, processing, and storing data. Researchers at ITMO University have recently found a way to build hybrid nanoantennas that could help optical technologies replace modern devices.
There are a number of reasons people want to see photons replace electrons in computers, including their greater speed, their ability to carry more data, and the fact that they do not generate as much heat when used. Working with them, however, is difficult and requires precisely created nanoantennas to localize light to specific areas. Building these nanoantennas is not easy, but the ITMO researchers have found a way to create arrays of hybrid nanoantennas, and to adjust those antennas afterward. The antennas themselves consist of a truncated silicon cone with a particle of gold on top. This gold particle may start as a disk, but with a femtosecond laser it is possible to change its shape to a sphere or a cup, altering the antenna's optical properties. This allows a nanostructure's properties to be manipulated to fit desired roles.
These nanoantennas are roughly the same size as a bit in a modern optical disk, which can store about 10 Gb/in². Unlike those bits, though, the antennas are able to control the color of light, so if used for data storage, the capacity would be greatly increased by this added dimension.
Source: ITMO University
Posted: April 28, 2016 08:02AM
Ever since electronic computers were first developed, one of their primary applications has been for simulating various systems. These simulations allow predictions to be made but they are also a means for researchers to more closely study and analyze the processes involved. Some systems are harder to simulate than others, such as those ruled by quantum mechanics, because of how fragile they are, but researchers at the University of New South Wales have built a device that could serve as a quantum simulator.
The idea behind many simulators is to take a hard-to-examine process or system and recreate it in an environment that is more easily studied. A computer simulation allows every aspect to be studied, but quantum systems can be so complicated that even the most powerful supercomputers cannot run them efficiently. What the New South Wales researchers have done is dope a pair of boron atoms into a silicon crystal, separated by just a few nanometers. In this configuration, they behaved like valence bonds, which are what hold many molecules together when the orbits of unpaired electrons overlap. The researchers were then able to directly measure the clouds of electrons around the atoms, and the interactions between the spins of the electrons.
The observed behavior in this simulation matches the Hubbard model, which describes how electrons, with their wave-like properties, interact, and which is central to explaining many phenomena. The researchers also made a curious discovery: the electrons involved were entangled with each other, but the entanglement actually increased with their separation instead of decreasing. It is quantum mechanics though, where the counterintuitive is so often the norm.
Source: University of New South Wales
Posted: April 27, 2016 09:14AM
It might not be the most pleasant image for some people, but we are approaching a time when robotic drones will be freely moving around us for various reasons. In some cases a swarm of robots might be used instead of single drones, which makes it vital that they all act in concert. In general this can involve centralized or decentralized algorithms, and now researchers at MIT have developed a new decentralized algorithm that significantly reduces the amount of communication needed between the drones.
A centralized algorithm for controlling a swarm has a single computer make all of the decisions for the robots, which is fine unless that computer goes offline. Decentralized algorithms, where each robot makes its own decisions, do not suffer from this problem, but are much harder to design, as each robot must guess what the others are doing. To that end, these algorithms have the robots scan their local environment for obstacles and transmit their maps to the rest of the swarm, so every robot has the same information to work with. What the new MIT algorithm does is cut down on the size of these transmissions dramatically by sending only the intersection between different maps. So after the first drone transmits its complete map to its neighbors, those neighbors identify the overlap with the maps they constructed, and only that overlap is transmitted.
As this intersection is significantly less information than the entire composited map, it cuts down on the communication between the drones, yet the robots still end up with a map of every detected obstacle. It also works for detecting moving obstacles and is repeated many times a second, so sudden changes in an obstacle's velocity should not be an issue.
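The map-overlap idea can be sketched with simple sets. The grid-cell encoding and function name here are invented for illustration; the actual MIT algorithm operates on richer map data, but the communication saving comes from the same principle.

```python
# A minimal sketch of the communication-saving idea: instead of
# rebroadcasting its whole obstacle map, a robot transmits only the
# overlap with the map it just received. Obstacles are modeled as
# grid cells, an encoding assumed here for illustration.
def overlap_to_transmit(my_map: set, received_map: set) -> set:
    """Return only the obstacle cells present in both maps."""
    return my_map & received_map

robot_a = {(0, 0), (1, 2), (3, 4), (5, 5)}  # obstacles robot A has seen
robot_b = {(1, 2), (3, 4), (9, 9)}          # obstacles robot B has seen

# After A broadcasts its full map once, B replies with just the overlap.
message = overlap_to_transmit(robot_b, robot_a)
assert message == {(1, 2), (3, 4)}
# The reply is smaller than B's full map would have been.
assert len(message) < len(robot_b)
```

In this toy version the reply shrinks from three cells to two; over thousands of cells and many updates per second, the savings compound considerably.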
Posted: April 26, 2016 08:50AM
When it comes to storing electricity, the two main ways to do so are batteries or supercapacitors, which both offer their own advantages and disadvantages. Hybrid batteries try to combine the two to get the best of both worlds, and now researchers at PNNL have found a way to make them even better.
The advantage of batteries is tremendous energy density for their size, while supercapacitors store less energy but can be charged and discharged very quickly, unlike batteries. Hybrid batteries combine the two technologies by making the electrodes out of supercapacitor materials, so that a single device can charge fast and last long. These electrodes can be made from carbon nanotubes, and it was discovered before that spraying polyoxometalate (POM) onto them can improve their performance by adding ions to the surface. The catch is that only the negative ions are desired, but POM contains both positive and negative ions. What the PNNL researchers did was change the method of applying the POM to ion soft-landing, which allows for precise control over what is applied; in this case only the negative ions.
The resulting hybrid batteries stored 27% more energy than those made by more conventional methods. They also only lost a few percent of their capacity after 1000 charge and discharge cycles, while the conventional hybrid batteries were at half-capacity by then. When the researchers closely examined the electrodes they found that the ion soft-landing method allowed the negative ions to more evenly cover the electrodes, while the positive ions deposited by other methods resulted in the material clumping up on the electrode. This also means less POM was needed to achieve optimal results.
Posted: April 25, 2016 08:25AM
Many video games share certain specific archetypes, such as tanks, fighters, mages, rogues, assassins, and so on. In some games you are able to select what role you play, while in others it may be selected for you, or never even described. Researchers at North Carolina State University decided to look into these roles to see if they influence a player's behavior, and if selecting a role makes any difference.
For the experiment, the researchers created a single-player RPG (which you can play at http://go.ncsu.edu/ixd-demo-rpg) and had 210 people play it. Of those, 78 were assigned the role of fighter, mage, or rogue, while 91 were allowed to select their role, and the final 41 played without a role. The game contained twelve multiple-choice decisions that were carefully constructed to align with the three roles, to see if players maintained their role as they played. The results showed that whether the players selected or were assigned a role, they maintained it most of the time, with fighters being consistent 65.7% of the time, mages 76.1% of the time, and rogues 69.7% of the time. Even the players who were not given a specific role made decisions consistent with one.
This study indicates that even without explicit role-playing elements to a game, players will assume and maintain roles on their own, which could influence how game designers develop games. It also means that other studies that examine player choice should be careful to remove role as a variable, as it could skew results.
Source: North Carolina State University
Posted: April 22, 2016 06:58AM
Wireless communication is something many of us rely on today for connecting our various devices to the Internet, so there is a constant drive to increase wireless speeds. One way to achieve this is to build systems that allow for the simultaneous transmission and reception of signals, but achieving this is somewhat difficult with a single antenna. Researchers at the Columbia University School of Engineering and Applied Science, though, have built an on-chip solution that could bring full duplexing, and doubled speeds, to devices like our phones.
Currently many devices use half-duplexing to connect to a Wi-Fi network, which means that while one antenna sends and receives all of the information, it does not do both at the same time. This is because the electronic structures used exhibit Lorentz reciprocity: electromagnetic waves travel identically in both directions. One way to overcome this is to use magnetic materials to create a radio-frequency circulator. When the material is exposed to an external magnetic field, reciprocity is broken, allowing the incoming and outgoing signals to be separated. Such circulators cannot be integrated into silicon chips though, and are in any case rather large for use in something like a phone. To solve this problem, the Columbia researchers created a new, purely electronic circulator that is highly miniaturized and uses a set of capacitors to replicate the non-reciprocal twist the magnetic circulators produce.
The researchers have already demonstrated this new circulator design by building a prototype of their full-duplex system that also features an echo-cancelling receiver. By integrating the circulator into the same chip as the rest of the radio, it should be possible to keep the size of the system and the cost down, allowing for full-duplex communications and potentially doubling network capacity.
Source: Columbia University
Posted: April 21, 2016 12:38PM
The next time you archive some files and compress them, you might think about the process a little differently. Researchers at the National University of Singapore have discovered that a common compression algorithm can be used to detect quantum entanglement. What makes this discovery so interesting is that it does not rely heavily on the assumption that the measured particles are independent and identically distributed.
If you measure a property of one particle and then measure the same property of another, in classical mechanics there is no reason beyond pure chance for the results to match. In quantum mechanics though, the two particles can be entangled, such that the results will match each other. This follows from Bell's theorem, which is applied to test whether particles are in fact entangled. The catch is that the theorem is derived for pairs of particles, so many pairs have to be measured and the probability they are entangled calculated. This is where the researchers' discovery comes into play: instead of calculating probabilities, the measurements can be fed into the open-source Lempel-Ziv-Markov chain algorithm (LZMA) to get their normalized compression difference. Compression algorithms work by finding patterns in data and encoding them more efficiently, and in this case they also find the correlations produced by quantum entanglement. If the data is classical, the normalized compression difference can be no greater than zero, but with quantum mechanics it can reach 0.24.
When tested, this approach returned a value of 0.0494 ± 0.0076, which shows the data did cross the classical-quantum boundary. It is below the 0.24 theoretical maximum because the quantum states cannot be created and measured perfectly, and the compression algorithm is not ideal.
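The compression step can be sketched with Python's built-in LZMA bindings. This is a simplified stand-in for the paper's normalized compression difference, using raw byte savings rather than the exact normalization, but it shows the core trick: correlated measurement records compress better together than apart.

```python
import lzma
import random

# Simplified stand-in for the normalized compression difference: measure how
# many bytes are saved by compressing two measurement records together rather
# than separately. Correlated records share patterns, so the saving is large.
def compressed_size(data: bytes) -> int:
    return len(lzma.compress(data))

def joint_saving(x: bytes, y: bytes) -> int:
    """C(x) + C(y) - C(xy): bytes saved by joint compression."""
    return compressed_size(x) + compressed_size(y) - compressed_size(x + y)

random.seed(0)
a = bytes(random.getrandbits(1) for _ in range(20000))  # one record of bit outcomes
b = a                                                   # perfectly correlated record
c = bytes(random.getrandbits(1) for _ in range(20000))  # independent record

# Correlated data compresses far better jointly than independent data does.
assert joint_saving(a, b) > joint_saving(a, c)
```

With real experimental records the correlations are statistical rather than a literal copy, so the effect is smaller, which is consistent with the measured 0.0494 sitting well below the 0.24 theoretical maximum.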