Science & Technology News (1227)
Posted: July 21, 2016 02:08PM
When the topic of quantum mechanics comes up, most people likely envision particles and waves on the nearly infinitesimal stage of single atoms. While this is typically where one can look and find the counter-intuitive phenomena of quantum mechanics, the truth is the effects are not limited to that small stage. Researchers at MIT have analyzed data concerning neutrinos that travelled some 456 miles (735 km) and found they maintained a superposition of states throughout the trip.
Superposition is the very counter-intuitive phenomenon that allows a particle to exist in multiple mutually exclusive states at the same time. For example, a coin tossed in the air could rotate in both clockwise and counter-clockwise directions. In this case the coin is a particle known as a neutrino; a great many of them are made at a facility near Chicago, and some of them travel to a detector in Soudan, Minnesota, 456 miles away. Neutrinos come in three flavors, and this is what the MIT researchers decided to look at, using a modified form of the Leggett-Garg inequality. This inequality is used to determine whether a system is acting in a quantum mechanical or classical way, because correlations between measurements of the system will differ depending on the system's behavior. What the researchers found is that the distribution of neutrino flavors falls within the range predicted for a quantum system, which had almost no overlap with the classical prediction.
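For a flavor of how such a test works, here is a minimal sketch of the simplest Leggett-Garg parameter, K = C12 + C23 - C13, which any classical (macrorealist) system must keep at or below 1. This assumes an idealized two-level system with two-outcome measurements, not the neutrino-specific form the MIT team actually used:

```python
import math

def leggett_garg_K(omega, tau):
    """Leggett-Garg parameter K = C12 + C23 - C13 for a two-level
    system measured at three equally spaced times (spacing tau).
    For coherent quantum oscillation the two-time correlator is
    C(dt) = cos(omega * dt); any classical system satisfies K <= 1."""
    c12 = math.cos(omega * tau)
    c23 = math.cos(omega * tau)
    c13 = math.cos(omega * 2 * tau)
    return c12 + c23 - c13

# Maximum quantum violation occurs when omega * tau = pi / 3:
K = leggett_garg_K(omega=1.0, tau=math.pi / 3)
print(K)  # ~1.5, well above the classical bound of 1
```

The measured correlations are compared against this bound: staying under it is consistent with classical behavior, while exceeding it rules classical behavior out.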
This discovery represents the greatest distance over which quantum superposition has ever been observed, and it is perhaps not surprising that it involves neutrinos. These particles very rarely interact with their environment and travel at relativistic speeds, near the speed of light. This causes time to dilate for them so much that, from their perspective, the trip lasts only a brief instant, further protecting their quantum state.
Posted: July 20, 2016 01:26PM
Every day humanity generates more and more information that it wants to keep on various media, including hard disks and flash drives. To keep up with the demand for more storage, storage density has to increase, and now researchers at Delft University and the Kavli Institute of Nanoscience have successfully encoded bits into single atoms. This allows 500 terabits to be stored in a square inch, which would be enough to fit every book humanity has ever written onto an area the size of a postage stamp.
In order to encode the data into single atoms, the researchers turned to a scanning tunneling microscope, which has a sharp needle to probe atoms on a surface one at a time. This allowed them to precisely move chlorine atoms along a copper surface into one of two positions. When a hole is beneath an atom, the researchers read it as a 1, and when the hole is above the atom, it is read as a 0. Because the chlorine atoms are surrounded only by other chlorine atoms and these holes, this method is more stable for storing data than others that rely on loose atoms. Additionally, the researchers arranged the memory into blocks of eight bytes, and each block has a marker. These markers are similar to QR codes and contain information about the memory block, identifying if the block is damaged for some reason.
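The read and write convention above can be sketched in a few lines of Python; the symbols and block layout here are illustrative stand-ins, not Delft's actual control software:

```python
def encode_block(data):
    """Encode up to 8 bytes as chlorine-atom/vacancy pairs.
    Each bit is a vertical pair of lattice sites: an atom with the
    hole beneath it reads as 1, and an atom with the hole above it
    reads as 0 (the convention described above)."""
    assert len(data) <= 8, "one block holds eight bytes"
    bits = []
    for byte in data:
        for i in range(7, -1, -1):
            bits.append((byte >> i) & 1)
    # ('Cl', 'hole') means the hole sits beneath the atom -> 1
    return [('Cl', 'hole') if b else ('hole', 'Cl') for b in bits]

def decode_block(sites):
    """Scan the site pairs back into bytes."""
    bits = [1 if pair == ('Cl', 'hole') else 0 for pair in sites]
    out = bytearray()
    for i in range(0, len(bits), 8):
        byte = 0
        for b in bits[i:i + 8]:
            byte = (byte << 1) | b
        out.append(byte)
    return bytes(out)

block = encode_block(b"DELFT")
print(decode_block(block))  # b'DELFT'
```

A real implementation would also prepend the QR-code-like marker carrying the block's address and damage status.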
As you might expect with the manipulation of single atoms, this method is not ready for any commercial system. It requires a very clean vacuum environment and temperatures around that of liquid nitrogen. This is still an important step towards such a goal, but there is still a lot to do.
Source: Delft University of Technology
Posted: July 18, 2016 10:38AM
For millennia humanity has fantasized about the ability to become invisible, but only relatively recent advances in science are actually making it possible. Typically these efforts are focused on hiding from free space waves, but in many practical applications surface waves also need to be considered. To that end, researchers at Queen Mary University of London have found a way to hide a curved surface from surface waves.
This new cloak works by depositing a nanocomposite medium onto the subject surface. This medium consists of seven layers, making the material a graded-index nanocomposite, and the electric properties of each layer depend on its position. The result prevents incoming surface waves from noticing the curves on the surface, so they continue moving as though the surface were flat.
While this discovery will not directly lead to any ability to cloak a person, it could be used to enhance antennas, by allowing them to take on different shapes and sizes. It can also allow antennas to be attached in weird places and to a wide variety of materials. The potential for this discovery is greater than just better antennas though, as it could be applied to work with other phenomena that can also be described as waves.
Source: Queen Mary University of London
Posted: July 12, 2016 01:42PM
A vulnerability was discovered in the Tor network last year that could allow this anonymity-protecting system to be compromised. Fortunately, this year a new system called Riffle has been developed by researchers at MIT and EPFL that can guarantee messages sent on it are safe, so long as just one server has not been compromised by an attacker.
Like Tor, Riffle uses onion encryption, which wraps data in multiple layers of encryption; as a package moves through the network, each server removes a layer. In the end, only the final server knows the data's final destination and only the first server knows where it came from. What Riffle adds to this, to further protect the messages, is making the network of servers a mixnet. A mixnet permutes the order of the messages it receives, so if they arrived A, B, C, they will leave C, B, A, or some other permutation, and each server a message passes through does this. This protects against a passive attacker that is just observing network traffic, but on its own does not protect against an attacker that has compromised a server.
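The layer-peeling and shuffling steps can be sketched as a toy model, with nested tuples standing in for real encryption layers (purely for illustration; no actual cryptography here):

```python
import random

def onion_wrap(message, dest, route):
    """Wrap a message in one 'layer' per server (a toy stand-in
    for real layered encryption): each layer names only the next
    hop, so no single server sees both sender and destination."""
    packet = ('deliver', dest, message)
    for server in reversed(route):
        packet = ('hop', server, packet)
    return packet

def run_mixnet(packets, route):
    """Each server peels one layer and shuffles the batch, so the
    arrival order reveals nothing about the sending order."""
    batch = list(packets)
    for server in route:
        peeled = []
        for kind, srv, inner in batch:
            assert kind == 'hop' and srv == server
            peeled.append(inner)
        random.shuffle(peeled)   # the mixnet permutation step
        batch = peeled
    return batch  # each entry: ('deliver', dest, message)

route = ['server1', 'server2', 'server3']
msgs = [onion_wrap(m, d, route) for m, d in
        [('hello', 'alice'), ('hi', 'bob'), ('hey', 'carol')]]
out = run_mixnet(msgs, route)
print(sorted(out))  # all three messages delivered, order shuffled
```

In the real system each layer is actual encryption, so a server cannot even read the inner layers it passes along.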
To solve that issue, the researchers employ what is called a verifiable shuffle. Because of the layered encryption used for each message, what comes into and leaves the server looks completely different, but the server can be made to generate a mathematical proof that the sent messages are valid manipulations of the originals. Instead of relying only on the message the server received though, Riffle has the initial message sent to every server in the mixnet, allowing each server to independently check for any tampering. Generating and checking proofs is not easy though, and could significantly slow down the entire network with just one message, so Riffle also uses what is called authenticated encryption. It is more efficient than the verifiable shuffle, but requires the sender and receiver to share a cryptographic key, so Riffle uses the verifiable shuffle technique to share a key between the user and every server, while authenticated encryption is used for the rest of the communication.
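As a rough illustration of the authenticated-encryption idea, here is a toy encrypt-then-MAC construction built from Python's standard hashlib and hmac modules. This is not Riffle's actual scheme, and the hash-based stream cipher is for demonstration only, never production use:

```python
import hmac
import hashlib

def _keystream(key, nonce, length):
    """Derive a keystream by hashing key||nonce||counter (a toy
    stream cipher for illustration only -- not production crypto)."""
    out = b''
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce +
                              counter.to_bytes(8, 'big')).digest()
        counter += 1
    return out[:length]

def ae_encrypt(key, nonce, plaintext):
    """Encrypt-then-MAC: the shared key both encrypts the message
    and authenticates the ciphertext, which is what lets a server
    cheaply verify a message has not been tampered with."""
    stream = _keystream(key, nonce, len(plaintext))
    ct = bytes(p ^ s for p, s in zip(plaintext, stream))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    return ct, tag

def ae_decrypt(key, nonce, ct, tag):
    expect = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expect):
        raise ValueError('message failed authentication')
    stream = _keystream(key, nonce, len(ct))
    return bytes(c ^ s for c, s in zip(ct, stream))

key, nonce = b'shared-key', b'nonce-01'
ct, tag = ae_encrypt(key, nonce, b'riffle message')
print(ae_decrypt(key, nonce, ct, tag))  # b'riffle message'
```

Checking one HMAC tag is vastly cheaper than generating and verifying a shuffle proof, which is the efficiency win described above.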
This combination of techniques means that the messages stay protected so long as one server in the entire network has not been compromised. That lone server can verify the authenticity of the messages and shuffle them so they cannot be tracked.
Posted: June 21, 2016 11:51AM
For several years now, processors have been gaining cores to improve performance, as more cores mean more processes can run at the same time. Some software has also been written to utilize multiple cores at the same time, accelerating its operation, but writing such parallel programs is not a simple endeavor. To address this issue and improve the performance of parallel programs, researchers at MIT have developed a new chip design called Swarm.
Part of why it is difficult to make a program run on multiple cores in parallel is that it requires breaking the program up into multiple tasks and explicitly enforcing synchronization between these tasks, as they access shared data. What makes Swarm different is that it handles the synchronization itself, making the programming significantly easier. In fact, the researchers created six Swarm versions of common algorithms and, in general, the Swarm programs were a tenth the size, and would run three to 18 times faster. In one case the algorithm ran 75 times faster.
Swarm manages this synchronization on its own by time-stamping tasks based on their priority, which is something the programmer has to set. This allows the chip to automatically decide whether a task can write to the shared memory, protecting higher priority tasks from getting incorrect data. Currently this kind of synchronization has to be implemented by programmers, but now the chip will handle it and has circuitry built in for maintaining the queue of tasks.
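In software terms, the timestamp-ordered commit idea might be sketched like this. It is a simplified model of the hardware behavior with made-up tasks; the real Swarm chip also executes tasks speculatively in parallel, rolling back any that conflict:

```python
import heapq

def run_swarm_tasks(tasks):
    """Order tasks by programmer-assigned priority timestamps (a
    simplified software model of what the Swarm chip does in
    hardware): the lowest timestamp always commits its writes to
    shared memory first, so later tasks never see stale data."""
    shared = {}
    queue = [(ts, i, fn) for i, (ts, fn) in enumerate(tasks)]
    heapq.heapify(queue)   # the hardware task queue, in miniature
    order = []
    while queue:
        ts, _, fn = heapq.heappop(queue)
        fn(shared)         # commit this task's writes
        order.append(ts)
    return shared, order

# three tasks touching the same shared value, submitted out of order
tasks = [
    (2, lambda mem: mem.__setitem__('x', mem.get('x', 0) + 1)),
    (0, lambda mem: mem.__setitem__('x', 10)),
    (1, lambda mem: mem.__setitem__('x', mem['x'] * 2)),
]
shared, order = run_swarm_tasks(tasks)
print(shared['x'], order)  # 21 [0, 1, 2]
```

The programmer only assigns the timestamps; the ordering and conflict handling happen automatically, which is the programming simplification the researchers highlight.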
Posted: June 20, 2016 09:03AM
Researchers at the University of California, Davis appear to have set some new records, as they have designed the first processor with 1000 independent cores. It also has the highest clock-rate for a processor designed at a university and is so efficient it can execute 115 billion instructions per second while dissipating only 0.7 W. That makes it over 100 times more efficient than a typical laptop processor.
This processor consists of 621 million transistors and has an average maximum clock speed of 1.78 GHz. Part of the reason it is an average clock speed is because the cores can be independently clocked, allowing some to be shut down to conserve power. Each core can also run its own program separately from the others, which allows a larger application to be broken into smaller pieces with each piece running in parallel. The cores can also transfer data directly to each other, instead of relying on a cache to store the data for other cores to access.
With 1000 cores, this processor is meant for processing large amounts of data in parallel, as is needed for scientific data operations and datacenter record processing, but the researchers have also built applications for wireless coding and decoding, video processing, and encryption. The chip was fabricated by IBM using its 32 nm CMOS process.
Source: University of California, Davis
Posted: June 17, 2016 01:38PM
There are a whole host of things that can be made by 3D printing, but what most people are printing does not really push a printer's limits. Interested in pushing the technology, researchers at MIT decided to experiment with printing hair, and found it to be a rather interesting challenge, though not because of the hardware. The software driving 3D printers would crash before it could finish, so the researchers developed a clever solution.
With current systems, 3D printing hair would require modelling every single strand in CAD, and then a slicer program would have to recreate each hair's contour as a mesh of triangles. From that, horizontal cross sections would need to be generated from the triangles, and those cross sections would then have to be made into bitmaps for the printer to work with. For an area the size of a postage stamp covered with 6000 hairs, that process would take several hours to complete. The MIT researchers' solution was to develop a new piece of software that models the array of hair in a very different way. First they modelled a single hair as an elongated cone, so the hair's height, angle, and width could be manipulated by changing the pixels of the cone. To scale this up to thousands of hairs, the researchers created a color mapping technique, where red, green, and blue represent the parameters of height, width, and angle. So a circular patch of hair with taller strands at the rim would look like a red circle with darker hues at the rim. An algorithm can then take this color map and quickly generate a model for the 3D printer to work with.
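A sketch of the color-mapping idea follows. The channel-to-parameter assignment (red for height, green for width, blue for angle) comes from the description above, but the scale factors are illustrative assumptions, not the researchers' values:

```python
def hair_from_pixel(r, g, b):
    """Decode one RGB pixel of the color map into the parameters
    of a single cone-shaped hair (scale factors are assumptions)."""
    height = r / 255 * 10.0   # mm; a taller strand is a redder pixel
    width = g / 255 * 0.5     # mm; cone base width
    angle = b / 255 * 90.0    # degrees tilted from vertical
    return height, width, angle

def model_from_colormap(pixels):
    """Turn a 2D grid of RGB pixels into a list of cones, one per
    hair, ready for slicing -- far cheaper than meshing thousands
    of individual strands in CAD."""
    cones = []
    for y, row in enumerate(pixels):
        for x, (r, g, b) in enumerate(row):
            cones.append(((x, y),) + hair_from_pixel(r, g, b))
    return cones

# a 2x2 patch: one tall vertical strand at (0, 0), shorter elsewhere
patch = [[(255, 128, 0), (64, 128, 0)],
         [(64, 128, 0), (64, 128, 0)]]
cones = model_from_colormap(patch)
print(cones[0])  # the tall strand at (0, 0)
```

Because each hair is just one pixel, a 6000-hair patch is a 6000-pixel image rather than a multi-hour meshing job.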
While this approach for 3D printing hair could be used to print wigs and such, the researchers envision it being used to create hair for the more useful tasks of sensing, adhesion, and actuation. Of course, now that this new technique exists for 3D printing, others could find some even more interesting applications.
Posted: May 18, 2016 11:15AM
For the thousands of years humanity has been combining metals into various alloys, we have accepted that there is a tradeoff between strength and ductility: making a metal stronger makes it more susceptible to fracturing when deformed. This belief has also extended to high-entropy alloys, but researchers at MIT borrowed a property of steel and overcame the tradeoff.
Steel is a tremendously ubiquitous alloy, thanks to its strength and the great many alloys the name actually covers. By altering the components that are mixed together to form steel, its properties can be tailored to fit specific needs. Many steels, and alloys in general, possess stable single-phase microstructures, which means the atomic structure has a specific arrangement it most prefers, but some advanced steels have metastable phases. This means the steel has multiple stable internal structures, so when it is under stress, these phases can transition to more stable ones, making the alloy more resilient to fractures.
High-entropy alloys, or HEAs, are alloys made from multiple metals in roughly equal proportions, unlike traditional alloys, which are dominated by one material. In theory they could be very strong and light, but they have so far been subject to the strength-ductility tradeoff. The MIT researchers decided to borrow from those advanced steel alloys and produce HEAs with metastable configurations. The result was an alloy made of iron, manganese, cobalt, and chromium that surpasses the best single-phase HEAs.
This discovery should have great applications in the future, as this strategy can be used to create many other alloys with the best of both worlds.
Posted: May 10, 2016 11:54AM
In general, simplifying processes is important for making them more common, as fewer steps can speed things up and reduce costs. When chemistry is involved though, it can be very difficult to remove certain steps, and in some cases impossible. Luckily researchers at Berkeley Lab have succeeded in greatly simplifying the process of creating advanced biofuels.
What has been created here is a one-pot method for making these biofuels, which means the ingredients can be loaded into a chemical reactor and the output will be the desired end result, instead of an intermediary material. Normally producing these biofuels requires using ionic liquids, which are salts that are liquid at room temperature and work as solvents. These are needed to break apart the cellulose, hemicellulose, and lignin of the plant matter before enzymes can get at the resulting sugars to produce biofuels. The issue has been that the ionic liquids damage the enzymes, so it has been necessary to add a step to remove them. What the Berkeley researchers have done is mutate a gene in E. coli to make it resilient to ionic liquids, and also give it the ability to create similarly resilient enzymes for creating biofuels.
Part of what makes this discovery important is that this strain of E. coli can be used as the base for other technologies to create advanced biofuels, such as jet fuel or its precursors.
Source: Berkeley Lab
Posted: May 9, 2016 09:25AM
Security software such as antiviruses and parental controls is meant to keep our computers safe, but researchers at Concordia University have found that some of it actually makes us more vulnerable. This is because of how the software sets up a TLS proxy to filter out unwanted content.
To keep a computer from visiting websites that are either dangerous or something a parent does not want a child to see, a piece of software can be installed that will monitor this traffic and block those sites. In some cases the software works by checking the domain name, but in other cases it establishes a proxy and checks the certificate for the website. As a browser will also check such certificates, the software has its own certificate to pass on to the browser, and this is where the vulnerability lies. Because the browser is left to assume the software's certificate is valid, anything signed with that certificate will get through, and the software itself can be vulnerable to other attacks. For example, if the certificate the software uses is pre-generated and static, then every user can be attacked the same way.
The researchers tested 14 pieces of software, and found each one reduced the TLS security of the system it was installed on. In one case, an antivirus left users open to attack because after its license expired it ceased to check certificates, and also stopped receiving updates, and one of the parental control applications left its pre-generated certificate on the computer, even after it was uninstalled. This means that any traffic using that certificate would be seen as trusted by the computer.
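One way a client-side audit might catch the static-certificate problem is by comparing certificate fingerprints, sketched here with entirely hypothetical certificate bytes and fingerprint values:

```python
import hashlib

# Fingerprints of certificates known to be pre-generated and shipped
# unchanged with filtering software (hypothetical example values).
STATIC_PROXY_FINGERPRINTS = {
    hashlib.sha256(b'example pre-generated proxy certificate').hexdigest(),
}

def audit_certificate(der_bytes, pinned_fingerprint=None):
    """Flag two problems described above: (1) the certificate
    matches a known static proxy certificate, meaning every user of
    that software can be attacked the same way; (2) it differs from
    the fingerprint the site is expected to present, suggesting a
    TLS proxy is intercepting the connection."""
    fp = hashlib.sha256(der_bytes).hexdigest()
    if fp in STATIC_PROXY_FINGERPRINTS:
        return 'static proxy certificate -- shared across all installs'
    if pinned_fingerprint is not None and fp != pinned_fingerprint:
        return 'certificate does not match pin -- possible TLS proxy'
    return 'ok'

cert = b'example pre-generated proxy certificate'
print(audit_certificate(cert))  # flags the shared static certificate
```

Real audits work on DER-encoded X.509 certificates pulled from the TLS handshake; the principle of fingerprint comparison is the same.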
The researchers have contacted and reported their findings to the software manufacturers, so hopefully these issues can be addressed. They also suggest new guidelines for handling TLS proxies be developed, so as to prevent vulnerabilities from being added when a user is trying to, or required to, secure their computer.
Source: Concordia University
Posted: May 6, 2016 12:05PM
Since graphene's discovery, researchers have been finding more and more applications for this form of carbon. With its amazing mechanical and electrical properties, it is hardly surprising that so many have been found. Now researchers at Chalmers University of Technology have discovered an effective means of cooling electronics and other devices.
Graphene is an atom-thick sheet of carbon that has many promising properties, including great electrical and thermal conductivity. These are both reasons why we may see it used in various devices, and the Chalmers researchers have found how to enhance its thermal properties. By adding functionalized molecules to graphene nanoflakes, the researchers were able to improve a graphene film's heat transfer efficiency by 76%. This is because of how phonons, the quanta of heat, are constrained by the functional layer, and it could lead to means of controlling the heat of electronic and optoelectronic systems. That control could in turn be used to better cool electronic and optoelectronic devices, allowing for greater performance.
Posted: May 6, 2016 06:04AM
I am not sure if this was in any work of science fiction first, but it does leave me feeling like science fiction is becoming reality. Researchers at Carnegie Mellon University have developed a way to effectively turn the skin on your arm and hand into a touchscreen. This would allow someone to interface with a smartwatch, or other devices, without having to cover the small screen with their fingers.
Other 'skin to screen' tracking systems have been developed before, but those required special overlays, textiles, and combinations of projectors and cameras to work, which is hardly ideal. This system, called SkinTrack, uses pairs of electrodes and a ring for someone to wear on their finger. This ring emits a low-energy, high-frequency signal that propagates across the skin when the finger touches it, or is just close to it. The electrodes pick up this signal and can triangulate the finger's location from differences in the signal's phase. Different locations and movements can then be used as various inputs for a device, such as scrolling, highlighting buttons, and even hitting shortcuts.
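A toy version of the phase-based localization, reduced to one dimension between two electrodes; the frequency, propagation speed, and geometry here are illustrative assumptions, not SkinTrack's calibrated values:

```python
import math

def finger_position(phase1, phase2, x1, x2, freq, speed):
    """Estimate the finger's position along the line between two
    electrodes from the phase of the ring's signal at each one.
    Phase accumulates as 2*pi*f*d/v over distance d, so the phase
    difference gives the path-length difference."""
    wavelength = speed / freq
    d_diff = (phase1 - phase2) * wavelength / (2 * math.pi)
    # d1 - d2 = (x - x1) - (x2 - x)  =>  solve for x
    return (d_diff + x1 + x2) / 2

# simulate: finger at 0.07 m between electrodes at 0.0 m and 0.10 m
freq, speed = 80e6, 2.0e8   # assumed signal frequency and speed
x_true, x1, x2 = 0.07, 0.0, 0.10
p1 = 2 * math.pi * freq * abs(x_true - x1) / speed
p2 = 2 * math.pi * freq * abs(x2 - x_true) / speed
print(round(finger_position(p1, p2, x1, x2, freq, speed), 3))  # 0.07
```

The real system uses multiple electrode pairs to recover a 2D position on the arm; this shows the core phase-difference trick along one axis.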
The researchers found the system could identify when the finger was touching with 99% accuracy and had an error of 7.6 mm for the location of the finger. Over time this precision can drop though, as the conductivity of the skin can change due to sweat and hydration, as well as body movements. Another issue is that the ring will lose power eventually. Still, this is pretty cool and could definitely lead to advances for smartwatches and other pieces of advanced technology.
Source: Carnegie Mellon University
Posted: April 29, 2016 11:08AM
As silicon-based electronics approach their theoretical limitations, many new technologies are being developed and investigated for replacing this long-lived standard. Among these are optics-based systems that use photons or plasmons for transmitting, processing, and storing data. Researchers at ITMO University have recently found a way to build hybrid nanoantennas that could help optical technologies replace modern devices.
There are a number of reasons why people want to see photons replace electrons in computers, including their greater speed, ability to store more data, and the fact that they do not generate as much heat when used. Working with them, however, is difficult and requires precisely created nanoantennas to localize light to specific areas. Building these nanoantennas is not easy, but the ITMO researchers have found a way to create arrays of hybrid nanoantennas, and to adjust those antennas. The antennas themselves consist of a truncated silicon cone with a particle of gold on top. This gold particle may start as a disk, but with a femtosecond laser it is possible to change its shape to a sphere or a cup, altering the antenna's optical properties. This will allow the nanostructure to have its properties tailored to fit desired roles.
These nanoantennas are roughly the same size as a bit in a modern optical disk, which can store about 10 Gb per square inch. Unlike those bits though, the antennas are able to control the color of light, so if used for data storage, the capacity would be greatly increased by this added dimension.
Source: ITMO University
Posted: April 28, 2016 08:02AM
Ever since electronic computers were first developed, one of their primary applications has been for simulating various systems. These simulations allow predictions to be made but they are also a means for researchers to more closely study and analyze the processes involved. Some systems are harder to simulate than others, such as those ruled by quantum mechanics, because of how fragile they are, but researchers at the University of New South Wales have built a device that could serve as a quantum simulator.
The idea behind many simulators is to take a hard-to-examine process or system and recreate it in an environment more easily studied. A computer simulation allows every aspect to be studied, but quantum systems can be so complicated that even the most powerful supercomputers cannot run them efficiently. What the New South Wales researchers have done is dope a pair of boron atoms into a silicon crystal, separated by just a few nanometers. In this configuration, they behaved like valence bonds, which are what hold many molecules together when the orbitals of unpaired electrons overlap. The researchers were then able to directly measure the clouds of electrons around the atoms, and the interactions between the spins of the electrons.
The observed behavior in this simulation matches the Hubbard model, which is what describes how electrons interact, with their wave-like properties, and is central to explaining many phenomena. The researchers also made a curious discovery as the electrons involved were entangled with each other, but this entanglement actually increased with their separation, instead of decreasing. It is quantum mechanics though, where the counterintuitive is so often the norm.
Source: University of New South Wales
Posted: April 27, 2016 09:14AM
It might not be the most pleasant image for some people, but we are approaching a time when robotic drones will be freely moving around us for various reasons. In some cases a swarm of robots might be used instead of single drones, which makes it vital that they all act in concert. In general this can involve centralized or decentralized algorithms, and now researchers at MIT have developed a new decentralized algorithm that significantly reduces the amount of communication needed between the drones.
A centralized algorithm for controlling a swarm has a single computer make all of the decisions for the robots, which is fine unless that computer goes offline. Decentralized algorithms, where each robot makes its own decisions, do not suffer from this problem, but are much harder to design, as each robot must guess what the others are doing. To that end, these algorithms have the robots scan their local environment for obstacles and transmit their maps to the rest of the swarm, so every robot has the same information to work with. What the new MIT algorithm does is cut down the size of these transmissions dramatically by only sending the intersection between different maps. So after the first drone transmits its complete map to its neighbors, these neighbors identify the overlap with the maps they constructed, and then only that overlap is transmitted.
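The bandwidth saving can be sketched with sets of obstacle cells. This is a loose model of the description above, not MIT's actual algorithm, and the maps are made up:

```python
def exchange_maps(maps):
    """A loose sketch of the communication-saving idea: the first
    drone broadcasts its full obstacle map, and each later drone
    transmits only the overlap between its own map and what has
    already been shared, rather than its whole map."""
    received = set(maps[0])          # full map from the first drone
    cells_sent = [len(maps[0])]
    for m in maps[1:]:
        overlap = set(m) & received  # the intersection is what is sent
        cells_sent.append(len(overlap))
        received |= overlap
    return received, cells_sent

drone_maps = [
    {(0, 1), (2, 3), (4, 4), (5, 1)},   # drone A's detected obstacles
    {(2, 3), (4, 4), (9, 9), (8, 2)},   # drone B
    {(4, 4), (5, 1), (7, 7)},           # drone C
]
consensus, sent = exchange_maps(drone_maps)
print(sent)  # [4, 2, 2] -- later drones send far fewer cells
```

After the first full broadcast, every subsequent transmission is only as large as the overlap, which is where the communication savings come from.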
As this intersection is significantly less information than the entire composited map, it cuts down on the communication between the drones, but the robots will still have a map of every detected obstacle. It also works for detecting moving obstacles and is completed many times a second, so sudden changes in an obstacle's velocity should not be an issue.
Posted: April 26, 2016 08:50AM
When it comes to storing electricity, the two main ways to do so are batteries or supercapacitors, which both offer their own advantages and disadvantages. Hybrid batteries try to combine the two to get the best of both worlds, and now researchers at PNNL have found a way to make them even better.
The advantage batteries come with is tremendous energy density for their size, while supercapacitors store less energy but can be very quickly charged and discharged, unlike batteries. Hybrid batteries contain both technologies by making the electrodes out of supercapacitors, so that one can have a fast-charging and long-lasting device. These electrodes can be made from carbon nanotubes, and it has been discovered before that spraying polyoxometalate, or POM, onto them can improve their performance by adding ions to the surface. The catch is that only negative ions are desired, but POM includes both positive and negative ions. What the PNNL researchers did was change the method of applying the POM to ion soft-landing, which allows for precise control over what is applied; in this case, applying only negative ions.
The resulting hybrid batteries stored 27% more energy than those made by more conventional methods. They also only lost a few percent of their capacity after 1000 charge and discharge cycles, while the conventional hybrid batteries were at half-capacity by then. When the researchers closely examined the electrodes they found that the ion soft-landing method allowed the negative ions to more evenly cover the electrodes, while the positive ions deposited by other methods resulted in the material clumping up on the electrode. This also means less POM was needed to achieve optimal results.
Posted: April 25, 2016 08:25AM
Something shared across many video games is a set of specific archetypes, such as tanks, fighters, mages, rogues, assassins, and so on. In some games you are able to select what role you play, while in other games it may be selected for you, or never even described. Researchers at North Carolina State University decided to look into these roles and see if they influence a player's behavior, and if selecting a role makes any difference.
To run this experiment, the researchers created a single-player RPG (which you can play at http://go.ncsu.edu/ixd-demo-rpg) and had 210 people play it. Of those, 78 were assigned the role of fighter, mage, or rogue, while 91 were allowed to select their role, and the final 41 played without a role. The game contained twelve multiple-choice decisions that were carefully constructed to align with the three roles, to see if players maintained the role as they played. The results showed that whether the players selected or were assigned their role, they maintained it most of the time, with fighters being consistent 65.7% of the time, mages 76.1% of the time, and rogues 69.7% of the time. Even players who were not given a specific role made decisions consistent with one.
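The consistency figures above amount to finding, for each player, the role that best explains their twelve choices. A sketch with an entirely hypothetical playthrough and choice tagging:

```python
def role_consistency(decisions, role_of_choice):
    """Find the role that best explains a player's decisions and
    the fraction of decisions aligned with it. role_of_choice maps
    each (question, chosen option) to the role that option was
    constructed to match (tags here are illustrative)."""
    roles = [role_of_choice[d] for d in decisions]
    best = max(set(roles), key=roles.count)
    return best, roles.count(best) / len(roles)

# hypothetical tagging: option 'a' matches fighter, 'b' mage, 'c' rogue
role_of_choice = {}
for i in range(12):
    role_of_choice[('q%d' % i, 'a')] = 'fighter'
    role_of_choice[('q%d' % i, 'b')] = 'mage'
    role_of_choice[('q%d' % i, 'c')] = 'rogue'

# a hypothetical player's twelve choices
play = [('q0', 'a'), ('q1', 'a'), ('q2', 'b'), ('q3', 'a'),
        ('q4', 'a'), ('q5', 'a'), ('q6', 'c'), ('q7', 'a'),
        ('q8', 'a'), ('q9', 'a'), ('q10', 'b'), ('q11', 'a')]
print(role_consistency(play, role_of_choice))  # ('fighter', 0.75)
```

Run over every participant, a tally like this yields the per-role consistency percentages the study reports.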
This study indicates that even without explicit role-playing elements in a game, players will assume and maintain roles on their own, which could influence how game designers develop games. It also means that other studies examining player choice should be careful to remove role as a variable, as it could skew results.
Source: North Carolina State University
Posted: April 22, 2016 06:58AM
Wireless communication is something many of us rely on today for connecting our various devices to the Internet, so there is a constant drive to increase wireless speeds. One way to achieve this is to build systems that allow for the simultaneous transmission and reception of signals, but achieving this is somewhat difficult with a single antenna. Researchers at the Columbia University School of Engineering and Applied Science, though, have built an on-chip solution that could bring full duplexing and doubled speeds to devices like our phones.
Currently many devices use half-duplexing to connect to a Wi-Fi network, which means that while one antenna sends and receives all of the information, it does not do both at the same time. This is because the electronic structures used exhibit Lorentz reciprocity: electromagnetic waves propagate the same way in both directions. One way to overcome this issue is to use magnetic materials to create a radio-frequency circulator. When the material is exposed to an external magnetic field, reciprocity is lost, allowing the incoming and outgoing signals to be separated. Such circulators cannot be integrated into silicon chips though, and even then they are rather large for use in something like a phone. To solve this problem, the Columbia researchers created a new, electronic circulator that is highly miniaturized and uses a set of capacitors to replicate the non-reciprocal twist the magnetic circulators produce.
The researchers have already demonstrated this new circulator design by building a prototype of their full-duplex system that also features an echo-cancelling receiver. By integrating the circulator into the same chip as the rest of the radio, it should be possible to keep the size of the system and the cost down, allowing for full-duplex communications and potentially doubling network capacity.
Source: Columbia University
Posted: April 21, 2016 12:38PM
The next time you archive some files and compress them, you might think about the process a little differently. Researchers at the National University of Singapore have discovered that a common compression algorithm can be used to detect quantum entanglement. What makes this discovery so interesting is that it does not rely heavily on the assumption that the measured particles are independent and identically distributed.
If you measure a property of a particle and then measure the same property of another particle, in classical mechanics there is no reason for them to match beyond pure chance. In quantum mechanics though, the two particles can be entangled, such that the results will match each other. This follows from Bell's theorem, which is applied to test whether particles are in fact entangled. The catch is that the theorem is derived for testing pairs of particles, but many pairs have to be measured and the probabilities that they are entangled calculated. This is where the researchers' discovery comes into play, because instead of calculating probabilities, the measurements can be fed into the open-source Lempel-Ziv-Markov chain algorithm (LZMA) to get their normalized compression difference. Compression algorithms work by finding patterns in data and encoding them more efficiently, and in this case they also find correlations from quantum entanglement. If the data is classical, the normalized compression difference must be less than zero, but with quantum mechanics it can reach 0.24.
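The classic normalized compression distance, a close cousin of the quantity described above (the paper's "normalized compression difference" is defined differently, with its own classical bound), can be computed with Python's standard lzma module:

```python
import lzma
import random

def clen(data):
    """Compressed length of a byte string under LZMA."""
    return len(lzma.compress(data))

def ncd(x, y):
    """Classic normalized compression distance: near 0 when x and y
    share structure the compressor can exploit, near 1 when they
    look statistically independent."""
    cx, cy, cxy = clen(x), clen(y), clen(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

rng = random.Random(0)
a = bytes(rng.getrandbits(8) for _ in range(4096))
b = bytes(rng.getrandbits(8) for _ in range(4096))
print(round(ncd(a, a), 2))  # perfectly correlated data: small distance
print(round(ncd(a, b), 2))  # independent data: close to 1
```

The same intuition drives the entanglement test: correlated measurement records compress together better than independent ones, and the compressor does the statistics for free.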
When tested, this approach returned a value of 0.0494 ± 0.0076, showing the data did cross the classical-quantum boundary. It falls below the 0.24 theoretical maximum because the quantum states cannot be created and measured perfectly, and the compression algorithm is not ideal.
Posted: April 21, 2016 07:01AM
While many of us may be transitioning to solid state data storage for greater read and write speeds, magnetic storage devices still have great data density. That density may be hitting new highs in the future as researchers at EPFL have demonstrated single-atom magnets that remain stable at a new record-high temperature of 40 K.
Magnetism is the result of the spin of electrons, a quantum mechanical property, but it works at the macroscale when the spins of many electrons line up. In a single atom, though, the spin of an electron can easily be flipped by the environment, with magnetic remanence describing how well a magnet stays magnetized. The researchers built prototype single-atom magnets from holmium atoms placed in ultrathin films of magnesium oxide. The electronic structure of the holmium atoms protects their magnetic moments from flipping, and at 40 K the magnetic remanence is stable. Previous magnets consisting of three to twelve atoms required even lower temperatures, or showed poorer remanence at temperatures where the holmium magnets remain stable.
While 40 K is a bit too low for many practical uses, this still makes these the smallest and most stable magnets on record, and single atoms are the ultimate goal for miniaturized data storage. Hopefully we will see this record broken before long.
Posted: April 20, 2016 08:34AM
Heat engines have been around for a long time and are used to convert thermal energy into mechanical force. Now thanks to researchers at Johannes Gutenberg University, a single atom has been turned into a heat engine, which could have applications for studying thermodynamics and quantum thermodynamics.
The core of this heat engine is a single calcium atom that has been electrically charged so it can be held in a trap. It can then be heated with electrically generated noise and cooled with a laser beam, which drives it through a thermodynamic cycle. That means the atom moves back and forth within the trap, just like the strokes of a heat engine. While the single atom only generates 10⁻²² watts at 0.3% efficiency, if the engine were scaled up to the mass of a car engine, the output would be comparable.
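The mass-scaling claim can be sanity-checked with back-of-envelope arithmetic. The per-atom power is the figure from the article; the 100 kg car-engine mass is an assumption for illustration:

```python
# Scale the single-atom engine's output up to car-engine mass.
# POWER_PER_ATOM_W is from the article; ENGINE_MASS_KG is an
# assumed, illustrative figure.
CALCIUM_ATOM_MASS_KG = 40 * 1.66054e-27   # calcium-40, ~40 atomic mass units
POWER_PER_ATOM_W = 1e-22                  # output of the single-atom engine
ENGINE_MASS_KG = 100.0                    # assumed mass of a car engine

atoms = ENGINE_MASS_KG / CALCIUM_ATOM_MASS_KG
scaled_power_w = atoms * POWER_PER_ATOM_W
print(f"Scaled output: {scaled_power_w / 1e3:.0f} kW")
```

The result lands around 150 kW, roughly 200 horsepower, so the "comparable to a car engine" claim checks out to within the roughness of the assumptions.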
Chances are this design will not be used to actually generate power, but instead to study the thermodynamics of single particles, and if the operating temperatures can be lowered sufficiently, it could become a window into quantum thermodynamic effects. It is also possible to reverse the cycle, turning it into a single-atom refrigerator for cooling nanosystems.
Source: Johannes Gutenberg University
Posted: April 19, 2016 02:03PM
Currently thin is in for many devices, including phones, tablets, and laptops, but in the future we may see flexibility become the physical feature of choice. For some technologies this makes sense, as flexibility allows a device to be deployed in more places by wrapping around objects. Researchers at Columbia University are working towards flexible cameras and have recently developed a flexible lens array free of aliasing artifacts.
One way to create a flexible lens array is to attach rigid lenses of fixed focal length to a flexible material, but this has a significant flaw: as the material bends, gaps form between the lenses' fields of view. The researchers solved this problem by making the lenses themselves flexible, so the bending alters the focal length as needed. This prevents any gaps, or aliasing artifacts, from forming, and it was achieved passively by optimizing the geometry and material properties of the silicone used, so no special mechanical or electrical systems are needed.
This lens is just half of a flexible camera, as a large-format flexible detector also has to be developed. Once that is created, though, this new class of cameras will enable many applications not currently possible with rigid cameras, and we could potentially see them made cheaply, like a roll of plastic.
Posted: April 18, 2016 09:09AM
So many items around us are becoming connected now, and eventually even our clothes will contain electronics. Creating e-textiles has not proven to be easy though, as textiles must be flexible while electronic components are typically rigid and fragile. Several advances have been made over recent years though, and now researchers at Ohio State University have successfully found a new means of embroidering circuits into clothes.
Previously the Ohio State researchers worked with a silver-coated polymer thread that measured about 0.5 mm in diameter, with each thread consisting of 600 smaller filaments. What the researchers have done recently though is switch to a new thread just 0.1 mm in diameter and made of just seven filaments. This new thread has a copper core that is enameled with pure silver, but is still able to be embroidered like a traditional thread. The researchers have already demonstrated this by feeding it into a sewing machine that then embroidered different shapes into textiles, and these shapes were functional circuits and antennas. In fact, a broadband antenna they made, which is able to work over a broad spectrum of frequencies like our mobile devices, showed off near-perfect efficiency from 1 to 5 GHz.
Potentially this antenna design could allow our clothes to boost the reception of smartphones and tablets. The wire used costs about three cents a foot, and one antenna takes about ten feet, so that is thirty cents, which is 24 times cheaper than similar antennas the researchers made in 2014. The savings also come from a refined technique that needs only one embroidered layer, saving time and material.
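The cost figures from the article work out as straightforward arithmetic:

```python
# The article's cost arithmetic: thread price times antenna length,
# plus the implied price of the 2014 antennas.
COST_PER_FOOT = 0.03       # dollars per foot of thread
FEET_PER_ANTENNA = 10
IMPROVEMENT_FACTOR = 24    # "24 times cheaper" than the 2014 design

antenna_cost = COST_PER_FOOT * FEET_PER_ANTENNA
cost_2014 = antenna_cost * IMPROVEMENT_FACTOR
print(f"${antenna_cost:.2f} per antenna, vs ${cost_2014:.2f} in 2014")
```

That puts the 2014 antennas at roughly $7.20 apiece, against thirty cents now.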
Source: Ohio State University
Posted: April 15, 2016 08:44AM
When recharging a battery, it will start to heat up. For many applications that is not too big a problem, but in some cases the heat can kill the battery, or even ignite it. To address the issue, work is being done to develop new battery components that can take the heat. Researchers at Rice University have recently developed a combined electrolyte and separator that can survive temperatures up to 150 °C.
Last year this same group of researchers discovered that a kind of clay could be used as an electrolyte at up to 120 °C. From that work they speculated that hexagonal boron nitride (h-BN), or white graphene, could do an even better job. It is called white graphene because it is structurally similar to ordinary graphene, a form of carbon just one atom thick. Unlike graphene, though, h-BN is an insulator and is not a good ionic conductor either. With properties like that, one would not expect it to improve a battery's performance, yet it did. Despite being relatively inert, when combined with a piperidinium-based ionic liquid and a lithium salt, it appeared to catalyze better reactions among the chemicals around it.
With the ability to operate from room temperature up to 150 °C, batteries using it can have very wide operating temperature ranges, which will be important for some industrial and aerospace applications. For example, wellheads in the oil and gas industry require batteries that can survive the high temperatures they are exposed to. Currently non-rechargeable batteries have to be used, because only they can endure the temperatures involved, but that may now change.
Source: Rice University
Posted: April 14, 2016 07:36AM
As powerful as modern computers have become, there are some operations they will never be able to do very well. Quantum computers, however, have the potential to complete some of these operations very quickly because of the quantum mechanical effects they have at their disposal. The catch is that quantum mechanical systems are as fragile as they are powerful, but researchers at MIT have developed a new means of stabilizing quantum bits.
In traditional computers, information is stored with the charge of electrons, but in quantum computers the quantum bits, or qubits, store information in properties that can enter a superposition. Superposition is a quantum mechanical phenomenon that allows a particle to exist in multiple, normally exclusive states, but it is also very fragile. The qubit in this case is a nitrogen-vacancy (NV) center within a diamond. A pure diamond is composed entirely of carbon, but researchers discovered that replacing one carbon atom with a nitrogen atom, and removing another carbon atom next to it, creates a quantum system that can be used as a qubit. What the MIT researchers did was use microwave exposure to entangle the state of the electrons within the nitrogen vacancy with the state of the nitrogen atom's nucleus. This entanglement means that if anything goes wrong while the quantum computations are done, both the NV center and the nucleus will be affected. After the computation is completed, the nucleus and NV center are disentangled and exposed to additional microwaves. These microwaves are calibrated so that their effect on the NV center depends on the state of the nitrogen nucleus, so the qubit is touched only if an error occurred.
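The conditional-correction idea can be sketched classically. This toy model ignores all actual quantum dynamics and superposition; it only shows the logic by which the nuclear "flag" records a disturbance and gates the final corrective pulse. All names are illustrative:

```python
# Toy classical sketch, not a quantum simulation: the nuclear spin
# serves as a record of whether the NV center was disturbed, and the
# final calibrated pulse corrects the qubit only when that record
# says an error occurred.
def run_cycle(error_happens: bool) -> bool:
    qubit = 0          # NV-center state; 0 is the intended state
    nucleus = 0        # nuclear "flag", entangled with the qubit
    if error_happens:  # a disturbance flips both together
        qubit ^= 1
        nucleus ^= 1
    if nucleus == 1:   # calibrated pulse acts only if the flag is set
        qubit ^= 1
    return qubit == 0  # did the intended state survive?

print(run_cycle(False), run_cycle(True))  # True True: corrected either way
```

Because the error touches the qubit and the flag together, checking the flag is enough to know whether a correction is needed, without directly measuring (and so disturbing) the qubit itself.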
In experiments, the researchers found this method allowed the qubit to stay in its superposition about a thousand times longer than without it. Obviously that is a significant accomplishment, and we could quickly see it used as part of new protocols for quantum computing.
Posted: April 13, 2016 12:02PM
This bit of research is from the University of Montreal, which might not be all that surprising. Researchers there have discovered that maple syrup can actually help protect neurons from amyotrophic lateral sclerosis, or ALS. Before you start downing any syrup though, this study was done with C. elegans worms, which do not have to worry about illnesses like diabetes, and it was just for educational purposes.
The C. elegans worms used were genetically modified to express TDP-43, a protein related to ALS, which leaves 50% of the worms completely paralyzed after two weeks. To see if maple syrup would make a difference, worms were given it at various concentrations and compared with worms on a normal diet. At the two-week mark, only 17% of the syrup-fed worms were paralyzed, showing the syrup did in fact help protect them from the illness.
The reason maple syrup helped is that it contains sugar and some powerful antioxidants called polyphenols. Neurons use sugar for food, and diseased neurons need more of it to fight the toxic proteins associated with ALS. Two of the antioxidants identified, gallic acid and catechol, also have a neuroprotective effect, which certainly helped as well, even though they are present only in small concentrations.
Posted: April 12, 2016 07:58AM
As technology has advanced, video games have benefited with richer and sharper graphics, which is well demonstrated now by the virtual reality displays currently available and those scheduled to release later this year. This realism has resulted in several studies finding players will actually feel guilty after amoral acts, such as unjustified violence. Now researchers at the University of Buffalo have discovered gamers can become desensitized as they continue to play a particular game, and this spills over to similar games.
A defense sometimes made of violence in video games is that actions in a virtual world do not translate to the real world, but the finding that gamers feel guilty after committing these virtual acts would seem to challenge that claim. This new study adds that playing a violent game over and over again desensitizes the player to this guilt, and that the effect carries over to similar games as well. Why this happens and what mechanisms are behind the desensitization remain unclear though.
Currently the researchers have two candidate explanations, the first being that gamers become less sensitive to the stimuli causing guilt. The second involves tunnel vision, with a gamer's perception changing from that of a non-gamer through repeated play. Eventually a gamer might simply process what they see differently, disregarding meaningless information and coming to recognize how artificial the virtual environment is. The researchers are planning future work to determine which explanation is correct.
Source: University of Buffalo
Posted: April 11, 2016 07:16AM
Undoubtedly encryption is a very important tool for securing communications, but modern encryption methods can all be beaten with clever tricks or brute force. In the future though, quantum encryption could be used to protect sensitive information in such a way that it cannot be compromised without the intended user's knowledge. Central to this kind of security is quantum key distribution, which has been limited to rather slow data rates of just a few hundred bits per second, but researchers at the University of Cambridge have found a way to speed it up by up to six orders of magnitude.
Quantum encryption protects data because the key to decrypt it is transmitted using quantum mechanical particles, such as photons. When these photons are observed, to determine what the key is, their quantum mechanical properties change, meaning the key is altered and this can be detected. While theoretically quantum encryption cannot be broken, it could potentially be compromised by attacking the real hardware components involved, so a protocol called measurement-device-independent quantum key distribution (MDI-QKD) was developed. While this has been demonstrated successfully, it has been limited to just a few hundred bits per second or less, because of how hard it is to create indistinguishable particles with the different lasers involved. The Cambridge researchers addressed this problem by developing pulsed laser seeding, which injects photons from one laser beam into another. This method reduces the time jitter of the pulses, allowing them to be significantly shorter.
Using pulsed laser seeding, a data rate of up to one megabit per second is possible, an improvement factor of between one hundred and one million. This new protocol could lead us to practical implementations of quantum cryptography.
Source: University of Cambridge
Posted: April 8, 2016 07:40AM
Many technologies we have today are only possible because of how much energy lithium-ion batteries can store, and their ability to be recharged. As the devices they power have improved though, these batteries have been approaching their limits, requiring new technologies or chemistry for the future. Thankfully researchers at NIST have found a way to improve the characteristics of a material that one day could serve as an energy storage medium.
The material in question is a compound of hydrogen and boron, and either lithium or sodium, with one of the boron atoms replaced with carbon. The researchers previously discovered this substitution improved the compound's ability to conduct ions by a factor of ten. What they have now found is a way to overcome an issue with its behavior at different temperatures. In an environment hotter than boiling water, the material conducts ions quite well, but at lower temperatures, such as room temperature, it loses its conductivity and thus its performance. The recent discovery is that by crushing the material into nanoscale particles, it maintains its conductivity at room temperature and far lower, making it potentially viable for batteries.
Now the researchers are investigating how the material might be used in next-generation batteries, with the hope of convincing people of the material's potential.
Posted: April 6, 2016 09:15AM
For many modern devices, the battery can be the bulkiest component, which is a problem as people demand smaller and thinner devices without sacrificing battery life. One way to improve the performance of a battery is to use electrodes with 3D microstructured architectures, as these provide more sites for ions to interact with. However, making such structures is difficult, but researchers at Aalto University have successfully demonstrated a way to build them.
This new method combines atomic and molecular layer deposition techniques to build thin films of lithium terephthalate, a hybrid organic/inorganic material recently found to be a viable anode for lithium-ion batteries. Surprisingly, even though lithium terephthalate is a hybrid compound, it survived the deposition process, which reaches temperatures between 200 °C and 280 °C. It also does not require conductive additives to achieve an excellent rate capability, though adding a protective layer of the solid-state electrolyte LiPON does enhance its performance.
When the researchers tested the anodes they constructed, they found the anodes retained 97% of their capacity after 200 charge/discharge cycles.
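Assuming the capacity fade is roughly geometric, which is an illustrative assumption rather than anything the researchers report, that figure implies a per-cycle retention of about 0.99985:

```python
# Per-cycle retention implied by 97% capacity after 200 cycles,
# under an assumed geometric fade model.
retained = 0.97
cycles = 200
per_cycle = retained ** (1 / cycles)
print(f"Per-cycle retention: {per_cycle:.5f}")
```

In other words, each cycle costs only about 0.015% of capacity under this model.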
Source: Aalto University