Science & Technology News (1268)
Posted: March 24, 2017 12:51PM
For a long time the material indium tin oxide (ITO) has been a requirement in several technologies because it is a transparent conductor. One issue with ITO is that indium is quite rare, making the material ever more expensive, and another is that it is rather fragile. A potential alternative transparent conductor is a silver ultrathin film, which has issues of its own, but researchers at the University of Michigan have recently solved some of them.
When trying to create a thin film of silver, it typically cannot be made thinner than 15 nm because silver likes to cluster into islands rather than form an even coating. What the Michigan researchers discovered is that by adding just 6% aluminum, the silver can be coaxed into forming a film just 7 nm thick. After applying an anti-reflective coating, they were able to make one layer up to 92.4% transparent. The aluminum is even more useful than that, though, as the ultrathin film did not tarnish in open air even after months, unlike pure silver films, which tarnish almost immediately, disrupting their conductive properties and transparency.
While there are some obvious uses in displays, this silver ultrathin film has far more potential than that. In this form, silver is able to carry plasmon polaritons: oscillations created when light strikes a metal, which carry the information of the light wave. Plasmons can be much smaller than the wavelength of the light, though, allowing the silver film to act as a kind of superlens. This gives the film potential uses inside computer chips as a means to transmit information optically, allowing for faster data transfer than electronic transmission currently allows. On top of that, by alternating layers of the silver film with an insulator, like glass, a metamaterial hyperlens could be made, which could then image objects smaller than the wavelength of light and enable laser patterning, such as that used to etch computer chips, to reach smaller feature sizes.
Source: University of Michigan
Posted: March 15, 2017 12:09PM
There are a number of battery technologies out there that employ different methods to store and deliver electricity. Many store energy within their solid electrodes, but flow batteries actually use liquid electrolytes for storage. Today you can find large-scale flow batteries in stationary applications, but thanks to researchers at ETH Zurich and IBM Research Zurich, a clever use of those liquid electrolytes may bring them to computing devices as well.
As the name indicates, the two liquid electrolytes in a flow battery move through it, and with the proper membrane between them, an electrical current is produced, as in any battery. What occurred to the researchers is that the electrolytes could also transport heat away from a system, like the liquid cooling system of a computer. Instead of designing something comparable in size to an AIO cooler, the researchers went even smaller, making a battery just 1.5 mm thick so that it can be integrated into a chip stack. In a chip stack, individual chips are stacked on top of each other to save space and energy, and by putting a thin battery micro-cell between them it is possible to both power and cool the chips.
This micro-battery design offers a record-setting 1.4 W/cm2 (1 W/cm2 after subtracting the power to pump the electrolytes) and is able to dissipate more heat than the battery generates as electrical energy. However, further optimization is going to be needed because, record-setting or not, it is not enough power for a computer chip to operate on. Still, this is a promising proof-of-concept that may have potential in a variety of applications and could improve large-scale flow batteries as well.
Source: ETH Zurich
Posted: March 13, 2017 01:09PM
Everybody wants their computer to run faster, but we are approaching theoretical limits on just what traditional silicon-based electronics are capable of. For faster speeds, new technologies will need to be developed, and they may take advantage of some unusual physics. Researchers at the University of Michigan have made a discovery that could lead to these faster computers by way of lightwave electronics.
In lightwave electronics, electrons are manipulated by an oscillating electric field generated by ultrafast laser pulses, causing them to move at high speeds in certain directions. In modern electronics, when one fast-moving electron hits another, heat is produced, but with lightwave electronics the electrons move so quickly they are unlikely to strike another before they stop. What the Michigan researchers have done is demonstrate this ability to move electrons in a semiconductor using pulses of terahertz radiation less than 100 femtoseconds long. These pulses pumped enough energy into the electrons to pop them up to a higher energy level, allowing them to move around the semiconductor crystal. The paths the electrons take are not random, though, nor even determined by the laser pulses, but by the structure of the crystal. When the electrons fall back down to a lower energy level, they emit light as an even shorter pulse than the terahertz source, and these pulses could actually be used to read and write information to other electrons.
Eventually we could see lightwave computers operating ten to 100,000 times faster than modern computers, but there is still a lot to do before then. In the meantime we may see this technique used to optimize chemical reactions and in quantum systems, including quantum cryptography.
Source: University of Michigan
Posted: March 6, 2017 02:04PM
Quantum mechanics is a weird and wonderful science that can allow seemingly impossible things to happen, such as particles to exist in multiple places at the same time. This and other phenomena have potential for use in quantum computers, but building such systems is far from easy, naturally limiting researchers' access to the potential computing power. Today IBM announced IBM Q, an initiative to build commercially available universal quantum computing systems, and that the company is working on a new SDK and API for the IBM Quantum Experience, which gives anyone access to IBM's quantum processor via the IBM Cloud.
With the new API for the Quantum Experience, more developers and programmers will be able to tap the five-qubit system already accessible, without requiring a background in quantum physics. The new SDK, however, which is to be released in the first half of this year, will allow circuits of up to 20 qubits to be modelled by an upgraded simulator. Even that will be surpassed, though, as IBM also intends to introduce commercial IBM Q systems with about 50 qubits in the next few years. At that level of complexity it should be possible to show how quantum computers can surpass classical computers.
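A rough back-of-envelope calculation shows why roughly 50 qubits is where classical simulation gives out: a full state vector for n qubits holds 2^n complex amplitudes, and at double precision each amplitude takes 16 bytes. (The function name below is illustrative, not part of IBM's SDK.)

```python
# Memory needed to hold the dense state vector of an n-qubit system:
# 2**n complex amplitudes at 16 bytes each (double-precision complex).

def state_vector_bytes(n_qubits: int) -> int:
    """Bytes required for a dense complex128 state vector of n qubits."""
    return (2 ** n_qubits) * 16

# 20 qubits (the upgraded simulator's target) is easily manageable:
print(state_vector_bytes(20) / 2**20, "MiB")  # 16.0 MiB

# ~50 qubits is not: about 16 pebibytes, beyond any single machine.
print(state_vector_bytes(50) / 2**50, "PiB")  # 16.0 PiB
```

Every added qubit doubles the memory, which is why a modest increase from 20 to 50 qubits moves the problem from a laptop to beyond the largest supercomputers.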
Posted: February 24, 2017 12:52PM
There are many systems out there used to secure digital systems and content, and one of them has just been dealt a significant blow. The cryptographic hash function SHA-1 has suffered its first collision, thanks to Google, effectively confirming it is no longer secure. Luckily there are stronger versions of the function out there already (SHA-256 and SHA-3, for example), but now it seems even more important for security professionals to move away from the 20-year-old standard.
Cryptographic hashes are an important part of securing data on the Internet, as the algorithms produce what should be unique message digests. If you want to check the authenticity of a file you download, and the source provides the digest, you can compare what the source provided against the digest your computer computes from what you downloaded; if they match, you know you got what you wanted and it was not compromised by some third party. What Google has done is produce the first collision for Secure Hash Algorithm 1, which means it has generated two files, in this case two PDFs, that have different content but the same SHA-1 message digest. Google started from a paper published in 2013 that described a theoretical approach to creating the collision, and work began on creating a PDF prefix that would do the job. It took a lot of computation, but the collision has been produced.
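The digest-comparison workflow described above can be sketched with Python's standard hashlib module; the helper name and chunk size are illustrative, and SHA-256 is used since SHA-1 is now known to be collision-prone:

```python
import hashlib

def sha256_digest(path: str) -> str:
    """Compute the SHA-256 hex digest of a file, reading in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# A collision means two different inputs share one digest. That has now
# been demonstrated for SHA-1; for SHA-256 no collision is known, so
# distinct content should always yield distinct digests:
a = hashlib.sha256(b"shattered-1").hexdigest()
b = hashlib.sha256(b"shattered-2").hexdigest()
print(a != b)  # True
```

To verify a download you would compare `sha256_digest("file.pdf")` against the digest string the publisher posted; any mismatch means the file is not what they published.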
For years it has been known and recognized that SHA-1 is not very secure, but it is still in use today to confirm website security certificates. However, Chrome 56 and newer will not consider any website with a SHA-1 certificate secure, and reacting to the news of the collision, Mozilla has accelerated its deprecation of SHA-1, rolling the change out to all Firefox 51 users.
You can find even more information about the collision and its potential impact at the second source link below.
Posted: February 22, 2017 03:02PM
Last year three planets were discovered in the TRAPPIST-1 system and today it has been announced that, with the help of the Spitzer Space Telescope, four more planets have been found there, bringing the total to seven. Using Spitzer's data the sizes of these planets have been measured, and first estimates of the masses of the inner six have also been made. Based on that information, these planets are most likely rocky, like Earth, and also Earth-like in size. While it will take more observations, it is possible all seven of these worlds possess liquid water on their surfaces, though only three of them are actually within the host star's habitable zone. The seventh planet, which has not yet had its mass estimated, could be an icy, "snowball-like" world.
While these planets may have some similarities to Earth, the TRAPPIST-1 system is very different. The star is considered an ultra-cool dwarf, which means it is cool enough for even the nearest planet to have liquid water on its surface. In fact, all of these planets are closer to this star than Mercury is to the Sun. The orbits are also so close together that if someone were to stand on the surface of one planet, they may be able to make out geological features and even clouds on another. Being so close to the host star, it is possible the planets are tidally locked, meaning one side is always facing the star, just like how one side of the Moon always faces Earth.
For now we can expect at least NASA's Spitzer Space Telescope, Hubble Space Telescope, and the Kepler observatory, which was specifically designed for planet-hunting, to continue studying TRAPPIST-1, which is named for the Transiting Planets and Planetesimals Small Telescope in Chile that originally found the system. This new information will be used to help plan missions for the upcoming James Webb Space Telescope, launching next year with the capability to detect the fingerprints of water, methane, oxygen, ozone, and other chemicals in the planets' atmospheres.
Posted: February 21, 2017 11:42AM
Hydrogen can be a potent fuel, but one issue that has been bedeviling its adoption, and that of the technology that can use it, has been transporting the gas. Moving the gas around the way conventional fuels are moved would practically require a whole new infrastructure, but researchers at the Georgia Institute of Technology have developed a solution. Instead of producing the hydrogen in one place and then transporting it to where it will be used, they have created a reforming reactor that can be deployed at the point of use, and it just so happens to be related to the internal combustion engine.
Modern means of creating hydrogen for use in fuel cells involve temperatures of around 900 °C, take three water molecules to create one hydrogen molecule, and produce a low-density gas. Wanting to design a better reactor, the researchers started thinking about the features it needed, and these included the ability to change the size of the reactor vessel. Looking at existing mechanical systems, the researchers realized the internal combustion engine, with its over one hundred years of development, provides this feature, so they figured out how to turn a four-stroke engine into the CO2/H2 Active Membrane Piston (CHAMP) reactor.
In the first stroke, with the piston moving down, natural gas (methane) and steam are pulled into the cylinder, and once the piston reaches the bottom of the cylinder, the valve closes. Next the piston starts moving up, compressing the steam and methane, while the cylinder is heated so that at 400 °C a catalytic reaction starts, creating hydrogen and carbon dioxide. A selective membrane allows the hydrogen to exit the cylinder while the CO2 is pulled into a sorbent material mixed with the catalyst. When the piston lowers again, reducing the pressure, the CO2 escapes the sorbent so it can be expelled when the piston moves back up. While the process does produce carbon dioxide, it, like the hydrogen, can be captured for later use or long-term storage.
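The article does not spell out the chemistry inside the CHAMP cylinder, but the textbook net equation for steam-methane reforming combined with the water-gas shift is CH4 + 2 H2O -> CO2 + 4 H2, which matches the hydrogen and CO2 outputs described above. A quick atom-balance check of that assumed equation:

```python
# Atom-balance check for the assumed net reaction
# CH4 + 2 H2O -> CO2 + 4 H2 (standard steam-methane reforming plus
# water-gas shift; not taken from the CHAMP paper itself).

from collections import Counter

def atoms(terms):
    """Sum element counts over (coefficient, {element: count}) terms."""
    total = Counter()
    for coeff, composition in terms:
        for element, n in composition.items():
            total[element] += coeff * n
    return total

reactants = [(1, {"C": 1, "H": 4}), (2, {"H": 2, "O": 1})]  # CH4 + 2 H2O
products  = [(1, {"C": 1, "O": 2}), (4, {"H": 2})]          # CO2 + 4 H2

print(atoms(reactants) == atoms(products))  # True: the equation balances
```

Note that under this net reaction every carbon atom entering as methane leaves as CO2, which is why capturing the CO2 in the sorbent is central to the design.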
Though the CHAMP reactor does resemble a four-stroke engine, it operates much more slowly, completing a few cycles per minute, compared to conventional engines that run at thousands of RPM. This speed and other factors of the modular design can be altered, so the supply of hydrogen can match the immediate demand, at the source of the demand. Combined with the existing natural gas infrastructure, this system can be used to get as much hydrogen as is needed almost wherever it is needed, whether that is fuel cells for houses, refueling stations for fuel cell vehicles, or even larger systems for powering neighborhoods.
Source: Georgia Institute of Technology
Posted: February 15, 2017 12:13PM
To some people, video games represent only a waste of time that distracts players from more important and valuable ventures. Thankfully not everyone believes this, and now researchers at the University of Texas at Dallas have demonstrated some of the potential games have to do more than waste time. By adding realistic chemistry to Minecraft, the researchers found the student test subjects were able to learn the science involved, without any in-class science instruction.
The name of the mod is Polycraft World, and thanks to the chemistry professors who worked on the project as well, it contains thousands of methods to produce over one hundred polymers from the thousands of chemicals available. Natural rubber can be used to make pogo sticks, while crude oil can be processed with other materials to make a jetpack. A wiki was also created for the students to consult, helping to find the right difficulty balance, as making the game too easy will lead to players losing interest while making it too hard will just frustrate them. Among the results were non-chemists building factories to produce polyether ether ketones, which are very difficult to synthesize.
What the researchers want to see is the development of games that players can learn advanced subjects from, without any accompanying classroom learning. For now, though, Polycraft World is also impacting classroom learning, as it can monitor how players interact with it and how often they have to turn to a guide. This information on learning methods can be used to improve teaching methods to better help students.
Next the researchers want to add an economics component to the Minecraft mod, and they are already working with economists to achieve this. Eventually players will be able to form governments and companies so a currency can be minted and distributed, with goods backing the currency, hopefully forming a sustainable economy.
Source: University of Texas at Dallas
Posted: February 13, 2017 12:45PM
Atomic force microscopy (AFM) is a rather extreme form of microscopy as it moves a probe with a single-atom tip over a surface to make measurements. The forces between materials cause the probe to be deflected by the atoms it moves over, and this deflection can be recorded and analyzed to visualize the sample with tremendous detail. Now researchers at the University of Alberta have applied AFM to study a silicon surface and to even fabricate patterns in the silicon.
While AFM has existed for quite some time now, it is not often used with silicon, and it had never been used like this before, because the probe can potentially damage the silicon; the researchers decided to take the risk because of what success could offer. Eventually the researchers discovered ways to minimize these problems, allowing them to move around individual silicon atoms. With this capability, atomically designed structures can be built, providing a new level of control over the electrons that will flow through them. The researchers were also able to use AFM to measure the electronic bonds between the silicon atoms, another first, one that offers new insight into how electrons behave as they travel across silicon structures.
The vision at least one of the researchers has is to see this work used to create ultra-fast and ultra-low-power silicon circuits that could potentially use ten thousand times less power than what is available today. It may take a while before we get there, but this step has now been made, and it is not just electronic computers that may benefit, but future quantum computers as well.
Posted: February 8, 2017 02:01PM
Many of the concepts in quantum mechanics seem impossible, such as entanglement, in which particles can have their states strongly correlated despite being separated by any arbitrary distance, so there has naturally been a lot of skepticism about their accuracy. To determine whether entanglement is the result of quantum mechanics or some other local correlation, physicist John Bell developed what is called Bell's inequality half a century ago, which allows researchers to determine if quantum mechanics can explain a strong correlation or if a loophole is a better explanation. Researchers at MIT and the University of Vienna have recently conducted an experiment that goes to a literal cosmic extreme to all but remove the possibility of a loophole in a quantum entanglement measurement.
The loophole that was specifically targeted is the freedom-of-choice loophole, which states that an experiment may have a bias in its setup, and that bias is what creates a correlation. There are many parts to an experiment trying to measure quantum entanglement, from the source that emits a pair of photons to the detectors that measure their properties to determine if they are correlated. One solution that has been used before is to use a random number generator to determine which properties of the photons are measured, based on a number generated in the time between the photons being produced and the detectors making the measurement. This latest experiment replaced the random number generators at the detectors with telescopes aimed at a star some 600 light years away. That particular star was picked because it provides a consistent stream of photons, produced 600 years ago, that have traveled through the void and dust of space on their way to Earth all that time.
Comparing 600 years to the microseconds involved in experiments using random number generators reduces the odds of the loophole still being in play by some 16 orders of magnitude.
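That 16-orders-of-magnitude figure checks out with simple arithmetic, assuming microsecond-scale windows for the earthbound random number generators:

```python
import math

SECONDS_PER_YEAR = 365.25 * 24 * 3600

starlight_window = 600 * SECONDS_PER_YEAR  # photons emitted 600 years ago
rng_window = 1e-6                          # microseconds for earthbound RNGs

ratio = starlight_window / rng_window
print(round(math.log10(ratio)))  # 16
```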
Posted: January 30, 2017 06:03PM
We have another example of serendipity resulting in a potentially significant scientific discovery. Graphene was first isolated in 2004, and ever since then, as its amazing characteristics, including strength, flexibility, and conductivity, have become better understood, researchers have been trying to find ways to mass produce it. Researchers at Kansas State University have done just that while working on something else.
The original experiment was to develop, and ultimately patent, carbon soot aerosol gels. The process for making these gels involved filling a 17-liter aluminum chamber with oxygen and a hydrocarbon, such as acetylene or ethylene. A spark plug would then ignite and detonate the gases, and the resulting soot would form the aerosol gels, which seemed to resemble "black angel food cake," as one researcher described it. It was when the researchers examined the soot more closely that they discovered it was made of graphene.
The current methods for producing graphene can involve a lot of energy, special catalysts, and some potentially dangerous chemicals to produce milligrams of the carbon allotrope, while this method takes just a spark and yields grams of graphene. Now the researchers are working on improving the quality of the graphene, developing equipment to collect it more quickly, and figuring out how to scale the process up for industry.
Source: Kansas State University
Posted: January 27, 2017 01:17PM
There are a number of elements on the Periodic Table that will display very unusual properties under the right conditions. Hydrogen, the simplest element, has long been theorized to be one of these elements, and thanks to researchers at Harvard University we may soon start confirming some of those theories and revolutionizing various fields. What the researchers have done is successfully created metallic hydrogen.
Hydrogen is one of the diatomic elements, which means its atoms want to pair up, so the molecules in hydrogen gas consist of two atoms, H2. (Oxygen and nitrogen are also diatomic elements, so their natural, gaseous forms are the molecules O2 and N2.) When you freeze hydrogen into a solid, it will stay in these molecular pairs, but it has been theorized that if you apply enough pressure, the molecular bonds will break, producing atomic metallic hydrogen. To produce metallic hydrogen the researchers applied some 495 gigapascals (GPa); for comparison, standard atmospheric pressure is a little over 100 kPa, or 0.0001 GPa. Because of how much energy is invested in breaking those bonds, when the atoms are allowed to bind back together, a massive amount of energy could be released, making this a potentially very powerful rocket fuel.
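As a quick sanity check on those units, 495 GPa works out to nearly five million times standard atmospheric pressure:

```python
GPA = 1e9      # pascals per gigapascal
ATM = 101_325  # pascals per standard atmosphere

pressure_pa = 495 * GPA          # pressure applied to the hydrogen sample
atmospheres = pressure_pa / ATM  # the same pressure in atmospheres

print(f"{atmospheres:.2e}")  # 4.89e+06
```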
Another possible use for metallic hydrogen could be as a superconductor, as it is expected to be one and might even be able to survive at room temperature and pressure. That depends on how stable the solid is though, which will take some more analysis to confirm, but that work is definitely going to be done.
Source: Harvard University
Posted: January 24, 2017 01:48PM
For natural materials, characteristics typically cannot be changed after the material is produced, so hard materials will remain hard and soft materials will remain soft. Metamaterials, however, are materials designed with special structures that can give them properties not possible in nature, such as a negative index of refraction, which can allow for invisibility cloaks. In this case, though, researchers at the University of Michigan have designed a metamaterial that can transition between hard and soft states.
When an object comes in contact with the metamaterial's surface, the material can change its structure, altering how the edge reacts to stress. The properties of the metamaterial are topologically protected as well, so the inside bulk of the material will not be damaged despite repeated transitions between the hard and soft states.
Possible applications for such a metamaterial include cars and reusable rockets. In a car the material could shift from a stiff state for supporting loads to a soft state that can absorb the energy of a crash. Reusable rockets could be made more damage-resistant as well, and bicycle tires could self-adjust depending on the terrain.
Source: University of Michigan
Posted: January 10, 2017 01:51PM
The ability to quickly move heat away from integrated circuits cannot be overstated, as too much heat can cause critical errors. Two materials that could be very useful in getting heat away from circuits are graphene and carbon nanotubes, but combining them to any great effect has proven difficult thus far. Researchers at Rice University, though, appear to have solved the problem by creating nano-chimneys.
Both graphene and nanotubes are made of pure carbon arranged in hexagonal rings, like chicken wire, and both are able to transmit heat very quickly. Combining these materials stunts that transmission, though, with pillared graphene being 20% less conductive than free-standing nanotubes. This is because when the nanotubes are grown from graphene, heptagonal, seven-member rings form to connect the two structures. These rings scatter the phonons carrying thermal energy, preventing the heat from escaping. What the researchers discovered is that by selectively removing atoms from the graphene base, a cone forms to connect the graphene and nanotubes, and these cones allow the heat to move up the nanotubes and away from the graphene. The cones do not reduce the number of heptagons but make them sparser, leaving paths for the phonons to take.
The researchers simulated nano-chimneys with cone radii of 2 nm and 4 nm to compare them to free-standing nanotubes and pillared graphene. The 2 nm-base chimneys were as conductive as the free-standing nanotubes while the 4 nm-base chimneys were 20% more conductive, which indicates there is a mechanism to tune the conductivity of these structures.
Source: Rice University
Posted: January 9, 2017 01:10PM
With so many of the devices we use every day being powered by batteries, it is very important to keep them all charged. Typically this involves finding a cable and then keeping the device tethered to it and the wall, which is one reason wireless charging has become popular, but the technology used has limited range. Researchers at Duke University, the University of Washington, and Intellectual Ventures' Invention Science Fund have come up with a new solution that could charge devices up to 10 meters away.
Current wireless charging technology uses magnetic fields to induce a current, but this can require large coils, and the range of the magnetic field is limited. The new solution is to use focused microwave beams to transmit the energy across a room. This would normally involve using an antenna dish aimed at the target devices, but that is hardly ideal, so instead the researchers propose using a phased array, a collection of small antennas that can all be adjusted and tuned independently, allowing the signal produced by the array to be steered. That converts the dish into a flat antenna, but phased arrays are still not an ideal solution because of their cost and power consumption, so the researchers have turned to metamaterials. These synthetic materials allow the microwave wavefront to be controlled, aiming the beams exactly where you want them, and the best part is this technology already exists.
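As a rough illustration of the phased-array idea (not the researchers' actual design), each element of a uniform linear array is driven with a progressive phase shift so the emitted wavefronts add up in the chosen direction; the frequency and geometry below are hypothetical:

```python
import math

def element_phases(n_elements, spacing_m, wavelength_m, steer_deg):
    """Per-element phase shifts (radians) for a uniform linear array
    steered steer_deg off broadside: phase step = k * d * sin(theta)."""
    k = 2 * math.pi / wavelength_m  # wavenumber
    delta = k * spacing_m * math.sin(math.radians(steer_deg))
    return [i * delta for i in range(n_elements)]

# Hypothetical 2.4 GHz array, half-wavelength spacing, beam 30 deg off axis.
wavelength = 3e8 / 2.4e9
phases = element_phases(8, wavelength / 2, wavelength, 30.0)
print([round(p, 3) for p in phases[:3]])  # [0.0, 1.571, 3.142]
```

Steering the beam to a new device then only requires recomputing and applying these phases; no part of the antenna moves, which is what makes the flat, TV-sized form factor possible.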
Actually, all of this technology already exists and is already being used in other applications, and the antennas could be produced at the same plants where LCD televisions are manufactured. According to the calculations involved, one of these antennas, about the size of a typical flat-screen TV, could focus microwaves down to a spot about the size of a cellphone at distances up to 10 m. However, while the technology exists today, there is still more work to be done before a consumer device could be made. For one thing, the charging system needs to be able to identify when a person or pet crosses the microwave beam and shut it off. This and the other remaining challenges can be overcome though, so it is more a question of when than if.
Source: Duke University
Posted: January 4, 2017 10:38AM
There have been a number of miracle materials throughout history and one of the latest examples is graphene, an atom-thick sheet of carbon. It earned this title because it is exceptionally strong and hard while still being flexible, transparent, and highly conductive. These are desirable properties for a number of applications and for the first time, researchers at Fraunhofer-Gesellschaft and the GLADIATOR project have successfully made functional OLED electrodes from the material.
Ever since its discovery, one of the challenges with graphene has been finding ways to make it into a product, because it is often difficult to manufacture. The solution in this case is to heat a wafer of high-purity copper in a vacuum chamber and then add a mixture of methane and hydrogen. A chemical reaction starts between these gases and the copper, causing the methane to break down and leave a sheet of carbon on the surface. After it is cooled and a carrier polymer is applied, the copper plate is etched away, leaving the graphene behind.
The researchers believe the first products that might use these graphene electrodes could launch in two to three years, and as both OLEDs and graphene are flexible, these products would be more resilient than those you find today. These graphene electrodes will likely be combined with more than OLEDs too, as other technologies, such as photovoltaics, smart windows, and wearable devices, could all benefit from them.
Posted: December 6, 2016 10:09AM
In the effort to create more visually stunning experiences for gamers, higher resolution monitors and more powerful components to drive them are constantly being worked on. According to science fiction though, eventually images could just be sent directly to our brains. Researchers at the University of Washington have taken a significant step toward that future with non-invasive, transcranial magnetic stimulation.
Using this standard piece of neuroscience equipment, the researchers produced phosphenes, which the test subjects interpreted to navigate a virtual world. Phosphenes are typically perceived as blobs or bars of light, despite no light actually entering the eye. The strong magnetic fields of transcranial magnetic stimulation are able to produce them, and in this case were used to provide information about a maze. With that information, the test subjects were to move their character forward or down, and with this brain stimulation they made the correct choice 92% of the time. The researchers also noticed that the subjects improved over time, indicating they were learning to better identify the artificial stimuli.
While we might one day be able to experience video games within our minds, a potentially closer application, though still far away, is to help the blind and visually impaired navigate the real world. A lot of work needs to be done before that can become reality though, as the equipment used to create the magnetic fields is very bulky, but eventually new, more portable technologies could become available.
Source: University of Washington
Posted: November 23, 2016 10:03AM
Those who are more privacy- and security-conscious might do things like cover webcams and microphones to prevent someone from attacking their computer and observing them. Researchers at Ben-Gurion University of the Negev, however, have demonstrated this might not be enough, as speakers and headphones can be turned into microphones.
Physically, speakers, headphones, and earphones are similar to microphones, with elements that convert between sound waves and electrical signals, normally working in the direction opposite a microphone. What the researchers have demonstrated is that intelligible audio can be recorded by remapping, or retasking, an output audio jack to be an input.
Potential means to combat malware taking advantage of this attack include disabling the audio hardware, having the audio driver alert the user when microphones are accessed, and creating restrictions on how audio ports can be retasked.
Posted: November 14, 2016 12:45PM
Albert Einstein is remembered for many contributions to physics, but the one he earned the Nobel Prize for was the photoelectric effect. This phenomenon converts optical energy to electrical energy and back, and is at the core of many modern technologies, including LEDs and solar panels. More than one hundred years after Einstein's paper was published, researchers at the Vienna University of Technology and the Technical University of Munich have made unprecedented measurements of photoionization, a process the photoelectric effect explains, as it happens.
For this study, the researchers worked with a helium atom, as it is the simplest and best understood multi-electron system. When a short, high-energy laser pulse is fired at the atom, one of its two electrons can absorb enough energy to leave it, and thanks to quantum mechanics the second electron can also absorb some of the energy, though this does not always happen. This means there are actually two different photoionization processes that can occur. In either case one electron is ejected from the atom, and the researchers caught this with another, infrared laser pulse; it turns out the process that excites the second electron is faster than the process that does not.
Both photoionization processes occur over attoseconds, which are 10^-18 or 0.000000000000000001 seconds, so to make this measurement the researchers had to be even more precise, reaching into zeptoseconds (10^-21 or 0.000000000000000000001 seconds). The way the researchers measured with such precision was by catching how the infrared laser pulse affected the speed of the emitted electron. Depending on the electromagnetic field of that pulse, the electron would either speed up or slow down, and that change allowed the researchers to measure events with a precision of 850 zs. The difference between the two processes was about 5 as, which agrees with the theoretical simulations run on the Vienna Scientific Cluster supercomputer.
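To put those figures in perspective, a quick back-of-the-envelope check, using only the numbers quoted above, shows why an 850 zs precision is enough to resolve a 5 as difference:

```python
# Time scales from the experiment: the measurement precision (850 zs)
# versus the difference between the two ionization processes (~5 as).
attosecond = 1e-18   # seconds
zeptosecond = 1e-21  # seconds

precision = 850 * zeptosecond   # 8.5e-19 s
difference = 5 * attosecond     # 5.0e-18 s

# The 5 as difference is several times larger than the 850 zs
# precision, which is why it could be resolved at all.
ratio = difference / precision
print(f"difference / precision = {ratio:.2f}")  # prints 5.88
```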
Beyond setting new records, this discovery will allow for the derivation of a complete wave mechanic description of the interconnected system of an emitted electron and its mother helium ion. It also shows the effects believed to be instantaneous decades ago can actually be measured and even controlled.
Posted: November 4, 2016 11:18AM
Along with VR, 3D displays are a current trend in media industries as a means to further immerse and impress consumers, but the special hardware required can make them undesirable or unusable in some situations. To alleviate the need for at least 3D glasses, displays have been made that do not require them, but the approach used requires the viewer to be far enough away that it cannot be deployed in small screens. As published in the journal Optics Express, researchers at Seoul National University have developed a solution to this problem, and it will work with both liquid crystal and OLED based displays.
The way these glasses-free displays work is similar to the lenticular images we have seen on things like movie posters and toys. Multiple images are interlaced together and a parallax barrier with grooves matching the interlacing is placed on top, so that the image changes depending on the angle you view it from. The problem with small displays is that the distance normally required to enjoy the 3D effect is around one meter, and this distance is determined by the gap between the images and the parallax barrier.
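The link between that gap and the viewing distance follows from simple similar-triangles geometry, which can be sketched as below. The formula is the standard textbook relation for parallax barriers, and the specific numbers are my own illustrative assumptions, not values from the paper:

```python
# Approximate optimal viewing distance for a parallax-barrier display,
# from similar triangles: D / eye_separation = gap / pixel_pitch.
# The gap, eye separation, and pixel pitch below are illustrative
# assumptions, not figures from the Optics Express paper.
def viewing_distance(gap_mm, eye_separation_mm=65.0, pixel_pitch_mm=0.05):
    """Distance at which each eye lines up with its own pixel column."""
    return gap_mm * eye_separation_mm / pixel_pitch_mm

# A conventional ~0.8 mm gap (roughly a cover-glass thickness) forces
# the viewer about a meter away...
print(viewing_distance(0.8))    # ~1040 mm, about one meter
# ...while shrinking the gap pulls the viewing distance toward the
# screen itself, which is the point of removing the gap entirely.
print(viewing_distance(0.05))   # ~65 mm
```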
What the South Korean researchers have done is develop a way to practically remove the gap between the image and the barrier, bringing that viewing distance much closer to the screen itself. This is achieved by using a monolithic structure that sandwiches a polarizing layer directly between the active parallax barrier layer and the image layer, making 2D/3D convertible mobile displays viable, while keeping cost and weight low.
Source: The Optical Society
Posted: October 31, 2016 01:07PM
Popeye will probably be proud of his favorite vegetable today as it now not only gives him his signature strength but can also detect explosives and other materials. This achievement is thanks to researchers at MIT who have enhanced the plant with carbon nanotubes so that they can give off a near-infrared glow when exposed to certain molecules.
Plants are in a unique position in Nature, as they are very literally connected to the ground via their roots, allowing them to notice changes in soil chemistry such as the presence of contaminants and the amount of moisture. To access this wealth of information, the MIT researchers have been working on plant nanobionics for years, and this most recent study demonstrates that it can work with common plants, such as spinach. It works by first creating a batch of carbon nanotubes and wrapping them in a polymer, which binds to target molecules to alter the fluorescence of the nanotubes. These nanotubes are then applied to the underside of a plant's leaves, where vascular infusion absorbs them into the mesophyll layer, where most photosynthesis takes place. When the plant absorbs the target molecules, the fluorescence of the nanotubes changes. This is then detected, in real time, by shining a laser onto the leaf and using an inexpensive infrared camera to capture the emitted light. It took about ten minutes for the target molecules to be carried from the roots to the detector in the leaves.
For this study the researchers made the plants sensitive to nitroaromatics, which are found in explosives such as those in landmines, but the nanotubes can be made to detect other chemicals as well. Some of these chemicals could be external, for monitoring changes in the environment, but others could be internal to the plant, enabling a botanist to track processes going on inside of its cells.
Posted: October 24, 2016 09:28AM
I can remember back in my astronomy course my teacher once explaining that the moment an astronomy textbook is published is the moment it is out of date, because of how quickly new discoveries are being made in the field. In the 1990s one of these major discoveries was made when research showed the expansion of the Universe was accelerating, which then led to the idea of dark energy to explain this acceleration. Researchers at the University of Oxford have revisited this work, though, and found the original conclusion is not as strong as some might think, and the expansion might be occurring at a constant rate.
In order to measure the expansion of the Universe, astronomers turn to Type Ia supernovae. These are the relatively standard supernovae that occur when enough gas has fallen onto a white dwarf, the cooling core of a dead star, to re-ignite nuclear fusion, destroying the white dwarf. Because of the physics behind this kind of supernova, they all produce approximately the same light output, allowing them to be used as a standard candle. For this new study, the Oxford researchers used a catalogue of some 740 Type Ia supernovae, which is over ten times larger than the original study's sample size. What they found is that the evidence for accelerated expansion at best earns '3 sigma,' a term indicating the probability the measurements point to something real rather than being a fluke. For a discovery to be considered of fundamental significance, it must achieve 5 sigma, which is far beyond an at-best 3 sigma.
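The gulf between 3 sigma and 5 sigma is easy to see once the sigma values are converted into fluke probabilities; a minimal sketch, assuming the usual one-sided Gaussian convention:

```python
import math

# One-sided probability that a result at least this many standard
# deviations from the mean arises by pure chance (a Gaussian fluke).
def p_value(sigma):
    return 0.5 * math.erfc(sigma / math.sqrt(2))

# 3 sigma: roughly a 1-in-740 chance of a fluke -- suggestive, but not
# conclusive by the usual convention.
print(f"3 sigma: {p_value(3):.2e}")  # prints 1.35e-03
# 5 sigma: roughly 1 in 3.5 million -- the usual "discovery" threshold.
print(f"5 sigma: {p_value(5):.2e}")  # prints 2.87e-07
```

The jump from 3 to 5 sigma shrinks the fluke probability by a factor of several thousand, which is why the distinction matters so much here.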
While other measurements have been made that support the accelerated expansion, these tests have all been indirect measurements and based on a theoretical model from the 1930s, before real data was available. Today we know some of its assumptions are not completely true, making it possible for dark energy to be a consequence of these assumptions and not actually real. Only through further analysis and potentially a more sophisticated cosmological model can this distinction be made, which is what the Oxford researchers are hoping will happen in the future.
Source: University of Oxford
Posted: October 17, 2016 11:58AM
For those of us inclined to read a great deal, devices with e-ink displays have been a wonderful medium for enjoying our favorite tomes. These displays, though, are monochromatic and rigid, so while they consume less power and can be read in direct sunlight, there are still uses they are not fit for. Researchers at Chalmers University of Technology, however, have discovered a means to create a full-color electronic display that is also flexible.
The researchers were actually working on placing conductive polymers onto nanostructures when they realized the technology would lend itself well to making electronic displays. They then built a prototype that is less than a micrometer thick and can reproduce all of the colors of an LED display, while using a tenth of the energy the e-ink display in a Kindle requires. This work is only at a fundamental level, however, and the prototype only had a few RGB pixels in it, so there is still quite a lot of work to do before it can be deployed in a product.
Among the issues to address is the use of gold and silver in the display. While very little gold is actually used in the display, a lot of the precious metal is wasted in the current manufacturing process. A means to reduce the waste or decrease the cost will need to be discovered before commercial manufacturing is possible.
Posted: October 7, 2016 11:24AM
They say all good things must come to an end, and for a while we have been approaching the physical limits for modern electronics. According to Moore's law the density of transistors in an integrated circuit will double every two years, but silicon structures can only be made so small before physics starts interfering with how they operate. This size limit is at about 5 nm, but researchers at Berkeley Lab and other institutions have successfully built a transistor with a gate-length of just 1 nm.
The reason for the 5 nm limit with silicon is that quantum mechanics starts to play a larger role at that scale, and the transistor's gate will not be able to block the flow of electrons, because the electrons will simply tunnel through the gate as though it were not there. While silicon may not be useful at this scale, molybdenum disulfide (MoS2) can be viable. This material can be made a single layer thick, which comes in at just 0.65 nm, and because electrons moving through it are effectively heavier than in silicon, tunneling is suppressed and smaller gates can be used. In this case a 1 nm gate made from a carbon nanotube is employed. Carbon nanotubes are hollow structures made of pure carbon that are grown, as opposed to silicon structures that are etched with lithography techniques, which cannot yet reach 1 nm sizes.
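The exponential sensitivity of tunneling to barrier width is what makes short gates leak. A rough sketch using the textbook WKB approximation for a rectangular barrier (the 1 eV barrier height is an illustrative assumption, not a figure from the Berkeley Lab work):

```python
import math

HBAR = 1.0545718e-34   # reduced Planck constant, J*s
M_E = 9.10938e-31      # electron mass, kg
EV = 1.602176e-19      # joules per electronvolt

# WKB estimate of the probability an electron tunnels through a
# rectangular barrier of height V above its energy and width L.
# The 1 eV barrier is an assumed, illustrative value.
def tunneling_probability(barrier_ev, width_nm):
    kappa = math.sqrt(2 * M_E * barrier_ev * EV) / HBAR  # decay constant, 1/m
    return math.exp(-2 * kappa * width_nm * 1e-9)

# Leakage grows exponentially as the gate length shrinks:
for width in (5.0, 3.0, 1.0):
    print(f"{width} nm barrier: T ~ {tunneling_probability(1.0, width):.2e}")
```

Real transistor barriers are not this simple, but the exponential trend is the point: each nanometer shaved off the gate multiplies the tunneling leakage enormously, which is why material properties that suppress tunneling matter so much at 1 nm.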
When tested the researchers found the transistor was able to control the flow of electrons, showing that even though it is only 1 nm in size, it is still functional. This is only a proof of concept though, so you cannot expect these transistors to pop up in any devices soon. Several more discoveries will be needed before that can happen, such as developing self-aligning fabrication methods, scaling these up to produce billions of transistors, and finally packing all of them into a chip.
Source: Berkeley Lab
Posted: October 6, 2016 12:19PM
Since it was first discovered, quantum mechanics has been an intriguing field and for some time now, people have been trying to envision how it can be used. One possible application is a quantum computer that leverages the unusual and counterintuitive phenomena of quantum mechanics to process data with algorithms traditional computers cannot perform quickly. There are several avenues that can be used to build a quantum computer, and recently researchers at KIT have created something that should dramatically simplify the designs of optical quantum computers.
At the heart of any computer is the medium used to carry information. In modern electronic computers, the charge of electrons is used to store data, but quantum computers have more options available to them. Among these options are photons, the quanta of light, but the catch is that the photons need to be generated for a photonic chip to operate on them. Previously this required external laser sources that could take up entire laboratory spaces, but the KIT researchers have succeeded in building a single-photon source into a chip, the first time a complete quantum optical structure has been built on one. The source is a carbon nanotube, which has been used before because nanotubes emit single photons when struck by external lasers. This new design, however, is completely electrical, so the external laser is no longer needed, making it possible to miniaturize the circuit significantly.
For those hoping this means optical quantum computers are just around the corner, sadly, that is not the case. This is a piece of fundamental research showing this is possible, so it will take time to see what, if any, practical applications come from it. It is still an important step, but just how large a step, and how long until the next one, is still to be seen.
Posted: October 5, 2016 01:12PM
Luminescence is something we can find many examples of in the world, so we have a pretty good understanding of it. When we find something giving off light then, we expect it to follow the rules we have discovered, so when it does not, we know something interesting has been found. Researchers at the University of Vermont and Dartmouth College have discovered just such a molecule that luminesces unlike anything found before, and could be used for many new lights and devices in the future.
Just about every child, and even a healthy number of adults I know, enjoys glow-in-the-dark materials and toys. These work by absorbing the energy of the light that strikes them and then re-emitting that energy as light at a certain wavelength. If more energy falls on them than they can emit as light, the extra is given off as heat in the form of vibrations, as discovered in 1950 by chemist Michael Kasha. This new discovery is being called Suppression of Kasha's Rule, or SOKR (pronounced soccer), because the molecule is being prevented from vibrating the energy away as heat, forcing it to emit higher-frequency light. The molecule the researchers were working with is actually a molecular rotor with a paddle on it, and when the researchers had these molecules in water, they gave off a reddish glow. When put into a thick liquid, more like maple syrup, the molecules produced a bright, green light. The reason for this change is that the viscosity of the liquid was stopping the paddle on the molecule from rotating, and that blocked the pathway for the molecule to release energy as heat. Still needing to release the energy, the molecule increased the frequency, and therefore the energy, of the light it emitted.
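The red-to-green shift really is an energy increase per photon, which follows directly from E = hc/λ. A quick check, using typical wavelengths for red and green light rather than measurements from the paper:

```python
# Photon energy rises with frequency (E = h*c / wavelength), so a
# shift from red to green emission means each photon carries more
# energy. The wavelengths are typical values for red and green light,
# not figures reported by the researchers.
H_C_EV_NM = 1239.84  # h*c expressed in eV*nm

def photon_energy_ev(wavelength_nm):
    return H_C_EV_NM / wavelength_nm

red = photon_energy_ev(650)    # ~1.91 eV, like the glow seen in water
green = photon_energy_ev(530)  # ~2.34 eV, like the glow seen in syrup
print(f"red: {red:.2f} eV, green: {green:.2f} eV")
```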
Naturally this discovery is important because no one expected it was possible, but it also has some potentially interesting and valuable applications. This could be used to make new kinds of LEDs and biomedical diagnostic tools, as the molecules can be used to measure the viscosity of a liquid.
Source: University of Vermont
Posted: October 4, 2016 01:00PM
When it comes to two-dimensional materials, a lot of attention is given to graphene, an atom-thick sheet of carbon with several amazing properties. There are other 2D materials though, with their own special characteristics, and researchers at Rice University, Argonne National Laboratory, and Northwestern University have determined one could be particularly useful for flexible electronics.
This other 2D material, made of boron, is called borophene, and it has a triangular lattice structure with periodic hexagonal vacancies. What gives it potential in flexible electronics is its wavy, corrugated pattern, which allows it to flex. While graphene does have very desirable electronic properties, it is too stiff for use in devices that need to stretch. Borophene actually does prefer to be flat and stiff, like graphene, as this form has the lowest energy value, but not when grown on silver. The silver substrate causes the borophene to take on this wavy form, and even causes the silver to change its shape to match. This form can be preserved when the borophene is transferred to a different substrate.
Both borophene and graphene possess a rich band structure that includes Dirac cones, which allow electrons to travel at relativistic speeds. Borophene also exhibits strong electron-phonon coupling that supports possible superconductivity.
Source: Rice University
Posted: September 26, 2016 11:33AM
For decades now, computational power has been increasing at roughly the rate predicted by Moore's Law, but that is going to come to an end as we hit the fundamental limits of the materials and technologies being used. Many new approaches to computing are being developed so we can get around those limits, and researchers at North Carolina State University have come up with a fairly novel one. This new method uses nonlinear circuits to exploit chaos such that fewer circuits and transistors are needed to perform a task.
Modern computer chips are tightly packed with a great number of transistors and circuits, but typically many of them are not in use at any given time. This is because some circuits have been designed to perform specific tasks, so for other tasks they are not useful. The new solution is to create nonlinear circuits that contain a number of different patterns, with each pattern serving a different function. By taking advantage of the system's natural chaos, the same circuit can be made to perform multiple functions, and can switch from one function to another with each clock cycle. This means that potentially just hundreds of these circuits could match the performance of hundreds of thousands of traditional circuits.
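The researchers' circuits are analog hardware, but the core idea of one nonlinear element serving as many logic functions can be illustrated in software. In this toy sketch, the logistic map, the input encoding, and the thresholds are all my own illustrative choices, not the NC State design; changing only a control parameter turns the same element from an OR gate into an AND gate:

```python
# A toy "chaogate": one nonlinear element implements different Boolean
# functions depending only on a threshold parameter. The logistic map,
# encoding, and thresholds are illustrative assumptions.
def chaogate(a, b, threshold):
    x = 0.25 + 0.125 * (a + b)   # encode the two input bits as a state
    x = 4.0 * x * (1.0 - x)      # one iteration of the logistic map
    return 1 if x > threshold else 0

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
# With one threshold, the element behaves as an OR gate...
print([chaogate(a, b, 0.90) for a, b in inputs])  # prints [0, 1, 1, 1]
# ...and with another, the very same element behaves as an AND gate,
# switchable between functions on every clock cycle.
print([chaogate(a, b, 0.97) for a, b in inputs])  # prints [0, 0, 0, 1]
```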
As if the potential for tremendously greater performance in a smaller package were not enough, these nonlinear circuits, which are compatible with other digital devices, can be fabricated with modern technologies. The researchers are approaching commercial size, power, and ease of programming with their designs, so we may see some more news on it in the coming months.
Source: North Carolina State University
Posted: September 22, 2016 01:38PM
It can be easy to forget just how important the materials used in a computer or other devices are, as special properties are needed for these systems to work. As we approach the limit of some of the materials we are using currently, new materials need to be discovered to continue developing ever faster and more efficient devices. One goal some researchers have had is to create a multiferroic material, and now researchers at Berkeley Lab and Cornell University have realized exactly that.
Multiferroic materials combine the properties of ferroelectric and ferromagnetic materials, and both families of materials are used in many technologies today, but in different ways. Ferromagnets are used in hard drives to store data as magnetic polarization, and also in sensors, while ferroelectric materials can easily flip polarization in response to an electric field and will hold their polarized states without power being supplied. Both sets of properties are valuable, and by combining them in one material, new kinds of low-power memory technologies could be created, as an electric or magnetic field could be used to change both the electric and magnetic properties of the material.
To achieve this, the researchers made their material by alternating monolayers of lutetium oxide and iron oxide, but at every tenth repeat of these single-single pairs, a second iron oxide layer was added. The ferromagnetic atoms in this arrangement change their alignment to follow the neighboring ferroelectric atoms when exposed to an electric field. This flipping was observed from 200 to 300 K, which spans from about -100 ºF to 80 ºF, meaning the material works at room temperature. The next step is to reduce the energy needed for this rewriting from the 5 V the researchers used to half a volt, and eventually to produce a working multiferroic device.
Source: Berkeley Lab
Posted: September 21, 2016 11:53AM
Reading someone's emotions can be very difficult, as some people do not obviously emote and others may try to hide their emotions, but it can be valuable information. For example, when testing a new product or reviewing media, accurately reading a subject's emotions can inform you about what works and what does not. To that end, as well as some valuable healthcare applications, researchers at MIT have developed a means of reading and tracking emotions using wireless signals.
This is hardly the first time MIT researchers, and the specific researchers on this project, have worked with wireless signals for an unexpected purpose; previously they were used to detect people falling in a house. In that work and this one, the key is measuring how the signals reflect off of a person's body and extracting information from that. In this case it is heart rate and breathing that are tracked, by analyzing the acceleration recorded in the signals reflected off of the person's front and back. By using acceleration, the heart rate and breathing can be distinguished, as your pulse is much faster than your breathing. From these measurements, it is possible to determine the subject's emotional state and whether they are happy, sad, angry, or excited.
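The reason acceleration separates the two motions is that differentiating a sinusoid twice multiplies its amplitude by (2πf)^2, so the faster heartbeat gets amplified far more than the slower breathing. A rough sketch, with ballpark physiological figures of my own choosing rather than numbers from the MIT work:

```python
import math

# Acceleration amplitude of sinusoidal motion: a = d * (2*pi*f)^2.
# Differentiating twice boosts fast motion far more than slow motion,
# which is how heartbeat can be pulled out from under breathing.
def accel_amplitude(displacement_m, freq_hz):
    return displacement_m * (2 * math.pi * freq_hz) ** 2

# Ballpark assumed values: ~5 mm chest motion at 15 breaths/min versus
# ~0.3 mm pulse motion at 72 beats/min.
breathing = accel_amplitude(5e-3, 0.25)
heartbeat = accel_amplitude(3e-4, 1.2)

# In displacement, breathing dominates by roughly 17x; in acceleration,
# the heartbeat component is actually the larger of the two.
print(f"breathing accel: {breathing:.4f}, heartbeat accel: {heartbeat:.4f}")
```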
When the researchers tested this system, named EQ-Radio, they found it has 70% accuracy at predicting emotions without any training and 87% accuracy after having learned the subject's emotions. Separate from monitoring emotions, this technology could also be used for tracking someone's heart rate with ECG accuracy without any on-body sensors, or for monitoring or diagnosing conditions such as depression and anxiety.