Science & Technology News (1256)
Posted: January 10, 2017 12:51PM
The ability to quickly move heat away from integrated circuits cannot be overstated, as too much heat can cause critical errors. Two materials that could be very useful for drawing heat away from circuits are graphene and carbon nanotubes, but combining them to any great effect has proven difficult thus far. Researchers at Rice University, though, appear to have solved the problem by creating nano-chimneys.
Both graphene and nanotubes are made of pure carbon and consist of hexagonal rings, like chicken wire, and both are able to transmit heat very quickly. Combining these materials stunts that transmission though, with pillared graphene being 20% less conductive than free-standing nanotubes. This is because when the nanotubes are grown from graphene, heptagonal (seven-member) rings form to connect the two structures. These rings, however, scatter the phonons carrying thermal energy, preventing the heat from escaping. What the researchers discovered is that by selectively removing atoms from the graphene base, a cone forms to connect the graphene and nanotube, and these cones allow the heat to move up the nanotubes and away from the graphene. The cones do not reduce the number of heptagons but make them sparser, leaving paths for the phonons to take.
The researchers simulated nano-chimneys with cone radii of 2 nm and 4 nm to compare them to free-standing nanotubes and pillared graphene. The 2 nm-base chimneys were as conductive as the free-standing nanotubes while the 4 nm-base chimneys were 20% more conductive, which indicates there is a mechanism to tune the conductivity of these structures.
Source: Rice University
Posted: January 9, 2017 12:10PM
With so many of the devices we use every day being powered by batteries, it is very important to keep them all charged. Typically this involves finding a cable and then keeping the device tethered to it and the wall, which is one of the reasons wireless charging has become popular, but the technology used has limited range. Researchers at Duke University, the University of Washington, and Intellectual Ventures' Invention Science Fund have come up with a new solution, though, that could charge devices up to 10 meters away.
Currently wireless charging technology uses magnetic fields to induce a current, but this can require large coils and the range of the magnetic field is limited. The new solution is to use focused microwave beams to transmit the energy across a room. This would normally involve using an antenna dish and aiming it at the target devices, but that is hardly ideal, so instead the researchers propose using a phased array, which is a collection of small antennas that can all be adjusted and tuned independently, allowing the signal produced by the array to be directed. That converts the dish into a flat antenna, but phased arrays are still not ideal because of their cost and the amount of energy they use, so the researchers have turned to metamaterials. These synthetic materials allow the microwave wavefront to be controlled, aiming the beams exactly where you want them, and the best part is this technology already exists.
Actually, all of this technology already exists and is already being used in other applications, and the antennas could be produced at the same plants where LCD televisions are manufactured. According to the calculations involved, one of these antennas, about the size of a typical flat-screen TV, could focus microwaves down to a spot about the size of a cellphone from up to 10 m away. However, while this technology exists today there is still more work to be done before a consumer device could be made. For one thing, the charging system needs to be able to identify when a person or pet crosses the microwave beam and shut it off. This and the other challenges that remain can be overcome though, so it is more a question of when than if.
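The beam steering a phased array performs comes down to simple geometry: each element transmits with a phase offset chosen so the wavefronts add up along the desired direction. A minimal sketch of that calculation (the frequency, element count, and spacing here are illustrative assumptions, not details from the research):

```python
import math

def steering_phases(n_elements, spacing_m, wavelength_m, angle_deg):
    """Phase offset (radians) for each element of a uniform linear
    array so the main beam points angle_deg off broadside: element n
    is delayed by the extra path length n * d * sin(theta)."""
    theta = math.radians(angle_deg)
    return [-2.0 * math.pi * n * spacing_m * math.sin(theta) / wavelength_m
            for n in range(n_elements)]

# Illustrative numbers: a 2.45 GHz microwave beam (wavelength ~12.2 cm),
# eight elements at half-wavelength spacing, steered 30 degrees.
wavelength = 3.0e8 / 2.45e9
phases = steering_phases(8, wavelength / 2.0, wavelength, 30.0)
```

Sweeping the steering angle only requires recomputing these offsets, which is why a flat, electronically tuned panel can replace a mechanically aimed dish.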
Source: Duke University
Posted: January 4, 2017 09:38AM
There have been a number of miracle materials throughout history and one of the latest examples is graphene, an atom-thick sheet of carbon. It earned this title because it is exceptionally strong and hard while still being flexible, transparent, and highly conductive. These are desirable properties for a number of applications and for the first time, researchers at Fraunhofer-Gesellschaft and the GLADIATOR project have successfully made functional OLED electrodes from the material.
Ever since its discovery, one of the challenges with graphene has been finding ways to make it into a product, because it is often difficult to manufacture. The solution in this case is to heat a wafer of high-purity copper in a vacuum chamber and then add a mixture of methane and hydrogen. A chemical reaction between these gases and the copper causes carbon from the methane to dissolve into the copper, leaving a sheet of carbon on the surface. After it is cooled and a carrier polymer is applied, the copper plate is etched away, leaving the graphene behind.
The researchers believe the first products that might use these graphene electrodes could launch in two to three years, and as both OLEDs and graphene are flexible, these products would be more resilient than those you find today. These graphene electrodes will likely be combined with more than OLEDs too, as other technologies, such as photovoltaics, smart windows, and wearable devices, could all benefit from them.
Posted: December 6, 2016 09:09AM
In the effort to create more visually stunning experiences for gamers, higher resolution monitors and more powerful components to drive them are constantly being worked on. According to science fiction though, eventually images could just be sent directly to our brains. Researchers at the University of Washington have taken a significant step toward that future with non-invasive, transcranial magnetic stimulation.
Using this standard piece of neuroscience equipment, the researchers produced phosphenes, which the test subjects interpreted to navigate a virtual world. Phosphenes are typically perceived as blobs or bars of light, despite no light actually entering the eye. The strong magnetic fields of transcranial magnetic stimulation are able to produce them, and in this case were used to provide information about a maze. With that information, the test subjects were to move their character forward or down, and with this brain stimulation they made the correct choice 92% of the time. The researchers also noticed that the subjects improved over time, indicating they were learning to better identify the artificial stimuli.
While we might one day be able to experience video games within our minds, a potentially closer application, though still far away, is to help the blind and visually impaired navigate the real world. A lot of work needs to be done before that can become reality though, as the equipment used to create the magnetic fields is very bulky, but eventually new, more portable technologies could become available.
Source: University of Washington
Posted: November 23, 2016 09:03AM
Those who are more privacy and security conscious might do things like cover their webcams and microphones to prevent someone from compromising their computer and observing them. Researchers at Ben-Gurion University of the Negev, however, have demonstrated this might not be enough, as speakers and headphones can be turned into microphones.
Physically, speakers, headphones, and earphones are similar to microphones, containing elements that convert between sound waves and electrical signals, just normally in the opposite direction. What the researchers have demonstrated is that intelligible signals can be recorded by remapping, or retasking, an output audio jack to be an input.
Potential means to combat a piece of malware taking advantage of this attack include disabling the audio hardware, having the audio driver alert the user when microphones are accessed, and creating restrictions on how audio ports can be retasked.
Posted: November 14, 2016 11:45AM
Albert Einstein is remembered for many contributions he made to physics, but the one he earned the Nobel Prize for was the photoelectric effect. This phenomenon, the conversion of optical energy to electrical energy and back, is at the core of many modern technologies, including LEDs and solar panels. More than one hundred years after Einstein's paper was published, researchers at the Vienna University of Technology and the Technical University of Munich have made unprecedented measurements of photoionization in progress, a process the photoelectric effect explains.
For this study, the researchers worked with a helium atom, as it is the simplest and best understood multi-electron system. By firing a short, high energy laser pulse at it, one of the atom's two electrons can absorb enough energy to leave the atom, and thanks to quantum mechanics it is possible for the second electron to also absorb some energy, but this does not always happen. This means there are actually two different photoionization processes that can occur. In either case though, one electron is ejected from the atom and the researchers caught this with another infrared laser pulse, and it turns out the process that excites this second electron is faster than the process that does not.
Both photoionization processes occur over attoseconds, which are 10^-18 (0.000000000000000001) seconds, so to make this measurement the researchers had to be even more precise, reaching into zeptoseconds (10^-21, or 0.000000000000000000001 seconds). The researchers achieved such precision by catching how the infrared laser pulse affected the speed of the emitted electron. Depending on the electromagnetic field of that pulse, the electron would either speed up or slow down, and that change allowed the researchers to measure events with a resolution of 850 zs. The difference between the two processes was about 5 as, which agrees with the theoretical simulations run on the Vienna Scientific Cluster supercomputer.
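As a quick sanity check on those units, the roughly 5 as delay between the two ionization channels works out to several times the 850 zs resolution, which is what makes the two processes distinguishable:

```python
ATTO = 1e-18   # seconds in one attosecond
ZEPTO = 1e-21  # seconds in one zeptosecond

resolution_s = 850 * ZEPTO   # measurement resolution: 850 zs
difference_s = 5 * ATTO      # delay between the two processes: ~5 as

# The ~5 as difference is nearly six times the 850 zs resolution,
# so the two ionization channels can be told apart.
ratio = difference_s / resolution_s
```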
Beyond setting new records, this discovery will allow for the derivation of a complete wave mechanic description of the interconnected system of an emitted electron and its mother helium ion. It also shows the effects believed to be instantaneous decades ago can actually be measured and even controlled.
Posted: November 4, 2016 10:18AM
Along with VR, 3D displays are a current trend in the media industries as a means to further immerse and impress consumers, but the special hardware required can make them undesirable or unusable in some situations. To alleviate the need for at least 3D glasses, displays have been made that do not require them, but the approach used requires the viewer to be far enough away that it cannot be deployed on small screens. As published in the journal Optics Express, researchers at Seoul National University have developed a solution to this problem, and it will work with both liquid crystal and OLED based displays.
The way these glasses-free displays work is similar to the lenticular images and such we have seen on things like movie posters and toys. Multiple images are interlaced together and a parallax barrier with grooves matching the interlacing is placed on top of it, so that the image will be different, depending on the angle you look at it from. The problem with small displays is that the distance normally required to enjoy the 3D effect is around one meter, and this distance is based on the gap between the images and the parallax barrier.
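A rough similar-triangles model shows why the viewing distance tracks the gap between the barrier and the pixels. The eye separation, pixel pitch, and gap values below are illustrative assumptions for a small display, not figures from the paper:

```python
def viewing_distance_mm(gap_mm, eye_sep_mm=65.0, pixel_pitch_mm=0.1):
    """Similar-triangles estimate for a parallax-barrier display:
    adjacent left- and right-eye pixels (one pitch apart) project
    through a barrier slit to eyes eye_sep_mm apart, giving the
    optimal distance D = g * e / p."""
    return gap_mm * eye_sep_mm / pixel_pitch_mm

# A conventional ~1.5 mm gap forces a viewing distance near 1 m,
# while a near-zero gap brings it down to centimeters or less.
far_mm = viewing_distance_mm(1.5)
near_mm = viewing_distance_mm(0.01)
```

In this simplified picture, shrinking the gap is the whole trick: the monolithic sandwich structure removes the spacing term that pushed the sweet spot out to arm's length.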
What the South Korean researchers have done is develop a way to practically remove the gap between the image and the barrier, bringing that viewing distance much closer to the screen itself. This is achieved by using a monolithic structure that sandwiches a polarizing layer directly between the active parallax barrier layer and the image layer, making 2D/3D convertible mobile displays viable, while keeping cost and weight low.
Source: The Optical Society
Posted: October 31, 2016 12:07PM
Popeye will probably be proud of his favorite vegetable today as it now not only gives him his signature strength but can also detect explosives and other materials. This achievement is thanks to researchers at MIT who have enhanced the plant with carbon nanotubes so that they can give off a near-infrared glow when exposed to certain molecules.
Plants are in a unique position in nature, as they are quite literally connected to the ground via their roots, allowing them to notice changes in soil chemistry, such as the presence of contaminants and the amount of moisture. To access this wealth of information, the MIT researchers have been working on plant nanobionics for years, and this most recent study demonstrates that it can work with common plants, such as spinach. It works by first creating a batch of carbon nanotubes and wrapping them in a polymer, which binds to target molecules to alter the fluorescence of the nanotubes. These nanotubes are then applied to the underside of a plant's leaves, where vascular infusion absorbs them into the mesophyll layer, where most photosynthesis takes place. When the plant absorbs the target molecules, the fluorescence of the nanotubes changes. This is detected, in real time, by shining a laser onto the leaf and using an inexpensive infrared camera to capture the emitted light. It took about ten minutes for the target molecules to be carried from the roots to the detector in the leaves.
For this study the researchers made the plants sensitive to nitroaromatics, which are found in explosives such as those in landmines, but the nanotubes can be made to detect other chemicals as well. Some of these chemicals could be external, for monitoring changes in the environment, but others could be internal to the plant, enabling a botanist to track processes going on inside of its cells.
Posted: October 24, 2016 08:28AM
I can remember back in my astronomy course my teacher once explaining that the moment an astronomy textbook is published is the moment it is out of date, because of how quickly new discoveries are made in the field. In the 1990s one of these major discoveries was made, as research showed the expansion of the Universe was accelerating, which then led to the idea of Dark Energy to explain this acceleration. Researchers at the University of Oxford have revisited this work though and found the original conclusion is not as strong as some might think, and the expansion might be occurring at a constant rate.
In order to measure the expansion of the Universe, astronomers and researchers turn to Type Ia supernovae. These are the relatively standard supernovae that occur when enough gas has fallen onto a white dwarf, the cooling core of a dead star, to re-ignite nuclear fusion, destroying the white dwarf. Because of the physics behind this kind of supernova, they produce approximately the same signal, allowing them to be used as a standard candle. For this new study, the Oxford researchers used a catalogue of some 740 Type Ia supernovae, which is over ten times larger than the original study's sample size. What they found is that the evidence for accelerated expansion at best earns '3 sigma,' a measure of how unlikely it is that the result is a statistical fluke rather than a real effect. For a discovery to be considered of fundamental significance, it must achieve 5 sigma, which is far beyond an at-best 3 sigma.
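Those sigma levels translate directly into fluke probabilities through the Gaussian tail, and a short calculation makes the gap between 3 and 5 sigma concrete:

```python
import math

def fluke_probability(n_sigma):
    """One-sided Gaussian tail: the probability that a pure
    statistical fluctuation reaches at least n_sigma."""
    return 0.5 * math.erfc(n_sigma / math.sqrt(2.0))

p3 = fluke_probability(3.0)  # roughly 1 in 740
p5 = fluke_probability(5.0)  # roughly 1 in 3.5 million
```

A 3 sigma result still leaves about a one-in-740 chance of being noise, thousands of times weaker than the 5 sigma standard, which is why the Oxford team argues the case for acceleration is not yet settled.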
While other measurements have been made that support the accelerated expansion, these tests have all been indirect measurements and based on a theoretical model from the 1930s, before real data was available. Today we know some of its assumptions are not completely true, making it possible for dark energy to be a consequence of these assumptions and not actually real. Only through further analysis and potentially a more sophisticated cosmological model can this distinction be made, which is what the Oxford researchers are hoping will happen in the future.
Source: University of Oxford
Posted: October 17, 2016 10:58AM
For those of us inclined to read a great deal, devices with e-ink displays have been a wonderful medium for enjoying our favorite tomes. These displays, though, are monochromatic and rigid, so while they consume less power and can be read in direct sunlight, there are still uses they are not fit for. Researchers at Chalmers University of Technology, however, have discovered a means to create a full-color electronic display that is also flexible.
The researchers were actually working on placing conductive polymers onto nanostructures when they realized this technology would lend itself well to making electronic displays. The researchers then built a prototype that is less than a micrometer thick and can reproduce all of the colors of an LED display, while also using a tenth of the energy the e-ink displays in a Kindle require. This work, however, is still at a fundamental level, and that prototype only had a few RGB pixels in it, so there is quite a lot of work to do before it can be deployed in a product.
Among the issues to address is the use of gold and silver in the display. While very little gold is actually used in the display, a lot of the precious metal is wasted in the current manufacturing process. A means to reduce the waste or decrease the cost will need to be discovered before commercial manufacturing is possible.
Posted: October 7, 2016 10:24AM
They say all good things must come to an end, and for a while we have been approaching the physical limits for modern electronics. According to Moore's law the density of transistors in an integrated circuit will double every two years, but silicon structures can only be made so small before physics starts interfering with how they operate. This size limit is at about 5 nm, but researchers at Berkeley Lab and other institutions have successfully built a transistor with a gate-length of just 1 nm.
The reason for the 5 nm limit with silicon is that quantum mechanics starts to play a larger role at that scale, and the transistor's gate will no longer be able to block the flow of electrons. This is because the electrons will simply tunnel through the gate as though it were not there. While silicon may not be useful at this scale, molybdenum disulfide (MoS2) can be viable. This material can be made a single layer thick, which comes in at just 0.65 nm, and because electrons flow through it less readily than through silicon, smaller gates can still control the current. In this case a 1 nm gate made from a carbon nanotube is employed. Carbon nanotubes are hollow structures made of pure carbon that are grown, as opposed to silicon structures that are etched with lithography techniques that cannot yet reach 1 nm sizes.
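The tunneling problem can be illustrated with the textbook WKB estimate for a rectangular barrier, where leakage grows exponentially as the barrier thins. The 1 eV barrier height here is an illustrative assumption, not a figure from the research:

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
M_E = 9.1093837015e-31  # electron mass, kg
EV = 1.602176634e-19    # joules per electron-volt

def tunneling_probability(barrier_ev, width_nm):
    """Rough WKB estimate for an electron tunneling through a
    rectangular barrier: T ~ exp(-2 * kappa * L), where
    kappa = sqrt(2 * m * dE) / hbar."""
    kappa = math.sqrt(2.0 * M_E * barrier_ev * EV) / HBAR
    return math.exp(-2.0 * kappa * width_nm * 1e-9)

# Shrinking a 1 eV barrier from 5 nm to 1 nm raises leakage enormously.
t5 = tunneling_probability(1.0, 5.0)
t1 = tunneling_probability(1.0, 1.0)
```

The leakage at 1 nm is more than fifteen orders of magnitude larger than at 5 nm, which is why a gate that works at one scale becomes transparent at the other, and why a material with a stronger grip on its electrons is needed.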
When tested the researchers found the transistor was able to control the flow of electrons, showing that even though it is only 1 nm in size, it is still functional. This is only a proof of concept though, so you cannot expect these transistors to pop up in any devices soon. Several more discoveries will be needed before that can happen, such as developing self-aligning fabrication methods, scaling these up to produce billions of transistors, and finally packing all of them into a chip.
Source: Berkeley Lab
Posted: October 6, 2016 11:19AM
Since it was first discovered, quantum mechanics has been an intriguing field and for some time now, people have been trying to envision how it can be used. One possible application is a quantum computer that leverages the unusual and counterintuitive phenomena of quantum mechanics to process data with algorithms traditional computers cannot perform quickly. There are several avenues that can be used to build a quantum computer, and recently researchers at KIT have created something that should dramatically simplify the designs of optical quantum computers.
At the heart of any computer is the medium used to carry information. In modern electronic computers, the charge of electrons is used to store data, but quantum computers have more options available to them. Among these options are photons, the quanta of light, but the catch is that the photons need to be generated for a photonic chip to operate on them. Previously this required external laser sources that could take up entire laboratory spaces, but the KIT researchers have succeeded in building a single-photon source into a chip. This is the first time a complete quantum optical structure has been built on a chip. The source is a carbon nanotube; nanotubes have been used for this before because they emit single photons when struck by an external laser. This new design, however, is completely electrical, so the external laser is no longer needed, making it possible to miniaturize the circuit significantly.
For those hoping this means optical quantum computers are just around the corner, sadly, that is not the case. This is a piece of fundamental research showing this is possible, so it will take time to see what, if any practical applications come from it. It is still an important step, but just how large a step and how long until the next one is still to be seen.
Posted: October 5, 2016 12:12PM
Luminescence is something we can find many examples of in the world, so we have a pretty good understanding of it. When we find something giving off light then, we expect it to follow the rules we have discovered, so when it does not, we know something interesting has been found. Researchers at the University of Vermont and Dartmouth College have discovered just such a molecule that luminesces unlike anything found before, and could be used for many new lights and devices in the future.
Just about every child, and even a healthy number of adults I know, enjoys glow-in-the-dark materials and toys. These work by absorbing the energy of the light that strikes them and then re-emitting that energy as light at a certain wavelength. If more energy falls on them than they can emit as light, the extra is given off as heat in the form of vibrations, which was discovered in 1950 by chemist Michael Kasha. This new discovery is being called Suppression of Kasha's Rule, or SOKR (pronounced 'soccer'), because the molecule is being prevented from vibrating the energy away as heat, forcing it to emit higher-frequency light. The molecule the researchers were working with is actually a molecular rotor with a paddle on it, and when the researchers had these molecules in water, they gave off a reddish glow. When put into a thick liquid, more like maple syrup, the molecules produced a bright, green light. The reason for this change is that the viscosity of the liquid was stopping the paddle on the molecule from rotating, which blocked the pathway for the molecule to release energy as heat. Still needing to release the energy, the molecule increased the frequency, and therefore the energy, of the light it emitted.
Naturally this discovery is important because no one expected it was possible, but it also has some potentially interesting and valuable applications. This could be used to make new kinds of LEDs and biomedical diagnostic tools, as the molecules can be used to measure the viscosity of a liquid.
Source: University of Vermont
Posted: October 4, 2016 12:00PM
When it comes to two-dimensional materials, a lot of attention is given to graphene, an atom-thick sheet of carbon with several amazing properties. There are other 2D materials though, with their own special characteristics, and researchers at Rice University, Argonne National Laboratory, and Northwestern University have determined one could be particularly useful for flexible electronics.
This other 2D material is made of boron, called borophene, and has a triangular lattice structure with periodic hexagonal vacancies. What gives it potential in flexible electronics is its wavy, corrugated pattern, allowing it to be flexible. While graphene does have very desirable electronic properties, it is too stiff for use in devices that need to stretch. Borophene actually does prefer to be flat and stiff, like graphene, as this form has the lowest energy value, but not when grown on silver. The silver substrate causes the borophene to take on this wavy form, and even causes the silver to change its shape to match. This form can be preserved when the borophene is re-glued to a different substrate.
Both borophene and graphene possess a rich band structure that includes Dirac cones, which allow electrons to travel at relativistic speeds. Borophene also exhibits strong electron-phonon coupling that supports possible superconductivity.
Source: Rice University
Posted: September 26, 2016 10:33AM
For decades now computational power has been increasing at roughly the rate predicted by Moore's Law, but that is going to come to an end as we hit the fundamental limits of the materials and technologies being used. Many new approaches to computing are being developed so we can get around those limits, and researchers at North Carolina State University have come up with a fairly novel one. This new method uses nonlinear circuits to exploit chaos such that fewer circuits and transistors are needed to perform a task.
Modern computer chips are tightly packed with a great number of transistors and circuits, but typically many of them sit idle at any given time. This is because some circuits have been designed to perform specific tasks, and so for other tasks they are not useful. The new solution is to create nonlinear circuits that contain a number of different patterns, with each pattern serving a different function. By taking advantage of the system's natural chaos, the same circuit can be made to perform multiple functions, and it can switch from one function to another with each clock cycle. This means that potentially just hundreds of these circuits could match the performance of hundreds of thousands of traditional circuits.
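As a toy illustration of the idea, and emphatically not the researchers' actual circuits, a single nonlinear element can compute different Boolean functions depending only on a control parameter. Here the element is one pass of the logistic map and the control is a threshold window:

```python
def chaogate(a, b, lo, hi):
    """Toy 'chaogate': a single nonlinear element (one pass of the
    logistic map) whose Boolean function is selected by a control
    threshold window (lo, hi) instead of by dedicated wiring.
    Inputs a and b are 0 or 1."""
    x = (a + b) / 4.0          # encode the two inputs as an initial state
    y = 4.0 * x * (1.0 - x)    # nonlinear response: 0 -> 0, 0.25 -> 0.75, 0.5 -> 1.0
    return int(lo < y <= hi)   # fire when the response lands in the window

# The same element computes different gates as the window changes,
# which in principle could happen on every clock cycle.
AND = lambda a, b: chaogate(a, b, 0.9, 1.0)
OR  = lambda a, b: chaogate(a, b, 0.4, 1.0)
XOR = lambda a, b: chaogate(a, b, 0.4, 0.9)
```

The point of the sketch is only that reprogramming here means changing two numbers, not rewiring hardware, which is the flexibility the nonlinear-circuit approach trades on.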
As if the potential for tremendously greater performance in a smaller package were not enough, these nonlinear circuits, which are compatible with other digital devices, can be fabricated with modern technologies. The researchers are approaching commercial size, power, and ease of programming with their designs, so we may see more news on them in the coming months.
Source: North Carolina State University
Posted: September 22, 2016 12:38PM
It can be easy to forget just how important the materials used in a computer or other devices are, as special properties are needed for these systems to work. As we approach the limit of some of the materials we are using currently, new materials need to be discovered to continue developing ever faster and more efficient devices. One goal some researchers have had is to create a multiferroic material, and now researchers at Berkeley Lab and Cornell University have realized exactly that.
Multiferroic materials combine the properties of ferroelectric and ferromagnetic materials, and both families of materials are used in many technologies today, but in different ways. Ferromagnets are used in hard drives to store data as magnetic polarization, and also in sensors, while ferroelectric materials can easily flip polarization in response to an electric field and will hold their polarized states without power being supplied. Both sets of properties are valuable, and by combining them in one material new kinds of low-power memory technologies could be created, as an electric or magnetic field could be used to change both the electric and magnetic properties of the material.
To achieve this, the researchers made their material by alternating between monolayers of lutetium oxide and iron oxide, but at every tenth repeat of these single-layer pairs, a second iron oxide layer was added. The ferromagnetic atoms in this arrangement changed their alignment to follow the neighboring ferroelectric atoms when exposed to an electric field. This flipping was observed from 200 to 300 K, which spans from about -100 ºF to 80 ºF, meaning this material works at room temperature. The next step is to reduce the energy needed for this rewriting from the 5 V the researchers used down to half a volt, and eventually to produce a working multiferroic device.
Source: Berkeley Lab
Posted: September 21, 2016 10:53AM
Reading someone's emotions can be very difficult, as some people do not obviously emote and others may try to hide their emotions, but it can be valuable information. For example, when testing a new product or reviewing media, accurately reading a subject's emotions can inform you about what works and what does not. To that end, as well as some valuable healthcare applications, researchers at MIT have developed a means of reading and tracking emotions using wireless signals.
This is hardly the first time MIT researchers, including the specific researchers on this project, have worked with wireless signals for an unexpected purpose; previously they were used to detect people falling in a house. In both that work and this one, the key is measuring how the signals reflect off of a person's body and extracting information from that. In this case heart rate and breathing are tracked by analyzing the acceleration recorded in the signals reflected off of the person's front and back. By using acceleration, the heart rate and breathing can be distinguished, as your pulse is much faster than your breathing. From these measurements, it is possible to determine the subject's emotional state and whether they are happy, sad, angry, or excited.
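The acceleration trick can be sketched with a synthetic signal: differentiating a sinusoid twice multiplies its amplitude by the square of its frequency, so the fast, faint heartbeat gains ground on the slow, large breathing motion. The sample rate, frequencies, and amplitudes below are illustrative assumptions, not figures from the MIT work:

```python
import math

# Synthetic chest-displacement signal: a large, slow breathing motion
# (~0.25 Hz) plus a much smaller, faster heartbeat motion (~1.2 Hz).
fs = 100.0  # samples per second
t = [n / fs for n in range(1000)]
breath = [5.0 * math.sin(2.0 * math.pi * 0.25 * ti) for ti in t]
heart = [0.1 * math.sin(2.0 * math.pi * 1.2 * ti) for ti in t]

def second_difference(x, dt):
    """Discrete acceleration; differentiating twice scales a sinusoid's
    amplitude by (2*pi*f)^2, which boosts fast components."""
    return [(x[i + 1] - 2.0 * x[i] + x[i - 1]) / dt ** 2
            for i in range(1, len(x) - 1)]

accel_breath = second_difference(breath, 1.0 / fs)
accel_heart = second_difference(heart, 1.0 / fs)

# In displacement the heartbeat is ~50x weaker than breathing, but in
# acceleration the gap shrinks by about (1.2 / 0.25)^2, i.e. ~23x.
disp_ratio = max(breath) / max(heart)
accel_ratio = max(accel_breath) / max(accel_heart)
```

This frequency-weighting is why working in acceleration rather than raw displacement makes the heartbeat recoverable from the reflected signal.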
When the researchers tested this system, named EQ-Radio, they found it has 70% accuracy at predicting emotions without any training and 87% accuracy after having learned the subject's emotions. Separate from monitoring emotions, this technology could also be used for tracking someone's heart rate with ECG accuracy without any on-body sensors, or for monitoring and diagnosing conditions such as depression and anxiety.
Posted: September 13, 2016 08:35AM
The name Stephen Cabrinety might not be one you recognize, but there is a chance you have heard about something he did before his death in 1995. He collected various pieces of software starting in the 1980s, and this collection eventually grew to contain some 25,000 pieces of software and video games, many still in their original packaging, along with many pieces of hardware from the time, including the appropriate readers and consoles. In 2009 the Stanford University Libraries obtained the collection and has been working with NIST to preserve it, only recently finishing the work, which included reading floppy disks and audio cassette tapes.
While the nostalgia factor is obvious, this preservation work has been done for a different reason than enjoying classic games and using old software. There are several institutions that exist for collecting and archiving the different media cultures create, such as the Library of Congress, which keeps a copy of every published work. The concept goes back to ancient times with the Library of Alexandria, but there is no single repository for software. The National Software Reference Library (NSRL), created and maintained by NIST, comes close, though it has a rather different purpose, and now the complete Cabrinety collection has been added to it. Currently the programs cannot be loaded up and used, as the data has only just been added, but the Stanford team wants to build systems so this will be possible in the future.
For those who are curious, the NSRL is actually for forensic investigations. The software it has archived has all had a digital fingerprint created for it, so when computers or other devices are taken as evidence, these fingerprints can be used to quickly filter out what information may be important. For example, after the disappearance of Malaysia Airlines flight MH370 the hashes for every flight simulator were requested, so that the FBI could discover the flight paths the pilot had practiced.
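That filtering workflow can be sketched in a few lines; the hash algorithm, file names, and contents here are purely illustrative, not details of the NSRL's actual data format:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Hash a file's contents; the NSRL distributes hash sets of
    known software for exactly this kind of matching."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical reference set of known, uninteresting software.
known = {fingerprint(b"standard OS component"),
         fingerprint(b"popular game executable")}

# Hypothetical evidence drive: filename -> contents.
evidence = {"system.dll": b"standard OS component",
            "notes.txt": b"user-created document"}

# Anything matching the reference set can be filtered out, leaving
# only unknown files for the investigator to review.
unknown = {name for name, data in evidence.items()
           if fingerprint(data) not in known}
```

On a real evidence drive this eliminates the vast bulk of files, stock operating system and application binaries, so investigators only examine what is unique to the machine.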
Posted: September 9, 2016 12:46PM
While you might not see them, random numbers are used in many systems we benefit from every day, including the encryption systems that protect our online purchases and bank withdrawals. Modern random number generators are not perfectly random, though, but new generators that utilize quantum mechanics can be completely random. Now researchers have successfully made a quantum random number generator small enough and fast enough to be usable in mobile devices, as reported in Optica.
The reason modern random number generators are not perfectly random is that the numbers are generated by algorithms or physical processes, and with enough information about the source, one can guess the numbers. Quantum mechanical phenomena, however, are immune to this, as the processes involved cannot be predicted regardless of how much information one may have. This is why researchers have been working to create quantum-based random number generators, but so far those that have been made are large and not very fast. This new generator, though, is based on a photonic integrated circuit (PIC) measuring just six millimeters by two millimeters and is able to operate at gigabits per second. At that speed it can be used for real-time data encryption, protecting phone and video calls.
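To see why an algorithmic generator is guessable, consider a linear congruential generator, one of the simplest classical schemes. This is an illustrative sketch, not the generator any particular system uses: anyone who knows the algorithm's parameters and observes a single output can reproduce every subsequent "random" number exactly.

```python
# Linear congruential generator (LCG) -- a simple deterministic
# algorithm of the kind behind many classical random number generators.
# Parameters are glibc-style constants, used here purely for illustration.
M, A, C = 2**31, 1103515245, 12345

def lcg(seed):
    """Yield an endless stream of LCG outputs from the given seed."""
    while True:
        seed = (A * seed + C) % M
        yield seed

gen = lcg(42)
observed = next(gen)                    # attacker sees one output...
predicted_next = (A * observed + C) % M  # ...and computes the next one
assert predicted_next == next(gen)       # prediction matches exactly
```

A quantum generator has no such internal state to reverse-engineer, which is what makes its output unpredictable in principle rather than merely in practice.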
By demonstrating such a device can be made with PIC technology, chances are other researchers will work to make even better versions. Eventually these devices will likely move into commercial products to provide better security, as well as into scientific equipment for simulating biological interactions and nuclear reactions, or for making stock market predictions.
Source: The Optical Society
Posted: September 8, 2016 11:18AM
Sustainable nuclear fusion has been a goal for a great many scientists around the world for decades, as the energy produced by a fusion-based power plant could potentially dwarf that from other sources. Obviously it is difficult to achieve, but strides are being made toward that bright future. Recently researchers at the University of Rochester have brought us closer to that future by creating the conditions needed to set a new record for laser-fusion.
There are a number of methods being investigated currently for triggering nuclear fusion, and the one the Rochester researchers were using is called direct-drive fusion. This method uses a number of lasers all aimed at a small fuel pellet, heating and compressing it to the point of implosion. If enough energy is pumped into the fuel by the lasers, the nuclei within the pellet will fuse together and ideally release more energy than the lasers spent. Using the OMEGA laser at the University of Rochester's Laboratory for Laser Energetics, the researchers were able to create the conditions needed to produce five times the current record for similar laser-fusion experiments. This brings it in line with the much larger National Ignition Facility (NIF) at Lawrence Livermore National Laboratory, if the conditions are scaled up to match. The NIF actually uses a different method for causing fusion, called indirect-drive fusion. Instead of aiming the lasers directly at the fuel, the laser light is converted to X-rays using a special gold enclosure, and those X-rays are what pump energy into the fuel pellet.
While ignition, when more energy is produced than is used, has not yet been achieved, these researchers and those at Lawrence Livermore are making significant strides toward that goal. In this experiment that included more precisely aiming the 60 laser beams involved onto the millimeter-sized fuel pellet, improving the target's shell, and capturing images of the pellet as it implodes, for the purpose of improving future experiments.
Source: University of Rochester
Posted: August 24, 2016 12:58PM
As is now almost always the case with large AAA titles, the upcoming Battlefield 1 will have extra content added post-launch, and you can purchase it early with its Premium Pass. This pass will include four expansion packs and two-week early access to each. You will be able to play as two new armies, including France in the They Shall Not Pass expansion and the Russian Empire in a different expansion. The expansions will also add 16 multiplayer maps, Operations and game modes, elite classes, 20 weapons, and vehicles. There will also be 14 Battlepacks with weapon skins delivered monthly from November 2016 and 14 unique dog tags distributed over the Premium Pass period.
You can get additional details and release dates from the Battlefield website. Battlefield 1 releases October 21 for PC, Xbox One, and PlayStation 4. It is possible to get early access by pre-ordering the Early Enlister Deluxe Edition, which will get you in on October 18 (there are some conditions that apply to this though), while Origin Access (PC) and EA Access (Xbox One) membership will let you in starting October 13.
Posted: August 15, 2016 10:25AM
Microscopes have been an amazing tool ever since the first one was created, but we have been running up against their limits. Because of the physics of light, optical microscopes cannot resolve objects smaller than about 200 nm, roughly the size of the smallest bacteria, but they can be given a helping hand. Researchers at Bangor University have created a new superlens that allows smaller objects and shapes to be seen, even the patterns on Blu-ray discs.
These superlenses are made of nanobeads, objects we can find all around us even if we are not aware of them. They are used in some paints and in sunscreen, and for the superlenses, titanium dioxide (TiO2) beads are used. The reason for these specific beads is their high refractive index and the way a sphere of them breaks up a light beam. The nanobeads are deposited, as a droplet containing millions of them, onto the subject being put under the microscope, and the beads refract the light in a way that creates millions of individual beams. These light beams are what an optical microscope can capture and use to resolve details previously invisible.
Using these nanobead superlenses can increase the magnification of a microscope by a factor of five, which should be enough to reveal germs and viruses. Not too bad considering these nanobeads are actually fairly cheap and readily available.
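A quick check of the numbers in the article shows why a factor of five matters. Treating the conventional 200 nm diffraction limit as the starting point:

```python
# Back-of-the-envelope: a 5x magnification gain from the nanobead
# superlens pushes the smallest resolvable feature from the ~200 nm
# diffraction limit down to roughly 40 nm.
diffraction_limit_nm = 200
superlens_gain = 5
resolved_nm = diffraction_limit_nm / superlens_gain
print(resolved_nm)  # 40.0
```

At around 40 nm, many large viruses come into reach of an ordinary optical microscope, which is what makes such a cheap add-on notable.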
Source: Bangor University
Posted: August 11, 2016 11:39AM
Windows can make a tremendous difference in a room by letting some natural light in, but there are times you want to cut down on the brightness. Obviously shades and blinds can be used to achieve this, but not all situations allow for such solutions. An alternative is to create windows that can shade themselves, such as those MIT researchers have developed.
Self-shading windows already exist and are actually used on Boeing's 787, so that a flip of a switch can cut down on the light coming in. The catch with these windows is that they take a few minutes to transition from clear to a dark green. The new MIT windows change much faster and can actually go opaque. Both the new windows and the 787 windows are electrochromic windows, which work by having an electrical current applied to them. This current negatively charges the windows, so positive ions have to move through the window to return electrical balance, and it is these moving ions that shade the windows. In the 787 windows the ions move slowly, making the transition slow, while the MIT windows contain metal-organic frameworks (MOFs) that are able to carry electrons and the positive ions at high speeds.
Two other advantages of the MIT windows are that they become truly opaque instead of just a dark shade, and that a current is only required during the transition. Once the window is made clear or opaque, the current can be stopped and will not be needed again until one wants it to transition back. This is obviously important, as it cuts down on how much power these windows need to operate.
Posted: August 10, 2016 09:50AM
At the end of their lives, giant stars will collapse under their own gravity, resulting in a massive burst of radiation and matter called a supernova. Without these extraordinary events, many heavy elements and isotopes would not be present in the Universe, outside of stellar cores. Researchers from the Technical University of Munich have detected the first time-resolved supernova signal on Earth, showing our planet has actually travelled through the remnants of a dead star.
The evidence comes in the form of the radioisotope Fe-60, which cannot be produced by any natural, terrestrial mechanism, so its discovery points to supernova material falling on Earth. Actually this is not the first time such evidence has been found, but the previous discovery had poor temporal resolution, meaning we could not determine when it was from. This new discovery can be pinned down to starting 2.7 million years ago, peaking around 2.2 million years ago, and finally dying off about 1.7 million years ago. For approximately one million years, the Solar System passed through the debris of a supernova.
This iron isotope was found within microfossils of iron-sequestering bacteria that lived in the ocean. After the bacteria died, sediment built up at a constant rate, preserving the temporal shape of the Fe-60 signal.
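Recovering that temporal shape requires correcting for radioactive decay, since some of the deposited Fe-60 has decayed away in the millions of years since. A small sketch of the correction, assuming the commonly cited Fe-60 half-life of roughly 2.6 million years (a value not stated in the article):

```python
import math

# Fe-60 half-life in millions of years -- an assumed value for this
# sketch; the deposited signal must be decay-corrected to compare
# layers of different ages fairly.
HALF_LIFE_MYR = 2.6

def surviving_fraction(age_myr):
    """Fraction of originally deposited Fe-60 still present today."""
    return math.exp(-math.log(2) * age_myr / HALF_LIFE_MYR)

# At the signal's peak, about 2.2 million years ago, a bit more than
# half of the original Fe-60 would still remain in the sediment.
frac_at_peak = surviving_fraction(2.2)
```

Because the half-life is comparable to the signal's age, a meaningful fraction of the isotope survives, which is what makes the measurement possible at all.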
The likely source of the iron is a supernova from the Scorpius-Centaurus OB association. At 2.3 million years ago it was just 300 light years away, so it was definitely close enough for us to pick something up from it. We are also within part of it called the Local Bubble, a largely matter-free cavity resulting from 15 to 20 supernovae pushing matter away some 10 to 15 million years ago.
Source: Technical University of Munich
Posted: August 9, 2016 12:03PM
For years now, organic light emitting diodes (OLEDs) have promised us more efficient and potentially cheaper displays that also offer better color reproduction and contrast. So far, though, OLED displays have been all but restricted to certain smartphones, with LCD screens beating them out at larger sizes. Thanks to researchers at Harvard University, MIT, and Samsung, though, that could change in the future.
In any modern display, each pixel is made of smaller sub-pixels that emit red, green, and blue light. By varying how much light is emitted by each sub-pixel, any other color can be produced. The problem with OLEDs has been that the blue sub-pixels are often inefficient at producing blue light. To compensate for this, manufacturers instead use organometallic molecules that also contain expensive transition metals. To remove these metals and thereby cut costs, the researchers created an advanced machine learning algorithm to analyze and model over one million molecule candidates. The best 2500 of these were then given to experimental collaborators to consider their potential via a web application.
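The screening step described above has a simple generic shape: score a large pool of candidates with a cheap predictive model, then pass only the top fraction on for expensive expert review. The scoring function below is a stand-in, not the researchers' actual model, and the numbers simply mirror those in the article:

```python
# Generic virtual-screening funnel: rank a large candidate pool by a
# predicted score and keep only the best top_k for human review.
def screen(candidates, score, top_k):
    """Return the top_k candidates ranked by descending score."""
    return sorted(candidates, key=score, reverse=True)[:top_k]

# e.g. a million mock "molecules" reduced to a shortlist of 2500.
# The hash-like scoring lambda is a placeholder for a trained model.
pool = range(1_000_000)
shortlist = screen(pool, score=lambda x: (x * 2654435761) % 1_000_003, top_k=2500)
```

The value of this funnel is economic: the model only has to be good enough that the true best molecules survive the cut, so the costly experimental work is spent on a few thousand candidates instead of a million.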
In the end the team had hundreds of molecules that should perform as well as or better than the best metal-free OLEDs known today. Being completely organic, OLEDs made with these molecules could potentially be cheaper and thus easier to produce at the large sizes of televisions. This approach can also be applied to find organic molecules for other applications, such as flow batteries, solar cells, and organic lasers.
Source: Harvard University
Posted: August 8, 2016 11:51AM
Quantum computers are not here yet, but not for lack of trying. Instead of relying on electronic bits that can represent 0 or 1, quantum computers use qubits that can be 0 and 1 at the same time, but the medium for these qubits is still being decided on. One promising candidate is to use ions as qubits, and now researchers at MIT have created a prototype chip that allows for better control over them.
At the core of how quantum computers work is the quantum mechanical phenomenon of superposition, which is when a particle exists in mutually exclusive states at the same time, such as spinning both clockwise and counterclockwise. These particles are the quantum bits, or qubits, and while there are several options for what exactly they are, ions are likely the best understood of the choices. The catch is that ions can require large and complicated equipment to work with. For starters, the ions have to be held in a trap, and while cage traps, with electrodes arranged like cage bars, work well, they are limited in size, and realizing quantum computers will require large numbers of qubits. To that end, the MIT researchers are instead working with surface traps, which have their electrodes covering a surface with the ions held slightly above them. In theory, surface traps can be extended indefinitely.
Another issue the MIT researchers have addressed is how to control the ions. In a surface trap the ions can be just five micrometers apart, so hitting just the one you want with a laser from an optical table is very difficult. The solution here was to put a layer of glass and a network of silicon nitride waveguides underneath the electrodes. Beneath holes in the electrodes are diffraction gratings within the waveguides, which direct the light up through the holes and focus it tightly enough to hit single ions.
The next step for this work is to hopefully add light modulators to the diffraction gratings. This will make it possible to control how much light each ion qubit will receive, making it more efficient to program them.
Posted: August 3, 2016 12:21PM
Batteries are a critical component of a great deal of today's technologies, so there is an ever growing need to improve upon them. Achieving that improvement is difficult though, because we need to use materials with the correct chemical properties without sacrificing other important characteristics. Now, researchers at the University of California, Riverside have discovered that two candidates for anodes in future lithium-ion batteries can be combined to great effect.
Modern lithium-ion batteries rely on graphite for their anodes, because it performs the necessary chemistry and is resilient to the damage batteries can endure. Tin and silicon have also been suggested as possible anodes, because some of their properties are better than graphite, but others are worse. For example, silicon is somewhat fragile and repeated use can cause the anode to fracture, leading to a loss of performance and serious damage to the battery. What the researchers have discovered is that combining these two materials can create a new anode with the best of both worlds, essentially.
This new anode can hold three times the charge traditional graphite can, is stable over a great many charge-discharge cycles, and can even charge very quickly. It also can be produced with a simple manufacturing process, which can help keep prices down.
Posted: August 2, 2016 12:47PM
Graphene is a very interesting material that has several curious characteristics, and some of them come from the material being just one atom thick. Ever since its discovery, researchers have been working to better understand graphene and to make other atom-thick materials, which could have useful properties for electronics. Now researchers at the Moscow Institute of Physics and Technology, Rice University, and other institutions have crafted a theory to predict what it will take to produce graphene-like materials from salts.
Many salts, including common sodium chloride, have a cubic crystal structure with ionic bonds between the atoms. It has been predicted, and even observed in some salts, that once they are comprised of few enough layers, they will spontaneously transform into a graphene-like structure, a process called graphitization. These predictions and observations have so far been limited to certain materials, but by leveraging the power of computer simulations, the researchers have created a general theory of how graphitization occurs. It is now possible to predict the critical number of layers at which a salt made of the four alkali metals and the halogens will undergo graphitization. For sodium salts the number is 11 layers, and for lithium salts it is between 19 and 27.
If this theory is proven accurate experimentally, it could open up a new route to producing ultrathin films and these films could have desirable properties for electronics. The researchers are also going to investigate other compounds, to see if more materials will undergo graphitization, resulting in new and intriguing properties.
Posted: August 1, 2016 11:45AM
If you have ever had to pay a high electric bill, you have probably asked yourself where all of the power is going, in hopes of cutting down. Answering this question is harder than you might think, because of what is involved in actually getting the data, but researchers at MIT are working to change that. They have created a monitoring device, about the size of a postage stamp, that can be zip-tied onto a power cord and then make measurements from the electric and magnetic fields around it.
Wireless devices like this have been developed before, but they have had the cumbersome requirement of being carefully aligned on the cord. This MIT device solves that by having five offset sensors and software that selects the one getting the best signal. The data it captures can then be analyzed to determine how much power a device is using and whether there are any anomalies. During one of its tests it actually identified a wiring problem in a house that was putting a potentially dangerous voltage on a copper pipe. This analysis is not done remotely; all of the data can remain in the user's home, abating any privacy concerns as well as bandwidth issues, considering how much raw data is collected.
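The best-signal selection idea can be sketched in a few lines. This is only an illustration of the concept, with made-up readings and a simple amplitude criterion, not the device's actual firmware:

```python
# With five offset sensors, the software simply uses whichever sensor
# is picking up the strongest signal, removing the need to carefully
# align the device on the cord.
def best_sensor(readings):
    """Return the index of the sensor with the largest signal amplitude."""
    amplitudes = [max(abs(sample) for sample in r) for r in readings]
    return amplitudes.index(max(amplitudes))

# Mock samples from five sensors around a power cord.
readings = [
    [0.1, -0.2, 0.1],   # sensor 0: weak coupling
    [0.9, -1.1, 1.0],   # sensor 1: strongest signal
    [0.3, -0.4, 0.2],   # sensor 2
    [0.0,  0.1, 0.0],   # sensor 3
    [0.5, -0.6, 0.4],   # sensor 4
]
assert best_sensor(readings) == 1
```

Picking the sensor in software, rather than positioning the hardware by hand, is what turns a finicky lab instrument into something a homeowner could zip-tie on and forget.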
The hope for this device is that it will become cheap enough to produce, and the applications for analyzing the data common enough, that anyone will be able to optimize the power usage within their home. This could come from adjusting habits or replacing particularly inefficient appliances with better models.
Posted: July 29, 2016 12:57PM
For our never-ending drive for faster computers and connections to be satiated, at least temporarily, it will become necessary to turn to new methods of communication. Already we have seen optical systems deployed to accelerate the Internet and more, but bringing similar systems into our computers has been proving very difficult. Thanks to researchers at the University at Buffalo though, we are closer to this high-speed future than ever before.
Light travels at a far faster speed than the electrical currents within our computers, so using it instead of electrons to connect the various chips and components in our systems could give a sizeable boost. But the devices for producing optical signals are sometimes too large to fit into computer chips, and we are approaching the limit of how much information can be packed into an optical signal. The Buffalo researchers, however, have made an important advance by shrinking vortex lasers enough to be compatible with computer chips.
Vortex lasers use light's orbital angular momentum to cause the light waves to twist in a corkscrew shape, with a vortex at the center. Multiple corkscrews can be fit together in the same area without crossing each other, so these lasers are able to transmit up to ten times more information than a more traditional laser. Obviously such a device could go a long way in enhancing the performance of our computers and networks, and comes at a good time as we approach some fundamental limits in current technologies.
Source: University at Buffalo