Science & Technology News (1052)
Posted: October 5, 2015 02:31PM
Whether people like it or not, video games have an impact on how our brains function, and a better understanding of this can prove useful. To that end, researchers have recently published an article in the journal Policy Insights from the Behavioral and Brain Sciences, from SAGE Publications, about the ways different types of games influence our brains.
With such a wide variety of video games, it is important to consider how different types and genres may impact us. As one example, action games that feature quick-moving targets darting in and out of view and require the player to make rapid decisions appear to improve our attention skills, brain processing speed, and much more. However, total play time can still predict poorer attention in the classroom, and there are behavioral impacts beyond our cognitive abilities.
With modern video games so often using the same principles psychologists, neuroscientists, and educators use for altering behavior and educating people, understanding these impacts is very important. This is compounded by the active learning video games usually employ, as active methods tend to be more effective than passive learning. The published article is free for a limited time if you follow the link given in the source.
Source: SAGE Publications
Posted: October 5, 2015 06:45AM
Batteries are an integral technology supporting modern life, but despite how widespread they are, they can be dangerous. Over time it is possible for lithium-ion batteries to form dendrites inside of them, which can reduce battery life, cause a short circuit, and even start a fire. As published in The Journal of Chemical Physics by AIP Publishing, researchers at Caltech have found that heat might reduce dendrites, ultimately extending the usable lifetime of a rechargeable battery.
Batteries store and release energy by moving ions between the two electrodes, but because the ions do not always return correctly, small structures called dendrites can form. Over time the dendrites can grow, eventually connecting the two electrodes and causing a short circuit. What the researchers observed is that when heated to 55 °C, the dendrites could shorten by up to 36%. To understand why, they turned to computer models, which showed the atoms in the dendrites moving around enough to cause them to topple.
This does not mean you should start baking old rechargeable batteries, as there are still a number of other variables to consider, but the researchers are going to keep investigating. Eventually this could develop into a way to keep our batteries working longer, just by adding some heat.
Posted: October 2, 2015 02:37PM
That is the topic researchers at Concordia University decided to investigate. Many people would say that social media games differ from more traditional video games in a number of ways, and the researchers wanted to see if the views on cheating are also different between the two.
To perform the study the researchers surveyed some 151 social media gamers, ranging in age from 18 to 70. The survey investigated the respondents' views on cheating in general and in social media games, and what kinds of practices players use. The researchers found that the responses largely fell into two main categories, according to the player's view of cheating. One category defined cheating as breaking with socially expected player conduct, so as to gain an unfair advantage or otherwise act socially irresponsibly. The second group defined cheating as playing outside of the formal game rules. Across the board though, methods of cheating that require some level of technical knowledge, like using cheat codes and external software, were condemned, while other forms were not so quickly criticized.
When the researchers asked if cheating in games on other platforms was different from cheating in social media games, a third of the participants said it was different. Next the researchers want to see if a player's ethics are affected when they use their real identities in the game.
Source: Concordia University
Posted: October 2, 2015 07:11AM
Advertising can be found in all manner of places, and the virtual environments of video games are no exception. Such advertising comes in different forms though, from virtual billboards to sponsored items. Researchers at Penn State decided to investigate what impact a player's performance has on their views of the sponsored item.
For this study, which had 85 participants (59 female and 26 male), the researchers used a racing video game and had the players drive a car featuring a VW logo. To measure performance the researchers recorded the number of laps completed and the times the player crashed. What the players did not know is that the characteristics of the car had been modified by the researchers to create an easy and a difficult version of the game. The results showed a link between player performance and both retention of the advertised brand and opinion of the brand. Players who performed better remembered the brand better and reported a more favorable attitude toward the company, while those who performed worse reported just the opposite. This means that advertising in a game does not guarantee a positive result for the company.
The researchers also looked into the virtual billboards featured in the game and found that recall of those ads was very low. This indicates that the branded products get more attention than the more traditionally placed ads.
Source: Penn State
Posted: October 1, 2015 02:14PM
We are always on the lookout for the best hardware, so naturally manufacturers are doing what they can to claim that title. One way to help achieve that is to optimize the designs for the software the hardware will be running. The catch is that sometimes that software is not available to the manufacturers, but researchers at North Carolina State University have developed performance cloning systems to help out.
Performance cloning builds a profile of how a program performs that can then help direct hardware optimization. This is important if the software in question is proprietary, like that used by several large corporations and Wall Street firms, because the software itself need not be shared with the manufacturer. Performance cloning has been used before for optimizing CPU design, but this new work focuses instead on memory systems. The researchers developed two techniques called MEMST (Memory Emulation using Stochastic Traces) and MeToo. MEMST looks at how much memory is used by a program, where that data is stored, and its retrieval pattern. MeToo analyzes memory timing behavior, profiling how often data is retrieved and if there are periods of rapid memory requests.
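The kind of profile such a tool builds can be sketched in miniature. The toy profiler below is an illustration only, not the actual MEMST or MeToo code; it takes a trace of (time, address) pairs and summarizes the memory footprint, the dominant access stride, and the average gap between requests, the sorts of statistics a performance clone could be generated from:

```python
from collections import Counter

def profile_trace(accesses):
    """Summarize a memory access trace: footprint, stride pattern, and
    request timing (illustrative sketch, not the MEMST/MeToo tools)."""
    addresses = [addr for _, addr in accesses]
    times = [t for t, _ in accesses]
    footprint = len(set(addresses))                   # distinct locations touched
    strides = Counter(b - a for a, b in zip(addresses, addresses[1:]))
    gaps = [b - a for a, b in zip(times, times[1:])]  # inter-request intervals
    return {
        "footprint": footprint,
        "top_stride": strides.most_common(1)[0][0] if strides else None,
        "mean_gap": sum(gaps) / len(gaps) if gaps else None,
    }

# A sequential scan: addresses step by 8 bytes, one access per cycle.
trace = [(t, 0x1000 + 8 * t) for t in range(16)]
print(profile_trace(trace))  # footprint 16, dominant stride 8, mean gap 1.0
```

A synthetic benchmark that reproduces these statistics would stress the memory system the same way as the original program, without the proprietary code ever leaving its owner.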
Potentially this work could lead to improvements in DRAM, memory controllers, and memory buses. The next step for MEMST and MeToo is to develop an integrated program, which also includes the researchers' work on cache memory, and to commercialize it.
Source: North Carolina State University
Posted: October 1, 2015 06:14AM
The ability of antennas to convert radiation into electrical currents has been put to use for many decades now, but these systems have operated at radio and microwave frequencies. By moving up to optical frequencies though, antennas combined with rectifier diodes could generate direct current straight from light. Now researchers at the Georgia Institute of Technology have finally achieved this with a rather ingenious design using carbon nanotubes.
To start, the researchers grow conductive, vertically-aligned carbon nanotubes on a conducting substrate. The nanotubes are then coated with aluminum oxide, an insulator, and finally thin, transparent layers of calcium and aluminum are applied, forming a metal-insulator-metal diode. Bringing the antenna closer to the diode makes similar systems more efficient, so making the antenna one of the metals in the diode, as is the case here, is the ideal design. This efficiency is needed too, because the rectifier has to be able to switch on and off at femtosecond intervals in order to create a current from visible light.
Currently the proof-of-principle rectennas have an efficiency of about one percent, but based on the work of others, the researchers are confident they could reach over 40% efficiency. Potential applications include photodetectors and energy conversion systems that capture waste heat or light for generating electricity.
Source: Georgia Institute of Technology
Posted: September 30, 2015 06:23AM
Finding new uses for waste products is a good way to cut down on waste and potentially consumption as well. Thanks to researchers at ORNL and Drexel University, we have a process that could eat into scrap tire waste and make it into useful supercapacitors and other activated carbon applications.
It is estimated that by 2035 some 1.5 billion tires will be made every year, and already we see 8,000 tons of tire waste a day. What this new process does is turn those tires into carbon composite papers. It starts by soaking tire crumbs in sulfuric acid, then washing them and placing them into a tubular furnace with a nitrogen atmosphere, where the temperature rises up to 1100 °C. Following some additional steps, including more baking and washing, the result is a composite with polyaniline, a conductive polymer that can be put to use in a variety of places, such as the supercapacitors needed in cars, buses, and forklifts.
Currently the process has a yield of about 50%, but if every scrap tire were recycled this way, that would still create some 1.5 million tons of carbon, or half the yearly global production of graphite. Not too bad, especially as the supercapacitors using this technology only saw a two percent performance drop after 10,000 cycles.
Source: Oak Ridge National Laboratory
Posted: September 29, 2015 02:30PM
For decades the promise of nuclear fusion has been great amounts of energy with almost no waste. This promise has yet to be fulfilled though due to the difficulty of triggering nuclear fusion, maintaining the reaction, and doing all of this with less energy than the process releases. While work continues on large reactors around the world, researchers at the University of Gothenburg instead suggest building smaller reactors with a significantly different design that come with certain benefits.
Nuclear fusion is the process of combining smaller atomic nuclei into larger ones, which releases a great amount of energy. One byproduct of the process in the experimental reactors around the world is the release of neutrons. Neutrons are large, uncharged particles that are hard to stop and can do significant damage to materials and organisms. The smaller reactors the researchers suggest though would produce far fewer neutrons, and instead release muons, which are like electrons but possess significantly more energy. This means the reactor could instantly create electricity and would be safer because muons are short-lived and quickly decay into electrons, or similar particles.
The new reactor design would use deuterium, or heavy hydrogen, for its fuel, but not tritium, which contributes to its safety, as tritium is radioactive, and allows the reactor to be built with thinner shielding. Also, this method of nuclear fusion has already been shown to produce more energy than is needed for ignition, making the design look even better.
Source: University of Gothenburg
Posted: September 29, 2015 06:36AM
It is hard to fathom just how many lives and how much property has been saved thanks to lightning rods diverting the dangerous current safely away. Benjamin Franklin invented the lightning rod some 250 years ago, and it may be getting an interesting update in the foreseeable future. As reported by the Optical Society, researchers at The Hebrew University of Jerusalem have found a way to extend the lifespan of a laser-created plasma channel, which could one day be used as a lightning rod.
When a powerful laser beam moves through the air, it ionizes the molecules, creating a plasma channel that can conduct electricity. Currently a laser pulse just 100 femtoseconds long (0.0000000000001 seconds) can create a plasma stream just 100 microns wide and one meter long that lasts about 3 nanoseconds. Once those nanoseconds pass, the plasma will cool off, but the researchers found that by firing 10-nanosecond bursts from a secondary laser, they can pump enough energy into the plasma to keep it hot for ten times longer. By using a more powerful laser or adding more beams, it may be possible to extend the lifetime even more.
The researchers also, in other work, devised a way to lengthen the plasma channel. Normally one laser beam will create numerous channels that spread out in random directions along the beam. By controlling the optics of the beam with lenses though, the laser can be focused to create three plasma channels that line up end-to-end, building a three-meter-long channel. With more powerful lasers and the right optical setup, it should be possible to extend this even further. The next step is to combine these two methods to achieve both extended length and lifetime.
Source: The Optical Society
Posted: September 28, 2015 10:04AM
For a very long time people have questioned if there was life on Mars, and for a time some even believed that images from telescopes revealed a vast network of canals, built by Martians. Now we know there are no canals on the red planet, and no life capable of building them either. However, NASA has today confirmed evidence of liquid water flows on Mars.
In 2010 telemetry from the Mars Reconnaissance Orbiter's (MRO's) High Resolution Imaging Science Experiment (HiRISE) revealed curious dark streaks in some areas, called recurring slope lineae (RSL). These features were seen to come and go with the Martian seasons, appearing in the warm seasons and fading in the cooler ones. Now with data from MRO's Compact Reconnaissance Imaging Spectrometer for Mars (CRISM), researchers have found hydrated salts at RSL locations, indicating the presence of liquid water, though only where the streaks were relatively wide. These salts are likely perchlorates, which have been shown on Earth to keep liquids from freezing even at -70 °C, and have previously been found on Mars by other missions, including the Phoenix lander and Curiosity rover.
A likely explanation for the RSL is that briny water flows beneath the Martian surface in these areas, and enough water manages to wick up to the surface to visibly darken it. Even if the process that forms the dark streaks has another explanation, the presence of the hydrated salts makes it clear that liquid water plays a role.
Posted: September 28, 2015 05:58AM
While 3D printing can allow a variety of objects to be made almost anywhere, it can still be better if what is made is flat or rolled into a tube, for easy shipping. This naturally requires that the object later be formed from its shipping shape, so researchers have been working on ways to get objects to do this on their own. Typically these earlier attempts involved the controlled heating of the object, but researchers at the Georgia Institute of Technology have reversed the formula to a degree, simplifying the process.
To create these self-folding objects, shape memory polymers (SMPs) are used, as they can be programmed with a shape that they will take on when heat is applied. Previous work relied on heaters attached at specific points and activated at specific times to control the folding. This new approach however builds that control into the object by varying the amount of SMP used in the different pieces, which 3D printers are able to do. When heat is evenly applied then, such as by warm water, the differing amounts control the folding, and do so precisely enough that the pieces can actually interlock with each other.
There are a variety of applications for this research, even beyond the easier to ship flat and rolled objects that later fold into shape. Unmanned aircraft could potentially use this technology to controllably alter their shape depending on the situation, such as when transitioning from cruising to diving.
Source: Georgia Institute of Technology
Posted: September 25, 2015 06:19AM
Many current 3D printers build up objects by using a plastic filament that is deposited layer by layer. While this works well for simple objects, some want to be able to print objects made of multiple materials and even have electronics built in. To achieve this, the printer has to be able to switch between materials and efficiently mix them, and researchers at Harvard have designed a new printhead to achieve this.
Normally materials in a 3D printer are mixed passively by having them in the same channel. This works fine for low-viscosity fluids, but fails with high-viscosity fluids. To solve this problem the researchers' new printhead has a rotating impeller inside the microscale nozzle. With this, the nozzle can combine viscous materials and inks, such as those necessary for printing electronics and embedding them into printed objects.
Recently the same group of researchers also designed a different printhead that can rapidly switch between materials in a single nozzle. This is important for building structures without the defects that tend to occur when the process starts and stops to switch materials.
Source: Harvard University
Posted: September 24, 2015 02:25PM
A flat tire can ruin a day pretty quickly, especially when a new tire has to be purchased. That could change in the future though, according to a report in the American Chemical Society's Applied Materials & Interfaces journal. It describes how researchers have created a durable and elastic rubber that is also self-healing.
An important processing step for producing tire rubber is vulcanization, which adds curatives like sulfur to make the bonds within the material more durable. Once these bonds are severed though, they cannot be repaired. With their new process though, the researchers avoid vulcanization while still making the rubber durable and elastic, and adding the self-healing ability. This healing works at room temperature, so a tire in a garage would still heal itself, though applying heat speeds up the healing. After eight days the researchers observed that the rubber could withstand 754 PSI, and by adding reinforcing agents, it could be made even stronger.
Source: American Chemical Society
Posted: September 24, 2015 06:11AM
Atoms are very small things, so it is hard to know exactly where they are in an object. In the past we have been able to estimate their locations using X-ray crystallography, which measures how X-rays interact with a crystal, but this has its limits. Now researchers at the University of California, Los Angeles have successfully mapped atoms in three dimensions for the first time.
To perform this study the researchers used a scanning transmission electron microscope at Berkeley Lab. This microscope is able to image the location of atoms, but can only scan in two dimensions. To overcome this limitation, the sample has to be rotated through multiple angles to build a complete, 3D picture. This can be problematic though, as the electron beam can damage a sample. The researchers addressed this by keeping the energy of the beam below the radiation damage threshold of the tungsten sample they were aiming it at. It took tilting the sample 62 times, but eventually they were able to build a 3D model of the 3,769 atoms comprising the sample's tip.
The sample consisted of nine layers and on the sixth one the researchers found a point defect where either an atom was missing or another atom of a different element was located. Point defects like this can affect a material's properties, so being able to catch one is important. Indeed this research and future studies of other materials could significantly improve our understanding of each material's properties.
Posted: September 23, 2015 02:27PM
For the first time, researchers have built all-optical permanent on-chip memory. While light is used for efficiently and quickly transmitting data between computers, electrical signals are still used within them. By creating this optical memory device, the researchers at Karlsruhe Institute of Technology and the universities of Münster, Oxford, and Exeter have brought optical computing a step closer.
This new memory utilizes phase change materials that can have their optical properties manipulated by strong light pulses. In this case the material used was Ge2Sb2Te5 (GST), because its phase changes can be triggered by ultrashort light pulses, allowing the memory to operate at high speed. It is speed that is critical here, because while optical signals travel faster than electrical ones, the need to convert between them for processing, storage, and transmission limits the benefits of using light within a computer. By allowing optical signals to be stored directly in a memory device, without conversion, one of these limitations is lifted, opening up the potential for faster and more efficient computers.
In addition to these benefits, the memory device can also store data more efficiently than traditional electronic memories, because its elements can exist in more states than just 0 or 1. This new memory can also store the data for decades without power.
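The density gain from multi-state elements follows directly from information theory: an element that can be set to any of n distinguishable states stores log2(n) bits. A quick sketch (the state counts here are hypothetical, chosen just for illustration):

```python
import math

# Bits stored per memory element as the number of distinguishable
# states grows; 2 states is ordinary binary storage.
for states in (2, 4, 8):
    print(states, "states ->", math.log2(states), "bits per element")
```

So an element with four reliably distinguishable states holds twice the data of a binary one, with no increase in element count.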
Posted: September 23, 2015 06:43AM
To one day build a quantum Internet, it will be necessary to reliably transmit or teleport quantum information over large distances. Researchers at NIST have recently set a new record for quantum teleportation distance through fiber optic lines of over 100 km, more than quadrupling the previous record.
Quantum teleportation is not like the teleporters found in science fiction; nothing physical is relocated, but rather the quantum state of something is transferred or reconstructed somewhere else. In this case the quantum state of one photon was teleported onto another that ran through the 102 km long fiber optic coil. The state in question was the position a photon had in a series of time slots just one nanosecond long. First the photon was sent through either a long or short fiber coil to put it into a superposition, as the coil length would determine its temporal position. This photon was then split by a special crystal into two identical and entangled photons, with one going on to the 100 km coil. The other, helper photon would head to a beam splitter, where it and another input photon each have a fifty-fifty chance of going through the splitter or being reflected. This input photon's state can be set to be early, late, or in a superposition. Detectors are set up to record the helper and input photons, and if one detects a photon before the other, that means they were in opposite states. Because the output photon and the helper photon are entangled and in identical states, the output photon will also have the opposite state of the input photon. In other words, the opposite state of the input photon has been teleported to the output photon, which just traveled through some 100 km of fiber optic line.
To make this experiment work, it was very important that the researchers use new, very sensitive single-photon detectors made at NIST. After traveling through 100 km of fiber optic cable, only 1% of photons survive, which is why those detectors are so important. On top of that, the teleportation is only successful 25% of the time at best, but with those detectors it was successful 83% of that maximum, proving it was quantum teleportation happening, and not just coincidence. Potentially this work could lead to quantum repeaters for building a quantum Internet or advancing quantum computing.
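Combining the figures above gives a feel for how rare a successful event is. This is back-of-the-envelope arithmetic only, and it assumes the three factors are independent:

```python
# Rough combined odds of a successful teleportation event over the
# 100 km link, using the figures quoted above.
survival = 0.01           # fraction of photons surviving 100 km of fiber
max_success = 0.25        # best-case teleportation success rate
achieved_fraction = 0.83  # fraction of that maximum actually achieved

per_photon = survival * max_success * achieved_fraction
print(per_photon)  # roughly two successes per thousand photons sent
```

Even at those odds, a fast pulsed source can produce successful events at a practical rate, which is why detector sensitivity mattered more than raw success probability.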
Posted: September 22, 2015 02:35PM
Photons are critical to many advanced technologies, such as telecommunications where the massless quanta carry information at the fastest possible speed. Thanks to researchers at NIST though, photons may get a new and powerful use thanks to a curious interaction between pairs of them.
Back in 2013, researchers at Harvard, Caltech, and MIT were able to bind two photons together so they would travel superimposed on each other. By tweaking the parameters somewhat, the NIST researchers have theoretically shown that a pair of photons could be made to travel alongside each other at a specific distance. This is similar to the structure of two hydrogen atoms next to each other in a hydrogen molecule, though it is not actually a photon molecule. Still, two photons bound together and interacting with each other could have some interesting applications, including more precisely calibrating light sensors, and, if they are also entangled, processing information. This could dramatically impact telecommunications by removing the need to convert the light in fiber optic cables to electronic signals for processing.
Additionally, we could see this lead to more complex structures of photons being built. While lightsabers are still firmly in the realm of fiction, this kind of research can bring them a bit closer to reality.
Posted: September 22, 2015 06:06AM
Securing credit card information is very important, as anyone whose information was taken in a large data breach or small skimming scheme knows. Actually protecting this information can be difficult though, because some proposed methods require new and expensive systems. Researchers at Lehigh University though have developed a new system for protecting your data that is compatible with current credit card readers.
This new system is called SafePay and replaces credit cards with a magnetic card chip that is controlled by an app on your phone. Traditional credit cards store information along their strips in plain text, making them susceptible to various attacks. With SafePay though, the information is sent to the magnetic card chip as a wave file only when needed. This information is also disposable, expiring after a certain amount of time or after so many uses, so even if the data is stolen, it might not be usable. The phone app gets the disposable information from a bank server and transmits it to the magnetic card chip via the headphone jack or Bluetooth.
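The expiry logic behind disposable card data can be sketched as a toy model. The class below is only an illustration of the idea, not the actual SafePay implementation, and the token, lifetime, and use limit are made up:

```python
import time

class DisposableCardData:
    """Toy model of disposable card data: the token expires after a
    time window or a fixed number of uses (illustrative sketch only,
    not the actual SafePay protocol)."""
    def __init__(self, token, lifetime_s, max_uses):
        self.token = token
        self.expires_at = time.time() + lifetime_s
        self.uses_left = max_uses

    def redeem(self):
        """Return the token if still valid, consuming one use."""
        if time.time() > self.expires_at or self.uses_left <= 0:
            return None  # stale or exhausted data is worthless to a thief
        self.uses_left -= 1
        return self.token

card = DisposableCardData("4111-ONE-TIME", lifetime_s=60, max_uses=1)
print(card.redeem())  # the token: valid on first use
print(card.redeem())  # None: already spent
```

Because validity is enforced server-side, a skimmer that copies the magnetic data captures something that may already be useless by the time it is replayed.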
The researchers have already successfully tested SafePay in the real world with a vending machine, a gas station, and a university coffee shop. As it will work with existing card readers, and the magnetic card chips cost about fifty cents or less, it has the potential to provide secure transactions without incurring the costs of other new technologies.
Source: Lehigh University
Posted: September 21, 2015 02:31PM
Catalysts are very important materials as they make some nearly impossible chemical reactions possible. Typically though, the catalysts used in various devices on a day to day basis require rare and expensive materials like platinum. Finding ways to replicate these catalysts while reducing costs has been a major goal of many researchers, and now those at MIT and Berkeley Lab have created a new kind of catalyst that may just achieve that goal.
An essential ingredient for many chemical reactions is energy, and some reactions can require a lot of energy to convert the ingredients into the final products. What a catalyst does is add intermediary steps that require less energy to complete, and in the end the catalyst remains, ready for more reactions. By tuning a catalyst it is possible to improve its performance and make it more selective, but the kinds of catalysts that can be tuned also tend to be fragile and difficult to process into a device. This is why the more durable but rarer catalysts, like platinum, are used. The MIT researchers, however, have discovered how to make graphite into a tunable catalyst, by finding a way to chemically alter the surface of the graphene sheets that make up graphite in order to tune its properties.
As graphite is already a kind of universal electrode, producing it and adding it to devices is going to be easy and cheap. The chemistry the researchers used is also well understood and can be scaled up, making this a potentially ideal, tunable catalyst. Applications include converting carbon dioxide into fuels and fuel cells.
Posted: September 21, 2015 05:40AM
There is a good chance that in the future computers will rely on the spin of electrons to carry information, instead of their charge as is the current design. That transition is still far in the future though, as the required technologies are still being developed. Researchers at the University of Groningen, Utrecht University, the Université de Bretagne Occidentale, and the FOM Foundation have recently created an electrical circuit that features a magnetic insulator and uses spin waves, which had been thought impossible before.
The reason it was believed impossible to use a magnetic insulator in an electrical circuit like this is that it would take far too much energy to generate the spin waves in the insulator. The researchers have gotten around this limitation by carefully designing the geometry of the system and, instead of generating spin waves, utilizing those naturally present. The circuit consists of the magnetic insulator yttrium iron garnet (YIG) with platinum strips on both sides. When an electron in one strip strikes the YIG, it creates a small disturbance in the spin waves already present from thermal fluctuations. Once the perturbation reaches the other platinum strip, the spin is passed onto an electron in the platinum, which influences the electron's motion, creating a measurable current.
Circuits transmitting information via spin waves could potentially be more efficient than current designs.
Source: University of Groningen
Posted: September 18, 2015 06:36AM
Magnetism has been used in one way or another for centuries, but despite this we are still finding new applications for magnetic phenomena. Spintronics is one example, as future computers could rely on the spin of electrons, the fundamental property that leads to magnetism, much as modern computers use the charge of electrons. Now researchers at New York University, Stanford University, and the SLAC National Accelerator Laboratory have created a magnetic wave first predicted in the 1970s.
To find these waves the researchers had to use an ultrafast X-ray microscope, as they are otherwise too small to observe. They are similar in concept to water waves in that a shift in the balance of forces can cause smaller waves to form together into one, larger wave. These magnetic waves are very different though, in that they are stable and can flow over a magnet with very little resistance. This is in stark contrast to the electrical signals currently used to transmit data in computers, which generate heat as they move and require more energy as a result.
To find the magnetic waves, the researchers watched a magnetic material with an X-ray microscope with great spatial and temporal resolution, so small and quick changes could be observed. They then applied an electrical current to excite spin-waves that ultimately formed the magnetic waves they were searching for.
Source: New York University
Posted: September 17, 2015 09:18AM
With all of the crazy things that can go on in quantum mechanics, many people are probably happy we live in a largely classical world. The question this brings up, then, is where the quantum world stops and the classical world begins. One of the areas we may find this bridge is the Mott transition in superconductors, and researchers at Argonne National Laboratory have recently made some important observations of that phenomenon.
The Mott transition is an insulator-to-metal transition that some materials go through, even though established quantum mechanics states the materials should always be insulators. It is an odd phenomenon that exists right at the border between the quantum and classical worlds, and no one yet knows if it is a quantum or classical effect. To study it the researchers turned to superconductors, which also span that gap. When a magnetic field is applied to a superconductor, it will penetrate and form vortices within the material that affect its electronic and magnetic properties. Normally these vortices are in an equilibrium state and will not move, as in an insulator, but by applying an electric current the Mott transition can be triggered, making them conducting. Normally this transition is driven by temperature, but by inducing it electrically, we can have far better control over the phenomenon, making it easier to analyze and use.
This study has a variety of applications. Beyond superconductivity and the relationship between quantum and classical physics, it could also advance our understanding of many-body systems and out-of-equilibrium systems, neither of which is well understood currently, and potentially computing. The transition between the insulating and conducting states occurred at smaller scales than in silicon transistors, so we could one day see it applied to replace them.
Source: Argonne National Laboratory
Posted: September 16, 2015 05:49AM
For years we have seen hardware designers adding cores to CPUs to increase their performance, and this trend is not going to change. In the future, though, some problems are going to come up as it becomes harder to manage more and more cores. Researchers at MIT, however, have addressed part of this future problem by developing the first new cache-coherence mechanism in thirty years.
In a modern multicore CPU, there are multiple cache levels for the cores to store data in: one level is a cache shared by every core, while another holds each core's individual cache. If multiple cores are working on the same piece of data, problems can arise if one core makes changes to it. To avoid this, the current cache-coherence scheme maintains a directory of which cores are using the data, so the other cores can be told of any changes to it. As the number of cores rises, so does the size of that directory, which hurts performance, as does having to fetch the new version of the data from the shared cache.
The current scheme applies physical-time order, but the new one from MIT applies logical-time order. The difference is that instead of requiring everything to happen in actual temporal order, with overwritten data dealt with immediately, the cores can continue to work on older data, and their results will be treated as older. When a core retrieves data from the shared cache, it gets a lease on it for a set number of cycles, and the core puts timestamps on its work, so until its timestamps surpass the lease it is free to work on that data. If another core needs to overwrite that data, it gives the new version a timestamp greater than the original lease, ahead of where the cores actually are. Now, until a core reaches the timestamp of the new version it can continue working on the old one, and each core will know the work it does is on the older version. Once past the new version's timestamp, though, it will know to use the new version.
Because of the time-traveling nature of this scheme, with timestamps for data being set in the future compared to where the cores actually are, the researchers have named the system Tardis. While they predict Tardis will have a significant impact as core counts increase, as the size of the directory will grow logarithmically with the core count instead of linearly, they also do not see it being deployed any time soon. This is one area no hardware manufacturer wants to risk making mistakes with, but fortunately the researchers have also written a paper proving the scheme will work, in addition to proposing it.
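The lease-and-logical-timestamp idea described above can be sketched in a few lines of code. This is a toy illustration, not MIT's actual design: the class names, the lease length, and the single shared cache line are all invented for demonstration.

```python
# Toy sketch of lease-based logical-time coherence (invented names, not
# the researchers' implementation). Each core carries a logical clock;
# reads lease a value forward in logical time, and writes are stamped
# past every outstanding lease, "in the future".

class SharedLine:
    """One cache line in the shared cache, tracked in logical time."""
    def __init__(self, value):
        self.value = value
        self.wts = 0    # logical timestamp of the last write
        self.lease = 0  # readers may use this value up to this timestamp

class Core:
    def __init__(self):
        self.ts = 0     # this core's logical clock

    def read(self, line, lease_len=10):
        # The core's clock jumps to at least the write timestamp, so it
        # never observes a value "before" it was written; the lease is
        # extended so the core may keep using the value for a while.
        self.ts = max(self.ts, line.wts)
        line.lease = max(line.lease, self.ts + lease_len)
        return line.value

    def write(self, line, value):
        # A write must be ordered after every outstanding lease, so its
        # timestamp is placed past the lease rather than waiting for
        # readers to finish in physical time.
        self.ts = max(self.ts, line.lease) + 1
        line.value = value
        line.wts = self.ts

line = SharedLine(1)
a, b = Core(), Core()
a.read(line)        # core A leases the old value and may keep using it
b.write(line, 2)    # core B's write is stamped after A's lease
print(b.ts > a.ts)  # B's logical clock is ahead of A's -> True
```

Note that core A never has to be interrupted: its work is simply ordered before B's write in logical time, which is the key departure from physical-time coherence.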
Posted: September 15, 2015 06:32AM
Counterfeit products can lead to many serious problems, such as insecure computer hardware, incorrect medication, and more. To fight these issues, many anti-counterfeit devices have been developed, but these are all applied to the packaging and after production. This makes the devices easier to copy, but researchers at the University of Bradford and Sofmat Ltd have a new solution that can be applied to the product itself.
The new system applies a 3D barcode directly to the product that is so small you can neither see nor feel it. To create the barcode, pins attached to micro-actuators are used, with each step of a pin representing a letter or number. The steps are just 0.4 microns apart, making the accuracy of the device very important, but its strength matters too, so the position does not shift during use. Reading the barcode currently requires special equipment, such as a white light interferometer or a laser-scanning confocal microscope, but the researchers are working on a laser scanner that can wirelessly connect to a phone or tablet.
The prototype the researchers built has a four-pin array, so it has 1.7 million possible configurations, but future versions could have far more. It should also be possible to have patterns imprinted onto the pin heads, further increasing security, and changing the pins' positions for individual products would improve it even more. Though developed to work with plastics and injection molding, it is possible to use this system to stamp or emboss the codes onto other products.
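The 1.7 million figure is easy to sanity-check. Assuming each of the four pins can encode any of 36 alphanumeric characters (26 letters plus 10 digits), an assumption of mine rather than a detail from the article, the count falls out directly:

```python
# Sanity check of the configuration count for the four-pin prototype.
# Assumption (mine, not stated in the article): each pin step encodes
# one of 36 alphanumeric characters (26 letters + 10 digits).
pins = 4
symbols_per_pin = 26 + 10
configurations = symbols_per_pin ** pins
print(configurations)  # 1679616, i.e. roughly 1.7 million
```

Under that assumption the math matches the article's claim, and it also shows why adding pins helps so much: every extra pin multiplies the count by another 36.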
Source: University of Bradford
Posted: September 14, 2015 05:29AM
Nature still has many tricks we can learn from. One more example comes from MIT, where researchers are applying knowledge of the crosslinking seen in the threads mussels use to anchor themselves to rocks. These bonds appear elsewhere in nature, but they can also be used to create a material that changes color based on its environment.
The material is a combination of rare-earth metals with the widely used polymer polyethylene glycol (PEG). This material can have its light emission tuned to produce a variety of colors, and even multiple colors resulting in white light. The crosslinking though is very sensitive to external parameters, so when something in the environment changes, the bonds and thus the color emission will change. The material could be designed such that pollutants, toxins, pathogens, mechanical pressures, and more will cause the color emission to immediately change, alerting people.
Adding to this material's potential, it can be made into a gel, a thin film, or a coating for application on structures. It also helps that the hydrogel scaffold used in this research is commonly used in biological and polymer physics studies, so it is already well understood by many.
Posted: September 11, 2015 06:06AM
Researchers at Penn State have recently examined how compensation and free products can influence reviews by bloggers. Some fear that when companies provide compensation to bloggers who review their products, the bloggers are more likely to give those products favorable reviews. According to the study this is not the case, and the reviewers tend to believe they have more control over the organization providing the compensation.
For the study the researchers sent a questionnaire to 173 technology bloggers, as these bloggers tend to receive more compensation and review items than others. The results showed that the bloggers recognized that giving positive reviews for bad products would damage their credibility, and therefore their readership. Also the bloggers view PR representatives as sources for stories, and not income sources, while PR professionals view their relationships with bloggers as a way to spread the word, and not as a form of advertising. Further, requesting a positive review for compensation would kill the relationship with the blogger.
As the researchers point out, the compensation, such as receiving a product to review, balances the needs of both parties: the blogger needs the product to review it, and the company wants to see its products talked about.
Source: Penn State
Posted: September 10, 2015 06:43AM
There are several technologies we want to use in certain applications, but hurdles still exist preventing us from doing so. One example is carbon nanotubes in integrated circuits, because when exposed to air, the nanotubes quickly degrade. Researchers at Northwestern University, though, have developed a solution akin to the one found for OLEDs, which suffered from the same problem.
Carbon nanotubes have practically all of their atoms on their surface, so anything that interacts with the surface can dramatically affect their properties. Water and dust from the air are two things that can damage the nanotubes, resulting in them only surviving for hours instead of years or decades. To solve this problem the researchers developed a material to encapsulate the nanotubes. Organic LEDs have a similar problem and encapsulation layers had to be developed for them as well, which inspired this work, but the encapsulating material has been tailored for nanotubes.
To test their encapsulation method, the researchers built static random-access memory (SRAM) circuits, which can make up as much as 85% of some modern CPUs. The encapsulated nanotubes not only survived longer than their exposed counterparts, but also showed improved spatial uniformity. As the encapsulation layer can be applied easily and cheaply, we could see many interesting applications for nanotube-based circuits, such as being integrated into credit cards to store additional security information.
Source: Northwestern University
Posted: September 9, 2015 06:20AM
Some things are harder for computers to render than others, because of the physics involved in making the resulting image realistic. Granular materials are an example of this, because the light can be affected by each grain it hits, which adds up when there are millions of grains. To accelerate the process, researchers at Karlsruher Institut für Technologie have developed a new process that chooses the right technique based on the scale involved.
At smaller scales, when only a few grains are involved, the classical approach of path tracing the light works well. At larger scales this method would require hundreds to thousands of processor hours, so the process switches to volumetric path tracing, like that used to render clouds or fog. It turns out these methods can accurately represent light passing through granular materials, and can be quite efficient at doing so. A diffusion approximation can also be applied when the grains are bright and strongly reflective, as with snow.
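The core of the hybrid approach is a scale-based dispatch, which can be sketched schematically. The function name and thresholds below are invented for illustration; this is not the researchers' code:

```python
# Schematic sketch of the hybrid rendering idea (invented function and
# thresholds, not the researchers' implementation): choose a technique
# from the number of grains and the grain type.

def choose_technique(grain_count, bright_reflective=False):
    """Return a rendering method suited to a granular material."""
    if grain_count < 1_000:
        # Few grains: explicitly path trace light through each one.
        return "path tracing"
    if bright_reflective:
        # Bright, strongly reflective grains, such as snow.
        return "diffusion approximation"
    # Many grains: treat the aggregate like a cloud or fog.
    return "volumetric path tracing"

print(choose_technique(50))                # path tracing
print(choose_technique(10_000_000))        # volumetric path tracing
print(choose_technique(10_000_000, True))  # diffusion approximation
```

The speedup comes from the dispatch itself: explicit path tracing is only paid for when the grain count is small enough for it to be affordable.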
Compared to current methods, this new approach can be ten to several hundreds of times faster, depending on the material, without compromising image quality.
Posted: September 8, 2015 02:40PM
Water is an interesting material in how differently it can behave depending on the situation. In some cases it will slip and slide over a surface (taking people with it), and in other cases it will cling to wherever it is. Making it slide around when it wants to stick can be tricky, but researchers at Penn State have managed to achieve this by creating a surface that droplets will easily move around on.
When a water droplet is on a rough surface, it can either partially float on a layer of air or be in full contact with the surface, sticking in one place. The former state is called Cassie and the latter Wenzel, after the physicists who first described them. Typically droplets in the Wenzel state will not move, but the Penn State researchers figured out how to free them. They did this by etching a silicon surface to have micrometer-scale pillars, and then etching nanotextures into the pillars. These nanotextures then had a layer of lubricant infused into them, making the surface very slippery to the droplets.
This discovery could prove useful for any system that uses water condensation for heat transfer, and could outperform superhydrophobic materials that repel water. While the surface was made of silicon, this approach should be applicable to other materials, including metals, glass, ceramics, and even plastics.
Source: Penn State
Posted: September 8, 2015 05:24AM
The idea of wormholes has existed for several decades now, but as yet these structures for connecting remote regions of space have only existed in science fiction. None are known to exist in the Universe and there is no known way to manipulate the gravitational energy necessary to create one. However, we do know how to manipulate electromagnetic fields and now researchers at the Universitat Autonoma de Barcelona have successfully created a magnetic wormhole.
Using metamaterials and metasurfaces, the researchers created a special tunnel that a magnetic field could enter on one side, travel through, and exit on the other side without being detected along the way. While the device itself is visible to us, to a magnetic field it is undetectable. This makes the exit look like a magnetic monopole, as a single pole, north or south, appears to be alone there, because the other end of the magnet or electromagnet is hidden on the other side of the device.
This discovery could prove very valuable in medicine and other sciences that use MRI or NMR machines. By allowing a magnetic field to be displaced from its source like this, a patient could be at a more comfortable distance from an MRI machine's detectors, and multiple parts of the body could be imaged at the same time as well.