
Science & Technology Article (1)

How To Setup Folding@Home: 2018 Edition

» June 20, 2018 04:00PM

Science & Technology News (30)

Researchers Find GPUs Can Be Used to Spy on Users

Category: Science & Technology
Posted: November 7, 2018 09:56AM
Author: Guest_Jim_*

Since the beginning of the year we have been seeing a large number of side-channel attacks developed that could leak sensitive information, with the most prominent of these vulnerabilities related to the architectural designs of modern CPUs. It turns out CPUs are not the only hardware that could be used to compromise your information, as researchers at the University of California, Riverside have successfully used NVIDIA GPUs to spy on web browsing activity, compromise passwords, and attack cloud-based computational applications.

The premise of the first two attacks is to access the system's graphics API, such as OpenGL, as everything you see is being rendered by your GPU. To track someone's movements on the Internet, the GPU's memory utilization is measured, and as this is based on the number and sizes of the objects rendered on a webpage, it is relatively unique. It is also consistent between cached and uncached pages. By using a machine learning based classifier, the researchers were able to fingerprint websites with high accuracy. The second attack compromises passwords by recording the time between changes to the texture within a textbox. As this texture needs to be updated as each character is entered, it is possible to learn the length of a password and the inter-keystroke timing, which is an already established means of compromising passwords. Finally, the researchers developed a third attack for cloud-based neural-network applications that utilize CUDA acceleration, inserting a malicious computational workload to measure performance counter traces. From this information, the attacker could learn the structure of the victim's network, which would otherwise be secret.
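The keystroke-timing half of the attack can be sketched in a few lines. The timestamps below are made-up stand-ins for captured texture-update events, not the researchers' data; the point is just how much an attacker learns from update times alone:

```python
import statistics

# Hypothetical trace of GPU texture-update timestamps (seconds) captured
# while a victim typed into a password box -- one update per keystroke.
texture_updates = [0.000, 0.180, 0.310, 0.555, 0.700, 0.905, 1.120, 1.250]

# Password length leaks directly: one texture update per character.
password_length = len(texture_updates)

# Inter-keystroke intervals are the side channel proper: timing between
# key pairs correlates with which keys were pressed (e.g. same finger vs
# alternating hands), narrowing the search space for the password.
intervals = [b - a for a, b in zip(texture_updates, texture_updates[1:])]

print(f"inferred length: {password_length}")
print(f"intervals (s): {[round(i, 3) for i in intervals]}")
print(f"mean interval: {statistics.mean(intervals):.3f} s")
```

Feeding such interval sequences to a trained classifier is what turns raw timing into password guesses.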

As the first two attacks only need access to a graphics API, like OpenGL, WebGL, and Vulkan, potentially any user is vulnerable, as these are pre-installed on most computers. The researchers only developed their attacks on NVIDIA GPUs, and NVIDIA has said it will issue a patch allowing access to performance counters to be disabled for user-level applications, but a draft of the research paper has also been shared with AMD and Intel to determine their products' potential vulnerability.

Source: University of California, Riverside

First Exascale-Class Supercomputer, Shasta, Introduced by Cray

Category: Science & Technology
Posted: October 30, 2018 11:29AM
Author: Guest_Jim_*

Today Cray revealed its new supercomputing system called Shasta that has been designed to enable scalability up to the exascale. The company also revealed it has been contracted to build a Shasta supercomputer by the National Energy Research Scientific Computing Center (NERSC) for its NERSC-9 system, also called Perlmutter, in 2020. The Shasta design has a number of interesting and valuable features including the ability to work with many computing solutions to provide researchers with whatever they need, significantly increased bandwidth per node, and support for IP-routed and remote memory operations. The capability for various computing systems means one Shasta-based supercomputer can have x86 and ARM processors, GPUs, and Field Programmable Gate Arrays (FPGAs) in it, and in the future, when we see specialized AI accelerators, they too can be added.

Cray's partners for Shasta include AMD, whose EPYC processors will be used in the NERSC system and whose Radeon Instinct GPUs are supported; Intel, which has worked with Cray to optimize the platform for Xeon Scalable processors; NVIDIA, whose GPUs are going into Perlmutter; Marvell, which works with Cray on ARM CPUs; and Mellanox, whose InfiniBand interconnect solution is supported. Intel's Omni-Path and Cray's new Slingshot supercomputing interconnect are also supported by Shasta.

Source: Cray

MIT Develops DAWG System for Protecting Cached Data

Category: Science & Technology
Posted: October 19, 2018 07:23AM
Author: Guest_Jim_*

At the beginning of the year, the technology security world got a significant wake-up call as vulnerabilities in speculative execution were publicly revealed. These attacks were called Meltdown and Spectre, and since then more have been discovered. Speculative execution is a performance-increasing solution found within many CPU architectures that allows the processor to work ahead of the commands or data given to it, so when the command is sent or the data is ready, the answer can be provided. What was discovered is that this could be abused to get a process to access information it should not be able to, and then expose this information to a malicious actor. While speculative execution is used in AMD, ARM, and Intel processors, Intel's were more susceptible, and the mitigations also hurt performance for some tasks.

Wanting to develop a more efficient means of protection, MIT researchers have created Dynamically Allocated Way Guard, or DAWG, named in reference to Intel's Cache Allocation Technology, CAT. While both are meant to protect the information within a CPU's cache, CAT still leaves some information exposed that could be used for a timing attack. DAWG, however, sets clear boundaries on what resources should and should not be shared, and does so at comparable performance to CAT.
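As a rough illustration of the idea, and not DAWG's actual hardware interface, a toy model of a partitioned cache set shows why cross-domain eviction signals disappear when each security domain can only evict within its own ways:

```python
# Toy model of cache way partitioning in the spirit of DAWG: each security
# domain is pinned to its own subset of ways in a set, so one domain's
# evictions are never observable by the other. Names here are illustrative.

class PartitionedCacheSet:
    def __init__(self, ways_per_domain):
        # e.g. {"victim": 2, "attacker": 2} -> a 4-way set split in half
        self.partitions = {d: [None] * n for d, n in ways_per_domain.items()}

    def access(self, domain, tag):
        ways = self.partitions[domain]          # only this domain's ways
        if tag in ways:
            return "hit"
        ways.pop(0)                             # evict within partition only
        ways.append(tag)
        return "miss"

cache_set = PartitionedCacheSet({"victim": 2, "attacker": 2})

# Attacker primes its partition, victim then touches many lines...
for tag in ("A1", "A2"):
    cache_set.access("attacker", tag)
for tag in ("V1", "V2", "V3", "V4"):
    cache_set.access("victim", tag)

# ...yet the attacker's probe still hits: no cross-domain eviction signal.
print(cache_set.access("attacker", "A1"))  # "hit"
print(cache_set.access("attacker", "A2"))  # "hit"
```

In an unpartitioned set, the victim's four accesses would have evicted the attacker's lines, and those probe misses are exactly the timing signal prime-and-probe attacks measure.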

As impressive as DAWG is, it does not currently protect against all speculative attacks, but it has been proven to protect against many non-speculative attacks used against cryptographic software. The team is working to make DAWG a solution against all known speculative execution attacks, though, and is hoping companies will be interested in adopting its idea, or similar ones, to protect against data breaches.

Source: MIT

AI Applied to Create Synthetic MRI Imagery for Studying Abnormalities

Category: Science & Technology
Posted: September 17, 2018 08:46AM
Author: Guest_Jim_*

A complicating factor in trying to study and ultimately cure illnesses is that they can be uncommon enough to make it difficult to have real data to work from. Examples of this are Alzheimer's and brain tumors, but a group of researchers from NVIDIA, the Mayo Clinic, and the MGH & BWH Center for Clinical Data Science applied a type of AI system to produce the desired information. Not only could this lead to improving our understanding of those illnesses, but it could potentially be applied to others.

The researchers used what are called Generative Adversarial Networks (GANs), which pit two AI networks against each other. While one works to generate a synthetic result, the other works to identify it as synthetic. As this process continues through multiple iterations, both networks improve. The researchers fed the GANs real MRI images of people with Alzheimer's and brain tumors, with the challenge being to create synthetic versions that are still accurate enough for study. The training also involved the use of labels that allow researchers to specify characteristics such as the size and location of a tumor.
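The adversarial loop itself is simple to sketch. The toy below trains a one-dimensional generator against a logistic discriminator so the generator's samples drift toward the "real" distribution; the tiny linear models and the N(3, 0.5) target are illustrative stand-ins, not the paper's architecture or data:

```python
import numpy as np

# Minimal 1-D GAN sketch of the alternating training described above: the
# generator G(z) = mu + sigma*z learns to mimic "real" samples from N(3, 0.5)
# while the discriminator D(x) = sigmoid(w*x + b) learns to tell them apart.

rng = np.random.default_rng(0)
mu, sigma = 0.0, 1.0        # generator parameters
w, b = 0.0, 0.0             # discriminator parameters
lr, batch = 0.05, 64

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for step in range(2000):
    real = rng.normal(3.0, 0.5, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = mu + sigma * z

    # Discriminator: gradient ascent on log D(real) + log(1 - D(fake))
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    w += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    b += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator: gradient ascent on log D(fake), i.e. move samples toward
    # where the discriminator currently scores them as real
    d_fake = sigmoid(w * fake + b)
    mu += lr * np.mean((1 - d_fake) * w)
    sigma += lr * np.mean((1 - d_fake) * w * z)

print(f"generator mean after training: {mu:.2f} (target 3.0)")
```

The same alternating structure scales up to image-generating networks, where conditioning labels (as in the MRI work) steer what the generator produces.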

Another advantage to this method, besides having more data to work from, is that this could be used to anonymize real MRI images, protecting the privacy of patients when any data needs to leave the hospital. More work still needs to be done though, including blind testing to guarantee the quality of the synthetic images.

Source: ZDNet

Nano-Sandwich Design Found to Improve Heat Transfer for 2D Materials in Nanoelectronics

Category: Science & Technology
Posted: September 12, 2018 12:33PM
Author: Guest_Jim_*

Potentially just months from now we will see the first processors manufactured on a 7 nm process ship, and with them a likely significant step in performance and efficiency. As impressive as this transition might be though, we are approaching the limits of what is possible with silicon-based electronics. Among the materials being considered and developed to replace ubiquitous silicon are 2D materials, such as graphene, and now a solution has been found for what would likely have been a significant issue.

Among the reasons there is a desire to move to 2D materials is that they can allow for much smaller chips to be made, compared to traditional 3D materials, and they can also bring other, special properties. The issue is that components made with them are prone to overheating. This may seem odd, especially for graphene, which is a good thermal conductor, but transferring heat from the material to the silicon base it is built on is not so easy. However, researchers at the University of Illinois at Chicago have discovered that adding an ultra-thin layer of another material on top can significantly improve this heat transfer. The researchers tested this with silicon oxide as the base, a carbide as the 2D material, and an ultra-thin layer of aluminum oxide encapsulating it, and found the conductance was twice what it was without the aluminum oxide.

This test setup is only an experimental model, but it does still demonstrate a way to increase heat transfer, which is critical for potentially making 2D nanoelectronics in the future. Next the researchers want to work with different materials for the encapsulating layer, to see if this can improve the heat transfer even more.

Source: University of Illinois at Chicago

Graphene Found to Convert Gigahertz Signals to Terahertz Efficiently

Category: Science & Technology
Posted: September 11, 2018 07:57AM
Author: Guest_Jim_*

Graphene is a very interesting material, being a single-atom thick sheet of carbon with amazing electrical and physical properties. It was theoretically predicted some time ago that it should also be able to act as a nonlinear material, meaning it can take a signal and multiply its frequency. Now this effect has finally been demonstrated by researchers at the Helmholtz Zentrum Dresden-Rossendorf and the University of Duisburg-Essen, with cooperation from the Max Planck Institute for Polymer Research.

To help envision what a nonlinear material does, imagine shining a pure red light, like a laser, into a material and what comes out is blue light. In this case we are working with electrical instead of optical signals and are going from gigahertz frequencies to terahertz, so from billions of cycles per second to trillions of cycles per second. More specifically, the researchers produced signals with frequencies between 300 and 680 GHz and then saw the graphene monolayer output signals with frequencies three, five, and even seven times higher, placing them within the terahertz domain.
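The frequency-multiplying behavior of a nonlinear medium is easy to demonstrate numerically. Here a tanh saturation stands in for graphene's actual response, and the scaled-down frequencies are illustrative; because the nonlinearity is odd-symmetric, only the odd harmonics (3x, 5x, 7x) appear, matching the multiples reported above:

```python
import numpy as np

# Drive an odd-symmetric nonlinearity (tanh saturation, a stand-in for
# graphene's response) with a pure sine and inspect the output spectrum.

fs, f0, n = 65536, 64, 65536          # sample rate, drive frequency, samples
t = np.arange(n) / fs
drive = np.sin(2 * np.pi * f0 * t)    # pure input tone (scaled units)
output = np.tanh(4.0 * drive)         # nonlinear, saturating response

# One-sided amplitude spectrum; with fs/n = 1 Hz resolution, bin = frequency
spectrum = np.abs(np.fft.rfft(output)) / n

for k in (1, 3, 5, 7):                # odd harmonics carry the energy
    print(f"harmonic {k}x at {k * f0} Hz: amplitude {spectrum[k * f0]:.4f}")
print(f"even harmonic at {2 * f0} Hz: amplitude {spectrum[2 * f0]:.2e}")
```

The even harmonic comes out at numerical zero, while the 3rd, 5th, and 7th harmonics fall off gradually, which is the same qualitative pattern the researchers measured in the terahertz output.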

Due to its high conductivity, many have been hoping to see graphene enter electronics as a replacement for silicon, and while that might still be a ways off, this research will likely contribute to it becoming a reality. Thanks to the experiments agreeing with the theoretical models predicting this behavior, it will be easier to foresee how graphene-based electronics will behave at ultrahigh speeds, by using those models.

Source: Helmholtz Zentrum Dresden-Rossendorf

Material with Second-Best Thermal Conductivity Created

Category: Science & Technology
Posted: July 6, 2018 08:37AM
Author: Guest_Jim_*

Carbon has all kinds of uses between its different forms, and some of them you might not realize. For example, diamond has the best known thermal conductivity at roughly 2200 watts per meter-kelvin (W·m⁻¹·K⁻¹), which would make it very useful in electronics, but the high cost of the material and its being an electrical insulator prevent it from being used to cool our devices. Luckily researchers at multiple universities have been working together over the years to create a material that is far more viable for this application.

Currently most modern processors are made from silicon, which has an adequate thermal conductivity of around 150 W·m⁻¹·K⁻¹, and when combined with other cooling methods it gets the job done, but as the demand for more powerful devices increases, so does the need for better cooling. This is where the work at the University of Texas at Dallas, the University of Illinois at Urbana-Champaign, and the University of Houston comes into play, as they have successfully synthesized a material with a thermal conductivity bested only by diamond. It was in 2013 that researchers at Boston College and the Naval Research Laboratory predicted boron arsenide could compete with diamond as a heat spreader, then in 2015 researchers at the University of Houston made a small, imperfect sample of it. Now the researchers from the three universities listed earlier have successfully created a sample of boron arsenide measuring 4 mm x 2 mm x 1 mm with a thermal conductivity of 1000 W·m⁻¹·K⁻¹. It took 14 days to grow the crystal in this experiment, but by increasing this time, it could have been made even larger. That smaller sample from 2015 was under 500 microns and had a thermal conductivity around 200 W·m⁻¹·K⁻¹.
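To put these conductivities in perspective, Fourier's law (Q = k·A·ΔT/L) gives the heat each material could carry through a spreader the size of the reported sample. The 10 K temperature drop is an illustrative assumption; the conductivity values are from the article:

```python
# Fourier's law comparison: heat a 1 mm-thick, 4 mm x 2 mm spreader (the
# sample size reported above) can carry across an assumed 10 K drop.

conductivity = {          # W/(m*K), values as reported in the article
    "silicon": 150,
    "boron arsenide (2015 sample)": 200,
    "boron arsenide (new)": 1000,
    "diamond": 2200,
}

area = 4e-3 * 2e-3        # 4 mm x 2 mm face, in m^2
thickness = 1e-3          # 1 mm path length, in m
delta_t = 10.0            # assumed temperature drop, in K

for material, k in conductivity.items():
    q = k * area * delta_t / thickness   # Q = k * A * dT / L, in watts
    print(f"{material:>30}: {q:6.1f} W")
```

Under these assumptions the new boron arsenide sample moves more than six times the heat silicon does through the same geometry, which is why it is so interesting as a heat spreader for chips.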

Key to both growing the crystal to this size and improving its thermal conductivity was the method used to grow it: chemical vapor transport. In this process, one side of a chamber is kept hot while the other is cold, and the raw materials are placed at the hot end. Another chemical then transports the boron and arsenic to the cold side, where they combine to form the boron arsenide crystals. It was only by tuning this process that it reached 1000 W·m⁻¹·K⁻¹.

While diamond does still beat boron arsenide for thermal conductivity, both boron and arsenic are cheaper materials and as boron arsenide is a semiconductor, it can be used with computer chips. Just as important as successfully making boron arsenide though is how it directly challenges a theory concerning thermal conductivity, which could lead other researchers to design and synthesize other materials with high thermal conductivities. For now though, the next step is to work on improving the synthesis process for large-scale applications.

Source: University of Texas at Dallas, University of Illinois at Urbana-Champaign, and University of Houston

Fast and Compact Quantum Random Number Generator Created

Category: Science & Technology
Posted: July 2, 2018 12:18PM
Author: Guest_Jim_*

Random numbers are used in many technologies, including those that secure the Internet and uniquely identify genuine products, but despite how much they are used, generating truly random numbers is not easy. While the patterns can be very well hidden, the truth is that there are patterns to many random number generators, making it possible to compromise whatever they are securing just by understanding the system creating them and knowing what numbers were produced before. Quantum mechanics offers a solution here, as the behavior of sub-atomic particles is unpredictable, and now researchers at Lancaster University have succeeded in creating a very simple quantum random number generator that could lead to a more secure digital future.

This new QRNG is actually a single diode, and the random number is generated by observing which of two paths within the diode electrons take. Being so small and simple, it could be integrated into future processors amongst the billions of other diodes already present, securing various electronic transactions. It could also find its way into Internet of Things devices and be combined with blockchain technology to create a powerful tool for securing products and entire supply lines.
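One classic way to post-process such a two-path measurement is von Neumann debiasing: even if the two paths are not taken with exactly 50/50 probability, examining disjoint pairs of independent bits and keeping only the mismatched ones yields uniform output. The article does not say Lancaster's device needs this step, and the 0.6 bias below is an illustrative stand-in:

```python
import random

# Simulate a biased two-path source (path "1" taken 60% of the time) and
# recover fair bits from it with von Neumann debiasing.

def biased_path_bits(n, p_upper=0.6, seed=42):
    rng = random.Random(seed)
    return [1 if rng.random() < p_upper else 0 for _ in range(n)]

def von_neumann_debias(bits):
    out = []
    for a, b in zip(bits[::2], bits[1::2]):   # look at disjoint pairs
        if a != b:                            # 01 and 10 are equally likely
            out.append(a)                     # keep first bit, discard 00/11
    return out

raw = biased_path_bits(100_000)
fair = von_neumann_debias(raw)
print(f"raw ones fraction:  {sum(raw) / len(raw):.3f}")
print(f"fair ones fraction: {sum(fair) / len(fair):.3f}")
```

The cost is throughput: roughly half the pairs are discarded here, which is one reason a fast physical source like a single diode is attractive.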



Source: Lancaster University

ORNL has 200 PFLOPS Supercomputer, Taking Performance Lead from China

Category: Science & Technology
Posted: June 9, 2018 07:07AM
Author: Guest_Jim_*

With the launch of the Summit supercomputer at Oak Ridge National Laboratory (ORNL), the United States of America has reclaimed the lead in supercomputing from China. This new system has a peak performance of 200 petaflops (200,000,000,000,000,000 floating point operations per second, or FLOPS), which is eight times that of Titan, the nation's previous top-ranked system. Both of these systems are at ORNL, which has a history of achievement with supercomputers. In 1988 it was ORNL scientists that achieved the first gigaflop calculations, then the first teraflop calculations in 1998 and the first petaflop calculations in 2008, and now, thanks to Summit, it will be involved with the first exascale scientific calculations, or exaops.
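The prefixes in that progression each step up by a factor of 1000, which a few lines of arithmetic make concrete (the Titan figure is simply Summit's peak divided by eight, per the comparison above):

```python
# Scale of supercomputer performance prefixes (FLOPS = floating point ops/s)
PREFIXES = {"giga": 1e9, "tera": 1e12, "peta": 1e15, "exa": 1e18}

summit_flops = 200 * PREFIXES["peta"]   # Summit's peak: 200 PFLOPS
titan_flops = summit_flops / 8          # Titan is ~1/8 of Summit

print(f"Summit peak: {summit_flops:.0e} FLOPS")
print(f"Titan peak:  {titan_flops:.0e} FLOPS "
      f"(~{titan_flops / PREFIXES['peta']:.0f} PFLOPS)")
print(f"Exascale is {PREFIXES['exa'] / summit_flops:.0f}x Summit's peak")
```

So even at 200 petaflops, Summit sits a factor of five below the exaflop milestone, although mixed-precision "exaops" workloads can cross that line earlier.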

Summit will be running some projects this year as ORNL and IBM go through the acceptance process for it. Ultimately its computing power will be used for research in energy, advanced materials, AI, and other domains.

Source: US Department of Energy

Organic Molecules and Methane Discovered on Mars

Category: Science & Technology
Posted: June 8, 2018 05:21PM
Author: Guest_Jim_*

Humanity has long wondered if there is more life somewhere in the Universe than what we know on Earth. The first place we look is to our nearest neighbors, the other planets and many moons in the Solar System, with Mars receiving special focus. Though it is not capable of supporting life today, there is evidence it could have in the past, and now the Curiosity rover has discovered organic molecules that may suggest there were areas that could have enabled life to exist.

In the mudstone that formed billions of years ago at the bottom of a lake, Curiosity discovered organic molecules, including thiophenes, benzene, toluene, and small carbon chains like propane or butane. Also, thanks to the longevity of the Curiosity rover, scientists have been able to observe a 'breathing' of methane in the Martian atmosphere, with the amount increasing in the summer and reducing in the winter.

While neither of these discoveries is a direct indication of there having been life on Mars, the scientists are taking them as good signs to continue digging. The molecules in the mudstone could have once enabled life to form in lakes when Mars still had liquid water on its surface, and the methane could have been produced by life. Methane is also produced by non-biological sources though, and while those organic molecules could have enabled life to form, they also might not have. Those molecules are still very interesting, because the radiation and chemicals found on the surface of the red planet will destroy organic molecules, yet these somehow survived, and did so in the top five centimeters of the surface. Altogether, the scientists take this as evidence to continue their search for life with current and future missions to Mars.

Source: NASA

Means of Expanding Walking Distance in Reality for Virtual Reality Experiences Developed

Category: Science & Technology
Posted: May 30, 2018 11:54AM
Author: Guest_Jim_*

Perhaps the best recognized issue with using virtual reality headsets is that a user's movements are limited by the real space they occupy, and potentially also the length of the cables connecting the headset to a computer. As reported by the Association for Computing Machinery, researchers at Stony Brook University, NVIDIA, and Adobe have come up with an interesting solution to the issue by exploiting a natural phenomenon of human vision. This solution will rotate the view, subtly turning the user without their knowledge, keeping them from colliding with walls or objects.

The way human vision and our brains work is more complicated than you may realize, and one example of this is saccades. Saccades are quick eye movements made between different fixation points that occur without our awareness. This is because the new visual input from this movement is ignored, a phenomenon called saccadic suppression, which means we are temporarily blind when it occurs. It is this blindness the researchers are taking advantage of. By tracking the user's eye movements, the headset can identify saccades and then alter the view, turning it slightly, which will result in the user turning as they walk through the space. This slight turning of the view happens without the user being aware of it, because of how subtle the change is and because it happens during saccadic suppression.

The result of this can be seen in the image below, where the orange line follows the path the user thought they walked in the virtual space, covering a 6.4 m x 6.4 m area, while the blue path is the actual path walked within the 3.5 m x 3.5 m room. This solution to walking in VR does require that the headset have gaze-tracking capabilities, so current headsets would not support it, but future ones may, especially as more people become exposed to the capabilities and potential of eye tracking.
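The core loop can be sketched simply: watch gaze velocity, flag a saccade when it crosses a threshold, and inject a tiny world rotation only during that suppression window. The 180 deg/s threshold and 0.15 deg injection below are illustrative values, not the paper's tuned parameters:

```python
# Redirected-walking sketch: inject hidden rotation only during saccades.
SACCADE_THRESHOLD = 180.0    # deg/s of eye rotation (illustrative)
ROTATION_PER_SACCADE = 0.15  # deg of unnoticed world rotation (illustrative)

# (timestamp s, gaze angle deg) samples: two fast jumps amid slow drift
gaze = [(0.00, 0.0), (0.01, 0.2), (0.02, 5.0), (0.03, 5.1),
        (0.04, 5.2), (0.05, 11.0), (0.06, 11.1)]

world_rotation = 0.0
for (t0, a0), (t1, a1) in zip(gaze, gaze[1:]):
    velocity = abs(a1 - a0) / (t1 - t0)
    if velocity > SACCADE_THRESHOLD:   # eye is saccading: user is "blind"
        world_rotation += ROTATION_PER_SACCADE

print(f"accumulated hidden rotation: {world_rotation:.2f} deg")
```

Repeated over thousands of saccades in a session, these small injections add up to the steering that keeps the real path inside the physical room.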

Source: EurekAlert!

Cray Adds AMD Epyc to Its Supercomputer Product Line

Category: Science & Technology
Posted: April 23, 2018 08:27AM
Author: Guest_Jim_*

Cray has been a name associated with supercomputers for many decades, and last week it announced it is adding AMD Epyc 7000 processors to its Cray CS500 product line. These cluster systems will offer the company's customers greater flexibility, as they will not need to rebuild and recompile x86 applications to run their high-performance computing workloads. The Cray Programming Environment has been updated to support AMD Epyc by integrating it and optimizing its libraries for the processors.

Cray CS500 cluster systems will be offered with four dual-socket nodes in a 2U chassis or a 2U chassis with only one node, for large memory configurations. The AMD Epyc 7000 processors support up to 32 cores and eight DDR4 memory channels per socket.

Source: Cray

Omniphobic Coating Should Keep Surfaces Grime Free

Category: Science & Technology
Posted: April 13, 2018 11:38AM
Author: Guest_Jim_*

Keeping the things we touch clean is not always easy, and keeping things kids touch clean is even harder. To address this, researchers at the University of Michigan have developed a new omniphobic material that is durable and transparent. Being omniphobic means it will repel water, oils, alcohols, and apparently even peanut butter.

Omniphobic materials have been made before, but this one stands out because it is durable and clear as well, which are two properties that were a challenge to achieve. One might think that if you want a material to be both hydrophobic and oleophobic, you can just mix two with those properties, but materials science is far from that easy. Achieving the desired durability required consideration of partial miscibility, which concerns how well two substances mix, and the better they mix the more durable the result. To find the right chemicals to mix, the researchers also turned to a new approach for identifying chemicals with desirable properties. Traditionally researchers would just mix chemicals together to see what they get, but thanks to the extensive libraries of already studied materials and powerful computers, we have the ability to mathematically predict the properties of new materials and mixtures.

Ultimately the researchers found that a mix of fluorinated polyurethane and F-POSS, a repellent molecule, does the job while being clear and durable. While fluorinated polyurethane is already inexpensive, F-POSS is not, but manufacturers are in the process of scaling up to mass production, which will reduce the cost significantly. For now the researchers want to confirm the coating is nontoxic, so it could find use in daycare centers and similar settings, and we could see it come to market within the next two years.



Source: University of Michigan

GeForce Academy of Gaming Launched

Category: Video Cards, Gaming, OCC News, Software, Science & Technology
Posted: March 30, 2018 09:49PM
Author: Grilka8050

The NVIDIA Academic Program has announced its expansion! It has added the Academy of Gaming. There are three majors, including Hardware Studies, eSports Management, and Gameosophy. These will help future developers and students create games that will change the industry. 

This starts next month, with 15 courses available at the release, and more are planned to launch throughout this year. These free courses are also transferable to nationally recognized colleges and universities. 

If you are interested in learning more, I highly recommend watching the official announcement on YouTube. If you are interested in viewing the course catalog, visit http://www.geforce.com/academy

Flexible LCD Created that Resembles Paper

Category: Science & Technology
Posted: March 29, 2018 11:25AM
Author: Guest_Jim_*

It might be the case that most of us get our information from electronic screens today, but there is still something to being able to hold a piece of paper in your hands. Trying to get the best of both media has not been easy, but researchers in China and Hong Kong have successfully created an LCD that has some paper-like qualities, including being only half a millimeter thick, flexible, and potentially inexpensive. A key aspect of this new display is the use of optically rewritable LCDs, unlike those used in our monitors, televisions, and phones.

An LCD is structured like a sandwich, with a layer of liquid crystal between two plates, and in a conventional LCD the plates have electrical connections. These connections increase the thickness of the LCD and can limit what materials and thicknesses can be used for the plates. Optically rewritable LCDs operate very differently, as special molecules on the plates react to polarized light to realign themselves, switching the pixels. This enables such LCDs to have a much simpler structure, making them less expensive and more durable than their conventional counterparts. Additionally, the alignment of the molecules does not change without the polarized light, which means power is only needed when switching the display.

The researchers needed to do more than just work with optically rewritable LCDs; they also had to develop new spacers to go between the liquid crystal and the plates. Spacers are very important in any LCD, as they maintain the thickness of the liquid crystal layer, with a constant thickness necessary for a good contrast ratio, but as this LCD design needs to be flexible, the spacers need to maintain that thickness when bent. It was discovered that a mesh-like spacer does the job, keeping the liquid crystal from flowing even when the LCD was bent or hit. The researchers also made it possible for the display to show three colors at a time with a special type of liquid crystal behind the LCD. The pixels will need to become smaller though, in order to produce full color, and the researchers also feel the resolution will need to increase before a commercial product could be made.

Source: EurekAlert!

Invisible Display Proof-of-Concept Created

Category: Science & Technology
Posted: March 26, 2018 10:51AM
Author: Guest_Jim_*

There are a number of works of fiction that include displays on walls and windows that do not appear to be there when not in use. An important step toward making such displays has been made by researchers at the University of California, Berkeley with the creation of light-emitting devices that are invisible when off. The key to this was overcoming a limitation of LEDs on monolayer semiconductors, making it possible to keep the device very thin but up to several millimeters wide.

To create light, an LED has two contact points, with one providing negative charges and the other positive charges, and the light being emitted from where the charges meet. When working with a monolayer semiconductor, in this case just three atoms thick, there is little room available for these contacts, but the researchers were able to work around this. The solution was to place the semiconductor on an insulating layer and then put electrodes on the monolayer semiconductor and beneath the insulator. By applying an alternating current to the insulator, both positive and negative charges are present within the semiconductor at the moment the current switches polarity, resulting in light.

The layers of material here are all so thin that they are flexible and transparent when not emitting light, which means the display is invisible when off and could be applied to curved surfaces. As it is though, this is just a proof-of-concept, as the device's efficiency was only around 1% while that of commercial LEDs is around 25% to 30%. It might be a while longer before we see invisible displays become a reality, but this is a significant step in that direction.

Source: University of California, Berkeley

NVIDIA Titan V Giving Errors in Some Scientific Workloads

Category: Science & Technology
Posted: March 26, 2018 08:29AM
Author: Guest_Jim_*

Currently one of the most powerful, and also most expensive, graphics cards you can get is NVIDIA's Titan V. At $3000 this card uses a Volta GPU with 12 GB of HBM2 memory, and thanks to its Tensor cores can reach an unquestionably impressive 110 TFLOPS. While it is possible to use it to play games, the Titan V actually targets those working with AI and deep learning, where high-performance computing is very important. Unfortunately it appears some of these cards are not up to the task, according to an article from The Register.

Among the applications the Titan V is being used for is simulating how proteins and enzymes interact, and these simulations should return identical results when provided the same input. An engineer discovered, and then told The Register, that of the four Titan Vs he tested, two of them gave numerical errors ten percent of the time. Considering how necessary reproducibility is for scientific studies, this is far from a good development.
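The testing pattern implied here is straightforward: run the identical computation repeatedly and compare results bit-for-bit, since a deterministic kernel on healthy hardware must match itself. The sketch below simulates the check on the CPU, so it only illustrates the pattern, not the reported bug:

```python
import numpy as np

# Reproducibility check: repeat one deterministic computation and count runs
# whose result differs bit-for-bit from a reference run. On the affected
# Titan V cards, roughly 1 run in 10 reportedly mismatched.

def simulated_kernel(data):
    # stand-in for a deterministic GPU compute kernel (e.g. a reduction)
    return float(np.sum(data * data))

data = np.arange(1_000_000, dtype=np.float64)
reference = simulated_kernel(data)

mismatches = sum(simulated_kernel(data) != reference for _ in range(10))
print(f"mismatching runs out of 10: {mismatches}")
```

Note that comparisons must be exact, not within a tolerance: the whole point of the protein-simulation complaint is that identical inputs should give identical outputs, so any difference at all signals faulty hardware or a non-deterministic kernel.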

The exact cause is not known, though an "industry veteran" told The Register he thinks it might be memory related, with NVIDIA possibly pushing it too far. The Titan V does not have error-correcting code memory, which is a feature of the much more expensive Tesla line of accelerators. For now a recommendation being shared by some is to avoid using the Titan V until a patch can be developed and delivered.

Source: The Register

Means of Controlling Molecular Alignment on Graphene Discovered

Category: Science & Technology
Posted: March 23, 2018 09:35AM
Author: Guest_Jim_*

Since its discovery, there has been tremendous interest in graphene, an atom-thick sheet of carbon with impressive electrical and mechanical properties. It could potentially serve as a basis for future electronics, but is not the easiest material to work with currently. For one thing, trying to build up molecular structures on it is difficult because of graphene's symmetry, but researchers at Nagoya University have found a solution.

If you were to look at graphene, it resembles chicken wire with its hexagonal pattern, which creates a problem as it has three-fold symmetry, making those three axes thermodynamically equivalent. If you want to grow structures on a graphene sheet, the molecules you deposit are equally happy on any of these axes. While working with sodium dodecyl sulfate (SDS), the researchers discovered they could control the direction in which it forms ribbon-like structures. It is already known to form these structures, but they grow where they wish. After injecting SDS onto the graphene surface, the researchers used an atomic force microscope (AFM) to scan it and confirm random adsorption (which is different from absorption) of the SDS. Fifteen minutes later the surface was scanned again, but now the SDS molecules had changed their orientation.

After some more work, the researchers discovered that the AFM scanning the surface caused the SDS molecules to detach from their original positions and then rotate to match the scanning orientation. The more extreme the angle between the AFM scan direction and a molecule's orientation, the more easily the SDS molecules rotated and moved, while those at a smaller angle changed less and acted as seed molecules for the others. This ability to control the orientation of these ribbons could lead to advances in many fields that work with two-dimensional materials, and could even help lead to graphene-based electronics.

Source: Nagoya University

Air-Breathing Electric Thruster Created for Low-Orbit Satellites

Category: Science & Technology
Posted: March 6, 2018 09:52AM
Author: Guest_Jim_*

"What goes up must come down" is an old but accurate saying that poses a problem for satellite missions, especially those in low orbit. While the atmosphere 200 km up is thin, it is still present enough to produce drag on a satellite, decaying its orbit. To compensate for this a thruster can be used, but once its propellant supply runs out, the satellite will fall from orbit, just as happened to the GOCE gravity-mapping mission. For GOCE the propellant was xenon, but thanks to new work by the European Space Agency, future low-orbit missions, both around Earth and potentially around other worlds, may use air-breathing thrusters instead.

This new thruster had to overcome the challenge of air molecules bouncing away at the intake instead of being collected and compressed, and it did. Once collected, the molecules can be given an electric charge and propelled out the back, producing thrust. The only parts that need any power are the coils and electrodes, with the remainder working passively, making the thruster simpler and more efficient.

Normally one would test such an intake by feeding it gas and measuring the density at the collector, but instead the researchers attached the electric thruster and measured the thrust produced. Initially a stream of xenon was used to confirm it was working, but then a feed of nitrogen-oxygen air replaced it, and the switch could be visibly observed as the engine plume changed color. The system was then ignited successfully and repeatedly, confirming the feasibility of this design.

By using an air-breathing design, this removes the need for a special propellant on a satellite mission, making it possible for the satellite to continue its mission for a longer period of time. Also, as it can work with other kinds of gases, it could be used to enable satellite missions in low-orbit around a planet like Mars, with the carbon dioxide in its atmosphere serving as the propellant.

Source: ESA

Atmospheres and Densities Analyzed for Four TRAPPIST-1 Planets

Category: Science & Technology
Posted: February 6, 2018 12:43PM
Author: Guest_Jim_*

Last year NASA announced the star TRAPPIST-1 had seven Earth-sized planets orbiting it with multiple inside its habitable zone. Naturally these planets have been more closely looked at since, to determine if any of the planets might be able to support life. Using several space and ground-based telescopes, including the Hubble and Spitzer Space Telescopes, we now know that three of these exoplanets do not have high concentrations of hydrogen in their atmosphere. This is important as it suggests their atmospheres could be similar to Earth's, as opposed to the atmospheres on gaseous planets like Neptune.

The four planets investigated were TRAPPIST-1d, e, f, and g, which are the four that exist within the star's habitable zone, though one is at the inner-edge of this zone. The habitable zone of a star is the area in which a planet could have liquid water on its surface, which is necessary for life as we know it. Being a dwarf star, TRAPPIST-1 is much smaller and colder than the Sun, so much so all of its planets, not just those in the habitable zone, are closer to it than Mercury.

To analyze the planets' atmospheres, the telescopes observed them as they transited TRAPPIST-1, so the star's light would pass through the atmospheric gases. This leaves a fingerprint of the gases on the observed spectrum of the light, and from that some of the atmospheres' characteristics can be determined. In addition to studying their atmospheres, the densities of the planets were also better measured, indicating the planets are mostly made of rock and some may have up to five percent of their mass in water. That would be 250 times more water than the oceans on Earth.
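That ratio is easy to sanity-check with rough numbers. The sketch below assumes an Earth-mass planet and standard approximate values for Earth's mass and ocean mass; none of these constants come from the article itself.

```python
# Rough check of the "250 times more water" figure for a planet with
# 5% of its mass in water. The constants are standard approximate values
# (assumed here), not figures from the article.
EARTH_MASS_KG = 5.97e24   # approximate mass of Earth
OCEAN_MASS_KG = 1.4e21    # approximate mass of Earth's oceans

water_fraction = 0.05     # up to 5% of the planet's mass in water
planet_water_kg = water_fraction * EARTH_MASS_KG

ratio = planet_water_kg / OCEAN_MASS_KG
print(f"Water relative to Earth's oceans: ~{ratio:.0f}x")
```

For an exactly Earth-mass planet this gives roughly 210x; the TRAPPIST-1 planets' masses differ somewhat from Earth's, which is consistent with the article's figure of about 250 times.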

More observations and analyses are going to be necessary to further understand these planets 40 light years away, and likely those discoveries will come from the James Webb Space Telescope that should be launching next year. This next telescope has been designed and developed to far exceed the capabilities of the Hubble Space Telescope, but it has only been because of Hubble and other telescopes that we know what technologies and approaches get the most information.

The embedded media are from the ESA Hubble website with one being a comparison of star habitable zones.


Source: Hubble [NASA], [ESA] and Spitzer

Volumetric Display Made That Builds 3D Images In Air

Category: Science & Technology
Posted: February 1, 2018 08:21AM
Author: Guest_Jim_*

Science fiction has promised us many things for the future, and thankfully we have scientists around the world working to fulfill those promises. Researchers at Brigham Young University are among them: working on their 'Princess Leia project,' they have now succeeded in creating a system to project 3D images in open air. These are not holograms, though, as holograms involve scattering light on a 2D surface; these are volumetric images, existing in three dimensions and viewable from all sides. Other teams of researchers have also been working on producing volumetric imagery, but this is the first to combine optical trapping with color lasers.

The display works by taking a particle of cellulose and trapping it with a laser. This trapping allows the laser to precisely move the particle around while another set of lasers illuminates and colors it. Thanks to how quickly the particle moves, persistence of vision makes it appear that lines are being painted in mid-air, forming a volumetric image. The researchers liken it to a 3D printer for light, with the cellulose particle effectively printing light as it is moved around.

Thus far the researchers have displayed volumetric images of a butterfly, a prism, the university's logo, rings wrapped around an arm, and a recreation of Princess Leia by an individual in a lab coat crouching much as Leia does at the beginning of the famous message.



Source: Brigham Young University

Broadband Achieved Over Wet String

Category: Science & Technology
Posted: December 13, 2017 11:49AM
Author: Guest_Jim_*

Physics is an amazing thing, especially when you turn it to something kind of silly. An employee at a British ISP did just that by demonstrating that a broadband ADSL signal can be carried by wet string. To be more specific, it was two meters of string soaked in salt water, because fresh water would not do the trick. After connecting the string, its download speed was measured at 3.5 Mbps, proving it works.

Why does this work? Because the physics involved does not solely rely on an electric current, which the string would resist. Instead the cable acts as a waveguide for electromagnetic waves of such high frequency that the material does not matter much.

It is safe to say string is not going to become a part of broadband connections, but this still demonstrates an interesting and amusing quirk to the technologies so many of us use every day.

Source: RevK's Rants Blog

All of the Top 500 Supercomputers Now Run Linux

Category: Science & Technology
Posted: November 16, 2017 09:18AM
Author: Guest_Jim_*

While it might not be a common operating system among consumers, Linux now dominates among supercomputers. All of those on the TOP500 list of the world's fastest supercomputers use the open-source operating system, and it is that open-source nature that has helped it achieve this milestone. Many supercomputers today are built for specific tasks, and to get the most out of them, customized operating systems are desired. Instead of developing a new OS from scratch, the open-source code of Linux allows the optimizations to be developed and put in place at much lower cost.

Source: ZDNet

Iridium Can Be Used to Destroy Cancer Cells

Category: Science & Technology
Posted: November 3, 2017 12:08PM
Author: Guest_Jim_*

Iridium is a rather rare metal, and while it is associated with the asteroid that led to the mass extinction of the dinosaurs and much more life, it may come to be a powerful tool for preserving life. Researchers at the University of Warwick have discovered a means to use iridium to destroy cancer cells, and this method does not damage normal healthy cells.

To destroy the cancer cells, the researchers made a compound of iridium and an organic material, and then light just needs to shine on it. The compound can be targeted specifically at cancerous cells, and it will transfer the energy of the light to the oxygen within the cell, producing singlet oxygen. Singlet oxygen is a high-energy form of oxygen that is highly reactive and poisonous to the cancer cell. Healthy cells are not affected by this process though, and the researchers even tested the method in healthy cells to prove as much. The light used in the process is just visible light and for their testing the researchers used red laser light, as it can reach through the skin.

While this research is on its own impressive, it also shows the value in exploring even more precious metals for their use in cancer chemotherapies. Platinum, another precious metal, is already used in over half of such treatments, and others could prove equally potent.

Source: University of Warwick

Improved Memory Management Developed for On-chip DRAM

Category: Science & Technology
Posted: October 23, 2017 10:06AM
Author: Guest_Jim_*

As processors have become faster and faster, the time it takes to go off-chip to access data has become more and more of an issue. This is where on-chip high-bandwidth caches come in, and even why some manufacturers have been adding DRAM to a chip's packaging. The problem is that DRAM is significantly different from the SRAM typically used for on-chip caches, which is why MIT researchers have developed a new cache management system.

The critical difference between SRAM and DRAM concerns how the two memory technologies store data and the impact this has on locating specific data. All data is tagged with a piece of metadata identifying where it is located in the system's main memory, and these tags are run through a hash function. The purpose of this hashing is to produce very different values for similar pieces of information, which prevents bottlenecking at specific locations. The outputs of the hash function are stored in a hash table, and sometimes multiple data items are referenced by one entry if they share the same hash output, but checking these few items is still more efficient than going through the entire tag list. This is where the difference between SRAM and DRAM shows, though: SRAM uses six transistors for each bit of data while DRAM uses only one. This gives DRAM an advantage in space efficiency, but SRAM has some processing capability, allowing it to search the hash table for the desired information, while for DRAM-stored data the processor must do this itself, which takes time and bandwidth.

The solution from MIT, dubbed Banshee, adds three bits to each entry in the hash table, with one identifying whether the data can be found in the DRAM cache and the other two giving a location relative to the other data items sharing the same hash index. As an entry in the table is already around 100 bits, this is little overhead, especially as it can increase the data rate of on-chip DRAM by 33 to 50%. Banshee also adds a tag buffer to address the issue of one processor core not knowing when another has put data in the DRAM cache. The buffer is only 5 KB, so it does not take up much space, and when it is full, all of the cores have their virtual-memory tables updated, allowing the buffer to clear and start fresh.

Source: MIT

Wireless Data and Power Combined

Category: Science & Technology
Posted: September 18, 2017 10:02AM
Author: Guest_Jim_*

In recent years there has been a trend for technologies to go wireless for both form and convenience. Wireless charging is among the advancements being added to technologies, but it often comes at increased cost and weight because it requires special components be added to the device. While there is a reason these parts need to be added, instead of using those already present, researchers at North Carolina State University decided to see if it was possible to work around this reason, and succeeded.

Wireless systems, for data or power, require antennas and radios, and typically these parts will be tuned by design for their intended purpose for greatest efficiency. For wireless power the parts are tuned to a narrow bandwidth, which minimizes power loss but makes them unsuited for data transfer. What dawned on the researchers is that while the wireless power system does require narrow-band antennas, the whole system's bandwidth does not need to be so small. By combining narrow-bandwidth components with a wide-bandwidth system, the researchers were able to achieve both power and data transfer.

When the researchers tested this new design, they were able to transfer 3 W of power while transmitting 3.39 MB/s, with only a 2.3% loss in efficiency due to the data transmission. If only 2 W of power was transferred, the efficiency dropped by just 1.3%. These tests were not done with the device resting directly on the charging pad, but a little more than six inches away, demonstrating that this system can work over a distance.
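Those percentages translate into a small absolute overhead. The sketch below is simple arithmetic on the article's figures, under the assumption (not stated in the article) that the efficiency loss applies directly to the transferred power:

```python
# Absolute power cost of adding data transfer on top of wireless power,
# assuming (simplification) the efficiency loss applies to transferred power.
def data_overhead_watts(power_w: float, efficiency_loss: float) -> float:
    """Power given up to data transmission for a given transfer level."""
    return power_w * efficiency_loss

print(data_overhead_watts(3.0, 0.023))  # 3 W transfer, 2.3% loss
print(data_overhead_watts(2.0, 0.013))  # 2 W transfer, 1.3% loss
```

Under that reading, carrying 3.39 MB/s costs on the order of a few hundredths of a watt, which is why the researchers describe the efficiency penalty as minor.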

Source: North Carolina State University

AI Recreates Game Engine by Watching Gameplay Videos

Category: Science & Technology
Posted: September 12, 2017 09:33AM
Author: Guest_Jim_*

Games and artificial intelligence are not exactly strangers, as there have been various kinds of AI in games for years. In the future, though, we might see AI playing a role in creating the games and not just giving us opponents, thanks to researchers at the Georgia Institute of Technology who have made an AI that can recreate a game's engine after watching video of it.

As the AI watches video of a game, it studies the frames to construct a model of how the game works and how players interact with it. The researchers started it on Super Mario Bros. with a speedrunner video, which makes for a more difficult test as the player is focused solely on the goal. One video was not enough to develop a model that could clone the original engine, but by providing the AI additional videos, it was able to create something accurate enough that another AI could not distinguish between it and the original game.

Since working with Super Mario Bros., the researchers have moved on to experimenting with Mega Man and Sonic the Hedgehog. Ultimately we could see this turned into a tool to accelerate game development and to experiment with different kinds of gameplay.

Source: Georgia Institute of Technology

Unknown Semiconductor Behavior Discovered with Potential Efficiency Impact

Category: Science & Technology
Posted: September 6, 2017 10:32AM
Author: Guest_Jim_*

When thinking about the semiconductors within our computers and other devices, as we surely do at times, chances are we just think about silicon, but there are more materials than that one involved. On its own, silicon actually will not conduct electricity, which is why other molecules called dopants are added to the material, but adding too many doping molecules will eventually block the electrical currents. The cause of this increased resistance is believed to be from the electrons bouncing off of the dopants, but now researchers at the University of Illinois at Chicago have discovered another mechanism that increases resistance.

To make this discovery the researchers started with chips of cadmium sulfide for their semiconductor base and then used copper ions as the dopant. Instead of connecting the chips up to run a current through them, the researchers shot a powerful blue laser at them, with the energy of the laser being enough to generate an electrical current. Very high energy X-ray images were taken at the same time, just millionths of a microsecond apart, to reveal what was going on. To the researchers' surprise, the copper ions were intermittently forming bonds with the semiconductor base, and these bonds were impairing conduction.

This behavior has not been seen before and it would be impairing the speed and efficiency of the semiconductor computers it affects. Fortunately, now that we are aware of this dynamic it will be possible to create designs that minimize it.

Source: University of Illinois at Chicago

Researchers Found Action Games Can Impact Brain Matter

Category: Science & Technology
Posted: August 14, 2017 09:27AM
Author: Guest_Jim_*

For many years people have been studying what impacts video games can have on humans, producing various results. Now researchers at McGill University have found that action games can actually lead to a loss of grey matter in the hippocampus, though what long-term consequences this may entail requires further study. However, playing 3D-platform games, like Super Mario 64, can actually increase the amount of grey matter.

For this study the researchers had 64 participants ranging in age from 18 to 30 play 90 hours of different kinds of games. These games included first-person shooters like Call of Duty, Killzone, Medal of Honor, and Borderlands 2, along with the previously mentioned Super Mario 64, and the participants had not played these games before. The researchers found that for some of those playing the FPS games, the hippocampus lost grey matter after 90 hours, while no one playing the 3D platformer lost any; some even saw an increase. There is more to it than this, though, as the researchers found the loss of grey matter also depended on the kind of learner a person is. Response learners, who follow their brain's autopilot and reward system for navigation, experienced the loss, while spatial learners, who use the hippocampus to navigate, saw an increase in grey matter. It is not just a matter of the kind of game you are playing but also how you learn. The increase seen from Super Mario 64 occurred for both types of gamers.

Exactly what about this genre causes this atrophy is still unknown and will require further study, as will determining the long-term consequences of this loss. People with lower amounts of grey matter in the hippocampus though are known to be at an increased risk of neuropsychiatric illnesses, such as depression, schizophrenia, PTSD, and Alzheimer's disease.

Source: McGill University

Nearly Perfect Single-Crystal Graphene Grown on a Large Scale

Category: Science & Technology
Posted: August 10, 2017 02:22PM
Author: Guest_Jim_*

Since its discovery, many people have been working very hard to bring graphene to various products, thanks to its strength, flexibility, and very high conductivity. One of the primary issues with the material has been the difficulty of synthesizing it, especially on large scales. Researchers at the Institute for Basic Science in Korea have discovered a possible solution though, growing large pieces of single-crystal graphene quickly and possibly without an upper size limit.

Graphene is an atom-thick sheet of carbon with a hexagonal molecular structure that can transport electrons at very high speed while still being very strong, very flexible, and transparent. These properties give it the potential to replace silicon in electronics, but achieving this would require large, high-quality pieces of the carbon allotrope. Polycrystalline graphene, which consists of many crystals that interface with each other at various angles, can be produced at large sizes, but those varied interfaces are defects that impair the material's performance, so single-crystal graphene is needed. Previously, producing just a few square centimeters would require a couple of hours, but this new method was able to produce a 250 cm² (5 × 50 cm) piece of nearly perfect graphene in just 20 minutes. The researchers accomplished this by starting with a copper-foil substrate heated to around 1,030 °C, allowing its atoms to align into a single crystal of copper. Then carbon atoms were deposited onto it via chemical vapor deposition, and these atoms formed islands that eventually coalesced into a nearly perfect single crystal of graphene.

Obviously this is terrific news for the future of graphene, especially as it may be possible to scale it up just by using larger pieces of copper, while still being fast and cheap. It could also lead to new ways of producing other 2D materials with special and desirable properties.

Source: Institute for Basic Science
