Thursday, April 28, 2011

Good Eggs: Nanomagnets Offer Food for Thought About Computer Memories

For a study described in a new paper, NIST researchers used electron-beam lithography to make thousands of nickel-iron magnets, each about 200 nanometers (billionths of a meter) in diameter. Each magnet is ordinarily shaped like an ellipse, a slightly flattened circle. Researchers also made some magnets in three different egglike shapes with an increasingly pointy end. It's all part of NIST research on nanoscale magnetic materials, devices and measurement methods to support development of future magnetic data storage systems.

It turns out that even small distortions in magnet shape can lead to significant changes in magnetic properties. Researchers discovered this by probing the magnets with a laser and analyzing what happens to the "spins" of the electrons, a quantum property that's responsible for magnetic orientation. Changes in the spin orientation can propagate through the magnet like waves at different frequencies. The more egg-like the magnet, the more complex the wave patterns and their related frequencies. (Something similar happens when you toss a pebble in an asymmetrically shaped pond.) The shifts are most pronounced at the ends of the magnets.

To confirm localized magnetic effects and "color" the eggs, scientists made simulations of various magnets using NIST's object-oriented micromagnetic framework (OOMMF). Lighter colors indicate stronger frequency signals.
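
As a rough illustration of how such frequency maps can be built (a sketch of the general approach, not NIST's actual OOMMF workflow), one can Fourier-transform the simulated magnetization of every cell over time and shade each cell by its spectral power at a chosen frequency; the grid size, time step, and target frequency below are assumed values.

```python
# Illustrative sketch only: map local spin-wave spectral power to brightness.
import numpy as np

rng = np.random.default_rng(0)
nx, ny, nt = 32, 16, 512                 # simulation grid and number of time steps (assumed)
dt = 5e-12                               # time step of the recorded magnetization, seconds
mz = rng.standard_normal((nt, ny, nx))   # stand-in for simulated m_z(t, y, x) output

spectrum = np.abs(np.fft.rfft(mz, axis=0)) ** 2    # power spectrum at every cell
freqs = np.fft.rfftfreq(nt, d=dt)                  # frequency axis in Hz

target_ghz = 8.0                                   # mode we want to visualise (assumed)
idx = np.argmin(np.abs(freqs - target_ghz * 1e9))
mode_map = spectrum[idx] / spectrum[idx].max()     # lighter = stronger signal at that mode
print(mode_map.shape, float(mode_map.max()))
```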

The egg effects explain erratic behavior observed in large arrays of nanomagnets, which may be imperfectly shaped by the lithography process. Such distortions can affect switching in magnetic devices. The egg study results may be useful in developing random-access memories (RAM) based on interactions between electron spins and magnetized surfaces. Spin-RAM is one approach to making future memories that could provide high-speed access to data while reducing processor power needs by storing data permanently in ever-smaller devices. Shaping magnets like eggs breaks up a symmetric frequency pattern found in ellipse structures and thus offers an opportunity to customize and control the switching process.

"For example, intentional patterning of egg-like distortions into spinRAM memory elements may facilitate more reliable switching," says NIST physicist Tom Silva, an author of the new paper.

"Also, this study has provided the Easter Bunny with an entirely new market for product development."


Source

Monday, April 25, 2011

Creative, Online Learning Tool Helps Students Tackle Real-World Problems

A new computer interface developed at Iowa State University is helping students apply what they've learned in the horticulture classroom to problems they'll face on the job site.

The project, called ThinkSpace, is led by a group of ISU faculty including Ann Marie VanDerZanden, professor of horticulture and associate director of ISU's Center for Excellence in Learning and Teaching.

ThinkSpace has many different features that make it an effective way to teach using ill-structured problems. This type of problem allows students to choose from multiple paths to arrive at a solution.

By contrast, well-structured problems have a straight path to the one, clear solution.

In horticulture, the ThinkSpace platform is being used for upper-level classes and requires students to access what they've learned throughout their time studying horticulture and apply it to real-world problems.

In these classes, VanDerZanden gives students computer-delivered information about a residential landscape.

That information includes illustrations of the work site, descriptions of the trees on the property, explanations of the problems the homeowner is experiencing, mock audio interview files with the property owner, and about anything else a horticulture professional would discover when approaching a homeowner with a landscape problem.

Also, just like in real life, some of the information is relevant to the problem, and some information is not.

"It forces students to take this piece of information, and that piece of information, and another piece of information, and then figure out what is wrong -- in this case with a plant," said VanDerZanden.

When the students think they have determined the problem, they enter their responses into the online program.

VanDerZanden can then check the responses.

For those students on the right track, she allows them to continue toward a solution.

For those who may have misdiagnosed the situation, VanDerZanden steers the students toward the right track before allowing them to move forward.
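
A minimal sketch of that feedback gate, assuming a simple set of accepted diagnoses per case; this is illustrative only and not ISU's actual ThinkSpace code.

```python
# Hypothetical model of the submit-check-unlock workflow described above.
from dataclasses import dataclass, field

@dataclass
class Case:
    accepted_diagnoses: set
    stages_unlocked: dict = field(default_factory=dict)  # student -> bool

    def submit(self, student, diagnosis, instructor_note=""):
        """Record a diagnosis and only unlock the next stage if it is on track."""
        on_track = diagnosis.lower() in self.accepted_diagnoses
        self.stages_unlocked[student] = on_track
        if on_track:
            return "Proceed to proposing a treatment plan."
        return instructor_note or "Revisit the evidence before continuing."

case = Case(accepted_diagnoses={"aphid infestation", "iron chlorosis"})
print(case.submit("student_a", "Aphid infestation"))
print(case.submit("student_b", "drought stress"))
```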

So far, the response from students has been very positive.

"The students like the variety," said VanDerZanden."They like struggling with real-world problems, rather that something that is just made up. On the other hand, they can get frustrated because there is not a clear-cut answer."

The entire process leverages the classroom experience into something the students can use at work.

"I think this really enhances student learning," said VanDerZanden."Students apply material from previous classes to a plausible, real-world situation. For instance students see what happens when a tree was pruned really hard to allow a piece of equipment to get into the customer's yard. As a result, the tree sends out a lot of new succulent shoots, and then there is an aphid infestation in the tree. It helps students start making all of those connections."

The ThinkSpace interface was developed from existing technologies already being used in ISU's College of Veterinary Medicine, College of Engineering and department of English.

VanDerZanden and her group recently received a $446,000 grant from the United States Department of Agriculture Higher Education Challenge Grant program to further develop ThinkSpace so it could be more useful to other academic areas and universities.

As part of this research, VanDerZanden is also working with faculty members at the University of Pennsylvania, Philadelphia; the University of Wisconsin, Madison; and Kansas State University, Manhattan.


Source

Sunday, April 24, 2011

Decoding Human Genes Is Goal of New Open-Source Encyclopedia

In a paper that will be published in the journal PLoS Biology on 19 April 2011, the project -- called ENCODE (Encyclopedia Of DNA Elements) -- provides an overview of the team's ongoing efforts to interpret the human genome sequence, as well as a guide for using the vast amounts of data and resources produced so far by the project.

Ross Hardison, the T. Ming Chu Professor of Biochemistry and Molecular Biology at Penn State University and one of the principal investigators of the ENCODE Project team, explained that the philosophy behind the project is one of scientific openness, transparency, and collaboration across sub-disciplines. ENCODE comes on the heels of the now-complete Human Genome Project -- a 13-year effort aimed at identifying all the approximately 20,000 to 25,000 genes in human DNA -- which also was based on the belief in open-source data sharing to further scientific discovery and public understanding of science.

The ENCODE Project has accomplished this goal by publishing its database at genome.ucsc.edu/ENCODE, and by posting tools to facilitate data use at encodeproject.org. "ENCODE resources are already being used by scientists for discovery," Hardison said. "But what's kind of revolutionary is that they also are being used in classes to train students in all areas of biology. Our classes here at Penn State are using real data on genomic variation and function in classroom problem sets, shortly after the labs have generated them."

Hardison explained that there are about 3 billion base pairs in the human genome, making the cataloging and interpretation of the information a monumental task. "We have a very lofty goal: To identify the function of every nucleotide of the human genome," he said. "Not only are we discovering the genes that give information to cells and make proteins, but we also want to know what determines that the proteins are made in the right cells, and at the appropriate time. Finding the DNA elements that govern this regulated expression of genes is a major goal of ENCODE." Hardison explained that ENCODE's job is to identify the human genome's functional regions, many of which are quite esoteric. "The human DNA sequence often is described as a kind of language, but without a key to interpret it, without a full understanding of the 'grammar,' it might as well be a big jumble of letters." Hardison added that the ENCODE Project supplies data such as where proteins bind to DNA and where parts of DNA are augmented by additional chemical markers. These proteins and chemical additions are keys to understanding how different cells within the human body interpret the language of DNA.

In the soon-to-be-published paper, the team shows how the ENCODE data can be immediately useful in interpreting associations between disease and DNA sequences that can vary from person to person -- single nucleotide polymorphisms (SNPs). For example, scientists know that DNA variants located upstream of a gene called MYC are associated with multiple cancers, but until recently the mechanism behind this association was a mystery. ENCODE data already have been used to confirm that the variants can change binding of certain proteins, leading to enhanced expression of the MYC gene and, therefore, to the development of cancer. ENCODE also has made similar studies possible for thousands of other DNA variants that may be associated with susceptibility to a variety of human diseases.
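
As a hedged illustration of the kind of lookup this enables, the sketch below checks whether a disease-associated SNP falls inside a mapped protein-binding region; the coordinates and the tiny in-memory "track" are invented for the example, whereas real analyses draw on the ENCODE tracks hosted at genome.ucsc.edu/ENCODE.

```python
# Toy interval-overlap check; data below is made up for illustration only.
binding_sites = [            # (chromosome, start, end, bound_protein)
    ("chr8", 128_400_000, 128_401_200, "TCF7L2"),
    ("chr8", 128_410_500, 128_411_000, "CTCF"),
]

def overlapping_sites(chrom, pos):
    """Return every mapped binding site that contains the given SNP position."""
    return [site for site in binding_sites
            if site[0] == chrom and site[1] <= pos <= site[2]]

snp = ("chr8", 128_400_650)          # hypothetical variant upstream of a gene of interest
print(overlapping_sites(*snp))       # a hit suggests the variant may alter protein binding
```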

Another of the principal investigators of the project, Richard Myers, president and director of the HudsonAlpha Institute for Biotechnology, explained that the ENCODE Project is unique because it requires collaboration from multiple people all over the world at the cutting edge of their fields. "People are working in a coordinated manner to figure out the function of our human genome," he said. "The importance of the project extends beyond basic knowledge of who and what we are as humans, and into an understanding of human health and disease."

Scientists with the ENCODE Project also are applying up to 20 different tests in 108 commonly used cell lines to compile important data. John Stamatoyannopoulos, an assistant professor of genome sciences and medicine at the University of Washington and another principal investigator, explained that the ENCODE Project has been responsible for producing many assays -- molecular-biology procedures for measuring the activity of biochemical agents -- that are now fundamental to biology. "Widely used computational tools for processing and interpreting large-scale functional genomic data also have been developed by the project," Stamatoyannopoulos added. "The depth, quality, and diversity of the ENCODE data are unprecedented."

Hardison said that the portion of the human genome that actually codes for protein is about 1.1 percent. "That's still a lot of data," he said. "And to complicate matters even more, most mechanisms for gene expression and regulation lie outside what we call the 'coding' region of DNA." Hardison explained that scientists have a limited number of tools with which to explore the genome, and one that has been used widely is inter-species comparison. "For example, we can compare humans and chimpanzees and glean some fascinating information," Hardison said. "But very few proteins and other DNA products differ in any fundamental way between humans and chimps. The important difference between us and our close cousins lies in gene expression -- the basic level at which genes give rise to traits such as eye color, height, and susceptibility to a particular disease. ENCODE is helping to map the very proteins involved in gene regulation and gene expression. Our paper not only explains how to find the data, but it also explains how to apply the data to interpret the human genome."

The ENCODE Project is funded, primarily, by the National Human Genome Research Institute of the U.S. National Institutes of Health.


Source

Saturday, April 23, 2011

'3-D Towers' of Information Double Data Storage Areal Density

This greatly enhances the amount of data that can be stored in a magnetic storage device and provides a method to reach beyond a wall of physical limits that the currently used technology is hitting. The team presents their findings in the American Institute of Physics' Journal of Applied Physics.

"Over the past 50 years, with the rise of multimedia devices, the worldwide Internet, and the general growth in demand for greater data storage capacity, the areal density of information in magnetic hard disk drives has exponentially increased by 7 orders of magnitude," says Jerome Moritz, a researcher at SPINTEC, in Grenoble."This areal density is now about 500Gbit/in2, and the technology presently used involves writing the information on a granular magnetic material. This technology is now reaching some physical limits because the grains are becoming so small that their magnetization becomes unstable and the information written on them is gradually lost."

Therefore, new approaches are needed for magnetic data storage densities exceeding 1 Tbit/in².

"Our new approach involves using bit-patterned media, which are made of arrays of physically separated magnetic nanodots, with each nanodot carrying one bit of information. To further extend the storage density, it's possible to increase the number of bits per dots by stacking several magnetic layers to obtain a multilevel magnetic recording device," explains Moritz.

In that context, Moritz and colleagues were able to demonstrate that the best way to achieve a 2-bit-per-dot medium involves stacking in-plane and perpendicular-to-plane magnetic media atop each dot. The perpendicularly magnetized layer can be read right above the dot, whereas the in-plane magnetized layer can be read between dots. This enables doubling of the areal density for a given dot size by taking better advantage of the whole patterned media area.
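
A minimal sketch of the 2-bit-per-dot idea: one bit is stored in the perpendicular layer and one in the in-plane layer of each dot, so the same number of dots holds twice the data. The function names below are ours, not the authors'.

```python
# Purely illustrative encoding of two bits into each nanodot's two layers.
def write_dots(bits):
    """Pack a flat bit list into (perpendicular, in_plane) states, one pair per dot."""
    if len(bits) % 2:
        bits = bits + [0]                     # pad to a whole number of dots
    return [(bits[i], bits[i + 1]) for i in range(0, len(bits), 2)]

def read_dots(dots):
    """Read the perpendicular layer above each dot and the in-plane layer between dots."""
    bits = []
    for perpendicular, in_plane in dots:
        bits.extend([perpendicular, in_plane])
    return bits

data = [1, 0, 1, 1, 0, 1]
dots = write_dots(data)                       # 3 dots instead of 6: areal density doubles
assert read_dots(dots)[:len(data)] == data
```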


Source

Friday, April 22, 2011

'Time Machine' Made to Visually Explore Space and Time in Videos: Time-Lapse GigaPans Provide New Way to Access Big Data

Viewers, for instance, can use the system to focus in on the details of a booth within a panorama of a carnival midway, but also reverse time to see how the booth was constructed. Or they can watch a group of plants sprout, grow and flower, shifting perspective to watch some plants move wildly as they grow while others get eaten by caterpillars. Or, they can view a computer simulation of the early universe, watching as gravity works across 600 million light-years to condense matter into filaments and finally into stars that can be seen by zooming in for a close up.

"With GigaPan Time Machine, you can simultaneously explore space and time at extremely high resolutions," said Illah Nourbakhsh, associate professor of robotics and head of the CREATE Lab."Science has always been about narrowing your point of view -- selecting a particular experiment or observation that you think might provide insight. But this system enables what we call exhaustive science, capturing huge amounts of data that can then be explored in amazing ways."

The system is an extension of the GigaPan technology developed by the CREATE Lab and NASA, which can capture a mosaic of hundreds or thousands of digital pictures and stitch those frames into a panorama that can be interactively explored via computer. To extend GigaPan into the time dimension, image mosaics are repeatedly captured at set intervals, and then stitched across both space and time to create a video in which each frame can be hundreds of millions, or even billions, of pixels.

An enabling technology for time-lapse GigaPans is a feature of the HTML5 language that has been incorporated into such browsers as Google's Chrome and Apple's Safari. HTML5, the latest revision of the HyperText Markup Language (HTML) standard at the core of the Web, makes browsers capable of presenting video content without plug-ins such as Adobe Flash or QuickTime.

Using HTML5, CREATE Lab computer scientists Randy Sargent, Chris Bartley and Paul Dille developed algorithms and software architecture that make it possible to shift seamlessly from one video portion to another as viewers zoom in and out of Time Machine imagery. To keep bandwidth manageable, the GigaPan site streams only those video fragments that pertain to the segment and/or time frame being viewed.
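
The sketch below illustrates one plausible way such selective streaming could work; the tile size, segment length, and names are assumptions for the example, not the CREATE Lab's actual code. The panorama is cut into a pyramid of fixed-size video tiles per zoom level, and only tiles overlapping the current viewport and playback time are requested.

```python
# Hypothetical tile/segment selection for a zoomable, time-lapse video.
from dataclasses import dataclass

TILE_SIZE = 512          # pixels per tile edge at every zoom level (assumed)
SEGMENT_SECONDS = 10.0   # length of each time-sliced video fragment (assumed)

@dataclass
class Viewport:
    x: float        # left edge, in pixel coordinates of the chosen zoom level
    y: float        # top edge
    width: float
    height: float
    zoom: int       # pyramid level being viewed
    time: float     # playback position in seconds

def tiles_to_stream(view: Viewport):
    """Return (zoom, col, row, segment) keys for the fragments the viewer needs."""
    first_col = int(view.x // TILE_SIZE)
    last_col = int((view.x + view.width - 1) // TILE_SIZE)
    first_row = int(view.y // TILE_SIZE)
    last_row = int((view.y + view.height - 1) // TILE_SIZE)
    segment = int(view.time // SEGMENT_SECONDS)
    return [(view.zoom, c, r, segment)
            for r in range(first_row, last_row + 1)
            for c in range(first_col, last_col + 1)]

# Example: a 1280x720 window near the top-left corner, 42 seconds into playback.
print(tiles_to_stream(Viewport(x=300, y=100, width=1280, height=720, zoom=4, time=42.0)))
```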

"We were crashing the browsers early on," Sargent recalled."We're really pushing the browser technology to the limits."

Guidelines on how individuals can capture time-lapse images using GigaPan cameras are included on the site created for hosting the new imagery's large data files, http://timemachine.gigapan.org. Sargent explained the CREATE Lab is eager to work with people who want to capture Time Machine imagery with GigaPan, or use the visualization technology for other applications.

Once a Time Machine GigaPan has been created, viewers can annotate and save their explorations of it in the form of video "Time Warps."

Though the time-lapse mode is an extension of the original GigaPan concept, scientists already are applying the visualization techniques to other types of Big Data. Carnegie Mellon's Bruce and Astrid McWilliams Center for Cosmology, for instance, has used it to visualize a simulation of the early universe performed at the Pittsburgh Supercomputing Center by Tiziana Di Matteo, associate professor of physics.

"Simulations are a huge bunch of numbers, ugly numbers," Di Matteo said."Visualizing even a portion of a simulation requires a huge amount of computing itself." Visualization of these large data sets is crucial to the science, however."Discoveries often come from just looking at it," she explained.

Rupert Croft, associate professor of physics, said cosmological simulations are so massive that only a segment can be visualized at a time using usual techniques. Yet whatever is happening within that segment is being affected by forces elsewhere in the simulation that cannot be readily accessed. By converting the entire simulation into a time-lapse GigaPan, however, Croft and his Ph.D. student, Yu Feng, were able to create an image that provided both the big picture of what was happening in the early universe and the ability to look in detail at any region of interest.

Using a conventional GigaPan camera, Janet Steven, an assistant professor of biology at Sweet Briar College in Virginia, has created time-lapse imagery of rapid-growing brassicas, known as Wisconsin Fast Plants. "This is such an incredible tool for plant biology," she said. "It gives you the advantage of observing individual plants, groups of plants and parts of plants, all at once."

Steven, who has received GigaPan training through the Fine Outreach for Science program, said time-lapse photography has long been used in biology, but the GigaPan technology makes it possible to observe a number of plants in detail without having separate cameras for each plant. Even as one plant is studied in detail, it's possible to also see what neighboring plants are doing and how that might affect the subject plant, she added.

Steven said creating time-lapse GigaPans of entire landscapes could be a powerful tool for studying seasonal change in plants and ecosystems, an area of increasing interest for understanding climate change. Time-lapse GigaPan imagery of biological experiments also could be an educational tool, allowing students to make independent observations and develop their own hypotheses.

Google Inc. supported development of GigaPan Time Machine.


Source

Thursday, April 21, 2011

CAPTCHAs With Chaos: Strong Protection for Weak Passwords

Researchers at the Max Planck Institute for the Physics of Complex Systems in Dresden have been inspired by the physics of critical phenomena in their attempts to significantly improve password protection. The researchers split a password into two sections. With the first, easy-to-memorize section they encrypt a CAPTCHA ("completely automated public Turing test to tell computers and humans apart") -- an image that computer programs per se have difficulty in deciphering. The researchers also make it more difficult for computers, whose task it is to automatically crack passwords, to read the passwords without authorization. They use images of a simulated physical system, which they additionally make unrecognizable with a chaotic process. These p-CAPTCHAs enable the Dresden physicists to achieve a high level of password protection, even though the user need only remember a weak password.

Computers sometimes use brute force. Hacking programs use so-called brute-force attacks to try out all possible character combinations to guess passwords. CAPTCHAs are therefore intended as an additional safeguard to ensure that input originates from a human being and not from a machine. They pose a task for the user which is simple enough for any human, yet very difficult for a program. Users must enter a distorted text which is displayed on the screen, for example. CAPTCHAs are increasingly being bypassed, however. Personal data of members of the "SchülerVZ" social network for school pupils have already been stolen in this way.

Researchers at the Max Planck Institute for the Physics of Complex Systems in Dresden have now developed a new type of password protection that is based on a combination of characters and a CAPTCHA. They also use mathematical methods from the physics of critical phenomena to protect the CAPTCHA from being accessed by computers. "We thus make the password protection both more effective and simpler," says Konstantin Kladko, who had the idea for this interdisciplinary approach during his time at the Dresden Max Planck Institute; he is currently a researcher at Axioma Research in Palo Alto, USA.

The Dresden-based researchers initially combine password and CAPTCHA in a completely novel way. The CAPTCHA is no longer generated anew each time in order to distinguish the human user from a computer on a case-by-case basis. Rather, the physicists use the codeword in the image, which only humans can decipher, as the real password that provides access to a social network or an online bank account, for example. The researchers additionally encrypt this password using a combination of characters.

However, that's not all: the CAPTCHA is a snapshot of a dynamic, chaotic Hamiltonian system in two dimensions. For the sake of simplicity, this image can be imagined as a grey-scale pixel matrix, where every pixel represents an oscillator. The oscillators are coupled in a network. Every oscillator oscillates between two states and is affected by the neighbouring oscillators as it does so, thus resulting in the grey scales.

Chaotic development makes password unreadable

The physicists then leave the system to develop chaotically for a period of time. The grey-scale matrix changes the colour of its pixels. The result is an image that no longer contains a recognizable word. The researchers subsequently encrypt this image with the combination of characters and save the result. "We therefore talk of a password-protected CAPTCHA or p-CAPTCHA," says Sergej Flach, who teamed up with Tetyana Laptyeva to achieve the decisive research results at the Max Planck Institute for the Physics of Complex Systems. Since the chaotic evolution of the initial image is deterministic and reversible, the whole procedure can be undone using the combination of characters, so that the user can again read the password hidden in the CAPTCHA.
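
The toy sketch below mimics this lock-and-reverse sequence under stated simplifications: the authors' critical Hamiltonian lattice is replaced by Arnold's cat map (an invertible, chaotic pixel scramble), and the weak password drives a simple keyed XOR. It illustrates the workflow, not the published method.

```python
# Toy p-CAPTCHA-style workflow: chaotic-but-reversible scramble + weak-password encryption.
import hashlib
import numpy as np

N = 64  # side length of the square grey-scale "CAPTCHA" image (assumed)

def cat_map_step(img):
    """One forward step of Arnold's cat map (invertible, chaotic pixel permutation)."""
    out = np.empty_like(img)
    for x in range(N):
        for y in range(N):
            out[(x + y) % N, (x + 2 * y) % N] = img[x, y]
    return out

def cat_map_inverse(img):
    """Exact inverse of one cat-map step."""
    out = np.empty_like(img)
    for x in range(N):
        for y in range(N):
            out[x, y] = img[(x + y) % N, (x + 2 * y) % N]
    return out

def keystream(password, nbytes):
    """Derive a repeatable keystream from the weak, memorable password."""
    stream, counter = b"", 0
    while len(stream) < nbytes:
        stream += hashlib.sha256(password.encode() + counter.to_bytes(4, "big")).digest()
        counter += 1
    return np.frombuffer(stream[:nbytes], dtype=np.uint8)

def lock(captcha_img, weak_password, steps=10):
    """Evolve the image chaotically, then encrypt the result with the weak password."""
    img = captcha_img.copy()
    for _ in range(steps):
        img = cat_map_step(img)
    return img ^ keystream(weak_password, img.size).reshape(img.shape)

def unlock(locked_img, weak_password, steps=10):
    """Reverse the encryption, then undo the chaotic evolution step by step."""
    img = locked_img ^ keystream(weak_password, locked_img.size).reshape(locked_img.shape)
    for _ in range(steps):
        img = cat_map_inverse(img)
    return img

if __name__ == "__main__":
    original = np.random.randint(0, 256, size=(N, N), dtype=np.uint8)  # stand-in CAPTCHA
    stored = lock(original, "tulip3")
    assert np.array_equal(unlock(stored, "tulip3"), original)
```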

"The character combination we use to encrypt the password in the CAPTCHA can be very easy to remember," explains Konstantin Kladko."We thus take account of the fact that most people only want to, or can only, remember simple passwords." The fact that the passwords are correspondingly weak is now no longer important, because the real protection comes from the encrypted password in the CAPTCHA.

On the one hand, the password hidden in the CAPTCHA is too long for computers to be able to guess it using a brute-force attack in a reasonable length of time. On the other, the physicists use a critical system to generate the password image. This system is close to a phase transition: with a phase transition, the system changes from one physical state to another, from the paramagnetic to the ferromagnetic state, for example. Close to the transition, regions repeatedly form which temporarily have already completed the transition. "The resulting image is always very grainy. Therefore, a computer cannot distinguish it from the original it is searching for," explains Sergej Flach.

"Although the study has just been submitted to a specialist journal and is only available online in an archive, it has already provoked a large number of responses in the community -- and not only in Hacker News," says Sergej Flach."I was very impressed by the depth of some comments in certain forums -- in Slashdot, for example." The specialists are obviously impressed by the ingenuity of the approach, which means passwords could be very difficult to crack in the future. Moreover, the method is easy and quick to implement in conventional computer systems."An expansion to several p-CAPTCHA levels is obvious," says Sergej Flach. Hoiwever, this requires increased computing power to reverse the chaotic development in a reasonable time:"We therefore want to investigate various Hamiltonian and non-Hamiltonian systems in the future to see whether they provide faster and even more effective protection."


Source

Wednesday, April 20, 2011

New Kid on the Plasmonic Block: Researchers Find Plasmonic Resonances in Semiconductor Nanocrystals

"We have demonstrated well-defined localized surface plasmon resonances arising from p-type carriers in vacancy-doped semiconductor quantum dots that should allow for plasmonic sensing and manipulation of solid-state processes in single nanocrystals," says Berkeley Lab director Paul Alivisatos, a nanochemistry authority who led this research."Our doped semiconductor quantum dots also open up the possibility of strongly coupling photonic and electronic properties, with implications for light harvesting, nonlinear optics, and quantum information processing."

Alivisatos is the corresponding author of a paper in the journal Nature Materials titled "Localized surface plasmon resonances arising from free carriers in doped quantum dots." Co-authoring the paper were Joseph Luther and Prashant Jain, along with Trevor Ewers.

The term"plasmonics" describes a phenomenon in which the confinement of light in dimensions smaller than the wavelength of photons in free space make it possible to match the different length-scales associated with photonics and electronics in a single nanoscale device. Scientists believe that through plasmonics it should be possible to design computer chip interconnects that are able to move much larger amounts of data much faster than today's chips. It should also be possible to create microscope lenses that can resolve nanoscale objects with visible light, a new generation of highly efficient light-emitting diodes, and supersensitive chemical and biological detectors. There is even evidence that plasmonic materials can be used to bend light around an object, thereby rendering that object invisible.

The plasmonic phenomenon was discovered in nanostructures at the interfaces between a noble metal, such as gold or silver, and a dielectric, such as air or glass. Directing an electromagnetic field at such an interface generates electronic surface waves that roll through the conduction electrons on a metal, like ripples spreading across the surface of a pond that has been plunked with a stone. Just as the energy in an electromagnetic field is carried in a quantized particle-like unit called a photon, the energy in such an electronic surface wave is carried in a quantized particle-like unit called a plasmon. The key to plasmonic properties is a match between the oscillation frequency of the plasmons and that of the incident photons, a phenomenon known as localized surface plasmon resonance (LSPR). Conventional scientific wisdom has held that LSPRs require a metal nanostructure, where the conduction electrons are not strongly attached to individual atoms or molecules. This has proved not to be the case, as Prashant Jain, a member of the Alivisatos research group and one of the lead authors of the Nature Materials paper, explains.

"Our study represents a paradigm shift from metal nanoplasmonics as we've shown that, in principle, any nanostructure can exhibit LSPRs so long as the interface has an appreciable number of free charge carriers, either electrons or holes," Jain says."By demonstrating LSPRs in doped quantum dots, we've extended the range of candidate materials for plasmonics to include semiconductors, and we've also merged the field of plasmonic nanostructures, which exhibit tunable photonic properties, with the field of quantum dots, which exhibit tunable electronic properties."

Jain and his co-authors made their quantum dots from the semiconductor copper sulfide, a material that is known to support numerous copper-deficient stoichiometries. Initially, the copper sulfide nanocrystals were synthesized using a common hot injection method. While this yielded nanocrystals that were intrinsically self-doped with p-type charge carriers, there was no control over the amount of charge vacancies or carriers.

"We were able to overcome this limitation by using a room-temperature ion exchange method to synthesize the copper sulfide nanocrystals," Jain says."This freezes the nanocrystals into a relatively vacancy-free state, which we can then dope in a controlled manner using common chemical oxidants."

By introducing enough free electrical charge carriers via dopants and vacancies, Jain and his colleagues were able to achieve LSPRs in the near-infrared range of the electromagnetic spectrum. The extension of plasmonics to include semiconductors as well as metals offers a number of significant advantages, as Jain explains.

"Unlike a metal, the concentration of free charge carriers in a semiconductor can be actively controlled by doping, temperature, and/or phase transitions," he says."Therefore, the frequency and intensity of LSPRs in dopable quantum dots can be dynamically tuned. The LSPRs of a metal, on the other hand, once engineered through a choice of nanostructure parameters, such as shape and size, is permanently locked-in."

Jain envisions quantum dots as being integrated into a variety of future film and chip-based photonic devices that can be actively switched or controlled, and also being applied to such optical applications as in vivo imaging. In addition, the strong coupling that is possible between photonic and electronic modes in such doped quantum dots holds exciting potential for applications in solar photovoltaics and artificial photosynthesis.

"In photovoltaic and artificial photosynthetic systems, light needs to be absorbed and channeled to generate energetic electrons and holes, which can then be used to make electricity or fuel," Jain says."To be efficient, it is highly desirable that such systems exhibit an enhanced interaction of light with excitons. This is what a doped quantum dot with an LSPR mode could achieve."

The potential for strongly coupled electronic and photonic modes in doped quantum dots arises from the fact that semiconductor quantum dots allow for quantized electronic excitations (excitons), while LSPRs serve to strongly localize or confine light of specific frequencies within the quantum dot. The result is an enhanced exciton-light interaction. Since the LSPR frequency can be controlled by changing the doping level, and excitons can be tuned by quantum confinement, it should be possible to engineer doped quantum dots for harvesting the richest frequencies of light in the solar spectrum.

Quantum dot plasmonics also hold intriguing possibilities for future quantum communication and computation devices.

"The use of single photons, in the form of quantized plasmons, would allow quantum systems to send information at nearly the speed of light, compared with the electron speed and resistance in classical systems," Jain says."Doped quantum dots by providing strongly coupled quantized excitons and LSPRs and within the same nanostructure could serve as a source of single plasmons."

Jain and others in Alivisatos' research group are now investigating the potential of doped quantum dots made from other semiconductors, such as copper selenide and germanium telluride, which also display tunable plasmonic or photonic resonances. Germanium telluride is of particular interest because it has phase change properties that are useful for memory storage devices.

"A long term goal is to generalize plasmonic phenomena to all doped quantum dots, whether heavily self-doped or extrinsically doped with relatively few impurities or vacancies," Jain says.

This research was supported by the DOE Office of Science.


Source

Tuesday, April 19, 2011

Clumsy Avatars: Perfection Versus Mortality in Games and Simulation

The shop is one of several projects Chang uses to explore humanity in technology. Chang, an electronic artist and recently appointed co-director of the Games and Simulation Arts and Sciences program at Rensselaer, sees the dialogue between perfection and mortality as an important influence in the growing world of games and simulation.

"There's this transcendence that technology promises us. At its extreme is the notion of immortality that -- with artificial intelligence, robotics, and virtual reality -- you could download your consciousness and take yourself out of the limitations of the physical body," said Chang."But at the same time, that's what makes us human: our frailty and our mortality."

In other words, while the "sell" behind technology is often about achieving perfection (with a smart phone all the answers are at hand, with GPS we never lose our way, in Second Life we are beautiful), the risk is a loss of humanity.

That dialogue and tension leads Chang to believe that the nascent world of gaming and simulation could become "a new cultural form" as great as literature, art, music, and theater.

"This is just the beginning; we don't really know what this is going to be, and 'games and simulation' is just the best term we have to describe a much larger form," said Chang."Twenty years ago nobody knew what the Web was going to be. There was this huge form on the horizon that we were sort of fumbling toward with different technological experiments, artistic experiments; I think this is what's going on with games and simulation right now.

"There are many things that are very difficult to do hands-on -- it's very difficult to simulate a disaster, it's very difficult to manipulate atoms and molecules at the atomic level -- and this is where simulation comes in handy," said Chang."That kind of learning experience, that way of gaining knowledge that's intuitive, that comes through experience and involvement, can be expanded to many other realms."

As an electronic artist, Chang's own work is at the intersection of virtual environments, experimental gaming, and contemporary media art.

"I'm interested in what you could call evocative and poetic experiences within technological systems -- creating that powerful experience that you can get from great music, theater, books, and paintings through immersive and interactive simulations as well," Chang said."But I'm also interested in the experiences of being human within technological systems."

Other recent projects include "Becoming," a computer-driven video installation in which the attributes of two animated figures -- each inhabiting their own space -- are interchanged. "Over time, this causes each figure to take on the attributes of the other, distorted by the structure of their digital information."

In"Insecurity Camera," an installation shown at art exhibits around the country, a"shy" security camera turns away at the approach of subjects.

"What I'm interested in is getting at those human qualities that are still there," Chang said."Some of this has to do with frailty, with fumbling, weakness, and failure. These are things that can get disguised, they can get swept under the rug when we think about technology."

Chang earned a bachelor of arts in computer science from Amherst College, and a master of fine arts in art and technology studies from the Art Institute of Chicago. His installations, performances, and immersive virtual reality environments have been exhibited in numerous venues and festivals worldwide, including Boston CyberArts, SIGGRAPH, the FILE International Electronic Language Festival in Sao Paulo, the Athens MediaTerra Festival, the Wired NextFest, and the Vancouver New Forms Festival, among others. He has designed interactive exhibits for museums such as the Museum of Contemporary Art in Chicago and the Field Museum of Natural History.

Chang teaches a two-semester game development course that joins students with backgrounds in all aspects of games -- computer programming, computer science, design, art, and writing -- in the process of creating games. The students start with a design, and proceed through all the steps of planning, creating art work, writing code, and refining their game.

"Think of it as a foundation into developing games that you can take into experimental game design and stretch beyond it," Chang said.

As the"new cultural form" evolves, Chang sees ample room for exploration.

For example, said Chang, virtual reality, in which experiences are staged in a wholly digital world, leads to different implications than augmented reality, in which digital elements overlay the physical world. One implication of virtual reality -- in which, as in Second Life, users can experiment with their identity -- lies in research which suggests that personal growth gains made within the virtual world transfer to the real world. One implication of augmented reality -- in which users may add digital elements that only they can access -- is the possibility of several people sharing the same physical world while experiencing divergent realities.

In the near term, the most immediate implications for the emerging form are, as might be expected, in entertainment and education.

"What's already happening is this enrichment of the notion of what entertainment is through games," Chang said."When you talk about games, you often have ideas of simple first-person shooter or action games. But within the realm of entertainment is an immense diversity of possibilities -- from complex emotional dramatic story-based games to casual games on your cell phone. There's this range of ways of playing from competitive, multiplayer, social to creative. This is just within the entertainment realm."


Source

Monday, April 18, 2011

Super-Small Transistor Created: Artificial Atom Powered by Single Electrons

The researchers report in Nature Nanotechnology that the transistor's central component -- an island only 1.5 nanometers in diameter -- operates with the addition of only one or two electrons. That capability would make the transistor important to a range of computational applications, from ultradense memories to quantum processors, powerful devices that promise to solve problems so complex that all of the world's computers working together for billions of years could not crack them.

In addition, the tiny central island could be used as an artificial atom for developing new classes of artificial electronic materials, such as exotic superconductors with properties not found in natural materials, explained lead researcher Jeremy Levy, a professor of physics and astronomy in Pitt's School of Arts and Sciences. Levy worked with lead author and Pitt physics and astronomy graduate student Guanglei Cheng, as well as with Pitt physics and astronomy researchers Feng Bi, Daniela Bogorin, and Cheng Cen. The Pitt researchers worked with a team from the University of Wisconsin at Madison led by materials science and engineering professor Chang-Beom Eom, including research associates Chung Wun Bark, Jae-Wan Park, and Chad Folkman. Also part of the team were Gilberto Medeiros-Ribeiro, of HP Labs, and Pablo F. Siles, a doctoral student at the State University of Campinas in Brazil.

Levy and his colleagues named their device SketchSET, or sketch-based single-electron transistor, after a technique developed in Levy's lab in 2008 that works like a microscopic Etch A Sketch™, the drawing toy that inspired the idea. Using the sharp conducting probe of an atomic force microscope, Levy can create such electronic devices as wires and transistors of nanometer dimensions at the interface of a crystal of strontium titanate and a 1.2-nanometer-thick layer of lanthanum aluminate. The electronic devices can then be erased and the interface used anew.

The SketchSET -- which is the first single-electron transistor made entirely of oxide-based materials -- consists of an island formation that can house up to two electrons. The number of electrons on the island -- which can be only zero, one, or two -- results in distinct conductive properties. Wires extending from the transistor carry additional electrons across the island.

One virtue of a single-electron transistor is its extreme sensitivity to an electric charge, Levy explained. Another property of these oxide materials is ferroelectricity, which allows the transistor to act as a solid-state memory. The ferroelectric state can, in the absence of external power, control the number of electrons on the island, which in turn can be used to represent the 1 or 0 state of a memory element. A computer memory based on this property would be able to retain information even when the processor itself is powered down, Levy said. The ferroelectric state also is expected to be sensitive to small pressure changes at nanometer scales, making this device potentially useful as a nanoscale charge and force sensor.

The research in Nature Nanotechnology also was supported in part by grants from the U.S. Defense Advanced Research Projects Agency (DARPA), the U.S. Army Research Office, the National Science Foundation, and the Fine Foundation.


Source

Sunday, April 17, 2011

Privacy Mode Helps Secure Android Smartphones

"There are a lot of concerns about potential leaks of personal information from smartphones," says Dr. Xuxian Jiang, an assistant professor of computer science at NC State and co-author of a paper describing the research."We have developed software that creates a privacy mode for Android systems, giving users flexible control over what personal information is available to various applications." The privacy software is called Taming Information-Stealing Smartphone Applications (TISSA).

TISSA works by creating a privacy setting manager that allows users to customize the level of information each smartphone application can access. Those settings can be adjusted any time that the relevant applications are being run -- not just when the applications are installed.

The TISSA prototype includes four possible privacy settings for each application. These settings are Trusted, Anonymized, Bogus and Empty. If an application is listed as Trusted, TISSA does not impose additional information access restrictions. If the user selects Anonymized, TISSA provides the application with generalized information that allows the application to run, without providing access to detailed personal information. The Bogus setting provides an application with fake results when it requests personal information. The Empty setting responds to information requests by saying the relevant information does not exist or is unavailable.

Jiang says TISSA could be easily modified to incorporate additional settings that would allow more fine-grained control of access to personal information. "These settings may be further specialized for different types of information, such as your contact list or your location," Jiang says. "The settings can also be specialized for different applications."

For example, a user may install a weather application that requires location data in order to provide the user with the local weather forecast. Rather than telling the application exactly where the user is, TISSA could be programmed to give the application generalized location data -- such as a random location within a 10-mile radius of the user. This would allow the weather application to provide the local weather forecast information, but would ensure that the application couldn't be used to track the user's movements.
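
A hypothetical sketch of the kind of policy dispatch these settings imply, using the article's 10-mile example for the Anonymized case; this is an illustration, not the TISSA implementation.

```python
# Illustrative dispatcher for the Trusted / Anonymized / Bogus / Empty settings.
import math
import random

EARTH_MILES_PER_DEGREE = 69.0   # rough conversion, adequate for the illustration

def answer_location_request(setting, real_lat, real_lon, radius_miles=10.0):
    """Return the location an app should see, according to its privacy setting."""
    if setting == "Trusted":
        return (real_lat, real_lon)                       # unmodified data
    if setting == "Anonymized":
        # A random point within radius_miles of the user: enough for a local
        # weather forecast, useless for tracking the user's movements.
        r = radius_miles * math.sqrt(random.random()) / EARTH_MILES_PER_DEGREE
        theta = random.uniform(0, 2 * math.pi)
        return (real_lat + r * math.cos(theta), real_lon + r * math.sin(theta))
    if setting == "Bogus":
        return (0.0, 0.0)                                  # plausible but fake result
    if setting == "Empty":
        return None                                        # "no location available"
    raise ValueError(f"unknown privacy setting: {setting}")

print(answer_location_request("Anonymized", 35.77, -78.64))  # e.g. near Raleigh, NC
```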

The researchers are currently exploring how to make this software available to Android users. "The software modification is relatively minor," Jiang says, "and could be incorporated through an over-the-air update."

The paper,"Taming Information-Stealing Smartphone Applications (on Android)," was co-authored by Jiang; Yajin Zhou, a Ph.D. student at NC State; Dr. Vincent Freeh, an associate professor of computer science at NC State; and Dr. Xinwen Zhang of Huawei America Research Center. The paper will be presented in June at the 4th International Conference on Trust and Trustworthy Computing, in Pittsburgh, Pa. The research was supported by the National Science Foundation and NC State's Secure Open Systems Initiative, which receives funding from the U.S. Army Research Office.


Source

Saturday, April 16, 2011

Hydrocarbons Deep Within Earth: New Computational Study Reveals How

The thermodynamic and kinetic properties of hydrocarbons at high pressures and temperatures are important for understanding carbon reservoirs and fluxes in Earth.

The work provides a basis for understanding experiments that demonstrated the polymerization of methane into higher hydrocarbons under pressure, as well as earlier methane-forming reactions.

Hydrocarbons (molecules composed of the elements hydrogen and carbon) are the main building block of crude oil and natural gas. Hydrocarbons contribute to the global carbon cycle (one of the most important cycles of Earth that allows for carbon to be recycled and reused throughout the biosphere and all of its organisms).

The team includes colleagues at UC Davis, Lawrence Livermore National Laboratory and Shell Projects & Technology. One of the researchers, UC Davis Professor Giulia Galli, is the co-chair of the Deep Carbon Observatory's Physics and Chemistry of Deep Carbon Directorate and a former LLNL researcher.

Geologists and geochemists believe that nearly all (more than 99 percent) of the hydrocarbons in commercially produced crude oil and natural gas are formed by the decomposition of the remains of living organisms, which were buried under layers of sediments in Earth's crust, a region approximately 5-10 miles below Earth's surface.

But hydrocarbons of purely chemical deep crustal or mantle origin (abiogenic) could occur in some geologic settings, such as rifts or subduction zones, said Galli, a senior author on the study.

"Our simulation study shows that methane molecules fuse to form larger hydrocarbon molecules when exposed to the very high temperatures and pressures of the Earth's upper mantle," Galli said."We don't say that higher hydrocarbons actually occur under the realistic 'dirty' Earth mantle conditions, but we say that the pressures and temperatures alone are right for it to happen.

Galli and colleagues used the Mako computer cluster in Berkeley and computers at Lawrence Livermore to simulate the behavior of carbon and hydrogen atoms at the enormous pressures and temperatures found 40 to 95 miles deep inside Earth. They used sophisticated techniques based on first principles and the computer software system Qbox, developed at UC Davis.

They found that hydrocarbons with multiple carbon atoms can form from methane (a molecule with only one carbon and four hydrogen atoms) at temperatures greater than 1,500 K (2,240 degrees Fahrenheit) and pressures 50,000 times those at Earth's surface (conditions found about 70 miles below the surface).

"In the simulation, interactions with metal or carbon surfaces allowed the process to occur faster -- they act as 'catalysts,'" said UC Davis' Leonardo Spanu, the first author of the paper.

The research does not address whether hydrocarbons formed deep in Earth could migrate closer to the surface and contribute to oil or gas deposits. However, the study points to possible microscopic mechanisms of hydrocarbon formation under very high temperatures and pressures.

Galli's co-authors on the paper are Spanu; Davide Donadio at the Max Planck Institute in Mainz, Germany; Detlef Hohl at Shell Global Solutions, Houston; and Eric Schwegler of Lawrence Livermore National Laboratory.


Source

Friday, April 15, 2011

New Spin on Graphene Makes It Magnetic

The results, reported in Science, could be a potentially huge breakthrough in the field of spintronics.

Spintronics is a group of emerging technologies that exploit the intrinsic spin of the electron, in addition to its fundamental electric charge that is exploited in microelectronics.

Billions of spintronics devices such as sensors and memories are already being produced. Every hard disk drive has a magnetic sensor that uses a flow of spins, and magnetic random access memory (MRAM) chips are becoming increasingly popular.

The findings are part of a large international effort involving research groups from the US, Russia, Japan and the Netherlands.

The key feature for spintronics is to connect the electron spin to electric current as current can be manipulated by means routinely used in microelectronics.

It is believed that, in future spintronics devices and transistors, coupling between the current and spin will be direct, without using magnetic materials to inject spins as it is done at the moment.

So far, this route has only been demonstrated by using materials with so-called spin-orbit interaction, in which tiny magnetic fields created by nuclei affect the motion of electrons through a crystal. The effect is generally small which makes it difficult to use.

The researchers found a new way to interconnect spin and charge by applying a relatively weak magnetic field to graphene and found that this causes a flow of spins in the direction perpendicular to electric current, making a graphene sheet magnetised.

The effect resembles the one caused by spin-orbit interaction but is larger and can be tuned by varying the external magnetic field.

The Manchester researchers also show that graphene placed on boron nitride is an ideal material for spintronics because the induced magnetism extends over macroscopic distances from the current path without decay.

The team believes their discovery offers numerous opportunities for redesigning current spintronics devices and making new ones such as spin-based transistors.

Professor Geim said:"The holy grail of spintronics is the conversion of electricity into magnetism or vice versa.

"We offer a new mechanism, thanks to unique properties of graphene. I imagine that many venues of spintronics can benefit from this finding."

Antonio Castro Neto, a physics professor from Boston who wrote a news article for Science magazine accompanying the research paper, commented: "Graphene is opening doors for many new technologies.

"Not surprisingly, the 2010 Nobel Physics prize was awarded to Andre Geim and Kostya Novoselov for their groundbreaking experiments in this material.

"Apparently not satisfied with what they have accomplished so far, Geim and his collaborators have now demonstrated another completely unexpected effect that involves quantum mechanics at ambient conditions. This discovery opens a new chapter to the short but rich history of graphene."


Source

Thursday, April 14, 2011

Magnetic New Graphene Discovery

The finding by a team of Maryland researchers, led by Physics Professor Michael S. Fuhrer of the UMD Center for Nanophysics and Advanced Materials, is the latest of many amazing properties discovered for graphene.

A honeycomb sheet of carbon atoms just one atom thick, graphene is the basic constituent of graphite. Some 200 times stronger than steel, it conducts electricity at room temperature better than any other known material (a 2008 discovery by Fuhrer et al.). Graphene is widely seen as having great, perhaps even revolutionary, potential for nanotechnology applications. The 2010 Nobel Prize in physics was awarded to scientists Konstantin Novoselov and Andre Geim for their 2004 discovery of how to make graphene.

In their new graphene discovery, Fuhrer and his University of Maryland colleagues have found that missing atoms in graphene, called vacancies, act as tiny magnets -- they have a "magnetic moment." Moreover, these magnetic moments interact strongly with the electrons in graphene which carry electrical currents, giving rise to a significant extra electrical resistance at low temperature, known as the Kondo effect. The results appear in the paper "Tunable Kondo effect in graphene with defects" published this month in Nature Physics.

The Kondo effect is typically associated with adding tiny amounts of magnetic metal atoms, such as iron or nickel, to a non-magnetic metal, such as gold or copper. Finding the Kondo effect in graphene with vacancies was surprising for two reasons, according to Fuhrer.

"First, we were studying a system of nothing but carbon, without adding any traditionally magnetic impurities. Second, graphene has a very small electron density, which would be expected to make the Kondo effect appear only at extremely low temperatures," he said.

The team measured the characteristic temperature for the Kondo effect in graphene with vacancies to be as high as 90 Kelvin, which is comparable to that seen in metals with very high electron densities. Moreover, the Kondo temperature can be tuned by the voltage on an electrical gate, an effect not seen in metals. They theorize that the same unusual properties that result in graphene's electrons acting as if they have no mass also make them interact very strongly with certain kinds of impurities, such as vacancies, leading to a strong Kondo effect at a relatively high temperature.

Fuhrer thinks that if vacancies in graphene could be arranged in just the right way, ferromagnetism could result. "Individual magnetic moments can be coupled together through the Kondo effect, forcing them all to line up in the same direction," he said.

"The result would be a ferromagnet, like iron, but instead made only of carbon. Magnetism in graphene could lead to new types of nanoscale sensors of magnetic fields. And, when coupled with graphene's tremendous electrical properties, magnetism in graphene could also have interesting applications in the area of spintronics, which uses the magnetic moment of the electron, instead of its electric charge, to represent the information in a computer.

"This opens the possibility of 'defect engineering' in graphene -- plucking out atoms in the right places to design the magnetic properties you want," said Fuhrer.

This research was supported by grants from the National Science Foundation and the Office of Naval Research.


Source

Wednesday, April 13, 2011

Accelerate Data Storage by Several Orders of Magnitude? Ultra-Fast Magnetic Reversal Observed

With a constantly growing flood of information, we are being inundated with increasing quantities of data, which we in turn want to process faster than ever. Oddly, the physical limit to the recording speed of magnetic storage media has remained largely unresearched. In experiments performed on the particle accelerator BESSY II of Helmholtz-Zentrum Berlin, Dutch researchers have now achieved ultrafast magnetic reversal and discovered a surprising phenomenon.

In magnetic memory, data is encoded by reversing the magnetization of tiny points. Such memory works using the so-called magnetic moments of atoms, which can be in either "parallel" or "antiparallel" alignment in the storage medium to represent "0" and "1."

The alignment is determined by a quantum mechanical effect called "exchange interaction." This is the strongest and therefore the fastest "force" in magnetism. It takes less than a hundred femtoseconds to restore magnetic order if it has been disturbed. One femtosecond is a millionth of a billionth of a second. Ilie Radu and his colleagues have now studied the hitherto unknown behaviour of magnetic alignment before the exchange interaction kicks in. Together with researchers from Berlin and York, they have published their results in Nature.

For their experiment, the researchers needed an ultra-short laser pulse to heat the material and thus induce magnetic reversal. They also needed an equally short X-ray pulse to observe how the magnetization changed. This unique combination of a femtosecond laser and circular polarized, femtosecond X-ray light is available in one place in the world: at the synchrotron radiation source BESSY II in Berlin, Germany.

In their experiment, the scientists studied an alloy of gadolinium, iron and cobalt (GdFeCo), in which the magnetic moments naturally align antiparallel. They fired a laser pulse lasting 60 femtoseconds at the GdFeCo and observed the reversal using the circular-polarized X-ray light, which also allowed them to distinguish the individual elements. What they observed came as a complete surprise: The Fe atoms already reversed their magnetization after 300 femtoseconds while the Gd atoms required five times as long to do so. That means the atoms were all briefly in parallel alignment, making the material strongly magnetized. "This is as strange as finding the north pole of a magnet reversing slower than the south pole," says Ilie Radu.

With their observation, the researchers have not only proven that magnetic reversal can take place in femtosecond timeframes, they have also derived a concrete technical application from it: "Translated to magnetic data storage, this would signify a read/write rate in the terahertz range. That would be around 1000 times faster than present-day commercial computers," says Radu.
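
As a rough back-of-the-envelope check on that claim, here is the arithmetic in a few lines of Python; only the 300-femtosecond Fe reversal time and the factor of five for Gd come from the experiment, the rest is illustrative.

    # Back-of-the-envelope check of the terahertz claim (illustrative, not from the paper)
    fe_reversal_s = 300e-15              # Fe sublattice reversal time observed: 300 fs
    gd_reversal_s = 5 * fe_reversal_s    # Gd took roughly five times as long
    write_rate_hz = 1.0 / gd_reversal_s  # treat one full reversal as one write event
    print(f"full reversal in {gd_reversal_s * 1e12:.1f} ps -> about {write_rate_hz / 1e12:.2f} THz")
    print(f"compared with ~1 GHz write rates: roughly {write_rate_hz / 1e9:.0f}x faster")

Even the slower gadolinium sublattice implies switching rates approaching a terahertz, broadly consistent with the "around 1000 times faster" estimate.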


Source

Tuesday, April 12, 2011

Toward a Computer Model of the Brain: New Technique Poised to Untangle Brain's Complexity

A new area of research is emerging in neuroscience known as 'connectomics'. With parallels to genomics, which maps our genetic make-up, connectomics aims to map the brain's connections (known as 'synapses'). By mapping these connections -- and hence how information flows through the circuits of the brain -- scientists hope to understand how perceptions, sensations and thoughts are generated in the brain and how these functions go wrong in diseases such as Alzheimer's disease, schizophrenia and stroke.

Mapping the brain's connections is no trivial task, however: there are estimated to be one hundred billion nerve cells ('neurons') in the brain, each connected to thousands of other nerve cells -- making an estimated 150 trillion synapses. Dr Tom Mrsic-Flogel, a Wellcome Trust Research Career Development Fellow at UCL (University College London), has been leading a team of researchers trying to make sense of this complexity.

"How do we figure out how the brain's neural circuitry works?" he asks."We first need to understand the function of each neuron and find out to which other brain cells it connects. If we can find a way of mapping the connections between nerve cells of certain functions, we will then be in a position to begin developing a computer model to explain how the complex dynamics of neural networks generate thoughts, sensations and movements."

Nerve cells in different areas of the brain perform different functions. Dr Mrsic-Flogel and colleagues focus on the visual cortex, which processes information from the eye. For example, some neurons in this part of the brain specialise in detecting the edges in images; some will activate upon detecting a horizontal edge, others a vertical edge. Higher up in the visual hierarchy, some neurons respond to more complex visual features such as faces: lesions to this area of the brain can prevent people from being able to recognise faces, even though they can recognise individual features such as the eyes and nose, as was famously described in the book The Man Who Mistook His Wife for a Hat by Oliver Sacks.

In a study published online April 10 in the journal Nature, the team at UCL describe a technique developed in mice which enables them to combine information about the function of neurons together with details of their synaptic connections.

The researchers looked into the visual cortex of the mouse brain, which contains thousands of neurons and millions of different connections. Using high resolution imaging, they were able to detect which of these neurons responded to a particular stimulus, for example a horizontal edge.

Taking a slice of the same tissue, the researchers then applied small currents to a subset of neurons in turn to see which other neurons responded -- and hence which of these were synaptically connected. By repeating this technique many times, the researchers were able to trace the function and connectivity of hundreds of nerve cells in visual cortex.

The study has resolved the debate about whether local connections between neurons are random -- in other words, whether nerve cells connect sporadically, independent of function -- or whether they are ordered, for example constrained by the properties of the neuron in terms of how it responds to particular stimuli. The researchers showed that neurons which responded very similarly to visual stimuli, such as those which respond to edges of the same orientation, tend to connect to each other much more than those that prefer different orientations.
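
The flavor of that analysis can be sketched in a few lines of code. Everything below -- the orientation preferences, the pair list and the connection test -- is hypothetical placeholder data standing in for the imaging and paired-recording measurements, but it shows the comparison being made: connection rates for similarly tuned versus dissimilarly tuned pairs.

    import numpy as np

    # Hypothetical stand-ins for the real measurements: each neuron's preferred edge
    # orientation (degrees) and the outcome of the paired-recording connectivity test.
    rng = np.random.default_rng(0)
    n_neurons = 60
    preferred_deg = rng.uniform(0, 180, size=n_neurons)

    def orientation_difference(a, b):
        d = abs(a - b) % 180
        return min(d, 180 - d)           # orientation is circular over 180 degrees

    def toy_connection_test(i, j):
        # Toy rule standing in for the slice experiment: similarly tuned pairs
        # connect more often than dissimilar ones.
        p = 0.3 if orientation_difference(preferred_deg[i], preferred_deg[j]) < 30 else 0.1
        return rng.random() < p

    similar, dissimilar = [], []
    for i in range(n_neurons):
        for j in range(i + 1, n_neurons):
            bucket = similar if orientation_difference(preferred_deg[i], preferred_deg[j]) < 30 else dissimilar
            bucket.append(toy_connection_test(i, j))

    print("connection rate, similar tuning:   ", round(float(np.mean(similar)), 3))
    print("connection rate, dissimilar tuning:", round(float(np.mean(dissimilar)), 3))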

Using this technique, the researchers hope to begin generating a wiring diagram of a brain area with a particular behavioural function, such as the visual cortex. This knowledge is important for understanding the repertoire of computations carried out by neurons embedded in these highly complex circuits. The technique should also help reveal the functional circuit wiring of regions that underpin touch, hearing and movement.

"We are beginning to untangle the complexity of the brain," says Dr Mrsic-Flogel."Once we understand the function and connectivity of nerve cells spanning different layers of the brain, we can begin to develop a computer simulation of how this remarkable organ works. But it will take many years of concerted efforts amongst scientists and massive computer processing power before it can be realised."

The research was supported by the Wellcome Trust, the European Research Council, the European Molecular Biology Organisation, the Medical Research Council, the Overseas Research Students Award Scheme and UCL.

"The brain is an immensely complex organ and understanding its inner workings is one of science's ultimate goals," says Dr John Williams, Head of Neuroscience and Mental Health at the Wellcome Trust."This important study presents neuroscientists with one of the key tools that will help them begin to navigate and survey the landscape of the brain."


Source

Monday, April 11, 2011

Artificial Intelligence for Improving Data Processing

Within this framework, five leading scientists presented the latest advances in their research work on different aspects of AI. The speakers tackled issues ranging from the more theoretical, such as algorithms capable of solving combinatorial problems, to robots that can reason about emotions, systems that use vision to monitor activities, and automated players that learn how to win in a given situation. "Inviting speakers from leading research groups allows us to offer a panoramic view of the main problems and open techniques in the area, including advances in video and multi-sensor systems, task planning, automated learning, games, and artificial consciousness or reasoning," the experts noted.

Participants from the AVIRES (Artificial Vision and Real Time Systems) research group at the University of Udine gave a seminar introducing data fusion techniques and distributed artificial vision. In particular, they dealt with automated surveillance systems built on visual sensor networks, from basic techniques for image processing and object recognition to Bayesian reasoning for understanding activities, automated learning and data fusion for building high-performance systems. Dr. Simon Lucas, professor at the University of Essex, editor in chief of IEEE Transactions on Computational Intelligence and AI in Games and a researcher focusing on the application of AI techniques to games, presented the latest trends in algorithms for generating game strategies. During his presentation, he pointed out the strength of UC3M in this area, citing its victory in two of the competitions held at the international level during the most recent edition of the Conference on Computational Intelligence and Games.

In addition, Enrico Giunchiglia, professor at the University of Genoa and former president of the Council of the International Conference on Automated Planning and Scheduling (ICAPS), described the most recent work in the area of logical satisfiability, which is growing rapidly due to its applications in circuit design and in task planning.

Artificial Intelligence (AI) is as old as computer science and has generated ideas, techniques and applications that make it possible to solve difficult problems. The field is very active and offers solutions to very diverse sectors. The number of industrial applications that incorporate an AI technique is very high, and from the scientific point of view, there are many specialized journals and congresses. Furthermore, new lines of research are constantly being opened, and there is still great room for improvement in knowledge transfer between researchers and industry. These are some of the main ideas gathered at the 4th International Seminar on New Issues on Artificial Intelligence, organized by the SCALAB group in the UC3M Computer Engineering Department at the Leganés campus of this Madrid university.

The future of Artificial Intelligence

This seminar also included a talk on the promising future of AI. "The tremendous surge in the number of devices capable of capturing and processing information, together with the growth of computing capacity and the advances in algorithms, enormously boosts the possibilities for practical application," the researchers from the SCALAB group pointed out. "Among them we can cite the construction of computer programs that make life easier, which take decisions in complex environments or which allow problems to be solved in environments that are difficult for people to access," they noted. From the point of view of these research trends, more and more emphasis is being placed on developing systems capable of learning and demonstrating intelligent behavior without being tied to replicating a human model.

AI will allow advances in the development of systems capable of automatically understanding a situation and its context using sensor data and information systems, as well as establishing plans of action, from support applications to decision-making in dynamic situations. According to the researchers, this is due to the rapid advances in, and availability of, sensor technology, which provides a continuous flow of data about the environment, information that must be handled appropriately at a data and information fusion node. Likewise, the development of sophisticated techniques for task planning allows plans of action to be composed, executed, checked for correct execution, rectified in case of failure, and finally improved by learning from the mistakes made.

This technology has enabled a wide range of applications such as integrated systems for surveillance, monitoring and detecting anomalies, activity recognition, teleassistance systems, transport logistics planning, etc. According to Antonio Chella, Full Professor at the University of Palermo and an expert in Artificial Consciousness, the future of AI will involve discovering a new meaning of the word "intelligence." Until now, it has been equated with automated reasoning in software systems, but in the future AI will tackle more daring concepts such as the embodiment of intelligence in robots, as well as emotions, and above all consciousness.


Source

Sunday, April 10, 2011

Control the Cursor With Power of Thought

In a new study, scientists from Washington University demonstrated that humans can control a cursor on a computer screen using words spoken out loud and in their head, a finding with huge potential applications for patients who may have lost their speech through brain injury or disabled patients with limited movement.

By directly connecting the patient's brain to a computer, the researchers showed that the computer could be controlled with up to 90% accuracy even when no prior training was given.

The study, published April 7 in IOP Publishing's Journal of Neural Engineering, involves a technique called electrocorticography (ECoG) -- the placing of electrodes directly onto a patient's brain to record electrical activity -- which has previously been used to identify regions of the brain that cause epilepsy and has led to effective treatments.

More recently, the process of ECoG has been applied to brain-computer interfaces (BCI) which aim to assist or repair brain functions and have already been used to restore the sight of one patient and stimulate limb movement in others.

The study used four patients, between the ages of 36 and 48, who suffered from epilepsy. Each patient underwent a craniotomy -- an invasive procedure in which part of the skull is opened so that electrodes can be placed directly onto the brain -- and was monitored whilst undergoing trials.

During the trials, the electrodes placed on the patient's brain picked up signals that were acquired, processed, and stored on a computer.

The trials involved the patients sitting in front of a screen and trying to move a cursor toward a target using pre-defined words that were associated with specific directions. For instance, saying or thinking of the word "AH" would move the cursor right.
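
In code, the control loop this implies is very simple. Only the "AH"-means-right pairing is taken from the article; the other word-to-direction mappings below are hypothetical placeholders, and the classifier that turns ECoG signals into words is represented here by a plain list of already-decoded words.

    # Toy cursor-control loop driven by decoded words (only "AH" -> right comes from the article)
    DIRECTIONS = {"AH": (1, 0), "OO": (-1, 0), "EE": (0, -1), "EH": (0, 1)}  # mostly hypothetical

    def step_cursor(position, decoded_word, step=10):
        dx, dy = DIRECTIONS.get(decoded_word, (0, 0))   # unrecognized words leave the cursor still
        return (position[0] + step * dx, position[1] + step * dy)

    cursor = (400, 300)
    for word in ["AH", "AH", "EE"]:   # stand-in for words decoded from the ECoG signal
        cursor = step_cursor(cursor, word)
    print(cursor)                     # (420, 290)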

At some point in the future researchers hope to permanently insert implants into a patient's brain to help restore functionality and, even more impressively, read someone's mind.

Dr. Eric C Leuthardt of Washington University School of Medicine, the lead author, said: "This is one of the earliest examples, to a very, very small extent, of what is called 'reading minds' -- detecting what people are saying to themselves in their internal dialogue."

This study was the first to demonstrate microscale ECoG recordings, meaning that future operations requiring this technology may use an implant that is very small and minimally invasive.

Also, the study identified that speech intentions can be acquired through a site that is less than a centimetre wide, which would require only a small insertion into the brain. This would greatly reduce the risk of the surgical procedure.

Dr Leuthardt continued, "We want to see if we can not just detect when you're saying dog, tree, tool or some other word, but also learn what the pure idea of that looks like in your mind. It's exciting and a little scary to think of reading minds, but it has incredible potential for people who can't communicate or are suffering from other disabilities."


Source

Saturday, April 9, 2011

Free Software Makes Computer Mouse Easier for People With Disabilities

So it often goes for computer users whose motor disabilities prevent them from easily using a mouse.

As the population ages, more people are having trouble with motor control, but a University of Washington team has invented two mouse cursors that make clicking targets a whole lot easier. And neither requires additional computer hardware -- just some free, downloadable software. The researchers hope that in exchange for the software, users offer feedback.

The Pointing Magnifier combines an area cursor with visual and motor magnification, reducing the need for fine, precise pointing. The UW's AIM Research Group, which invented the Pointing Magnifier, found that users can acquire targets, even small ones, much more easily and 23 percent faster with the Pointing Magnifier.

The magnifier runs on Windows-based computer systems. It replaces the conventional cursor with a larger, circular cursor that can be made even larger for users who have less motor control. To acquire a target, the user places the large cursor somewhere over the target, and clicks. The Pointing Magnifier then magnifies everything under that circular area until it fills the screen, making even tiny targets large. The user then clicks with a point cursor inside that magnified area, acquiring the target. Although the Pointing Magnifier requires two clicks, it's much easier to use than a conventional mouse, which can require many clicks to connect with a target.

Screen magnifiers for people with visual impairments have been around a long time, but such magnifiers affect only the size of screen pixels, not the motor space in which users act, thus offering no benefit to users with motor impairments. The Pointing Magnifier enlarges both visual and motor space.
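
A rough sketch of the geometry behind those two clicks, assuming a circular area cursor whose contents are blown up to fill the screen; the transform and the numbers are illustrative, not the UW implementation.

    # Sketch of the two-click Pointing Magnifier mapping (assumed geometry, not the UW code)
    def resolve_click(area_center, area_radius, screen_size, click_in_magnified_view):
        """Map a click made in the full-screen magnified view back to original screen coordinates."""
        screen_w, screen_h = screen_size
        scale = min(screen_w, screen_h) / (2.0 * area_radius)   # how much the area was enlarged
        cx, cy = area_center
        mx, my = click_in_magnified_view
        # The magnified area is centered on the screen; invert the zoom about that center.
        return (cx + (mx - screen_w / 2.0) / scale,
                cy + (my - screen_h / 2.0) / scale)

    # First click: 60-pixel-radius area cursor placed roughly over the target at (500, 400).
    # Second click: inside the magnified view, where the target now appears large.
    print(resolve_click((500, 400), 60, (1920, 1080), (1000, 560)))   # about (504.4, 402.2)

Because the second click happens in the enlarged space, the pointing precision demanded of the user drops by the magnification factor.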

Software for the Pointing Magnifier includes a control panel that allows the user to adjust color, transparency level, magnification factor, and area cursor size. User preferences are saved when the application is closed. Keyboard shortcuts quickly enable or disable the Pointing Magnifier. The UW team is also making shortcuts customizable.

"It's less expensive to create computer solutions for people who have disabilities if you focus on software rather than specialized hardware, and software is usually easier to procure than hardware," said Jacob O. Wobbrock, an assistant professor in the Information School who leads the AIM Group.

His group's paper on enhanced area cursors, including the Pointing Magnifier, was presented at the 2010 User Interface Software and Technology symposium in New York. A follow-on paper will be presented at a similar conference in May.

Another AIM technology, the Angle Mouse, similarly helps people with disabilities. Like the Pointing Magnifier, it may be downloaded, and two videos, one for general audiences and another for academic ones, are available as well.

When the Angle Mouse cursor initially blasts towards a target, the spread of movement angles, even for people with motor impairments, tends to be narrow, so the Angle Mouse keeps the cursor moving fast. However, when the cursor nears its target and the user tries to land, the angles formed by movements diverge sharply, so the Angle Mouse slows the cursor, enlarges motor space and makes the target easier to get into. The more trouble a user has, the larger the target will be made in motor space. (The target's visual appearance will not change.)
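
A minimal sketch of that idea, not the published Angle Mouse algorithm: keep a short history of movement directions and lower the pointer gain, which effectively enlarges the target in motor space, once those directions start to diverge.

    import math

    # Illustrative angle-based pointer gain (a sketch of the idea, not the published algorithm)
    def angular_spread(moves):
        """Standard deviation of movement angles (radians) over recent (dx, dy) samples."""
        angles = [math.atan2(dy, dx) for dx, dy in moves if (dx, dy) != (0, 0)]
        if len(angles) < 2:
            return 0.0
        mean = sum(angles) / len(angles)
        return math.sqrt(sum((a - mean) ** 2 for a in angles) / len(angles))

    def pointer_gain(moves, base_gain=2.0, min_gain=0.5, sensitivity=1.5):
        """High gain during straight ballistic motion, low gain during corrective jitter."""
        return max(min_gain, base_gain - sensitivity * angular_spread(moves))

    print(pointer_gain([(10, 0), (11, 1), (12, 0)]))   # straight run toward a target: near base gain
    print(pointer_gain([(3, 2), (-2, 3), (1, -3)]))    # jittery landing: gain clamped low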

Wobbrock compares the Angle Mouse to a race car. "On a straightaway, when the path is open, the car whips along, but in a tight corner, the car slows and makes a series of precise corrections, ensuring its accuracy."

A study of the Angle Mouse included 16 people, half of whom had motor impairments. The Angle Mouse improved motor-impaired pointing performance by 10 percent over the regular Windows™ default mouse and 11 percent over sticky icons -- an earlier innovation in which targets slow the cursor when it is inside them.

"Pointing is an essential part of using a computer, but it can be quite difficult and time consuming if dexterity is a problem," Wobbrock said."Even shaving one second off each time a person points may save hours over the course of a year."

Wobbrock suggests that users try both the Pointing Magnifier and the Angle Mouse before deciding which they prefer.

"Our cursors make ubiquitous mice, touchpads, and trackballs more effective for people with motor impairments without requiring new, custom hardware," Wobbrock said."We're achieving accessibility by improving devices that computer users already have. Making computers friendlier for everyone is the whole point of our work."

The Pointing Magnifier work was funded by the National Science Foundation and the Natural Sciences and Engineering Research Council of Canada.

Co-authors of the research paper that included the Pointing Magnifier are Leah Findlater, Alex Jansen, Kristen Shinohara, Morgan Dixon, Peter Kamb, Joshua Rakita and Wobbrock.

The Angle Mouse work was supported by Microsoft Research, Intel Research and the National Science Foundation.

Co-authors of the Angle Mouse paper are Wobbrock, James Fogarty, Shih-Yen (Sean) Liu, Shunichi Kimuro, and Susumu Harada.


Source

Friday, April 8, 2011

Mathematical Model Simulating Rat Whiskers Provides Insight Into Sense of Touch

Hundreds of papers are published each year that use the rat whisker system as a model to understand brain development and neural processing. Rats move their whiskers rhythmically against objects to explore the environment by touch. Using only tactile information from its whiskers, a rat can determine all of an object's spatial properties, including size, shape, orientation and texture.

But there is a big missing piece that prevents a full understanding of the neural signals recorded in these studies: no one knows how to represent the "touch" of a whisker in terms of mechanical variables.

"We don't understand touch nearly as well as other senses," says Mitra Hartmann, associate professor of biomedical engineering and mechanical engineering at the McCormick School of Engineering and Applied Science."We know that visual and auditory stimuli can be quantified by the intensity and frequency of light and sound, but we don't fully understand the mechanics that generate our sense of touch."

To create a model that starts to quantify these mechanics, Hartmann's team first studied the structure of the rat whisker array -- the 30 whiskers arranged in a regular pattern on each side of a rat's face. By analyzing them in both two- and three-dimensional scans, they defined the relationship between the size and shape of each whisker and its placement on the face of the rat.

Using this information, the team created a model that quantifies the full shape and structure of the rat head and whisker array. The model now allows the team to simulate the rat "whisking" against different objects and to predict the full pattern of inputs into the whisker system as a rat encounters an object. The simulations can then be compared against real behavior.

The research is published online in the journal PLoS Computational Biology.

Understanding the mechanics of the rat whisker system may provide a step toward understanding the human sense of touch.

"The big question our laboratory is interested in is how do animals, including humans, actively move their sensors through the environment and somehow turn that sensory data into a stable perception of the world," Hartmann says.

To determine how a rat can sense the shape of an object, Hartmann's team previously developed a light sheet to monitor the precise locations of the whiskers as they came in contact with the object. Using high-speed video, the team can also analyze how the rat moves its head to explore different shapes. These behavioral observations can then be paired with the output from the model.

These advances will provide insight into the sense of touch but may also enable new technologies that could make use of the whisker system. For example, Hartmann's lab created arrays of robotic whiskers that can, in several respects, mimic the capabilities of mammalian whiskers. The researchers demonstrated that these arrays can sense information about both object shape and fluid flow.

"We show that the bending moment, or torque, at the whisker base can be used to generate three-dimensional spatial representations of the environment," Hartmann says."We used this principle to make arrays of robotic whiskers that can replicate much of the basic mechanics of rat whiskers." The technology, she said, could be used to extract the three-dimensional features of almost any solid object.

Hartmann envisions that a better understanding of the whisker system may be useful for engineering applications in which the use of cameras is limited. But most importantly, a better understanding of the rat whisker system could translate into a better understanding of ourselves.

"Although whiskers and hands are very different, the basic neural pathways that process tactile information are in many respects similar across mammals," Hartmann says."A better understanding of neural processing in the whisker system may provide insights into how our own brains process information."

In addition to Hartmann, other authors of the paper are Blythe Towal, Brian Quist and Joseph Solomon, all of Northwestern, and Venkatesh Gopal of Elmhurst College.


Source

Thursday, April 7, 2011

iPad Helps Archaeologists

Teams of UC archaeologists have spent more than a decade at the site of the Roman city that was buried by the eruption of Mount Vesuvius in 79 AD. The project is producing a complete archaeological analysis of homes, shops and businesses at a forgotten area inside one of the busiest gates of Pompeii, the Porta Stabia.

Through years of painstaking recording of their excavations, the researchers are exploring the social and cultural scene of a lost city and how the middle class neighborhood influenced Pompeian and Roman culture.

The standard archaeological approach to recording this history -- a 300-year tradition -- involves taking precise measurements, drawings and notes, all recorded on paper with pencil. But last summer, the researchers found that the handheld computers and their ability to digitally record and immediately communicate information held many advantages over a centuries-honed tradition of archaeological recording.

"There's a common, archival nature to what we're doing. There's a precious timelessness, a priceless sort of quality to the data that we're gathering, so we have made an industry of being very, very careful about how we record things," explains Ellis."Once we've excavated through it, it's gone, so ever since our undergraduate years, we've become very, very good and consistent at recording. We're excited about discovering there's another way," Ellis says.

"Because the trench supervisor is so busy, it can take days to share handwritten notes between trenches," explains Wallrodt."Now, we can give them an (electronic) notebook every day if they want it."

Wallrodt says one of the biggest concerns of adopting the new technology was switching from drawing on a large sheet of paper to sticking one's finger on the iPad's glass. "With the iPad, there's also a lot less to carry. There's no big board for drawing, no ruler and no calculator."

The researchers say they plan to pack even more iPads on their trip to Pompeii this June. The research project is funded by the Louise Taft Semple Fund through the UC Department of Classics.

The iPad research experiment, led by Steven Ellis, UC assistant professor of classics, and John Wallrodt, a senior research associate for the Department of Classics, has been featured on the National Geographic Channel as well as Apple's website. That's after the researchers took six iPads to UC's excavation site at Pompeii last summer. The iPads themselves were just being introduced at the time.


Source

Wednesday, April 6, 2011

Gaming, Simulation Tools Merged to Create Models for Border Security

The Borders High Level Model (HLM) uses a serious gaming platform known as Ground Truth, a force-on-force battle simulation tool called Dante™, and the work of several collaborating organizations.

"There's a lot of debate going on in the government concerning the technology and infrastructure investments that need to be made along the border," explained Jason Reinhardt, who serves as the Borders HLM project manager at Sandia."How much fence do we need? What kind of fence? What is the right mix of border personnel and technology? How can sensors, vehicles and other technical equipment most effectively be used? With Borders HLM, CBP officials can simulate their defensive architectures, accurately measure their performance and start to answer these difficult questions."

Ground Truth, initially funded through internal Sandia investments in 2007, is a gaming platform originally designed to prepare decision makers and first responders for weapons of mass destruction/weapons of mass effect (WMD/WME) attacks in metropolitan areas. Developed by Sandia computer scientist and Borders HLM principal investigator Donna Djordjevich, the software provides a virtual environment where users can play through various scenarios to see the effects of their decisions under the constraints of time and resources.

For the Borders HLM project, the Ground Truth software has been integrated into a bottom-projected touch-surface table. On this game surface, users can see "people" moving across the border terrain, observe CBP "personnel" respond to incidents and essentially control those movements and "apprehend" suspects. Users can also view a leader board of sorts that shows how many suspects have been apprehended, the dollar amount spent implementing the chosen architecture and other metrics that matter to CBP decision-makers.

Dante™, also part of the Borders HLM platform, is a force-on-force battle simulation tool built on the well-known Umbra simulation framework developed and introduced in 2001 by Sandia researchers.

The work also builds from another Sandia borders project from the mid-2000s (focused on the impact of new detection technology at ports of entry) and capitalizes on a range of existing Sandia capabilities, including the Weapons of Mass Destruction Decision Analysis Center (WMD-DAC), the National Infrastructure Simulation and Analysis Center (NISAC, a joint Sandia and Los Alamos National Laboratory program) and even the lab's expertise in robotics.

According to Reinhardt and Djordjevich, there were a number of technical challenges in integrating a mature modeling technology like Dante with a newer gaming technology like Ground Truth.

"We needed to create real-time control for the user, and our current capabilities weren't built to do that," Reinhardt said."There's also the fact that we're modeling 64 square miles of border, and we need to do so at a pretty high fidelity," added Djordjevich, who pointed out that Ground Truth's terrain was originally developed at a fixed, small scale. To help overcome some of the barriers, Sandia has looked to some important collaborators.

The University of Utah provided a technology, Visualization Streams for Ultimate Scalability (ViSUS), which allows researchers to progressively stream in terrain and imagery data and minimize data processing requirements, an important consideration given that HLM requires many gigabytes of data. For its part, Happynin Games, an iPhone/mobile game development company, developed the three-dimensional artwork and the characters found in the simulations. Sandia, acting as the systems integrator, then put all the pieces together, presented the Borders HLM product to CBP and demonstrated how it would allow them to go through all the steps of the "engagement analysis cycle."

"We learned that the border patrol agents and CBP decision-makers need a tool that offers a common view of the problems they face," said Reinhardt."With our high level model, they can play through various scenarios and see how people, technology and other elements all interact. Then, later, they can go back and do a baseline analysis and dig into the details of why certain architectures and solutions aren't working as well as they should." CBP personnel can then play the game again with a recommended solution, and the end users can critique and tweak it to their liking.

With additional funding and the right kind of collaborations, Djordjevich said, more robust features could be added to make Borders HLM even more valuable to CBP and other potential customers. The current version, for instance, only deals with individual border crossers, so it doesn't capture crowd behaviors. Other sensor types, such as radiation detectors or even airborne equipment, could also be added.

Reinhardt says the future of the Borders HLM tool will likely depend on the direction in which CBP chooses to go with its border operations. "Our high-level models tool will likely change the way CBP conducts its business, and it will probably have a real long-term impact on how large expenditures are justified or refuted on and around the nation's borders."

Sandia National Laboratories is a multiprogram laboratory operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration. With main facilities in Albuquerque, N.M., and Livermore, Calif., Sandia has major R&D responsibilities in national security, energy and environmental technologies, and economic competitiveness.


Source

Tuesday, April 5, 2011

Device Enables Computer to Identify Whether User Is Male or Female

The Spanish Patent and Trademark Office has awarded the Universidad Politécnica de Madrid and the Universidad Rey Juan Carlos Spanish patent ES 2 339 100 B2 for the device.

Thanks to the new algorithm, devices can be built to measure television or advertising video audiences by gathering demographic information about spectators (dynamic marketing). The new device is also useful for conducting market research at shopping centres, stores, banks or any other business using cameras to count people and extract demographic information. Another application is interactive kiosks with a virtual vendor, as the device automatically extracts information about the user, such as the person's gender, to improve interaction.

A step forward in gender recognition from facial images

This research, the results of which were published in IEEE Transactions on Pattern Analysis and Machine Intelligence, demonstrates that linear techniques are just as good as support vector machines (SVM) for the gender recognition problem. The technique developed is applicable in devices with low computational resources, such as telephones or smart cameras.

The study concludes that linear methods are useful for training with databases that contain a small number of images, as well as for outputting gender classifiers that are as fast as boosting-based classifiers. However, boosting- or SVM-based methods will require more training images to get good results. Finally, SVM-based classifiers are the slowest option. Additionally, the experimental evidence suggests that there is a dependency among different demographic variables like gender, age or ethnicity.

Device for demographic face classification

The invention is a device equipped with a camera that captures digital images and is connected to an image processing system. The image processing system trims each detected face to 25x25 pixels. An elliptical mask (designed to eliminate background interference) is then applied to the image before it is equalized and classified.

The device advances the state of the art by using a classifier based on the most efficient linear classification methods: principal component analysis (PCA), followed by Fisher's linear discriminant analysis (LDA) using a Bayesian classifier in the small dimensional space output by the LDA. For PCA+LDA to be competitive, the crucial step is to select the most discriminant PCA features before performing LDA.
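
A minimal sketch of that pipeline, using scikit-learn as a stand-in and random placeholder data. The 25x25 cropping, elliptical mask and equalization are assumed to have been applied already, and the patent's step of selecting the most discriminant PCA features before LDA is omitted here.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.pipeline import make_pipeline

    # Placeholder data: rows are flattened 25x25 preprocessed face crops, labels are 0/1 for gender
    rng = np.random.default_rng(0)
    X = rng.random((200, 25 * 25))
    y = rng.integers(0, 2, size=200)

    # PCA reduces dimensionality, LDA projects onto a discriminant direction, and LDA's
    # Gaussian class model plays the role of the Bayesian classifier in that small space.
    classifier = make_pipeline(PCA(n_components=50), LinearDiscriminantAnalysis())
    classifier.fit(X, y)
    print(classifier.predict(X[:5]))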

One of the major research areas in informatics is the development of machines that interact with users in the same way as human beings communicate with each other. This research is a step further in this direction.


Source

Sunday, April 3, 2011

Self-Cooling Observed in Graphene Electronics

Led by mechanical science and engineering professor William King and electrical and computer engineering professor Eric Pop, the team will publish its findings in the April 3 advance online edition of the journal Nature Nanotechnology.

The speed and size of computer chips are limited by how much heat they dissipate. All electronics dissipate heat as a result of the electrons in the current colliding with the device material, a phenomenon called resistive heating. This heating outweighs other smaller thermoelectric effects that can locally cool a device. Computers with silicon chips use fans or flowing water to cool the transistors, a process that consumes much of the energy required to power a device.

Future computer chips made out of graphene -- carbon sheets 1 atom thick -- could be faster than silicon chips and operate at lower power. However, a thorough understanding of heat generation and distribution in graphene devices has eluded researchers because of the tiny dimensions involved.

The Illinois team used an atomic force microscope tip as a temperature probe to make the first nanometer-scale temperature measurements of a working graphene transistor. The measurements revealed surprising temperature phenomena at the points where the graphene transistor touches the metal connections. They found that thermoelectric cooling effects can be stronger at graphene contacts than resistive heating, actually lowering the temperature of the transistor.
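
The competition being measured can be pictured with two textbook terms, using entirely made-up numbers rather than values from the study: resistive (Joule) heating grows with the square of the current, while Peltier-type heat flow at a contact grows linearly with it, so at small enough currents the cooling term can dominate.

    # Illustrative Joule-heating vs. Peltier-cooling comparison at a contact
    # (R_ohm and Pi_volt are placeholders, not measurements from the Illinois study)
    R_ohm = 2000.0      # contact resistance
    Pi_volt = 0.05      # effective Peltier coefficient
    for current_microamp in (1, 10, 50):
        current = current_microamp * 1e-6
        joule_w = current ** 2 * R_ohm       # heat generated in the contact
        peltier_w = Pi_volt * current        # heat pumped away at the contact
        regime = "net cooling" if peltier_w > joule_w else "net heating"
        print(f"{current_microamp:3d} uA: heating {joule_w:.2e} W, cooling {peltier_w:.2e} W -> {regime}")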

"In silicon and most materials, the electronic heating is much larger than the self-cooling," King said."However, we found that in these graphene transistors, there are regions where the thermoelectric cooling can be larger than the resistive heating, which allows these devices to cool themselves. This self-cooling has not previously been seen for graphene devices."

This self-cooling effect means that graphene-based electronics could require little or no cooling, yielding even greater energy efficiency and increasing graphene's attractiveness as a silicon replacement.

"Graphene electronics are still in their infancy; however, our measurements and simulations project that thermoelectric effects will become enhanced as graphene transistor technology and contacts improve" said Pop, who is also affiliated with the Beckman Institute for Advanced Science, and the Micro and Nanotechnology Laboratory at the U. of I.

Next, the researchers plan to use the AFM temperature probe to study heating and cooling in carbon nanotubes and other nanomaterials.

King also is affiliated with the department of materials science and engineering, the Frederick Seitz Materials Research Laboratory, the Beckman Institute, and the Micro and Nanotechnology Laboratory.

The Air Force Office of Scientific Research and the Office of Naval Research supported this work. Co-authors of the paper included graduate student Kyle Grosse, undergraduate Feifei Lian and postdoctoral researcher Myung-Ho Bae.


Source

Friday, April 1, 2011

World First: Calculations With 14 Quantum Bits

The term entanglement was introduced by the Austrian Nobel laureate Erwin Schrödinger in 1935, and it describes a quantum mechanical phenomenon that, while it can clearly be demonstrated experimentally, is not completely understood. Entangled particles cannot be described as single particles with defined states but rather as a whole system. By entangling single quantum bits, a quantum computer will solve problems considerably faster than conventional computers. "It becomes even more difficult to understand entanglement when there are more than two particles involved," says Thomas Monz, junior scientist in the research group led by Rainer Blatt at the Institute for Experimental Physics at the University of Innsbruck. "And now our experiment with many particles provides us with new insights into this phenomenon," adds Blatt.

World record: 14 quantum bits

Since 2005 the research team of Rainer Blatt has held the record for the number of entangled quantum bits realized experimentally. To date, nobody else has been able to achieve controlled entanglement of eight particles, which represents one quantum byte. Now the Innsbruck scientists have almost doubled this record. They confined 14 calcium atoms in an ion trap, which, similar to a quantum computer, they then manipulated with laser light. The internal states of each atom formed single qubits and a quantum register of 14 qubits was produced. This register represents the core of a future quantum computer. In addition, the physicists of the University of Innsbruck have found out that the decay rate of the atoms is not linear, as usually expected, but is proportional to the square of the number of qubits. When several particles are entangled, the sensitivity of the system increases significantly. "This process is known as superdecoherence and has rarely been observed in quantum processing," explains Thomas Monz. It is not only of importance for building quantum computers but also for the construction of precise atomic clocks or carrying out quantum simulations.
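
A quick numerical illustration of what that quadratic scaling means; the single-qubit rate below is an arbitrary placeholder, not a measured value. For 14 qubits, a decay rate growing as the square of N makes the entangled register 14 times more fragile than the linear expectation.

    # Superdecoherence illustration: decay rate growing as N^2 rather than N
    gamma_single = 1.0   # arbitrary placeholder single-qubit decoherence rate
    for n in (2, 8, 14):
        linear = n * gamma_single
        quadratic = n ** 2 * gamma_single
        print(f"N={n:2d}: linear {linear:5.1f}, quadratic {quadratic:6.1f}, ratio {quadratic / linear:.0f}x")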

Increasing the number of entangled particles

By now the Innsbruck experimental physicists have succeeded in confining up to 64 particles in an ion trap. "We are not able to entangle this high number of ions yet," says Thomas Monz. "However, our current findings provide us with a better understanding about the behavior of many entangled particles." And this knowledge may soon enable them to entangle even more atoms. Some weeks ago Rainer Blatt's research group reported on another important finding in this context in the scientific journal Nature: They showed that ions might be entangled by electromagnetic coupling. This enables the scientists to link many little quantum registers efficiently on a microchip. "All these findings are important steps to make quantum technologies suitable for practical information processing," Rainer Blatt is convinced.

The results of this work are published in the scientific journal Physical Review Letters. The Innsbruck researchers are supported by the Austrian Science Fund (FWF), the European Commission and the Federation of Austrian Industries Tyrol.


Source