Monday, February 28, 2011

Potential 'Game Changer' for Pathologists

Balis isn't playing war games. The director of the Division of Pathology Informatics at the University of Michigan Medical School is demonstrating the extreme flexibility of a software tool aimed at making the detection of abnormalities in cell and tissue samples faster, more accurate and more consistent.

In a medical setting, instead of helicopters, the technique, known as Spatially-Invariant Vector Quantization (SIVQ), can pinpoint cancer cells and other critical features in digital images of tissue slides.

But SIVQ isn't limited to any particular area of medicine. It can readily separate calcifications from malignancies in breast tissue samples, search for and count particular cell types in a bone marrow slide, or quickly identify the cherry-red nucleoli of cells associated with Hodgkin's disease, according to findings just published in the Journal of Pathology Informatics.

"The fact that the algorithm operates effortlessly across domains and length scales, while requiring minimal user training, sets it apart from conventional approaches to image analysis," Balis says.

The technology -- developed in conjunction with researchers at Massachusetts General Hospital and Harvard Medical School -- differs from conventional pattern recognition software by basing its core search on a series of concentric, pattern-matching rings, rather than the more typical rectangular or square blocks. This approach takes advantage of the rings' continuous symmetry, allowing for the recognition of features no matter how they're rotated or whether they're reversed, like in a mirror.

"That's good because in pathology, images of cells and tissue do not have a particular orientation," Balis says. "They can face any direction." One of the images included with the paper demonstrates this principle; SIVQ consistently identifies the letter A from a field of text, no matter how the letters are rotated.

How it works

In SIVQ, a search starts with the user selecting a small area of pixels, known as a vector, which she wants to try to match elsewhere in the image. The vector can also come from a stored library of images.

The algorithm then compares this circular vector to every part of the image. And at every location, the ring rotates through millions of possibilities in an attempt to find a match in every possible degree of rotation. Smaller rings within the main ring can provide an even more refined search.

The program then creates a heat map, shading the image based on the quality of match at every point.

This technique wouldn't work with a square or rectangular-shaped search structure because those shapes don't remain symmetrical as they rotate, Balis explains.
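The rotational invariance Balis describes can be sketched in a few lines of code. The version below is an illustrative toy, not the published SIVQ implementation: the function names, the Euclidean scoring and the single-ring search are all assumptions. A template ring sampled from the image is compared against a candidate ring at every pixel, taking the best score over every cyclic rotation and over the reversed (mirror) ring.

```python
import numpy as np

def ring_vector(image, cy, cx, radius, n_samples=32):
    """Sample pixel values along a circle centered at (cy, cx)."""
    angles = np.linspace(0, 2 * np.pi, n_samples, endpoint=False)
    ys = np.clip(np.round(cy + radius * np.sin(angles)).astype(int), 0, image.shape[0] - 1)
    xs = np.clip(np.round(cx + radius * np.cos(angles)).astype(int), 0, image.shape[1] - 1)
    return image[ys, xs].astype(float)

def match_score(template_ring, candidate_ring):
    """Best (lowest) distance over all rotations and all mirror reflections.

    Rotating the ring in the image is just a cyclic shift of the sampled
    vector, which is what makes the match orientation-independent; the
    reversed vector under all shifts covers every mirror axis."""
    best = np.inf
    for ring in (candidate_ring, candidate_ring[::-1]):    # direct and mirrored
        for shift in range(len(ring)):                     # every rotation
            d = np.linalg.norm(template_ring - np.roll(ring, shift))
            best = min(best, d)
    return best

def heat_map(image, template_ring, radius):
    """Score every interior pixel; low values mean a good match."""
    h, w = image.shape
    out = np.full((h, w), np.inf)
    for cy in range(radius, h - radius):
        for cx in range(radius, w - radius):
            out[cy, cx] = match_score(template_ring, ring_vector(image, cy, cx, radius))
    return out
```

Because the score at the template's own location is exactly zero for any rotation or reflection of the source image, thresholding the resulting map highlights candidate matches the way the article's shaded heat map does.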

Why hasn't everyone been using circles all along?

"It's one of those things that's only obvious in hindsight," Balis says.

In testing the algorithm, researchers even used it to find Waldo in an illustration from a Where's Waldo? children's book.

"You just have to generate a vector for his face," explains Jason Hipp, M.D., Ph.D., co-lead author of the paper -- just as one would generate a vector to recognize calcifications in breast tissue.

A "game changer"

Hipp believes the technology has the potential to be a "game changer" for the field by opening myriad new possibilities for deeper image analysis.

"It's going to allow us to think about things differently," says Hipp, a pathology informatics research fellow and clinical lecturer in the Department of Pathology. "We're starting to bridge the gap between the qualitative analysis carried out by trained expert pathologists and the quantitative approaches made possible by advances in imaging technology."

For example, the most common way to look at tissue samples is still a staining technique that dates back to the 1800s. Reading these complex slides and rendering a diagnosis is part of the art of pathology.

SIVQ, however, can assist pathologists by pre-screening an image and identifying potentially problematic areas, including subtle features that may not be readily apparent to the eye.

SIVQ's efficiency in pre-identifying potential problems becomes apparent when one considers that a pathologist may review more than 100 slides in a single day.

"Unlike even the most diligent humans, computers do not suffer from the effects of boredom or fatigue," Balis says.

Working together

Vectors can also be pooled to create shared libraries -- a catalog of reference images upon which the computer can search -- Balis explains, which could help pathologists to quickly identify rare anomalies.

"Bringing such tools into the clinical workflow could provide a higher level of expertise that is distributed more widely, and lower the rate at which findings get overlooked," Balis says.

Following the publication of this first paper presenting the SIVQ algorithm, the team has several research projects nearing completion that demonstrate the technology's potential usefulness in basic science and clinical applications. These efforts involve collaborations with researchers at the National Institutes of Health, Mayo Clinic, Rutgers University, Harvard Medical School and Massachusetts General Hospital.

SIVQ may also help with the analysis of "liquid biopsies," an experimental technique for scanning blood samples for tiny numbers of cancer cells hiding amid billions of healthy ones. Balis was involved with the development of that technology at Massachusetts General Hospital before he came to U-M, and members of that research team are also involved in developing SIVQ and its applications.

Still, pathologists shouldn't be worried that SIVQ will put them out of a job.

"No one is talking about replacing pathologists any time soon," Balis says. "But working in tandem with this technology, the hope is that they will be able to achieve a higher overall level of performance."


Source

Sunday, February 27, 2011

Atomic Antennas Transmit Quantum Information Across a Microchip

The researchers have published their work in the scientific journal Nature.

Six years ago scientists at the University of Innsbruck realized the first quantum byte -- a quantum computer with eight entangled quantum particles -- a record that still stands. "Nevertheless, to make practical use of a quantum computer that performs calculations, we need a lot more quantum bits," says Prof. Rainer Blatt, who, with his research team at the Institute for Experimental Physics, created the first quantum byte in an electromagnetic ion trap. "In these traps we cannot string together large numbers of ions and control them simultaneously."

To solve this problem, the scientists have started to design a quantum computer based on a system of many small registers, which have to be linked. To achieve this, Innsbruck quantum physicists have now developed a revolutionary approach based on a concept formulated by theoretical physicists Ignacio Cirac and Peter Zoller. In their experiment, the physicists electromagnetically coupled two groups of ions over a distance of about 50 micrometers. Here, the motion of the particles serves as an antenna. "The particles oscillate like electrons in the poles of a TV antenna and thereby generate an electromagnetic field," explains Blatt. "If one antenna is tuned to the other one, the receiving end picks up the signal of the sender, which results in coupling." The energy exchange taking place in this process could be the basis for fundamental computing operations of a quantum computer.

Antennas amplify transmission

"We implemented this new concept in a very simple way," explains Rainer Blatt. In a miniaturized ion trap a double-well potential was created, trapping the calcium ions. The two wells were separated by 54 micrometers. "By applying a voltage to the electrodes of the ion trap, we were able to match the oscillation frequencies of the ions," says Blatt.

"This resulted in a coupling process and an energy exchange, which can be used to transmit quantum information." A direct coupling of two mechanical oscillations at the quantum level has never been demonstrated before. In addition, the scientists show that the coupling is amplified by using more ions in each well. "These additional ions function as antennas and increase the distance and speed of the transmission," says Rainer Blatt, who is excited about the new concept. This work constitutes a promising approach for building a fully functioning quantum computer.

"The new technology offers the possibility to distribute entanglement. At the same time, we are able to target each memory cell individually," explains Rainer Blatt. The new quantum computer could be based on a chip with many micro traps, where ions communicate with each other through electromagnetic coupling. This new approach represents an important step towards practical quantum technologies for information processing.

The quantum researchers are supported by the Austrian Science Fund FWF, the European Union, the European Research Council and the Federation of Austrian Industries Tyrol.


Source

Saturday, February 26, 2011

Etched Quantum Dots Shape Up as Single Photon Emitters

The conventional way to build quantum dots -- at NIST and elsewhere -- is to grow them like crystals in a solution, but this somewhat haphazard process results in irregular shapes. The new, more precise process was developed by NIST postdoctoral researcher Varun Verma when he was a student at the University of Illinois. Verma uses electron beam lithography and etching to carve quantum dots inside a semiconductor sandwich (called a quantum well) that confines particles in two dimensions. Lithography controls the dot's size and position, while sandwich thickness and composition -- as well as dot size -- can be used to tune the color of the dot's light emissions.

Some quantum dots are capable of emitting individual, isolated photons on demand, a crucial trait for quantum information systems that encode information by manipulating single photons. In new work reported in Optics Express, NIST tests demonstrated that the lithographed and etched quantum dots do indeed work as sources of single photons. The tests were performed on dots made of indium gallium arsenide. Dots of various diameters were patterned in specific positions in square arrays. Using a laser to excite individual dots and a photon detector to analyze emissions, NIST researchers found that dots 35 nanometers (nm) wide, for instance, emitted nearly all light at a wavelength of 888.6 nm. The timing pattern indicated that the light was emitted as a train of single photons.

NIST researchers now plan to construct reflective cavities around individual etched dots to guide their light emissions. If each dot can emit most photons perpendicular to the chip surface, more light can be collected to make a more efficient single photon source. Vertical emission has been demonstrated with crystal-grown quantum dots, but these dots can't be positioned or distributed reliably in cavities. Etched dots offer not only precise positioning but also the possibility of making identical dots, which could be used to generate special states of light such as two or more photons that are entangled, a quantum phenomenon that links their properties even at a distance.

The quantum dots tested in the experiments were made at NIST. A final step was carried out at the University of Illinois, where a crystal layer was grown over the dots to form clean interfaces.


Source

Friday, February 25, 2011

A Semantic Sommelier: Wine Application Highlights the Power of Web 3.0

Web scientist and Rensselaer Polytechnic Institute Tetherless World Research Constellation Professor Deborah McGuinness has been developing a family of applications for the most tech-savvy wine connoisseurs since her days as a graduate student in the 1980s -- before what we now know as the World Wide Web had even been envisioned.

Today, McGuinness is among the world's foremost experts in Web ontology languages, which encode meaning in a form that computers can understand. The most recent version of her wine application serves as an exceptional example of what the future of the World Wide Web, often called Web 3.0, might in fact look like. It is also an exceptional tool for teaching future Web scientists about ontologies.

"The wine agent came about because I had to demonstrate the new technology that I was developing," McGuinness said. "I had sophisticated applications that used cutting-edge artificial intelligence technology in domains, such as telecommunications equipment, that were difficult for anyone other than well-trained engineers to understand." McGuinness took the technology into the domain of wines and foods to create a program that she uses as a semantic tutorial, an "Ontologies 101" as she calls it. Over the years, students have done many things with the wine agent, including, most recently, experiments with social media and mobile phone applications.

Today, the semantic sommelier is set to provide even the most novice of foodies some exciting new tools to expand their wine knowledge and food-pairing abilities on everything from their home PC to their smart phone. Evan Patton, a graduate student in computer science at Rensselaer, is the most recent student to tinker with the wine agent and is working with McGuinness to bring it into the mobile space on both the iPhone and Droid platforms.

The agent uses the Web Ontology Language (OWL), the formal language for the Semantic Web. Like the English language, which uses an agreed-upon alphabet to form words and sentences that all English-speaking people can recognize, OWL uses a formalized set of symbols to create a code or language that a wide variety of applications can "read." This allows your computer to operate more efficiently and more intelligently with your cell phone or your Facebook page, or any other webpage or web-enabled device. These semantics also allow for an entirely new generation of smart search technologies.

Thanks to its semantic technology, the sommelier is supplied with basic background knowledge about wine and food. For wine, that includes its body, color (red versus white or blush), sweetness, and flavor. For food, this includes the course (e.g. appetizer versus entrée), ingredient type (e.g. fish versus meat), and its heat (mild versus spicy). The semantic technologies beneath the application then encode that knowledge and apply reasoning to search and share that information. This semantic functionality can now be exploited for a variety of culinary purposes, which McGuinness, herself a lover of fine wines, is pursuing together with Patton.
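The real agent encodes this background knowledge in OWL axioms, but the flavor of the reasoning can be sketched in plain Python. Everything below is an illustrative toy, not McGuinness and Patton's actual ontology: the property values and the two pairing rules are assumptions chosen only to show how encoded properties plus rules yield suggestions.

```python
from dataclasses import dataclass

@dataclass
class Wine:
    name: str
    color: str      # "red", "white", or "blush"
    body: str       # "light", "medium", or "full"
    sweetness: str  # "dry", "off-dry", or "sweet"

@dataclass
class Dish:
    name: str
    ingredient: str  # e.g. "fish" or "meat"
    heat: str        # "mild" or "spicy"

def pairs_well(wine, dish):
    """Toy pairing rules, stand-ins for the agent's OWL axioms."""
    if dish.ingredient == "fish" and wine.color == "red":
        return False                  # classic white-with-fish heuristic
    if dish.heat == "spicy" and wine.sweetness == "dry":
        return False                  # a touch of sweetness tempers heat
    return True

def suggest(wines, dish):
    """Return the names of the wines in a cellar that suit a dish."""
    return [w.name for w in wines if pairs_well(w, dish)]
```

Given a restaurant's wine list loaded as `Wine` records, `suggest(cellar, Dish("clams casino", "fish", "mild"))` filters the list down to suitable bottles, the same shape of query the article describes for a diner's smart phone.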

Having a spicy fish dish for dinner? Search within the system and it will arrive at a good wine pairing for the meal. Beyond basic pairings, the application has strong possibilities for use in individual restaurants, according to McGuinness, who envisions teaming up with restaurant owners to input their specific menus and wine lists. Thus, a diner could check menus and wine holdings before going out for dinner or they could enter a restaurant, pull out their smart phone, and instantly know what is in the wine cellar and goes best with that chef's clams casino. Beyond pairings, diners could rate different wines, providing fellow diners with personal reviews and the restaurateur with valuable information on what to stock up on next week. Is it a dry restaurant? The application could also be loaded up with the inventory within the liquor store down the street.

Beyond the table, the application can also be used to make personal wine suggestions and virtual wine cellars that you could share with your friends via Facebook or other social media platforms. It could also be used to manage a personal wine cellar, providing information on what is a peak flavor at the moment or what in your cellar would go best with your famous steak au poivre.

"Today we have 10 gadgets with us at any given time," McGuinness said. "We live and breathe social media. With semantic technologies, we can offload more of the searching and reasoning required to locate and share information to the computer while still maintaining personal control over our information and how we use it. We also increase the ability of our technologies to interact with each other and decrease the need for as many gadgets or as many interactions with them since the applications do more work for us."


Source

Thursday, February 24, 2011

Quantum Simulator Becomes Accessible to the World

The researchers have published their work in the scientific journal Nature.

Many phenomena in our world are based on the nature of quantum physics: the structure of atoms and molecules, chemical reactions, material properties, magnetism and possibly also certain biological processes. Since the complexity of phenomena increases exponentially with the number of quantum particles involved, a detailed study of these complex systems quickly reaches its limits, and conventional computers fail when calculating these problems. To overcome these difficulties, physicists have been developing quantum simulators on various platforms, such as neutral atoms, ions or solid-state systems, which, similar to quantum computers, utilize the particular nature of quantum physics to control this complexity.

In another breakthrough in this field, a team of young scientists in the research groups of Rainer Blatt and Peter Zoller at the Institute for Experimental Physics and Theoretical Physics of the University of Innsbruck and the Institute of Quantum Optics and Quantum Information (IQOQI) of the Austrian Academy of Sciences has become the first to engineer a comprehensive toolbox for an open-system quantum computer, which will enable researchers to construct more sophisticated quantum simulators for investigating complex problems in quantum physics.

Using controlled dissipation

In their experiments, the physicists use a natural phenomenon that they usually try to minimize as much as possible: environmental disturbances. Such disturbances usually cause information loss in quantum systems and destroy fragile quantum effects such as entanglement or interference. In physics this deleterious process is called dissipation. The Innsbruck researchers, led by experimental physicists Julio Barreiro and Philipp Schindler as well as the theorist Markus Müller, are now the first to put dissipation to beneficial use in a trapped-ion quantum simulator, engineering the system-environment coupling experimentally.

"We not only control all internal states of the quantum system consisting of up to four ions but also the coupling to the environment," explains Julio Barreiro. "In our experiment we use an additional ion that interacts with the quantum system and, at the same time, establishes a controlled contact to the environment," explains Philipp Schindler. The surprising result is that by using dissipation, the researchers are able to generate and intensify quantum effects, such as entanglement, in the system. "We achieved this by controlling the disruptive environment," says an excited Markus Müller.

Putting the quantum world into order

In one of their experiments the researchers demonstrate the control of dissipative dynamics by entangling four ions using the environment ion. "Contrary to other common procedures this also works irrespective of the initial state of each particle," explains Müller. "Through a collective cooling process, the particles are driven to a common state." This procedure can be used to prepare many-body states, which otherwise could only be created and observed in an extremely well isolated quantum system.

The beneficial use of an environment allows for the realization of new types of quantum dynamics and the investigation of systems that have scarcely been accessible to experiments until now. In recent years, researchers have increasingly considered how dissipation, rather than being suppressed, could be actively used as a resource for building quantum computers and quantum memories. Innsbruck's theoretical and experimental physicists cooperated closely, and they are now the first to successfully implement these dissipative effects in a quantum simulator.

The Innsbruck researchers are supported by the Austrian Science Fund (FWF), the European Commission and the Federation of Austrian Industries Tyrol.


Source

Wednesday, February 23, 2011

Quantum Hot Potato: Researchers Entice Two Atoms to Swap Smallest Energy Units

Described in a paper published Feb. 23 by Nature, the NIST experiments enticed two beryllium ions (electrically charged atoms) to take turns vibrating in an electromagnetic trap, exchanging units of energy, or quanta, that are a hallmark of quantum mechanics. As little as one quantum was traded back and forth in these exchanges, signifying that the ions are "coupled" or linked together. These ions also behave like objects in the larger, everyday world in that they are "harmonic oscillators" similar to pendulums and tuning forks, making repetitive, back-and-forth motions.

"First one ion is jiggling a little and the other is not moving at all; then the jiggling motion switches to the other ion. The smallest amount of energy you could possibly see is moving between the ions," explains first author Kenton Brown, a NIST post-doctoral researcher. "We can also tune the coupling, which affects how fast they exchange energy and to what degree. We can turn the interaction on and off."

The experiments were made possible by a novel, one-layer ion trap cooled to minus 269 C (minus 452 F) with a liquid helium bath. The ions, 40 micrometers apart, float above the surface of the trap. In contrast to a conventional two-layer trap, the surface trap features smaller electrodes and can position ions closer together, enabling stronger coupling. Chilling to cryogenic temperatures suppresses unwanted heat that can distort ion behavior.

The energy swapping demonstrations begin by cooling both ions with a laser to slow their motion. Then one ion is cooled further to a motionless state with two opposing ultraviolet laser beams. Next the coupling interaction is turned on by tuning the voltages of the trap electrodes. In separate experiments reported in Nature, NIST researchers measured the ions swapping energy at levels of several quanta every 155 microseconds and at the single quantum level somewhat less frequently, every 218 microseconds. Theoretically, the ions could swap energy indefinitely until the process is disrupted by heating. NIST scientists observed two round-trip exchanges at the single quantum level.

To detect and measure the ions' activity, NIST scientists apply an oscillating pulse to the trap at different frequencies while illuminating both ions with an ultraviolet laser and analyzing the scattered light. Each ion has its own characteristic vibration frequency; when excited, the motion reduces the amount of laser light absorbed. Dimming of the scattered light tells scientists an ion is vibrating at a particular pulse frequency.

To turn on the coupling interaction, scientists use electrode voltages to tune the frequencies of the two ions, nudging them closer together. The coupling is strongest when the frequencies are closest. The motions become linked due to the electrostatic interactions of the positively charged ions, which tend to repel each other. Coupling associates each ion with both characteristic frequencies.
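The exchange NIST observed is quantum, but its back-and-forth timing follows the same beat-note arithmetic as two weakly coupled classical pendulums. A small sketch of that classical analogue (all parameter values are illustrative, in arbitrary units rather than the experiment's actual trap frequencies):

```python
import numpy as np

def ion_energies(t, omega=1.0, kappa=0.01):
    """Fraction of the vibrational energy in each of two identical coupled
    oscillators at time t, with all of it placed in ion 1 at t = 0.

    Weak coupling kappa splits the motion into two normal modes with
    slightly different frequencies; the beat between them carries the
    energy back and forth between the ions."""
    w_slow = omega                             # in-phase (common) mode
    w_fast = np.sqrt(omega**2 + 2 * kappa)     # out-of-phase (stretch) mode
    e1 = np.cos((w_fast - w_slow) * t / 2) ** 2
    return e1, 1.0 - e1

def swap_time(omega=1.0, kappa=0.01):
    """Time for the energy to move completely from ion 1 to ion 2:
    half a beat period. Larger kappa (stronger coupling, e.g. closer
    ions) means a faster swap, matching the tunable rate quoted above."""
    return np.pi / (np.sqrt(omega**2 + 2 * kappa) - omega)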

The new experiments are similar to the same NIST research group's 2009 demonstration of entanglement -- a quantum phenomenon linking properties of separated particles -- in a mechanical system of two separated pairs of vibrating ions. However, the new experiments coupled the oscillators' motions more directly than before and, therefore, may simplify information processing. In this case the researchers observed quantum behavior but did not verify entanglement.

The new technique could be useful in a future quantum computer, which would use quantum systems such as ions to solve problems that are intractable today. For example, quantum computers could break today's most widely used data encryption codes. Direct coupling of ions in separate locations could simplify logic operations and help correct processing errors. The technique is also a feature of proposals for quantum simulations, which may help explain the mechanisms of complex quantum systems such as high-temperature superconductors.

The demonstration also suggests that similar interactions could be used to connect different types of quantum systems, such as a trapped ion and a particle of light (photon), to transfer information in a future quantum network. For example, a trapped ion could act as a "quantum transformer" between a superconducting quantum bit (qubit) and a qubit made of photons.


Source

Tuesday, February 22, 2011

'Fingerprints' Match Molecular Simulations With Reality

ORNL's Jeremy Smith collaborated on devising a method -- dynamical fingerprints -- that reconciles the different signals between experiments and computer simulations to strengthen analyses of molecules in motion. The research will be published in the Proceedings of the National Academy of Sciences.

"Experiments tend to produce relatively simple and smooth-looking signals, as they only 'see' a molecule's motions at low resolution," said Smith, who directs ORNL's Center for Molecular Biophysics and holds a Governor's Chair at the University of Tennessee. "In contrast, data from a supercomputer simulation are complex and difficult to analyze, as the atoms move around in the simulation in a multitude of jumps, wiggles and jiggles. How to reconcile these different views of the same phenomenon has been a long-standing problem."

The new method solves the problem by calculating peaks within the simulated and experimental data, creating distinct "dynamical fingerprints." The technique, conceived by Smith's former graduate student Frank Noe, now at the Free University of Berlin, can then link the two datasets.
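The idea of reducing both datasets to comparable peaks can be illustrated with a deliberately crude stand-in: extract the dominant (frequency, power) peaks from a signal's spectrum, so that a simulated trajectory and a measured relaxation signal can be compared through the same handful of numbers. This sketch illustrates the concept only; it is not Noe and Smith's published method.

```python
import numpy as np

def fingerprint(signal, dt, n_peaks=2):
    """Dominant (frequency, power) peaks of a signal's power spectrum.

    Reducing a complicated trajectory to a few peaks gives a compact
    'fingerprint' that can be matched against another dataset's peaks."""
    spectrum = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=dt)
    top = np.argsort(spectrum)[-n_peaks:]        # indices of the largest peaks
    return sorted((freqs[i], spectrum[i]) for i in top)
```

Two signals then "match" when their peak lists agree within tolerance, even though the raw traces look nothing alike, which is the rough shape of the reconciliation problem Smith describes.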

Supercomputer simulations and modeling capabilities can add a layer of complexity missing from many types of molecular experiments.

"When we started the research, we had hoped to find a way to use computer simulation to tell us which molecular motions the experiment actually sees," Smith said. "When we were finished we got much more -- a method that could also tell us which other experiments should be done to see all the other motions present in the simulation. This method should allow major facilities like ORNL's Spallation Neutron Source to be used more efficiently."

Combining the power of simulations and experiments will help researchers tackle scientific challenges in areas like biofuels, drug development, materials design and fundamental biological processes, which require a thorough understanding of how molecules move and interact.

"Many important things in science depend on atoms and molecules moving," Smith said. "We want to create movies of molecules in motion and check experimentally if these motions are actually happening."

"The aim is to seamlessly integrate supercomputing with the Spallation Neutron Source so as to make full use of the major facilities we have here at ORNL for bioenergy and materials science development," Smith said.

The collaborative work included researchers from L'Aquila, Italy, Wuerzburg and Bielefeld, Germany, and the University of California at Berkeley. The research was funded in part by a Scientific Discovery through Advanced Computing grant from the DOE Office of Science.


Source

Monday, February 21, 2011

World's Smallest Magnetic Field Sensor: Researchers Explore Using Organic Molecules as Electronic Components

For the first time, a team of scientists from KIT and the Institut de Physique et Chimie des Matériaux de Strasbourg (IPCMS) have now succeeded in combining the concepts of spin electronics and molecular electronics in a single component consisting of a single molecule. Components based on this principle have a special potential, as they allow for the production of very small and highly efficient magnetic field sensors for read heads in hard disks or for non-volatile memories in order to further increase reading speed and data density.

The use of organic molecules as electronic components is being investigated extensively at the moment. Miniaturization faces the problem that the information is encoded with the help of the electron's charge (current on or off), which requires a relatively high amount of energy. In spin electronics, the information is encoded in the intrinsic rotation of the electron, the spin. The advantage is that the spin is maintained even when the current supply is switched off, which means that the component can store information without any energy consumption.

The German-French research team has now combined these concepts. The organic molecule H2-phthalocyanine, which is also used as a blue dye in ballpoint pens, shows a strong change in its resistance when it is trapped between spin-polarized, i.e. magnetic, electrodes. This effect was first observed in purely metallic contacts by Albert Fert and Peter Grünberg. It is referred to as giant magnetoresistance and was recognized with the Nobel Prize in Physics in 2007.

The giant magnetoresistance effect on single molecules was demonstrated at KIT within the framework of a combined experimental and theoretical project of CFN and a German-French graduate school in cooperation with the IPCMS, Strasbourg. The results of the scientists are now presented in the journal Nature Nanotechnology.

Karlsruhe Institute of Technology (KIT) is a public corporation and state institution of Baden-Wuerttemberg, Germany. It fulfills the mission of a university and the mission of a national research center of the Helmholtz Association. KIT focuses on a knowledge triangle that links the tasks of research, teaching, and innovation.


Source

Sunday, February 20, 2011

Augmented Reality System for Learning Chess

An ordinary webcam, a chess board, a set of 32 pieces and custom software are the key elements in the final degree project of the telecommunications engineering students Ivan Paquico and Cristina Palmero, from the UPC-Barcelona Tech's Terrassa School of Engineering (EET). The project, for which the students were awarded a distinction, was directed by the professor Jordi Voltas and completed during an international mobility placement in Finland.

The system created by Ivan Paquico, the 2001 Spanish Internet chess champion, and Cristina Palmero, a keen player and federation member, is a didactic tool that will help chess clubs and associations to teach the game and make it more appealing, particularly to younger players.

The system combines augmented reality, computer vision and artificial intelligence, and the only equipment required is a high-definition home webcam, the Augmented Reality Chess software, a standard board and pieces, and a set of cardboard markers the same size as the squares on the board, each marked with the first letter of the corresponding piece: R for the king (rei in Catalan), D for the queen (dama), T for the rooks (torres), A for the bishops (alfils), C for the knights (cavalls) and P for the pawns (peons).

Learning chess with virtual pieces

To use the system, learners play with an ordinary chess board but move the cardboard markers instead of standard pieces. The table is lit from above and the webcam focuses on the board; every time the player moves one of the markers, the system recognises the piece and reproduces the move in 3D on the computer screen, creating a virtual representation of the game.

For example, if the learner moves the marker P (pawn), the corresponding piece will be displayed on the screen in 3D, with all of the possible moves indicated. This is a simple and attractive way of showing novices the permitted movements of each piece, making the system particularly suitable for children learning the basics of this board game.
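Highlighting the permitted moves is, at its core, move generation. A minimal sketch for the pawn marker described above (heavily simplified: forward moves only, with no captures, promotions or en passant, and the function names are assumptions rather than the project's actual code):

```python
def pawn_moves(square, occupied, color="white"):
    """Destination squares for a pawn, forward moves only.

    `square` is algebraic notation like "e2"; `occupied` is the set of
    squares currently holding any piece (i.e. any detected marker)."""
    file, rank = square[0], int(square[1])
    step = 1 if color == "white" else -1
    start_rank = 2 if color == "white" else 7
    moves = []
    one = f"{file}{rank + step}"
    if 1 <= rank + step <= 8 and one not in occupied:
        moves.append(one)                      # single step forward
        two = f"{file}{rank + 2 * step}"
        if rank == start_rank and two not in occupied:
            moves.append(two)                  # double step from the start rank
    return moves
```

The on-screen highlighting the article describes would then draw each returned square on the 3D board, giving a novice an immediate picture of where the pawn may go.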

Making chess accessible to all

The learning tool also incorporates a move-tracking program called Chess Recognition: from the images captured by the webcam, the system instantly recognises and analyses every movement of every piece and can act as a referee, identify illegal moves and provide the players with an audible description of the game status. According to Ivan Paquico and Cristina Palmero, this feature could be very useful for players with visual impairment -- who have their own federation and, until now, have had to play with specially adapted boards and pieces -- and for clubs and federations, tournament organisers and enthusiasts of all levels.

The Chess Recognition program saves whole games so that they can be shared, broadcast online and viewed on demand, and can generate a complete user history for analysing the evolution of a player's game. The program also creates an automatic copy of the scoresheet (the official record of each game) for players to view or print.
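Generating the automatic scoresheet copy amounts to numbering tracked moves in white/black pairs, as on the official record. A minimal sketch (function name is ours; the real program would use the moves recognized from the webcam):

```python
# Hypothetical sketch: turn a tracked sequence of moves into a printable
# scoresheet, numbered in white/black pairs as on the official game record.

def scoresheet(moves):
    lines = []
    for i in range(0, len(moves), 2):
        white = moves[i]
        black = moves[i + 1] if i + 1 < len(moves) else ""
        lines.append(f"{i // 2 + 1}. {white} {black}".rstrip())
    return "\n".join(lines)

print(scoresheet(["e4", "e5", "Nf3", "Nc6", "Bb5"]))
# 1. e4 e5
# 2. Nf3 Nc6
# 3. Bb5
```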

The technology for playing chess and recording games online has been available for a number of years, but until now players needed sophisticated equipment including pieces with integrated chips and a special electronic board with a USB connection. The standard retail cost of this equipment is between 400 and 500 euros.


Source

Saturday, February 19, 2011

Scientists Steer Car With the Power of Thought

They then succeeded in developing an interface to connect the sensors to their otherwise purely computer-controlled vehicle, so that it can now be "controlled" via thoughts. Driving by thought control was tested on the site of the former Tempelhof Airport.

The scientists from Freie Universität first used the sensors for measuring brain waves in such a way that a person can move a virtual cube in different directions with the power of his or her thoughts. The test subject thinks of four situations that are associated with driving, for example, "turn left" or "accelerate." In this way the person trained the computer to interpret bioelectrical wave patterns emitted from his or her brain and to link them to a command that could later be used to control the car. The computer scientists connected the measuring device with the steering, accelerator, and brakes of a computer-controlled vehicle, which made it possible for the subject to influence the movement of the car just using his or her thoughts.
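The training step described above amounts to learning a mapping from brain-wave feature vectors to a small set of commands. The article does not say which classifier the project used; one simple scheme that fits the description is a nearest-centroid classifier, sketched here with made-up two-dimensional features:

```python
# Hedged sketch (our scheme, not necessarily the project's): train per-command
# centroids from labeled EEG feature vectors, then classify new samples by
# nearest centroid.
import math

def train(samples):
    """samples: {command: [feature_vector, ...]} -> {command: centroid}"""
    return {cmd: [sum(col) / len(col) for col in zip(*vecs)]
            for cmd, vecs in samples.items()}

def classify(centroids, vec):
    """Return the command whose centroid is closest to the new sample."""
    return min(centroids, key=lambda c: math.dist(centroids[c], vec))

centroids = train({"turn left":  [[0.9, 0.1], [1.1, 0.0]],
                   "accelerate": [[0.1, 0.9], [0.0, 1.1]]})
print(classify(centroids, [1.0, 0.2]))  # turn left
```

Real EEG decoding uses far higher-dimensional features and more robust classifiers, but the train-then-map structure is the same.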

"In our test runs, a driver equipped with EEG sensors was able to control the car with no problem -- there was only a slight delay between the envisaged commands and the response of the car," said Prof. Raúl Rojas, who heads the AutoNOMOS project at Freie Universität Berlin. In a second test version, the car drove largely automatically, but via the EEG sensors the driver was able to determine the direction at intersections.

The AutoNOMOS Project at Freie Universität Berlin is studying the technology for the autonomous vehicles of the future. With the EEG experiments they investigate hybrid control approaches, i.e., those in which people work with machines.

The computer scientists have made a short film about their research, which is available at: http://tinyurl.com/BrainDriver


Source

Friday, February 18, 2011

Controlling a Computer With Thoughts?

The projects build upon ongoing research conducted in epilepsy patients who had the interfaces temporarily placed on their brains and were able to move cursors and play computer games, as well as in monkeys that through interfaces guided a robotic arm to feed themselves marshmallows and turn a doorknob.

"We are now ready to begin testing BCI technology in the patients who might benefit from it the most, namely those who have lost the ability to move their upper limbs due to a spinal cord injury," said Michael L. Boninger, M.D., director, UPMC Rehabilitation Institute, chair, Department of Physical Medicine and Rehabilitation, Pitt School of Medicine, and a senior scientist on both projects. "It's particularly exciting for us to be able to test two types of interfaces within the brain."

"By expanding our research from the laboratory to clinical settings, we hope to gain a better understanding of how to train and motivate patients who will benefit from BCI technology," said Elizabeth Tyler-Kabara, M.D., Ph.D., a UPMC neurosurgeon and assistant professor of neurological surgery and bioengineering, Pitt Schools of Medicine and Engineering, and the lead surgeon on both projects.

In one project, funded by an $800,000 grant from the National Institutes of Health, a BCI based on electrocorticography (ECoG) will be placed on the motor cortex surface of a spinal cord injury patient's brain for up to 29 days. The neural activity picked up by the BCI will be translated through a computer processor, allowing the patient to learn to control computer cursors, virtual hands, computer games and assistive devices such as a prosthetic hand or a wheelchair.

The second project, funded by the Defense Advanced Research Projects Agency (DARPA) for up to $6 million over three years, is part of a program led by the Johns Hopkins University Applied Physics Laboratory (APL), Laurel, Md. It will further develop technology tested in monkeys by Andrew Schwartz, Ph.D., professor of neurobiology, Pitt School of Medicine, and also a senior investigator on both projects.

It uses an interface that is a tiny, 10-by-10 array of electrodes that is implanted on the surface of the brain to read activity from individual neurons. Those signals will be processed and relayed to maneuver a sophisticated prosthetic arm.
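Turning the activity of individual neurons into an arm command is a decoding problem. One classic approach in this literature (an illustration here, not necessarily this project's exact method) is the population-vector decoder: each neuron "votes" for its preferred movement direction, weighted by how far its firing rate rises above baseline.

```python
# Hedged sketch of a population-vector decoder: neurons vote for their
# preferred directions, weighted by firing rate above baseline; the summed
# vector gives the decoded movement direction.
import math

def population_vector(preferred_dirs, rates, baseline):
    x = sum((r - b) * math.cos(d)
            for d, r, b in zip(preferred_dirs, rates, baseline))
    y = sum((r - b) * math.sin(d)
            for d, r, b in zip(preferred_dirs, rates, baseline))
    return math.atan2(y, x)  # decoded movement direction in radians

# Four illustrative neurons with preferred directions at the compass points:
dirs = [0.0, math.pi / 2, math.pi, 3 * math.pi / 2]
angle = population_vector(dirs, rates=[30, 20, 10, 20],
                          baseline=[20, 20, 20, 20])
print(round(math.degrees(angle)))  # 0 -> a rightward reach
```

A real 10-by-10 array provides on the order of a hundred such channels, which makes the decoded vector far smoother than this toy example.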

"Our animal studies have shown that we can interpret the messages the brain sends to make a simple robotic arm reach for an object and turn a mechanical wrist," Dr. Schwartz said. "The next step is to see not only if we can make these techniques work for people, but also if we can make the movements more complex."

In the study, which is expected to begin by late 2011, participants will get two separate electrodes. In future research efforts, the technology may be enhanced with an innovative telemetry system that would allow wireless control of a prosthetic arm, as well as a sensory component.

"Our ultimate aim is to develop technologies that can give patients with physical disabilities control of assistive devices that will help restore their independence," Dr. Boninger said.


Source

Thursday, February 17, 2011

US Secret Service Moves Tiny Town to Virtual Tiny Town: Teaching Secret Service Agents and Officers How to Prepare a Site Security Plan

Now, with help from the Department of Homeland Security (DHS) Science & Technology Directorate (S&T), the Secret Service is giving training scenarios a high-tech edge: moving from static tabletop models to virtual kiosks with gaming technology and 3D modeling.

For the past 40 years, a miniature model environment called "Tiny Town" has been one of the methods used to teach Secret Service agents and officers how to prepare a site security plan. The model includes different sites -- an airport, outdoor stadium, urban rally site and a hotel interior -- and uses scaled models of buildings, cars and security assets. The scenario-based training allows students to illustrate a dignitary's entire itinerary and accommodate unrelated, concurrent activities in a public venue. Various elements of a visit are covered, such as an arrival, rope line or public remarks. The class works as a whole and in small groups to develop and present their security plan.

Enter videogame technology. The Secret Service's James J. Rowley Training Center near Washington, D.C., sought to take these scenarios beyond a static environment to encompass the dynamic threat spectrum that exists today, while taking full advantage of the latest computer software technology.

The agency's Security and Incident Modeling Lab wanted to update Tiny Town and create a more relevant and flexible training tool. With funding from DHS S&T, the Secret Service developed the Site Security Planning Tool (SSPT), a new training system dubbed "Virtual Tiny Town" by instructors, with high-tech features:

  • 3D models and game-based virtual environments
  • Simulated chemical plume dispersion for making and assessing decisions
  • A touch interface to foster collaborative, interactive involvement by student teams
  • A means to devise, configure, and test a security plan that is simple, engaging, and flexible
  • Both third- and first-person viewing perspectives for overhead site evaluation and for a virtual "walk-through" of the site, reflecting how it would be performed in the field.
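The simulated chemical plume dispersion in the list above is a standard ingredient of such trainers. A common textbook formulation (not necessarily the one SSPT implements) is the Gaussian plume model, which gives the concentration downwind of a continuous point source:

```python
# Hedged sketch of a Gaussian plume: concentration downwind of a continuous
# point source (ground-reflection term omitted for brevity). All numbers are
# illustrative.
import math

def plume_concentration(q, u, y, z, sy, sz, h=0.0):
    """q: emission rate (g/s), u: wind speed (m/s), y: crosswind offset (m),
    z: receptor height (m), sy/sz: dispersion widths (m), h: release height.
    In practice sy and sz grow with downwind distance; here they are given."""
    lateral = math.exp(-y**2 / (2 * sy**2))
    vertical = math.exp(-(z - h)**2 / (2 * sz**2))
    return q / (2 * math.pi * u * sy * sz) * lateral * vertical

# Concentration on the plume centerline at ground level:
c = plume_concentration(q=10.0, u=3.0, y=0.0, z=0.0, sy=8.0, sz=5.0)
print(f"{c:.4f} g/m^3")
```

Evaluating such a model on a grid over the virtual site is what lets trainees see where a release would spread and make placement decisions accordingly.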

The new technology consists of three kiosks, each composed of a 55" Perceptive Pixel touch screen with an attached projector and camera, and a computer running Virtual Battle Space (VBS2) as the base simulation game. The kiosks can accommodate a team of up to four students, and each kiosk's synthetic environment, along with the team's crafted site security plan, can be displayed on a large wall-mounted LED 3D TV monitor for conducting class briefings and demonstrating simulated security challenges.

In addition to training new recruits, SSPT can also provide in-service protective details with advanced training on a range of scenarios, including preparation against chemical, biological or radiological attacks, armed assaults, suicide bombers and other threats.

Future enhancements to SSPT will include modeling the resulting health effects and crowd behaviors of a chemical, radiological or biological attack, to better prepare personnel for a more comprehensive array of scenarios and the necessary life-saving actions required to protect dignitaries and the public alike.

The Site Security Planning Tool development is expected to be completed and activated by spring 2011.


Source

Wednesday, February 16, 2011

'Mashup' Technologies: Better Contact With Public Authorities

Potholes in the road or a park bench in need of repair -- we all come across these or similar problems every now and then. If only there were a simple way of reporting them to the right department of the public administration! The latest mashup technology and mobile applications make it possible to come up with solutions.

Inspired by the UK website www.fixmystreet.com, the Fraunhofer Institute for Open Communication Systems FOKUS in Berlin is taking this approach further. Damage reports can be assigned GPS coordinates by cell phone and entered. The system then provides an overview of communications received and indicates whether the same matter has been reported by someone else.
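Detecting that "the same matter has been reported by someone else" can be done by comparing GPS coordinates and treating reports within a few dozen metres as likely duplicates. A minimal sketch (helper names and the radius are ours):

```python
# Hedged sketch: flag existing damage reports close enough to a new one to be
# likely duplicates, using the haversine great-circle distance between fixes.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2)**2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2)**2
    return 2 * r * math.asin(math.sqrt(a))

def find_duplicates(report, existing, radius_m=50.0):
    return [e for e in existing
            if haversine_m(report[0], report[1], e[0], e[1]) <= radius_m]

pothole = (52.5200, 13.4050)                # new report, central Berlin
known = [(52.5201, 13.4051), (52.53, 13.41)]
print(find_duplicates(pothole, known))      # only the ~13 m-away report matches
```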

As used in information and communication technologies, the term 'mashup' refers to the mixing or combination of data, types of presentation and functionalities from various sources in order to create new services. One example is the placing of restaurant reviews in online maps such as Google Maps. Fraunhofer FOKUS's Government Mashups research project is putting the technology at the public sector's disposal. Solutions that already exist are being developed further to meet the requirements of government administration, and the relevant public sector staff are being assisted in the technical implementation of these new services. "Mashups hold enormous potential for public authorities because they link up internal and external data quickly and cheaply," says project manager Dipl.-Ing. Jens Klessmann. "Without any knowledge of computer programming and at little cost administrative staff can create new mashups which can be adapted effortlessly to changing requirements."

Numerous possible applications exist: In addition to complaints management, the use of public funding can for instance be graphically represented, restaurant reviews can be linked to the results of food hygiene inspections, statistics and other official data can be made more easily accessible, and capacity utilization at different airports can be illustrated in order to coordinate rescue services in the event of a disaster.

Such projects are underpinned by statutory regulations and political requirements. For example, laws on the freedom and re-use of government information already require public bodies to provide official data. In its current program to promote networked and transparent administration the German government has announced that it intends to develop a common strategy for open government. This will include the provision of open data, which are the raw material for government mashups. In addition, governments and public bodies find themselves under growing pressure to justify and explain the increasingly complex procedures underlying their actions. Mashups can be used to explain and visualize these matters.

At CeBit 2011 Fraunhofer FOKUS will present two advanced demonstrators for mashups. Visitors will be invited to take a photo of a pothole on a smart phone and send it to a fictitious city authority as a complaint. And the research scientists will use statistical data from the World Bank to demonstrate how such information can be translated, processed and visualized so that anyone interested can download it.


Source

Tuesday, February 15, 2011

3-D Movies on Your Cell Phone

The experts will be presenting their solution from February 14-17 at the Mobile World Congress in Barcelona.

Halting page loads and postage-stamp-sized videos jiggling all over the screen -- those days are gone for good thanks to smartphones, flat rates and fast data links. Last year, 100 million videos were viewed on YouTube from cell phones all over the world.

A survey of the high-tech association BITKOM found that 10 million people surf the Internet with their cell phones in Germany. And there's another hype that is unbroken: 3-D movies. Researchers at the Fraunhofer Institute for Telecommunications, Heinrich-Hertz-Institut, HHI in Berlin, Germany, have been able to put both of them together so you can experience mobile Internet in three dimensions.

The researchers have developed a special compression technique for high-resolution HD movies. It computes the movies down to low data rates while maintaining quality: H.264/AVC. What the H.264/AVC video format is to high-definition movies, Multiview Video Coding (MVC) is to 3-D movies. Thomas Schierl, a scientist at the HHI, explained that "MVC is used to pack together the two images needed for the stereoscopic 3-D effect to measurably reduce the film's bit rate," and this technique can be used to reduce the size of 3-D movies by as much as 40 percent.
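A quick back-of-the-envelope illustration of that 40 percent figure: the stereoscopic effect needs two views, and the naive approach sends them as two independent H.264/AVC streams, while MVC exploits the redundancy between them. The bitrates below are illustrative, not measured values:

```python
# Illustrative arithmetic (made-up base bitrate): two independent stereo views
# vs. packing them together with MVC at the quoted 40 percent saving.
base_view_mbps = 8.0                 # one HD view encoded with H.264/AVC
simulcast = 2 * base_view_mbps       # two independent views: 16.0 Mbit/s
mvc = simulcast * (1 - 0.40)         # MVC exploits inter-view redundancy
print(f"simulcast {simulcast} Mbit/s -> MVC {mvc:.1f} Mbit/s")
```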

That means that you can quickly receive excellent-quality 3-D movies over the new 3G-LTE mobile radio standard. Key is the radio resource management integrated into the LTE system, which allows flexible data transmission across various quality-of-service classes. Thomas Wirth, another scientist at the HHI, explains further: "The 2-D and 3-D bit streams divided up by MVC can be prioritized for each user at the air interface to support different services, thus opening up a completely new field for business models." Premium services, for instance, where only paying users can watch the 3-D version of a movie. A 3-D quality guarantee is also possible, even in unfavorable reception conditions such as in a moving car. That means that kids can still watch Ice Age in 3-D without interruption in the back seat of the car.


Source

Monday, February 14, 2011

New Wireless Technology Developed for Faster, More Efficient Networks

"Wireless communication is a one-way street. Over."

Radio traffic can flow in only one direction at a time on a specific frequency, hence the frequent use of"over" by pilots and air traffic controllers, walkie-talkie users and emergency personnel as they take turns speaking.

But now, Stanford researchers have developed the first wireless radios that can send and receive signals at the same time.

This immediately makes them twice as fast as existing technology, and with further tweaking will likely lead to even faster and more efficient networks in the future.

"Textbooks say you can't do it," said Philip Levis, assistant professor of computer science and of electrical engineering. "The new system completely reworks our assumptions about how wireless networks can be designed," he said.

Cell phone networks allow users to talk and listen simultaneously, but they use a work-around that is expensive and requires careful planning, making the technique less feasible for other wireless networks, including Wi-Fi.

Sparked from a simple idea

A trio of electrical engineering graduate students, Jung Il Choi, Mayank Jain and Kannan Srinivasan, began working on a new approach when they came up with a seemingly simple idea. What if radios could do the same thing our brains do when we listen and talk simultaneously: screen out the sound of our own voice?

In most wireless networks, each device has to take turns speaking or listening. "It's like two people shouting messages to each other at the same time," said Levis. "If both people are shouting at the same time, neither of them will hear the other."

It took the students several months to figure out how to build the new radio, with help from Levis and Sachin Katti, assistant professor of computer science and of electrical engineering.

Their main roadblock to two-way simultaneous conversation was this: Incoming signals are overwhelmed by the radio's own transmissions, making it impossible to talk and listen at the same time.

"When a radio is transmitting, its own transmission is millions, billions of times stronger than anything else it might hear {from another radio}," Levis said. "It's trying to hear a whisper while you yourself are shouting."

But, the researchers realized, if a radio receiver could filter out the signal from its own transmitter, weak incoming signals could be heard. "You can make it so you don't hear your own shout and you can hear someone else's whisper," Levis said.

Their setup takes advantage of the fact that each radio knows exactly what it's transmitting, and hence what its receiver should filter out. The process is analogous to noise-canceling headphones.
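The principle can be shown with a toy numerical example: the radio knows its own transmitted waveform exactly, so subtracting a correctly scaled copy of it from what the receiver hears leaves only the weak incoming signal. (The real system must estimate the coupling and cancel in analog hardware as well; this sketch assumes the coupling is known perfectly.)

```python
# Hedged toy model of self-interference cancellation: subtract the known
# transmitted waveform from the received mixture to recover the weak signal.
own_tx   = [1.0, -1.0, 1.0, 1.0, -1.0]      # what we are "shouting"
incoming = [0.01, 0.01, -0.01, 0.01, 0.01]  # the distant "whisper"
coupling = 1000.0                            # our own tx drowns the rx path

received = [coupling * t + s for t, s in zip(own_tx, incoming)]
recovered = [r - coupling * t for r, t in zip(received, own_tx)]
print([round(x, 3) for x in recovered])  # [0.01, 0.01, -0.01, 0.01, 0.01]
```

In practice the cancellation is imperfect, which is why the Stanford group combined antenna placement with analog and digital cancellation stages.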

When the researchers demonstrated their device last fall at MobiCom 2010, an international gathering of more than 500 of the world's top experts in mobile networking, they won the prize for best demonstration. Until then, people didn't believe sending and receiving signals simultaneously could be done, Jain said. Levis said a researcher even told the students their idea was "so simple and effective, it won't work," because something that obvious must have already been tried unsuccessfully.

Breakthrough for communications technology

But work it did, with major implications for future communications networks. The most obvious effect of sending and receiving signals simultaneously is that it instantly doubles the amount of information you can send, Levis said. That means much-improved home and office networks that are faster and less congested.

But Levis also sees the technology having larger impacts, such as overcoming a major problem with air traffic control communications. With current systems, if two aircraft try to call the control tower at the same time on the same frequency, neither will get through. Levis says these blocked transmissions have caused aircraft collisions, which the new system would help prevent.

The group has a provisional patent on the technology and is working to commercialize it. They are currently trying to increase both the strength of the transmissions and the distances over which they work. These improvements are necessary before the technology is practical for use in Wi-Fi networks.

But even more promising are the system's implications for future networks. Once hardware and software are built to take advantage of simultaneous two-way transmission,"there's no predicting the scope of the results," Levis said.


Source

Sunday, February 13, 2011

Fingerprint Makes Computer Chips Counterfeit-Proof

Fraunhofer researchers will be presenting a prototype at the embedded world Exhibition & Conference in Nuremberg from March 1 to 3.

Product piracy long ago ceased to be limited exclusively to the consumer goods sector. Industry, too, is increasingly having to combat this problem. Cheap fakes cost business dear: The German mechanical and plant engineering sector alone lost 6.4 billion euros of revenue in 2010, according to a survey by the German Engineering Federation (VDMA). Sales losses aside, low-quality counterfeits can also damage a company's brand image. Worse, they can even put people's lives at risk if they are used in areas where safety is paramount, such as automobile or aircraft manufacture. Patent rights or organizational provisions such as confidentiality agreements are no longer sufficient to prevent product piracy. Today's commercially available anti-piracy technology provides a degree of protection, but it no longer constitutes an insurmountable obstacle for the product counterfeiters: Criminals are using scanning electron microscopes, focused ion beams or laser bolts to intercept security keys -- and adopting increasingly sophisticated methods.

No two chips are the same

At embedded world, researchers from the Fraunhofer Institute for Secure Information Technology SIT will be demonstrating how electronic components or chips can be made counterfeit-proof using physical unclonable functions (PUFs). "Every component has a kind of individual fingerprint since small differences inevitably arise between components during production," explains Dominik Merli, a scientist at Fraunhofer SIT in Garching near Munich. Printed circuits, for instance, end up with minimal variations in thickness or length during the manufacturing process. While these variations do not affect functionality, they can be used to generate a unique code.

Invasive attacks destroy the structure

A PUF module is integrated directly into a chip -- a setup that is feasible not only in a large number of programmable semiconductors known as FPGAs (field programmable gate arrays) but equally in hardware components such as microchips and smartcards."At its heart is a measuring circuit, for instance a ring oscillator. This oscillator generates a characteristic clock signal which allows the chip's precise material properties to be determined. Special electronic circuits then read these measurement data and generate the component-specific key from the data," explains Merli. Unlike conventional cryptographic processes, the secret key is not stored on the hardware but is regenerated as and when required. Since the code relates directly to the system properties at any given point in time, it is virtually impossible to extract and clone it. Invasive attacks on the chip would alter physical parameters, thus distorting or destroying the unique structure.
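The key-derivation idea behind a ring-oscillator PUF can be sketched simply: manufacturing variation makes each oscillator's frequency slightly different, and comparing pairs of nominally identical oscillators yields one key bit per pair. The frequencies below are invented for illustration; a real module measures them on-chip each time the key is needed:

```python
# Hedged sketch of ring-oscillator PUF key derivation: one key bit per pair of
# nominally identical oscillators, decided by which runs faster. Frequencies
# here are made up; on real silicon they come from the chip itself.
freqs_mhz = [100.3, 99.8, 100.1, 100.6, 99.9, 100.2, 100.4, 99.7]

def puf_key(freqs):
    """One bit per adjacent pair: 1 if the first oscillator is faster."""
    bits = [int(freqs[i] > freqs[i + 1]) for i in range(0, len(freqs), 2)]
    return "".join(map(str, bits))

print(puf_key(freqs_mhz))  # "1001" -- regenerated on demand, never stored
```

Because the key is recomputed from physical measurements rather than read from memory, an invasive attack that disturbs those physical properties destroys the very fingerprint it is trying to steal.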

The Garching-based researchers have already developed two prototypes: A butterfly PUF and a ring oscillator PUF. At present, these modules are being optimized for practical applications. The experts will be at embedded world in Nuremberg (hall 11, stand 203) from March 1-3 to showcase FPGA boards that can generate an individual cryptographic key using a ring oscillator PUF. These allow attack-resistant security solutions to be rolled out in embedded systems.


Source

Saturday, February 12, 2011

Ultrafast Quantum Computer Closer: Ten Billion Bits of Entanglement Achieved in Silicon

The researchers used high magnetic fields and low temperatures to produce entanglement between the electron and the nucleus of an atom of phosphorus embedded in a highly purified silicon crystal. The electron and the nucleus behave as a tiny magnet, or 'spin', each of which can represent a bit of quantum information. Suitably controlled, these spins can interact with each other to be coaxed into an entangled state -- the most basic state that cannot be mimicked by a conventional computer.

An international team from the UK, Japan, Canada and Germany report their achievement in the journal Nature.

'The key to generating entanglement was to first align all the spins by using high magnetic fields and low temperatures,' said Stephanie Simmons of Oxford University's Department of Materials, first author of the report. 'Once this has been achieved, the spins can be made to interact with each other using carefully timed microwave and radiofrequency pulses in order to create the entanglement, and then prove that it has been made.'
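The entangled electron-nuclear state the pulses create can be written in standard notation as a Bell-type superposition (a generic example of such a state; the exact state prepared in the experiment may differ):

```latex
% A maximally entangled (Bell-type) state of the electron (e) and nuclear (n)
% spins: neither spin has a definite orientation on its own.
\left|\Psi\right\rangle \;=\; \frac{1}{\sqrt{2}}
  \left( \left|\uparrow_e \downarrow_n\right\rangle
       + \left|\downarrow_e \uparrow_n\right\rangle \right)
```

Measuring either spin immediately determines the other, which is exactly the property that no pair of independently prepared classical bits can mimic.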

The work has important implications for integration with existing technology as it uses dopant atoms in silicon, the foundation of the modern computer chip. The procedure was applied in parallel to a vast number of phosphorus atoms.

'Creating 10 billion entangled pairs in silicon with high fidelity is an important step forward for us,' said co-author Dr John Morton of Oxford University's Department of Materials who led the team. 'We now need to deal with the challenge of coupling these pairs together to build a scalable quantum computer in silicon.'

In recent years quantum entanglement has been recognised as a key ingredient in building new technologies that harness quantum properties. Famously described by Einstein as "spooky action at a distance," entanglement means that it is impossible to describe one of two entangled objects without also describing the other, and the measurement of one object will reveal information about the other even if they are separated by thousands of miles.

Creating true entanglement involves crossing the barrier between the ordinary uncertainty encountered in our everyday lives and the strange uncertainties of the quantum world. For example, when flipping a coin there is a 50% chance it comes up heads and a 50% chance it comes up tails, but we would never imagine the coin could land with both heads and tails facing upwards simultaneously. A quantum object such as an electron spin can do just that.

Dr Morton said: 'At high temperatures there is simply a 50/50 mixture of spins pointing in different directions but, under the right conditions, all the spins can be made to point in two opposing directions at the same time. Achieving this was critical to the generation of spin entanglement.'


Source

Friday, February 11, 2011

CeBIT 2011: Cloud Computing for Administration

Researchers will be presenting these and other solutions on "Computing in the Cloud" at CeBIT in Hanover from March 1-5, 2011.

Cloud computing is a tempting development for IT managers: companies and organizations no longer have to acquire servers and software solutions themselves and instead rent the capacity they need for data, computing power and applications from professional providers, paying only for what they use. In Germany, it is primarily companies that are turning to cloud computing, transferring their data, applications and networks to server farms at Amazon, Google, IBM, Microsoft or other IT service providers. In the space of just a few years, cloud computing has emerged as a market worth billions, one of considerable importance to Germany as a business location.

In autumn 2010, researchers from the Fraunhofer Institute for Open Communication Systems FOKUS in Berlin, together with their colleagues from the Hertie School of Governance, published a study, "Kooperatives eGovernment -- Cloud Computing für die Öffentliche Verwaltung" {"Cooperative eGovernment: Cloud Computing for Public Administration"}. The study had been commissioned by ISPRAT, an organization dedicated to conducting interdisciplinary studies in politics, law, administration and technology. The study addresses the aspect of security, identifies risks, and uses various implementation scenarios to describe the benefits and advantages of this new technology for public administrators, with a particular focus on federal requirements in Germany.

"There are considerable reservations about cloud computing in the public-administration area: first, because of the fundamental need to protect citizens' personal data entrusted to public administrators; but authorities also find the prospect of outsourcing processes unsettling, partly out of fear of losing expertise and partly because the law requires that core tasks remain in the hands of administrators." This is how study co-author Linda Strick of FOKUS summarizes the status quo.

The study points out that cloud-specific security risks do in fact exist, but that these can be completely understood and analyzed. "There is even reason to assume that cloud-based systems can actually fulfill higher security standards than classic solutions," Strick explains. To assist administrators with the introduction of the new technology, FOKUS researchers in the eGovernment laboratory are developing application scenarios for a seamless and hence interoperable use of cloud-computing technologies.

A cockpit for security

To permit companies and public authorities to acquire practical experience with the new technology and test security concepts, experts from the Fraunhofer Institute for Secure Information Technology SIT in Munich have created a Cloud Computing Test Laboratory. Along with security concepts and technologies for cloud-computing providers, researchers there are also developing and studying strategies for secure integration of cloud services in existing IT infrastructures.

"In our test lab, function, reliability and interoperability tests, along with individual security analyses and penetration tests, can be carried out and all of the developmental phases considered, from the design of individual services to prototypes to the testing of fully functional comprehensive systems," notes Angelika Ruppel of SIT in Munich.

Working with the German federal office for information security {Bundesamt für Sicherheit in der Informationstechnik} BSI, her division has drafted minimum requirements for providers and has developed a Cloud Cockpit. With this solution, companies can securely transfer their data between different cloud systems while monitoring information relevant to security and data protection. Even the application of hybrid cloud infrastructures, with which companies can use both internal and external computing power, can be securely controlled using the Cloud Cockpit.


Source

Thursday, February 10, 2011

How Much Information Is There in the World?

A study appearing on Feb. 10 in ScienceExpress, an electronic journal that provides select Science articles ahead of print, calculates the world's total technological capacity -- how much information humankind is able to store, communicate and compute.

"We live in a world where economies, political freedom and cultural growth increasingly depend on our technological capabilities," said lead author Martin Hilbert of the USC Annenberg School for Communication & Journalism. "This is the first time-series study to quantify humankind's ability to handle information."

So how much information is there in the world? How much has it grown?

Prepare for some big numbers:

  • Looking at both digital memory and analog devices, the researchers calculate that humankind is able to store at least 295 exabytes of information. (Yes, that's a number with 20 zeroes in it.)

    Put another way, if a single star is a bit of information, that's a galaxy of information for every person in the world. That's 315 times the number of grains of sand in the world. But it's still less than one percent of the information that is stored in all the DNA molecules of a human being.

  • 2002 could be considered the beginning of the digital age, the first year worldwide digital storage capacity overtook total analog capacity. As of 2007, almost 94 percent of our memory is in digital form.
  • In 2007, humankind successfully sent 1.9 zettabytes of information through broadcast technology such as televisions and GPS. That's equivalent to every person in the world reading 174 newspapers every day.
  • On two-way communications technology, such as cell phones, humankind shared 65 exabytes of information through telecommunications in 2007, the equivalent of every person in the world communicating the contents of six newspapers every day.
  • In 2007, all the general-purpose computers in the world computed 6.4 x 10^18 instructions per second, in the same general order of magnitude as the number of nerve impulses executed by a single human brain. Doing these instructions by hand would take 2,200 times the period since the Big Bang.
  • From 1986 to 2007, the period of time examined in the study, worldwide computing capacity grew 58 percent a year, ten times faster than the United States' GDP.

Telecommunications grew 28 percent annually, and storage capacity grew 23 percent a year.
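The per-person equivalences above are straightforward unit arithmetic. A quick sanity check of the broadcast figure, assuming a rough 2007 world population of 6.6 billion and taking a "newspaper" to mean one full daily edition (both are assumptions of this sketch, not figures from the study):

```python
# SI prefixes used in the study's figures
EXA, ZETTA = 10**18, 10**21

WORLD_POP_2007 = 6.6e9            # assumption: rough 2007 world population
broadcast_2007 = 1.9 * ZETTA      # bytes broadcast in 2007, per the study

# Bytes each person would receive per day if broadcast were split evenly
per_person_per_day = broadcast_2007 / (WORLD_POP_2007 * 365)

# "174 newspapers per day" then implies a newspaper of roughly this size
implied_paper_bytes = per_person_per_day / 174
print(round(implied_paper_bytes / 1e6, 1), "MB per newspaper")  # ~4.5 MB
```

A few megabytes per daily paper is plausible, so the equivalence holds up under these assumptions.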

"These numbers are impressive, but still minuscule compared to the order of magnitude at which nature handles information," Hilbert said. "Compared to nature, we are but humble apprentices. However, while the natural world is mind-boggling in its size, it remains fairly constant. In contrast, the world's technological information processing capacities are growing at exponential rates."

Priscila Lopez of the Open University of Catalonia was co-author of the study.


Source

Wednesday, February 9, 2011

World's First Programmable Nanoprocessor: Nanowire Tiles Can Perform Arithmetic and Logical Functions

The groundbreaking prototype computer system, described in a paper appearing in the journal Nature, represents a significant step forward in the complexity of computer circuits that can be assembled from synthesized nanometer-scale components.

It also represents an advance because these ultra-tiny nanocircuits can be programmed electronically to perform a number of basic arithmetic and logical functions.

"This work represents a quantum jump forward in the complexity and function of circuits built from the bottom up, and thus demonstrates that this bottom-up paradigm, which is distinct from the way commercial circuits are built today, can yield nanoprocessors and other integrated systems of the future," says principal investigator Charles M. Lieber, who holds a joint appointment at Harvard's Department of Chemistry and Chemical Biology and School of Engineering and Applied Sciences.

The work was enabled by advances in the design and synthesis of nanowire building blocks. These nanowire components now demonstrate the reproducibility needed to build functional electronic circuits, and also do so at a size and material complexity difficult to achieve by traditional top-down approaches.

Moreover, the tiled architecture is fully scalable, allowing the assembly of much larger and ever more functional nanoprocessors.

"For the past 10 to 15 years, researchers working with nanowires, carbon nanotubes, and other nanostructures have struggled to build all but the most basic circuits, in large part due to variations in properties of individual nanostructures," says Lieber, the Mark Hyman Professor of Chemistry. "We have shown that this limitation can now be overcome and are excited about prospects of exploiting the bottom-up paradigm of biology in building future electronics."

An additional feature of the advance is that the circuits in the nanoprocessor operate using very little power, even allowing for their minuscule size, because their component nanowires contain transistor switches that are "nonvolatile."

This means that unlike transistors in conventional microcomputer circuits, once the nanowire transistors are programmed, they do not require any additional expenditure of electrical power for maintaining memory.

"Because of their very small size and very low power requirements, these new nanoprocessor circuits are building blocks that can control and enable an entirely new class of much smaller, lighter weight electronic sensors and consumer electronics," says co-author Shamik Das, the lead engineer in MITRE's Nanosystems Group.

"This new nanoprocessor represents a major milestone toward realizing the vision of a nanocomputer that was first articulated more than 50 years ago by physicist Richard Feynman," says James Ellenbogen, a chief scientist at MITRE.

Co-authors on the paper included four members of Lieber's lab at Harvard: Hao Yan (Ph.D. '10), SungWoo Nam (Ph.D. '10), Yongjie Hu (Ph.D. '10), and doctoral candidate Hwan Sung Choe, as well as collaborators at MITRE.

The research team at MITRE comprised Das, Ellenbogen, and nanotechnology laboratory director Jim Klemic. The MITRE Corporation is a not-for-profit company that provides systems engineering, research and development, and information technology support to the government. MITRE's principal locations are in Bedford, Mass., and McLean, Va.

The research was supported by a Department of Defense National Security Science and Engineering Faculty Fellowship, the National Nanotechnology Initiative, and the MITRE Innovation Program.


Source

Monday, February 7, 2011

Engineers Grow Nanolasers on Silicon, Pave Way for on-Chip Photonics

They describe their work in a paper to be published Feb. 6 in an advance online issue of the journal Nature Photonics.

"Our results impact a broad spectrum of scientific fields, including materials science, transistor technology, laser science, optoelectronics and optical physics," said the study's principal investigator, Connie Chang-Hasnain, UC Berkeley professor of electrical engineering and computer sciences.

The increasing performance demands of electronics have sent researchers in search of better ways to harness the inherent ability of light particles to carry far more data than electrical signals can. Optical interconnects are seen as a solution to overcoming the communications bottleneck within and between computer chips.

Because silicon, the material that forms the foundation of modern electronics, is extremely deficient at generating light, engineers have turned to another class of materials known as III-V (pronounced "three-five") semiconductors to create light-based components such as light-emitting diodes (LEDs) and lasers.

But the researchers pointed out that marrying III-V with silicon to create a single optoelectronic chip has been problematic. For one, the atomic structures of the two materials are mismatched.

"Growing III-V semiconductor films on silicon is like forcing two incongruent puzzle pieces together," said study lead author Roger Chen, a UC Berkeley graduate student in electrical engineering and computer sciences. "It can be done, but the material gets damaged in the process."

Moreover, the manufacturing industry is set up for the production of silicon-based materials, so for practical reasons, the goal has been to integrate the fabrication of III-V devices into the existing infrastructure, the researchers said.

"Today's massive silicon electronics infrastructure is extremely difficult to change for both economic and technological reasons, so compatibility with silicon fabrication is critical," said Chang-Hasnain. "One problem is that growth of III-V semiconductors has traditionally involved high temperatures -- 700 degrees Celsius or more -- that would destroy the electronics. Meanwhile, other integration approaches have not been scalable."

The UC Berkeley researchers overcame this limitation by finding a way to grow nanopillars made of indium gallium arsenide, a III-V material, onto a silicon surface at the relatively cool temperature of 400 degrees Celsius.

"Working at nanoscale levels has enabled us to grow high quality III-V materials at low temperatures such that silicon electronics can retain their functionality," said Chen.

The researchers used metal-organic chemical vapor deposition to grow the nanopillars on the silicon. "This technique is potentially mass manufacturable, since such a system is already used commercially to make thin film solar cells and light emitting diodes," said Chang-Hasnain.

Once the nanopillar was made, the researchers showed that it could generate near infrared laser light -- a wavelength of about 950 nanometers -- at room temperature. The hexagonal geometry dictated by the crystal structure of the nanopillars creates a new, efficient, light-trapping optical cavity. Light circulates up and down the structure in a helical fashion and amplifies via this optical feedback mechanism.

The unique approach of growing nanolasers directly onto silicon could lead to highly efficient silicon photonics, the researchers said. They noted that the minuscule dimensions of the nanopillars -- smaller than one wavelength on each side, in some cases -- make it possible to pack them into small spaces with the added benefit of consuming very little energy.

"Ultimately, this technique may provide a powerful and new avenue for engineering on-chip nanophotonic devices such as lasers, photodetectors, modulators and solar cells," said Chen.

"This is the first bottom-up integration of III-V nanolasers onto silicon chips using a growth process compatible with the CMOS (complementary metal oxide semiconductor) technology now used to make integrated circuits," said Chang-Hasnain. "This research has the potential to catalyze an optoelectronics revolution in computing, communications, displays and optical signal processing. In the future, we expect to improve the characteristics of these lasers and ultimately control them electronically for a powerful marriage between photonic and electronic devices."

The Defense Advanced Research Projects Agency and a Department of Defense National Security Science and Engineering Faculty Fellowship helped support this research.


Source

Saturday, February 5, 2011

Physicists Challenge Classical World With Quantum-Mechanical Implementation of 'Shell Game'

In a paper published in the Jan. 30 issue of the journal Nature Physics, UCSB researchers show the first demonstration of the coherent control of a multi-resonator architecture. This topic has been a holy grail among physicists studying photons at the quantum-mechanical level for more than a decade.

The UCSB researchers are Matteo Mariantoni, postdoctoral fellow in the Department of Physics; Haohua Wang, postdoctoral fellow in physics; John Martinis, professor of physics; and Andrew Cleland, professor of physics.

According to the paper, the "shell man," the researcher, makes use of two superconducting quantum bits (qubits) to move the photons -- particles of light -- between the resonators. The qubits -- the quantum-mechanical equivalent of the classical bits used in a common PC -- are studied at UCSB for the development of a quantum supercomputer. They constitute one of the key elements for playing the photon shell game.

"This is an important milestone toward the realization of a large-scale quantum register," said Mariantoni. "It opens up an entirely new dimension in the realm of on-chip microwave photonics and quantum optics in general."

The researchers fabricated a chip where three resonators of a few millimeters in length are coupled to two qubits. "The architecture studied in this work resembles a quantum railroad," said Mariantoni. "Two quantum stations -- two of the three resonators -- are interconnected through the third resonator, which acts as a quantum bus. The qubits control the traffic and allow the shuffling of photons among the resonators."

In a related experiment, the researchers played a more complex game inspired by the Towers of Hanoi, an ancient mathematical puzzle that, according to legend, originated in an Indian temple.

The Towers of Hanoi puzzle consists of three posts and a pile of disks of different diameter, which can slide onto any post. The puzzle starts with the disks in a stack in ascending order of size on one post, with the smallest disk at the top. The aim of the puzzle is to move the entire stack to another post, with only one disk being moved at a time, and with no disk being placed on top of a smaller disk.
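As a point of reference, the classical puzzle has a compact recursive solution, sketched here in Python; the quantum version shuffles photons rather than disks, but the move structure is the same.

```python
def hanoi(n, source, target, spare, moves=None):
    """Move n disks from source to target, never placing a larger
    disk on a smaller one; returns the list of (from, to) moves."""
    if moves is None:
        moves = []
    if n > 0:
        hanoi(n - 1, source, spare, target, moves)  # clear the way
        moves.append((source, target))              # move the largest disk
        hanoi(n - 1, spare, target, source, moves)  # restack on top of it
    return moves

print(len(hanoi(3, "A", "C", "B")))  # 7 moves: the minimum is 2**n - 1
```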

In the quantum-mechanical version of the Towers of Hanoi, the three posts are represented by the resonators and the disks by quanta of light with different energy. "This game demonstrates that a truly bosonic excitation can be shuffled among resonators -- an interesting example of the quantum-mechanical nature of light," said Mariantoni.

Mariantoni was supported in this work by an Elings Prize Fellowship in Experimental Science from UCSB's California NanoSystems Institute.


Source

Friday, February 4, 2011

New Mathematical Model of Information Processing in the Brain Accurately Predicts Some of the Peculiarities of Human Vision

At the Society of Photo-Optical Instrumentation Engineers' Human Vision and Electronic Imaging conference on Jan. 27, Ruth Rosenholtz, a principal research scientist in the Department of Brain and Cognitive Sciences, presented a new mathematical model of how the brain summarizes the contents of the visual field. The model accurately predicts the visual system's failure on certain types of image-processing tasks, a good indication that it captures some aspect of human cognition.

Most models of human object recognition assume that the first thing the brain does with a retinal image is identify edges -- boundaries between regions with different light-reflective properties -- and sort them according to alignment: horizontal, vertical and diagonal. Then, the story goes, the brain starts assembling these features into primitive shapes, registering, for instance, that in some part of the visual field, a horizontal feature appears above a vertical feature, or two diagonals cross each other. From these primitive shapes, it builds up more complex shapes -- four L's with different orientations, for instance, would make a square -- and so on, until it's constructed shapes that it can identify as features of known objects.

While this might be a good model of what happens at the center of the visual field, Rosenholtz argues, it's probably less applicable to the periphery, where human object discrimination is notoriously weak. In a series of papers in the last few years, Rosenholtz has proposed that cognitive scientists instead think of the brain as collecting statistics on the features in different patches of the visual field.

Patchy impressions

On Rosenholtz's model, the patches described by the statistics get larger the farther they are from the center. This corresponds with a loss of information, in the same sense that, say, the average income for a city is less informative than the average income for every household in the city. At the center of the visual field, the patches might be so small that the statistics amount to the same thing as descriptions of individual features: A 100-percent concentration of horizontal features could indicate a single horizontal feature. So Rosenholtz's model would converge with the standard model.

But at the edges of the visual field, the models come apart. A large patch whose statistics are, say, 50 percent horizontal features and 50 percent vertical could contain an array of a dozen plus signs, or an assortment of vertical and horizontal lines, or a grid of boxes.
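This ambiguity is easy to demonstrate with a toy summary statistic. The sketch below reduces a patch to just two numbers, the fractions of horizontal and vertical edges, standing in for Rosenholtz's roughly 1,000 measurements (an illustration, not her actual feature set):

```python
def orientation_stats(patch):
    """Fractions of horizontal vs. vertical edges in a binary patch --
    a toy stand-in for a full set of patch summary statistics."""
    h = sum(abs(patch[i + 1][j] - patch[i][j])   # vertical change => horizontal edge
            for i in range(len(patch) - 1) for j in range(len(patch[0])))
    v = sum(abs(patch[i][j + 1] - patch[i][j])   # horizontal change => vertical edge
            for i in range(len(patch)) for j in range(len(patch[0]) - 1))
    total = h + v
    return h / total, v / total

def blank(n=9):
    return [[0] * n for _ in range(n)]

plus = blank()
for k in range(9):
    plus[4][k] = 1   # horizontal stroke
    plus[k][4] = 1   # vertical stroke crossing it

lines = blank()
for k in range(9):
    lines[1][k] = 1  # a horizontal line...
    lines[k][7] = 1  # ...and a separate vertical line

# Different patterns, identical statistics: both are half-and-half
print(orientation_stats(plus), orientation_stats(lines))  # (0.5, 0.5) (0.5, 0.5)
```

A plus sign and a pair of unconnected lines are indistinguishable under these two numbers, which is exactly the kind of information loss the model attributes to peripheral vision.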

In fact, Rosenholtz's model includes statistics on much more than just orientation of features: There are also measures of things like feature size, brightness and color, and averages of other features -- about 1,000 numbers in all. But in computer simulations, storing even 1,000 statistics for every patch of the visual field requires only one-90th as many virtual neurons as storing visual features themselves, suggesting that statistical summary could be the type of space-saving technique the brain would want to exploit.

Rosenholtz's model grew out of her investigation of a phenomenon called visual crowding. If you were to concentrate your gaze on a point at the center of a mostly blank sheet of paper, you might be able to identify a solitary A at the left edge of the page. But you would fail to identify an identical A at the right edge, the same distance from the center, if instead of standing on its own it were in the center of the word "BOARD."

Rosenholtz's approach explains this disparity: The statistics of the lone A are specific enough to A's that the brain can infer the letter's shape; but the statistics of the corresponding patch on the other side of the visual field also factor in the features of the B, O, R and D, resulting in aggregate values that don't identify any of the letters clearly.

Road test

Rosenholtz's group has also conducted a series of experiments with human subjects designed to test the validity of the model. Subjects might, for instance, be asked to search for a target object -- like the letter O -- amid a sea of "distractors" -- say, a jumble of other letters. A patch of the visual field that contains 11 Q's and one O would have very similar statistics to one that contains a dozen Q's. But it would have much different statistics than a patch that contained a dozen plus signs. In experiments, the degree of difference between the statistics of different patches is an extremely good predictor of how quickly subjects can find a target object: It's much easier to find an O among plus signs than it is to find it amid Q's.

Rosenholtz, who has a joint appointment to the Computer Science and Artificial Intelligence Laboratory, is also interested in the implications of her work for data visualization, an active research area in its own right. For instance, designing subway maps with an eye to maximizing the differences between the summary statistics of different regions could make them easier for rushing commuters to take in at a glance.

In vision science, "there's long been this notion that somehow what the periphery is for is texture," says Denis Pelli, a professor of psychology and neural science at New York University. Rosenholtz's work, he says, "is turning it into real calculations rather than just a side comment." Pelli points out that the brain probably doesn't track exactly the 1,000-odd statistics that Rosenholtz has used, and indeed, Rosenholtz says that she simply adopted a group of statistics commonly used to describe visual data in computer vision research. But Pelli also adds that visual experiments like the ones that Rosenholtz is performing are the right way to narrow down the list to "the ones that really matter."


Source

Thursday, February 3, 2011

Future Surgeons May Use Robotic Nurse, 'Gesture Recognition'

Both the hand-gesture recognition and robotic nurse innovations might help to reduce the length of surgeries and the potential for infection, said Juan Pablo Wachs, an assistant professor of industrial engineering at Purdue University.

The "vision-based hand gesture recognition" technology could have other applications, including the coordination of emergency response activities during disasters.

"It's a concept Tom Cruise demonstrated vividly in the film 'Minority Report,'" Wachs said.

Surgeons routinely need to review medical images and records during surgery, but stepping away from the operating table and touching a keyboard and mouse can delay the surgery and increase the risk of spreading infection-causing bacteria.

The new approach is a system that uses a camera and specialized algorithms to recognize hand gestures as commands to instruct a computer or robot.

At the same time, a robotic scrub nurse represents a potential new tool that might improve operating-room efficiency, Wachs said.

Findings from the research will be detailed in a paper appearing in the February issue of Communications of the ACM, the flagship publication of the Association for Computing Machinery. The paper was written by researchers at Purdue, the Naval Postgraduate School in Monterey, Calif., and Ben-Gurion University of the Negev, Israel.

Research into hand-gesture recognition began several years ago in work led by the Washington Hospital Center and Ben-Gurion University, where Wachs was a research fellow and doctoral student, respectively.

He is now working to extend the system's capabilities in research with Purdue's School of Veterinary Medicine and the Department of Speech, Language, and Hearing Sciences.

"One challenge will be to develop the proper shapes of hand poses and the proper hand trajectory movements to reflect and express certain medical functions," Wachs said. "You want to use intuitive and natural gestures for the surgeon, to express medical image navigation activities, but you also need to consider cultural and physical differences between surgeons. They may have different preferences regarding what gestures they may want to use."

Other challenges include providing computers with the ability to understand the context in which gestures are made and to discriminate between intended gestures versus unintended gestures.

"Say the surgeon starts talking to another person in the operating room and makes conversational gestures," Wachs said. "You don't want the robot handing the surgeon a hemostat."

A scrub nurse assists the surgeon and hands the proper surgical instruments to the doctor when needed.

"While it will be very difficult using a robot to achieve the same level of performance as an experienced nurse who has been working with the same surgeon for years, often scrub nurses have had very limited experience with a particular surgeon, maximizing the chances for misunderstandings, delays and sometimes mistakes in the operating room," Wachs said. "In that case, a robotic scrub nurse could be better."

The Purdue researcher has developed a prototype robotic scrub nurse, in work with faculty in the university's School of Veterinary Medicine.

Researchers at other institutions developing robotic scrub nurses have focused on voice recognition. However, little work has been done in the area of gesture recognition, Wachs said.

"Another big difference between our focus and the others is that we are also working on prediction, to anticipate what images the surgeon will need to see next and what instruments will be needed," he said.

Wachs is developing advanced algorithms that isolate the hands and apply"anthropometry," or predicting the position of the hands based on knowledge of where the surgeon's head is. The tracking is achieved through a camera mounted over the screen used for visualization of images.

"Another contribution is that by tracking a surgical instrument inside the patient's body, we can predict the most likely area that the surgeon may want to inspect using the electronic medical image record, thereby saving browsing time between the images," Wachs said. "This is done using a different sensor mounted over the surgical lights."

The hand-gesture recognition system uses a new type of camera developed by Microsoft, called Kinect, which senses three-dimensional space. The camera is used in new consumer video-game systems that can track a person's hands without the use of a wand.

"You just step into the operating room, and automatically your body is mapped in 3-D," he said.

Accuracy and gesture-recognition speed depend on advanced software algorithms.

"Even if you have the best camera, you have to know how to program the camera, how to use the images," Wachs said."Otherwise, the system will work very slowly."

The research paper defines a set of requirements, including recommendations that the system should:

  • Use a small vocabulary of simple, easily recognizable gestures.
  • Not require the user to wear special virtual reality gloves or certain types of clothing.
  • Be as low-cost as possible.
  • Be responsive and able to keep up with the speed of a surgeon's hand gestures.
  • Let the user know whether it understands the hand gestures by providing feedback, perhaps just a simple "OK."
  • Use gestures that are easy for surgeons to learn, remember and carry out with little physical exertion.
  • Be highly accurate in recognizing hand gestures.
  • Use intuitive gestures, such as two fingers held apart to mimic a pair of scissors.
  • Be able to disregard unintended gestures by the surgeon, perhaps made in conversation with colleagues in the operating room.
  • Be able to quickly configure itself to work properly in different operating rooms, under various lighting conditions and other criteria.

"Eventually we also want to integrate voice recognition, but the biggest challenges are in gesture recognition," Wachs said. "Much is already known about voice recognition."

The work is funded by the U.S. Agency for Healthcare Research and Quality.


Source

Wednesday, February 2, 2011

Computer-Assisted Diagnosis Tools to Aid Pathologists

"The advent of digital whole-slide scanners in recent years has spurred a revolution in imaging technology for histopathology," according to Metin N. Gurcan, Ph.D., an associate professor of Biomedical Informatics at The Ohio State University Medical Center. "The large multi-gigapixel images produced by these scanners contain a wealth of information potentially useful for computer-assisted disease diagnosis, grading and prognosis."

Follicular lymphoma (FL) is one of the most common forms of non-Hodgkin lymphoma occurring in the United States. FL is a cancer of the human lymph system that usually spreads into the blood, bone marrow and, eventually, internal organs.

A World Health Organization pathological grading system is applied to biopsy samples; doctors usually avoid prescribing severe therapies for lower grades, while they usually recommend radiation and chemotherapy regimens for more aggressive grades.

Accurate grading of the pathological samples is critical to prognosis and treatment, but it depends solely upon a labor-intensive process that can be affected by human factors such as fatigue, reader variation and bias. Pathologists must visually examine and grade the specimens through high-powered microscopes.

Processing and analysis of such high-resolution images, Gurcan points out, remain non-trivial tasks, not just because of the sheer size of the images, but also due to complexities of underlying factors involving differences in staining, illumination, instrumentation and goals. To overcome many of these obstacles to automation, Gurcan and medical center colleagues, Dr. Gerard Lozanski and Dr. Arwa Shana'ah, turned to the Ohio Supercomputer Center.

Ashok Krishnamurthy, Ph.D., interim co-executive director of the center, and Siddharth Samsi, a computational science researcher there and an OSU graduate student in Electrical and Computer Engineering, put the power of a supercomputer behind the process.

"Our group has been developing tools for grading of follicular lymphoma with promising results," said Samsi. "We developed a new automated method for detecting lymph follicles using stained tissue by analyzing the morphological and textural features of the images, mimicking the process that a human expert might use to identify follicle regions. Using these results, we developed models to describe tissue histology for classification of FL grades."

Histological grading of FL is based on the number of large malignant cells counted within tissue samples measuring just 0.159 square millimeters and taken from ten different locations. Based on these findings, FL is assigned to one of three increasing grades of malignancy: Grade I (0-5 cells), Grade II (6-15 cells) and Grade III (more than 15 cells).
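Those thresholds amount to a simple lookup once the counts are in hand. A sketch follows; averaging the counts across the ten fields is an assumption of this illustration, not a detail given in the text:

```python
def fl_grade(counts_per_field):
    """Follicular lymphoma grade from large-malignant-cell counts in
    standard 0.159 mm^2 fields, using the thresholds in the text.
    (Averaging over the ten fields is an assumption of this sketch.)"""
    avg = sum(counts_per_field) / len(counts_per_field)
    if avg <= 5:
        return "Grade I"
    elif avg <= 15:
        return "Grade II"
    return "Grade III"

print(fl_grade([3, 4, 2, 5, 3, 4, 4, 3, 2, 5]))             # Grade I
print(fl_grade([20, 18, 25, 19, 22, 21, 17, 24, 20, 23]))   # Grade III
```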

"The first step involves identifying potentially malignant regions by combining color and texture features," Samsi explained. "The second step applies an iterative watershed algorithm to separate merged regions, and the final step involves eliminating false positives."
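A drastically simplified version of that three-step pipeline can be sketched in pure Python. Plain intensity thresholding and connected-component labeling stand in here for the paper's color/texture features and iterative watershed, which are far more sophisticated:

```python
def threshold(img, t):
    """Step 1 (simplified): flag pixels whose intensity suggests a follicle."""
    return [[1 if px >= t else 0 for px in row] for row in img]

def regions(mask):
    """Step 2 (simplified): label 4-connected regions. The paper's iterative
    watershed additionally splits regions that have merged together."""
    h, w = len(mask), len(mask[0])
    seen, comps = set(), []
    for i in range(h):
        for j in range(w):
            if mask[i][j] and (i, j) not in seen:
                stack, comp = [(i, j)], []
                seen.add((i, j))
                while stack:
                    y, x = stack.pop()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] \
                                and (ny, nx) not in seen:
                            seen.add((ny, nx))
                            stack.append((ny, nx))
                comps.append(comp)
    return comps

def drop_false_positives(comps, min_area):
    """Step 3 (simplified): discard regions too small to be follicles."""
    return [c for c in comps if len(c) >= min_area]

img = [[0, 0, 9, 9, 0],
       [0, 0, 9, 9, 0],
       [0, 0, 0, 0, 0],
       [8, 0, 0, 0, 0],
       [0, 0, 0, 7, 7]]
comps = drop_false_positives(regions(threshold(img, 5)), 2)
print(len(comps))  # 2 -- the lone bright pixel is rejected as a false positive
```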

The large data sizes and complexity of the algorithms led Gurcan and Samsi to leverage the parallel computing resources of OSC's Glenn Cluster in order to reduce the time required to process the images. They used MATLAB® and the Parallel Computing Toolbox™ to achieve significant speed-ups. Speed is the goal of the National Cancer Institute-funded research project, but accuracy is essential. Gurcan and Samsi compared their computer segmentation results with manual segmentation and found an average similarity score of 87.11 percent.

"This algorithm is the first crucial step in a computer-aided grading system for Follicular Lymphoma," Gurcan said. "By identifying all the follicles in a digitized image, we can use the entire tissue section for grading of the disease, thus providing experts with another tool that can help improve the accuracy and speed of the diagnosis."


Source

Tuesday, February 1, 2011

Internet Addresses: An Inevitable Shortage, but an Uneven One

There is some good news, according to computer scientist John Heidemann, who heads a team at the USC Viterbi School of Engineering Information Sciences Institute that has just released its results in the form of a detailed outline, including a 10-minute video and an interactive web browser that allows users to explore the nooks and crannies of Internet space themselves.

Heidemann, who is a senior project leader at ISI and a research associate professor in the USC Viterbi School of Engineering Department of Computer Science, says his group has found that while some of the already allocated address blocks (units of Internet real estate, ranging from 256 to more than 16 million addresses) are heavily used, many are still sparsely used. "Even allowing for undercount," the group finds, "probably only 14 percent of addresses are visible on the public Internet."

Nevertheless, "as full allocation happens, there will be pressure to improve utilization and eventually trade underutilized areas," the video shows. These strategies have limits, the report notes. Better utilization, trading, and other strategies can recover "twice or four times current utilization. But requests for addresses double every year, so trading will only help for two years. Four billion addresses are just not enough for 7 billion people."

The IPv6 protocol allows many, many more addresses -- 340 trillion trillion trillion -- but may involve transition costs.

Heidemann's group report comes as the Number Resource Organization (NRO) and the Internet Assigned Numbers Authority (IANA) are preparing to announce that they have given out all the addresses, passing most on to regional authorities.

The ISI video offers a thorough background in the hows and whys of the current IPv4 Internet address system, in which each address is a number between zero and 2 to the 32nd power minus one (4,294,967,295), usually written in "dotted-decimal notation" as four base-10 numbers separated by periods.
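Dotted-decimal notation is just a base-256 rendering of that 32-bit number, so converting between the two forms takes only a few lines:

```python
def ip_to_int(dotted):
    """Dotted-decimal IPv4 string -> 32-bit integer, e.g. '192.0.2.1'."""
    a, b, c, d = (int(part) for part in dotted.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

def int_to_ip(n):
    """32-bit integer -> dotted-decimal IPv4 string."""
    return ".".join(str((n >> shift) & 0xFF) for shift in (24, 16, 8, 0))

print(ip_to_int("255.255.255.255"))        # 4294967295, the top of IPv4 space
print(int_to_ip(ip_to_int("192.0.2.1")))   # round-trips to 192.0.2.1
```

Python's standard-library `ipaddress` module performs the same conversions; the explicit bit-shifting above just makes the base-256 structure visible.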

Heidemann, working with collaborator Yuri Pradkin and ISI colleagues, produced an earlier Internet census in 2007, following on previous work at ISI -- the first complete census since 1982. To do it, they sent a message (a "ping") to each possible Internet address. The video explains the pinging process.

At the time, some 2.8 billion of the 4.3 billion possible addresses had been allocated; today more than 3.5 billion are allocated. The current effort, funded by the Department of Homeland Security Science and Technology Directorate and the NSF, was carried out by Aniruddh Rao and Xue Cui of ISI, along with Heidemann. A peer-reviewed analysis of their approach appeared at the ACM Internet Measurement Conference in 2008.


Source