Wednesday, March 30, 2011

Physicists Rotate Beams of Light

Light waves can oscillate in different directions -- much like a string that can vibrate up and down or left and right, depending on the direction in which it is plucked. This is called the polarization of light. Physicists at the Vienna University of Technology have now, together with researchers at Würzburg University, developed a method to control and manipulate the polarization of light using ultra-thin layers of semiconductor material.

For future research on light and its polarization this is an important step forward -- and this breakthrough could even open up possibilities for completely new computer technology. The experiment can be viewed as the optical version of an electronic transistor. The results of the experiment have now been published in the journal Physical Review Letters.

Controlling light with magnetic fields

The polarization of light can change when it passes through a material in a strong magnetic field. This phenomenon is known as the "Faraday effect." "So far, however, this effect had only been observed in materials in which it was very weak," professor Andrei Pimenov explains. He carried out the experiments at the Institute for Solid State Physics of the TU Vienna, together with his assistant Alexey Shuvaev. Using light of the right wavelength and extremely clean semiconductors, scientists in Vienna and Würzburg could achieve a Faraday effect orders of magnitude stronger than any measured before.

Now light waves can be rotated into arbitrary directions -- the direction of the polarization can be tuned with an external magnetic field. Surprisingly, an ultra-thin layer of less than a thousandth of a millimeter is enough to achieve this. "Such thin layers made of other materials could only change the direction of polarization by a fraction of one degree," says professor Pimenov. If the beam of light is then sent through a polarization filter, which only allows light of a particular direction of polarization to pass, the scientists can, by rotating the direction appropriately, decide whether the beam should pass or not.
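As a rough illustration of how a rotated beam and a fixed polarization filter act as an on/off switch, an ideal filter transmits a fraction of the intensity given by Malus's law, I = I0 * cos^2(theta). The sketch below is a generic textbook calculation, not the Vienna group's model:

```python
import math

def transmitted_intensity(i0, polarization_deg, filter_axis_deg=0.0):
    """Malus's law: an ideal polarizer passes I = I0 * cos^2(theta),
    where theta is the angle between the light's polarization and
    the filter axis."""
    theta = math.radians(polarization_deg - filter_axis_deg)
    return i0 * math.cos(theta) ** 2

# Faraday rotation aligned with the filter: the beam passes fully.
print(transmitted_intensity(1.0, 0.0))              # 1.0
# Rotated 90 degrees by the magnetic field: the beam is blocked.
print(round(transmitted_intensity(1.0, 90.0), 12))  # 0.0
```

Tuning the rotation angle with the external magnetic field thus toggles the transmitted beam between "on" and "off".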

The key to this astonishing effect lies in the behavior of the electrons in the semiconductor. The beam of light sets the electrons oscillating, and the magnetic field deflects their vibrating motion. This complicated motion of the electrons in turn affects the beam of light and changes its direction of polarization.

An optical transistor

In the experiment, a layer of the semiconductor mercury telluride was irradiated with light in the infrared spectral range. "The light has a frequency in the terahertz domain -- those are the frequencies that future generations of computers may operate at," professor Pimenov believes. "For years, the clock rates of computers have not really increased, because a domain has been reached in which material properties simply no longer cooperate." A possible solution is to complement electronic circuits with optical elements. In a transistor, the basic element of electronics, an electric current is controlled by an external signal. In the experiment at TU Vienna, a beam of light is controlled by an external magnetic field. The two systems are very much alike. "We could call our system a light-transistor," Pimenov suggests.

Before optical circuits for computers can be considered, the newly discovered effect will prove useful as a tool for further research. In optics labs, it will play an important role in research on new materials and the physics of light.


Source

Thursday, March 24, 2011

BrainGate Neural Interface System Reaches 1,000-Day Performance Milestone

Results from five consecutive days of device use surrounding her 1,000th day in the device trial appeared online March 24 in the Journal of Neural Engineering.

"This proof of concept -- that after 1,000 days a woman who has no functional use of her limbs and is unable to speak can reliably control a cursor on a computer screen using only the intended movement of her hand -- is an important step for the field," said Dr. Leigh Hochberg, a Brown engineering associate professor, VA rehabilitation researcher, visiting associate professor of neurology at Harvard Medical School, and director of the BrainGate pilot clinical trial at MGH.

The woman, identified in the paper as S3, performed two "point-and-click" tasks each day by thinking about moving the cursor with her hand. In both tasks she averaged greater than 90 percent accuracy. Some on-screen targets were as small as the effective area of a Microsoft Word menu icon.

In each of S3's two tasks, performed in 2008, she controlled the cursor movement and click selections continuously for 10 minutes. The first task was to move the cursor to targets arranged in a circle and in the center of the screen, clicking to select each one in turn. The second required her to follow and click on a target as it sequentially popped up with varying size at random points on the screen.

From fundamental neuroscience to clinical utility

Under development since 2002, the investigational BrainGate system is a combination of hardware and software that directly senses electrical signals produced by neurons in the brain that control movement. By decoding those signals and translating them into digital instructions, the system is being evaluated for its ability to give people with paralysis control of external devices such as computers, robotic assistive devices, or wheelchairs. The BrainGate team is also engaged in research toward control of advanced prosthetic limbs and toward direct intracortical control of functional electrical stimulation devices for people with spinal cord injury, in collaboration with researchers at the Cleveland FES Center.
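The decoding step can be pictured as a readout that maps per-electrode firing rates to cursor velocity. The article does not specify the trial's decoder, so the sketch below is a deliberately simplified linear stand-in, with entirely hypothetical weights and firing rates:

```python
import numpy as np

def decode_velocity(firing_rates, weights, baseline):
    """Map a vector of per-electrode firing rates to a 2-D cursor
    velocity with a linear readout, v = W @ (r - r0). A simplified
    stand-in for the trial's actual decoding algorithm."""
    return weights @ (np.asarray(firing_rates) - baseline)

# Hypothetical 4-electrode example; in a real system the weights
# and baselines would be fit during a calibration session.
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 4)) * 0.1        # (vx, vy) readout weights
r0 = np.array([10.0, 12.0, 8.0, 15.0])   # baseline rates (Hz)

rates = np.array([14.0, 12.0, 9.0, 13.0])  # observed rates (Hz)
vx, vy = decode_velocity(rates, W, r0)
print(f"cursor velocity: ({vx:.3f}, {vy:.3f})")
```

The real BrainGate pipeline adds calibration, filtering, and click detection on top of this basic rates-to-velocity mapping.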

The system is currently in pilot clinical trials, directed by Hochberg at MGH.

BrainGate uses a tiny (4x4 mm, about the size of a baby aspirin) silicon electrode array to read neural signals directly within brain tissue. Although external sensors placed on the brain or skull surface can also read neural activity, they are believed to be far less precise. In addition, many prototype brain implants have eventually failed because of moisture or other perils of the internal environment.

"Neuroengineers have often wondered whether useful signals could be recorded from inside the brain for an extended period of time," Hochberg said. "This is the first demonstration that this microelectrode array technology can provide useful neuroprosthetic signals allowing a person with tetraplegia to control an external device for an extended period of time."

Moving forward

Device performance was not the same at 2.7 years as it was earlier on, Hochberg added. At 33 months fewer electrodes were recording useful neural signals than after only six months. But John Donoghue -- VA senior research career scientist, Henry Merritt Wriston Professor of Neuroscience, director of the Brown Institute for Brain Science, and original developer of the BrainGate system -- said no evidence has emerged of any fundamental incompatibility between the sensor and the brain. Instead, it appears that decreased signal quality over time can largely be attributed to engineering, mechanical or procedural issues. Since S3's sensor was built and implanted in 2005, the sensor's manufacturer has reported continual quality improvements. The data from this study will be used to better understand and modify the procedures or the device to further increase durability.

"None of us will be fully satisfied with an intracortical recording device until it provides decades of useful signals," Hochberg said. "Nevertheless, I'm hopeful that the progress made in neural interface systems will someday be able to provide improved communication, mobility, and independence for people with locked-in syndrome or other forms of paralysis and eventually better control over prosthetic, robotic, or functional electrical stimulation systems [stimulating electrodes that have already returned limb function to people with cervical spinal cord injury], even while engineers continue to develop ever-better implantable sensors."

In addition to demonstrating the very encouraging longevity of the BrainGate sensor, the paper also presents an advance in how the performance of a brain-computer interface can be measured, Simeral said. "As the field continues to evolve, we'll eventually be able to compare and contrast technologies effectively."

As for S3, who had a brainstem stroke in the mid-1990s and is now in her late 50s, she continues to participate in trials with the BrainGate system, which continues to record useful signals, Hochberg said. However, data beyond the 1000th day in 2008 has thus far only been presented at scientific meetings, and Hochberg can only comment on data that has already completed the scientific peer review process and appeared in publication.

In addition to Simeral, Hochberg, and Donoghue, other authors are Brown computer scientist Michael Black and former Brown computer scientist Sung-Phil Kim.

About the BrainGate collaboration

This advance is the result of the ongoing collaborative BrainGate research at Brown University, Massachusetts General Hospital, and Providence VA Medical Center. The BrainGate research team is focused on developing and testing neuroscientifically inspired technologies to improve the communication, mobility, and independence of people with neurologic disorders, injury, or limb loss.

For more information, visit www.braingate2.org.

The implanted microelectrode array and associated neural recording hardware used in the BrainGate research are manufactured by BlackRock Microsystems, LLC (Salt Lake City, UT).

This research was funded in part by the Rehabilitation Research and Development Service, Department of Veterans Affairs; The National Institutes of Health (NIH), including NICHD-NCMRR, NINDS/NICHD, NIDCD/ARRA, NIBIB, NINDS-Javits; the Doris Duke Charitable Foundation; MGH-Deane Institute for Integrated Research on Atrial Fibrillation and Stroke; and the Katie Samson Foundation.

The BrainGate pilot clinical trial was previously directed by Cyberkinetics Neurotechnology Systems, Inc., Foxborough, MA (CKI). CKI ceased operations in 2009. The clinical trials of the BrainGate2 Neural Interface System are now administered by Massachusetts General Hospital, Boston, Mass. Donoghue is a former chief scientific officer and a former director of CKI; he held stocks and received compensation. Hochberg received research support from Massachusetts General and Spaulding Rehabilitation Hospitals, which in turn received clinical trial support from Cyberkinetics. Simeral received compensation as a consultant to CKI.


Source

Wednesday, March 23, 2011

Toward Computers That Fit on a Pen Tip: New Technologies Usher in the Millimeter-Scale Computing Era

And a compact radio that needs no tuning to find the right frequency could be a key enabler to organizing millimeter-scale systems into wireless sensor networks. These networks could one day track pollution, monitor structural integrity, perform surveillance, or make virtually any object smart and trackable.

Both developments at the University of Michigan are significant milestones in the march toward millimeter-scale computing, believed to be the next electronics frontier.

Researchers are presenting papers on each at the International Solid-State Circuits Conference (ISSCC) in San Francisco. The work is being led by three faculty members in the U-M Department of Electrical Engineering and Computer Science: professors Dennis Sylvester and David Blaauw, and assistant professor David Wentzloff.

Bell's Law and the promise of pervasive computing

Nearly invisible millimeter-scale systems could enable ubiquitous computing, and the researchers say that's the future of the industry. They point to Bell's Law, a corollary to Moore's Law. (Moore's says that the number of transistors on an integrated circuit doubles every two years, roughly doubling processing power.)

Bell's Law says there's a new class of smaller, cheaper computers about every decade. With each new class, the volume shrinks by two orders of magnitude and the number of systems per person increases. The law has held from 1960s' mainframes through the '80s' personal computers, the '90s' notebooks and the new millennium's smart phones.

"When you get smaller than hand-held devices, you turn to these monitoring devices," Blaauw said. "The next big challenge is to achieve millimeter-scale systems, which have a host of new applications for monitoring our bodies, our environment and our buildings. Because they're so small, you could manufacture hundreds of thousands on one wafer. There could be tens to hundreds of them per person, and it's this per capita increase that fuels the semiconductor industry's growth."

The first complete millimeter-scale system

Blaauw and Sylvester's new system is targeted toward medical applications. The work they present at ISSCC focuses on a pressure monitor designed to be implanted in the eye to conveniently and continuously track the progress of glaucoma, a potentially blinding disease. (The device is expected to be commercially available several years from now.)

In a package that's just over 1 cubic millimeter, the system fits an ultra low-power microprocessor, a pressure sensor, memory, a thin-film battery, a solar cell and a wireless radio with an antenna that can transmit data to an external reader device that would be held near the eye.

"This is the first true millimeter-scale complete computing system," Sylvester said.

"Our work is unique in the sense that we're thinking about complete systems in which all the components are low-power and fit on the chip. We can collect data, store it and transmit it. The applications for systems of this size are endless."

The processor in the eye pressure monitor is the third generation of the researchers' Phoenix chip, which uses a unique power gating architecture and an extreme sleep mode to achieve ultra-low power consumption. The newest system wakes every 15 minutes to take measurements and consumes an average of 5.3 nanowatts. To keep the battery charged, it requires exposure to 10 hours of indoor light each day or 1.5 hours of sunlight. It can store up to a week's worth of information.
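The article's figures permit a quick back-of-the-envelope energy budget. The 5.3 nW average draw and the 15-minute wake interval come from the text; the arithmetic below is only an illustration:

```python
AVG_POWER_W = 5.3e-9           # 5.3 nW average draw (from the article)
SECONDS_PER_DAY = 24 * 3600

# Total energy the monitor consumes in one day at its average power.
energy_per_day_j = AVG_POWER_W * SECONDS_PER_DAY
print(f"energy per day: {energy_per_day_j * 1e6:.1f} microjoules")

# Number of measurement wakeups per day at one every 15 minutes.
wakeups_per_day = SECONDS_PER_DAY // (15 * 60)
print(f"wakeups per day: {wakeups_per_day}")
```

Less than half a millijoule per day is why a few hours of light on a millimeter-scale solar cell can keep the battery topped up.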

While this system is miniscule and complete, its radio doesn't equip it to talk to other devices like it. That's an important feature for any system targeted toward wireless sensor networks.

A unique compact radio to enable wireless sensor networks

Wentzloff and doctoral student Kuo-Ken Huang have taken a step toward enabling such node-to-node communication. They've developed a consolidated radio with an on-chip antenna that doesn't need the bulky external crystal that engineers rely on today when two isolated devices need to talk to each other. The crystal reference keeps time and selects a radio frequency band. Integrating the antenna and eliminating this crystal significantly shrinks the radio system. Wentzloff's is less than 1 cubic millimeter in size.

He and Huang's key innovation is to engineer the new antenna to keep time on its own and serve as its own reference. By integrating the antenna through an advanced CMOS process, they can precisely control its shape and size and therefore how it oscillates in response to electrical signals.

"Antennas have a natural resonant frequency for electrical signals that is defined by their geometry, much like a pure audio tone on a tuning fork," Wentzloff said. "By designing a circuit to monitor the signal on the antenna and measure how close it is to the antenna's natural resonance, we can lock the transmitted signal to the antenna's resonant frequency."

"This is the first integrated antenna that also serves as its own reference. The radio on our chip doesn't need external tuning. Once you deploy a network of these, they'll automatically align at the same frequency."
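Conceptually, the antenna-referenced lock is a feedback loop: a circuit measures how far the transmitted frequency sits from the antenna's resonance and corrects toward it. The toy model below assumes a simple proportional controller and is purely illustrative, not the actual 60 GHz circuit:

```python
def lock_to_resonance(f_start, f_resonance, gain=0.5, steps=50):
    """Toy frequency-locked loop: each step measures the offset from
    the antenna's resonant frequency and nudges the oscillator
    toward it (proportional control; illustrative only)."""
    f = f_start
    for _ in range(steps):
        error = f_resonance - f   # offset from resonance
        f += gain * error         # proportional correction
    return f

# Two hypothetical nodes starting at different frequencies both
# converge on the shared antenna resonance, here taken as 60 GHz.
f_a = lock_to_resonance(59.0e9, 60.0e9)
f_b = lock_to_resonance(61.5e9, 60.0e9)
print(f_a, f_b)
```

Because every node converges on its own antenna's resonance, a network of identical radios ends up aligned on the same band with no external tuning, which is the behavior Wentzloff describes.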

The researchers are now working on lowering the radio's power consumption so that it's compatible with millimeter-scale batteries.

Greg Chen, a doctoral student in the Department of Electrical Engineering and Computer Science, presents "A Cubic-Millimeter Energy-Autonomous Wireless Intraocular Pressure Monitor." The researchers are collaborating with Ken Wise, the William Gould Dow Distinguished University Professor of Electrical Engineering and Computer Science, on the packaging of the sensor, and with Paul Lichter, chair of the Department of Ophthalmology and Visual Sciences at the U-M Medical School, for the implantation studies. Huang presents "A 60GHz Antenna-Referenced Frequency-Locked Loop in 0.13μm CMOS for Wireless Sensor Networks." This research is funded by the National Science Foundation. The university is pursuing patent protection for the intellectual property, and is seeking commercialization partners to help bring the technology to market.


Source

Tuesday, March 22, 2011

Simulating Tomorrow's Accelerators at Near the Speed of Light

But realizing the promise of laser-plasma accelerators crucially depends on being able to simulate their operation in three-dimensional detail. Until now such simulations have challenged or exceeded even the capabilities of supercomputers.

A team of researchers led by Jean-Luc Vay of Berkeley Lab's Accelerator and Fusion Research Division (AFRD) has borrowed a page from Einstein to perfect a revolutionary new method for calculating what happens when a laser pulse plows through a plasma in an accelerator like BELLA. Using their "boosted-frame" method, Vay's team has achieved full 3-D simulations of a BELLA stage in just a few hours of supercomputer time, calculations that would have been beyond the state of the art just two years ago.

Not only are the recent BELLA calculations tens of thousands of times faster than conventional methods, they overcome problems that plagued previous attempts to achieve the full capacity of the boosted-frame method, such as violent numerical instabilities. Vay and his colleagues, Cameron Geddes of AFRD, Estelle Cormier-Michel of the Tech-X Corporation in Denver, and David Grote of Lawrence Livermore National Laboratory, publish their latest findings in the March 2011 issue of the journal Physics of Plasma Letters.

Space, time, and complexity

The boosted-frame method, first proposed by Vay in 2007, exploits Einstein's Theory of Special Relativity to overcome difficulties posed by the huge range of space and time scales in many accelerator systems. Vast discrepancies of scale are what made simulating these systems too costly.

"Most researchers assumed that since the laws of physics are invariable, the huge complexity of these systems must also be invariable," says Vay. "But what are the appropriate units of complexity? It turns out to depend on how you make the measurements."

Laser-plasma wakefield accelerators are particularly challenging: they send a very short laser pulse through a plasma measuring a few centimeters or more, many orders of magnitude longer than the pulse itself (or the even-shorter wavelength of its light). In its wake, like a speedboat on water, the laser pulse creates waves in the plasma. These alternating waves of positively and negatively charged particles set up intense electric fields. Bunches of free electrons, shorter than the laser pulse, "surf" the waves and are accelerated to high energies.

"The most common way to model a laser-plasma wakefield accelerator in a computer is by representing the electromagnetic fields as values on a grid, and the plasma as particles that interact with the fields," explains Geddes, a member of the BELLA science staff who has long worked on laser-plasma acceleration. "Since you have to resolve the finest structures -- the laser wavelength, the electron bunch -- over the relatively enormous length of the plasma, you need a grid with hundreds of millions of cells."

The laser period must also be resolved in time, and calculated over millions of time steps. As a result, while much of the important physics of BELLA is three-dimensional, direct 3-D simulation was initially impractical. Just a one-dimensional simulation of BELLA required 5,000 hours of supercomputer processor time at Berkeley Lab's National Energy Research Scientific Computing Center (NERSC).

Choosing the right frame

The key to reducing complexity and cost lies in choosing the right point of view, or "reference frame." When Albert Einstein was 16 years old he imagined riding along in a frame moving with a beam of light -- a thought experiment that, 10 years later, led to his Theory of Special Relativity, which establishes that there is no privileged reference frame. Observers moving at different velocities may experience space and time differently and even see things happening in a different order, but calculations from any point of view can recover the same physical result.

Among the consequences are that the speed of light in a vacuum is always the same; compared to a stationary observer's experience, time moves more slowly while space contracts for an observer traveling near light speed. These different points of view are called Lorentz frames, and changing one for another is called a Lorentz transformation. The "boosted frame" of the laser pulse is the key to enabling calculations of laser-plasma wakefield accelerators that would otherwise be inaccessible.

A laser pulse pushing through a tenuous plasma moves only a little slower than light through a vacuum. An observer in the stationary laboratory frame sees it as a rapid oscillation of electromagnetic fields moving through a very long plasma, whose simulation requires high resolution and many time steps. But for an observer moving with the pulse, time slows, and the frequency of the oscillations is greatly reduced; meanwhile space contracts, and the plasma becomes much shorter. Thus relatively few time steps are needed to model the interaction between the laser pulse, the plasma waves formed in its wake, and the bunches of electrons riding the wakefield through the plasma. Fewer steps mean less computer time.
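The bookkeeping behind this saving is ordinary special relativity. With illustrative numbers (a 3 cm plasma, a 1 micron laser wavelength, and a boost at beta = 0.999, none of them BELLA's actual parameters), the span between the largest and smallest length scales shrinks by a factor of gamma^2 * (1 + beta) = 1 / (1 - beta):

```python
import math

def lorentz_factor(beta):
    """gamma = 1 / sqrt(1 - beta^2) for a frame moving at v = beta * c."""
    return 1.0 / math.sqrt(1.0 - beta ** 2)

beta = 0.999
gamma = lorentz_factor(beta)

plasma_lab_m = 0.03        # plasma length in the lab frame (assumed)
wavelength_lab_m = 1e-6    # laser wavelength in the lab frame (assumed)

# Length contraction shortens the plasma in the boosted frame ...
plasma_boosted = plasma_lab_m / gamma
# ... while the relativistic Doppler shift stretches the wavelength.
wavelength_boosted = wavelength_lab_m * gamma * (1 + beta)

span_lab = plasma_lab_m / wavelength_lab_m
span_boosted = plasma_boosted / wavelength_boosted
print(f"scale span shrinks by ~{span_lab / span_boosted:.0f}x")
```

Fewer decades of scale to resolve translates directly into fewer grid cells and fewer time steps.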

Eliminating instability

Early attempts to apply the boosted-frame method to laser-plasma wakefield simulations encountered numerical instabilities that limited how much the calculation frame could be boosted. Calculations could still be speeded up tens or even hundreds of times, but the full promise of the method could not be realized.

Vay's team showed that using a particular boosted frame, that of the wakefield itself -- in which the laser pulse is almost stationary -- realizes near-optimal speedup of the calculation. And it fundamentally modifies the appearance of the laser in the plasma. In the laboratory frame the observer sees many oscillations of the electromagnetic field in the laser pulse; in the frame of the wake, the observer sees just a few at a time.

Not only is speedup possible because of the coarser resolution, but at the same time numerical instabilities due to short wavelengths can be suppressed without affecting the laser pulse. Combined with special techniques for interpreting the data between frames, this allows the full potential of the boosted-frame principle to be reached.

"We produced the first full multidimensional simulation of the 10 billion-electron-volt design for BELLA," says Vay. "We even ran simulations all the way up to a trillion electron volts, which establishes our ability to model the behavior of laser-plasma wakefield accelerator stages at varying energies. With this calculation we achieved the theoretical maximum speedup of the boosted-frame method for such systems -- a million times faster than similar calculations in the laboratory frame."

Simulations will still be challenging, especially those needed to tailor applications of high-energy laser-plasma wakefield accelerators to such uses as free-electron lasers for materials and biological sciences, or for homeland security or other research. But the speedup achieves what might otherwise have been virtually impossible: it puts the essential high-resolution simulations within reach of new supercomputers.

This work was supported by the U.S. Department of Energy's Office of Science, including calculations with the WARP beam-simulation code and other applications at the National Energy Research Scientific Computing Center (NERSC).


Source

Monday, March 21, 2011

Running on a Faster Track: Researchers Develop Scheduling Tool to Save Time on Public Transport

Dr. Tal Raviv and his graduate student Mor Kaspi of Tel Aviv University's Department of Industrial Engineering in the Iby and Aladar Fleischman Faculty of Engineering have developed a tool that makes passenger train journeys shorter, especially when transfers are involved -- a computer-based system to shave precious travel minutes off a passenger's journey.

Dr. Raviv's solution, the "Service Oriented Timetable," relies on computers and complicated algorithms to do the scheduling. "Our solution is useful for any metropolitan region where passengers are transferring from one train to another, and where train service providers need to ensure that the highest number of travellers can make it from Point A to Point B as quickly as possible," says Dr. Raviv.

Saves time and resources

In the recent economic downturn, more people are seeking to scale back their monthly transportation costs. Public transportation is a win-win -- good for both the bank account and the environment. But when travel routes are complicated by transfers, it becomes a hard job to manage who can wait -- and who can't -- between trains.

Another factor is consumer preference. Ideally, each passenger would like a direct train to his destination, with no stops en route. But passengers with different itineraries must compete for the system's resources. Adding a stop at a certain station will improve service for passengers for whom the station is the final destination, but will cause a delay for passengers who are only passing through it. The question is how to devise a schedule which is fair for everyone. What are the decisions that will improve the overall condition of passengers in the train system?
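That question can be made concrete with a toy cost model: adding a stop saves time for passengers who get off there but delays everyone riding through, so the schedule should include the stop only when total passenger-minutes go down. All figures below are illustrative, not from the study:

```python
def net_passenger_minutes(alighting, through, dwell_min=2.0,
                          access_saving_min=10.0):
    """Change in total passenger travel time (minutes) if the stop
    is added: each alighting passenger saves access_saving_min,
    while each through passenger loses the dwell time. A negative
    result means adding the stop helps overall. Toy model only."""
    return dwell_min * through - access_saving_min * alighting

# 300 passengers alight at the candidate station, 1,000 ride through:
delta = net_passenger_minutes(alighting=300, through=1000)
print("add the stop" if delta < 0 else "skip the stop")
```

A timetabling system like Dr. Raviv's optimizes many such decisions jointly, including transfer waits, rather than one stop at a time.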

It's not about adding more resources to the system, but more intelligently managing what's already there, Dr. Raviv explains.

More time on the train, less time on the platform

In their train timetabling system, Dr. Raviv and Kaspi study the timetables to find places in the train scheduling system that can be optimized so passengers make it to their final destination faster.

Traditionally, train planners looked for solutions based on the frequency of trains passing through certain stops. Dr. Raviv and Kaspi, however, are developing a high-tech solution for scheduling trains that considers the total travel time of passengers, including their waiting time at transfer stations.

"Let's say you commute to Manhattan from New Jersey every day. We can find a way to synchronize trains to minimize the average travel time of passengers," says Dr. Raviv. "That will make people working in New York a lot happier."

The project has already been simulated on the Israel Railway, reducing the average travel time per commuter from 60 to 48 minutes. The tool can be most useful in countries and cities, he notes, where train schedules are robust and very complicated.

The researchers won a competition of the Railway Application Section of the International Institute for Operation Research and Management Science (INFORMS) last November for their computer program that optimizes a refuelling schedule for freight trains. Dr. Raviv also works on optimizing other forms of public transport, including the bike-sharing programs found in over 400 cities around the world today.


Source

Sunday, March 20, 2011

Teaching Robots to Move Like Humans

The research was presented March 7 at the Human-Robot Interaction conference in Lausanne, Switzerland.

"It's important to build robots that meet people's social expectations because we think that will make it easier for people to understand how to approach them and how to interact with them," said Andrea Thomaz, assistant professor in the School of Interactive Computing at Georgia Tech's College of Computing.

Thomaz, along with Ph.D. student Michael Gielniak, conducted a study in which they asked how easily people can recognize what a robot is doing by watching its movements.

"Robot motion is typically characterized by jerky movements, with a lot of stops and starts, unlike human movement which is more fluid and dynamic," said Gielniak. "We want humans to interact with robots just as they might interact with other humans, so that it's intuitive."

Using a series of human movements taken in a motion-capture lab, they programmed the robot, Simon, to perform the movements. They also optimized that motion to allow for more joints to move at the same time and for the movements to flow into each other in an attempt to be more human-like. They asked their human subjects to watch Simon and identify the movements he made.

"When the motion was more human-like, human beings were able to watch the motion and perceive what the robot was doing more easily," said Gielniak.

In addition, they tested the algorithm they used to create the optimized motion by asking humans to perform the movements they saw Simon making. The thinking was that if the movement created by the algorithm was indeed more human-like, then the subjects should have an easier time mimicking it. Turns out they did.

"We found that this optimization we do to create more life-like motion allows people to identify the motion more easily and mimic it more exactly," said Thomaz.

The research that Thomaz and Gielniak are doing is part of a theme in getting robots to move more like humans move. In future work, the pair plan on looking at how to get Simon to perform the same movements in various ways.

"So, instead of having the robot move the exact same way every single time you want the robot to perform a similar action like waving, you always want to see a different wave so that people forget that this is a robot they're interacting with," said Gielniak.


Source

Saturday, March 19, 2011

Mini Disks for Data Storage

Tiny magnets organize themselves in vortices in the researchers' mini disks. The individual magnets can twist either in a clockwise or a counterclockwise direction in the disk. These two different states can be used in data processing just like switching the electricity "on" and "off" in conventional computers. In contrast to conventional memory storage systems, these magnetic vortices can be switched by the electrons' intrinsic spin and with far less power consumption.

In the exterior section of a vortex the magnetic particles align nearly parallel to one another while the space in the disk's center is insufficient for such a parallel arrangement. Therefore, the elementary magnets in the center of a vortex twist away from the surface of the disk in order to gain space and thus, orient themselves once again next to one another without consuming much energy.

The formation of a vortex only works smoothly if the individual magnetic disks maintain some distance from one another or are relatively big. In order to achieve a high data storage density for compact and efficient devices, manufacturers and users ask for the smallest possible data processing units, which in turn also feature small magnetic vortices and require a closely packed structure. Then, however, the tiny magnets in each disk "feel" their neighbors in the adjacent disks and start to interact. This interaction, though, is a poor basis for a memory storage system.

Therefore Norbert Martin and Jeffrey McCord abandoned the cylindrical shape of the small magnetic disks and instead prepared them with slanted edges. The tiny magnets at the edges are thus forced in the direction of the slant. This orientation in turn creates a magnetic field perpendicular to the disk surface, along the direction preferred by the slant. This requires far less energy than the symmetric orientation of the same field in disks with vertical outer edges. Accordingly, magnetic vortices form more easily with slanted edges.

To create these vortices, Norbert Martin places tiny glass spheres with a diameter of 0.3 thousandths of a millimeter (300 nanometers) on top of a thin magnetic layer. Under specific conditions, these glass spheres all arrange themselves next to each other and thereby form a hexagonal mask with small gaps. When the scientists direct argon ions at this layer, these atomic, electrically charged projectiles penetrate the gaps between the glass spheres and force particles out of the magnetic layer located under the gaps. The array of glass spheres thus functions as a mask: one magnetic disk remains below each individual glass sphere, while the magnetic layer under the gaps erodes. During the bombardment, though, the argon ions also remove material from the glass spheres, which therefore shrink continuously. At the end of the process the diameter of the glass spheres is only 260 nanometers instead of the original 300 nanometers. This permits the argon ions also to reach areas located further inside the magnetic disks that are emerging beneath the glass spheres over time. Because the time of bombardment is shorter in these places, less material is removed on the inside. The desired slanted edge is therefore created virtually on its own.
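The geometry of this process can be captured in a simple toy model (an illustration, not taken from the paper): if the sphere radius shrinks at a constant rate, a point on the magnetic layer is milled only for the time it spends uncovered, so the milled depth rises linearly between the final and initial sphere radii, producing exactly the slanted edge described above.

```python
# Toy model (not from the paper): a glass-sphere mask shrinks linearly
# from 150 nm to 130 nm radius during ion milling. A point at radial
# distance d from the sphere center is shielded until the sphere radius
# drops below d, then milled for the remaining time, which yields a
# linearly sloped edge between 130 nm and 150 nm. MAX_DEPTH is an
# assumed value for fully exposed areas.

R_START, R_END = 150.0, 130.0   # sphere radius in nm, before/after milling
MAX_DEPTH = 20.0                # assumed milling depth (nm) for fully exposed areas

def milled_depth(d):
    """Depth removed at radial distance d (nm) from a sphere's center."""
    if d >= R_START:            # never shielded: full exposure
        return MAX_DEPTH
    if d <= R_END:              # shielded the whole time: untouched
        return 0.0
    # Exposed only after the shrinking rim passes d; depth scales with
    # the fraction of the milling time spent exposed.
    exposed_fraction = (R_START - d) / (R_START - R_END)
    return MAX_DEPTH * exposed_fraction

for d in (125, 130, 140, 150, 160):
    print(d, milled_depth(d))
```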


Source

Friday, March 18, 2011

Bomb Disposal Robot Getting Ready for Front-Line Action

The organisations have come together to create a lightweight, remote-operated vehicle, or robot, that can be controlled by a wireless device, not unlike a games console, from a distance of several hundred metres.

The innovative robot, which can climb stairs and even open doors, will be used by soldiers on bomb disposal missions in countries such as Afghanistan.

Experts from the Department of Computer & Communications Engineering, based within the university's School of Engineering, are working on the project alongside NIC Instruments Limited of Folkestone, manufacturers of security search and bomb disposal equipment.

Much lighter and more flexible than traditional bomb disposal units, the robot is easier for soldiers to carry and use when out in the field. It has cameras on board, which relay images back to the operator via the hand-held control, and includes a versatile gripper which can carry and manipulate delicate items.

The robot also includes nuclear, biological and chemical weapons sensors.

Measuring just 72cm by 35cm, the robot weighs 48 kilogrammes and can move at speeds of up to eight miles per hour.


Source

Wednesday, March 16, 2011

Miniature Lasers Could Help Launch New Age of the Internet

Professor Dennis Deppe's miniature laser diode emits more intense light than those currently used. The light is emitted at a single wavelength, making it ideal for use in compact disc players, laser pointers and optical mice for computers, in addition to high-speed data transmission.

Until now, the biggest challenge has been the failure rate of these tiny devices. They don't work very well when they face huge workloads; the stress makes them crack.

The smaller size and elimination of non-semiconductor materials means the new devices could potentially be used in heavy data transmission, which is critical in developing the next generation of the Internet. By incorporating laser diodes into cables in the future, massive amounts of data could be moved across great distances almost instantaneously. By using the tiny lasers in optical clocks, the precision of GPS and high-speed wireless data communications also would increase.

"The new laser diodes represent a sharp departure from past commercial devices in how they are made," Deppe said from his lab inside the College of Optics and Photonics. "The new devices show almost no change in operation under stress conditions that cause commercial devices to rapidly fail."

"At the speed at which the industry is moving, I wouldn't be surprised if in four to five years, when you go to Best Buy to buy cables for all your electronics, you'll be selecting cables with laser diodes embedded in them," he added.

Deppe and Sabine Freisem, a senior research scientist who has been collaborating with Deppe for the past eight years, presented their findings in January at the SPIE (formerly The International Society for Optical Engineering) Photonics West conference in San Francisco.

Deppe has spent 21 years researching semiconductor lasers, and he is considered an international expert in the area. sdPhotonics is working on the commercialization of many of his creations and has several ongoing contracts.

"This is definitely a milestone," Freisem said. "The implications for the future are huge."

But there is still one challenge that the team is working to resolve: the voltage necessary to make the laser diodes work more efficiently must be optimized.

Deppe said once that problem is resolved, the uses for the laser diodes will multiply, ranging from lasers in space to lasers that remove unwanted hair.

"We usually have no idea how often we use this technology in our everyday life already," Deppe said."Most of us just don't think about it. With further development, it will only become more commonplace."


Source

Tuesday, March 15, 2011

Room-Temperature Spintronic Computers Coming Soon? Silicon Spin Transistors Heat Up and Spins Last Longer

"Electronic devices mostly use the charge of the electrons -- a negative charge that is moving," says Ashutosh Tiwari, an associate professor of materials science and engineering at the University of Utah. "Spintronic devices will use both the charge and the spin of the electrons. With spintronics, we want smaller, faster and more power-efficient computers and other devices."

Tiwari and Ph.D. student Nathan Gray report their creation of room-temperature, spintronic transistors on a silicon semiconductor this month in the journal Applied Physics Letters. The research -- in which electron "spin" aligned in a certain way was injected into silicon chips and maintained for a record 276 trillionths of a second -- was funded by the National Science Foundation.

"Almost every electronic device has silicon-based transistors in it," Gray says. "The current thrust of industry has been to make those transistors smaller and to add more of them into the same device" to process more data. He says his and Tiwari's research takes a different approach.

"Instead of just making transistors smaller and adding more of them, we make the transistors do more work at the same size because they have two different ways (electron charge and spin) to manipulate and process data," says Gray.

A Quick Spin through Spintronics

Modern computers and other electronic devices work because negatively charged electrons flow as electrical current. Transistors are switches that reduce computerized data to a binary code of ones or zeros represented by the presence or absence of electrons in semiconductors, most commonly silicon.

In addition to electric charge, electrons have another property known as spin, which is their intrinsic angular momentum. An electron's spin often is described as a bar magnet that points up or down, which also can represent ones and zeroes for computing.

Most previous research on spintronic transistors involved using optical radiation -- in the form of polarized light from lasers -- to orient the electron spins in non-silicon materials such as gallium arsenide or organic semiconductors at supercold temperatures.

"Optical methods cannot do that with silicon, which is the workhorse of the semiconductor and electronics industry, and the industry doesn't want to retool for another material," Tiwari says.

"Spintronics will become useful only if we use silicon," he adds.

The Experiment

In the new study, Tiwari and Gray used electricity and magnetic fields to inject "spin-polarized carriers" -- namely, electrons with their spins aligned either all up or all down -- into silicon at room temperature.

Their trick was to use magnesium oxide as a "tunnel barrier" to get the aligned electron spins to travel from one nickel-iron electrode through the silicon semiconductor to another nickel-iron electrode. Without the magnesium oxide, the spins would get randomized almost immediately, with half up and half down, Gray says.

"This thing works at room temperature," Tiwari says. "Most of the devices in earlier studies have to be cooled to very low temperatures" -- colder than 200 below zero Fahrenheit -- to align the electrons' spins either all up or all down. "Our new way of putting spin inside the silicon does not require any cooling."

The experiment used a flat piece of silicon about 1 inch long, about 0.3 inches wide and one-fiftieth of an inch thick. An ultra-thin layer of magnesium oxide was deposited on the silicon wafer. Then, one dozen tiny transistors were deposited on the silicon wafer so they could be used to inject electrons with aligned spins into the silicon and later detect them.

Each nickel-iron transistor had three contacts or electrodes: one through which electrons with aligned spins were injected into the silicon and detected, a negative electrode and a positive electrode used to measure voltage.

During the experiment, the researchers send direct current through the spin-injector electrode and negative electrode of each transistor. The current is kept steady, and the researchers measure variations in voltage while applying a magnetic field to the apparatus.

"By looking at the change in the voltage when we apply a magnetic field, we can find how much spin has been injected and the spin lifetime," Tiwari says.
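The article does not name the measurement technique, but a standard way to extract a spin lifetime from voltage-versus-field data is a Hanle measurement, in which the spin signal falls off as a Lorentzian in the applied field and the lifetime follows from the curve's half-width. A minimal sketch, with an illustrative half-width chosen to reproduce the reported 276 ps:

```python
# Hedged sketch: extracting a spin lifetime from the half-width of a
# Hanle (voltage-vs-magnetic-field) curve, V(B) = V0 / (1 + (g*muB*B*tau/hbar)^2).
# The technique and the half-width value below are illustrative assumptions,
# not numbers given in the paper.
HBAR = 1.0546e-34      # reduced Planck constant, J*s
MU_B = 9.274e-24       # Bohr magneton, J/T
G = 2.0                # electron g-factor in silicon (approximately)

def spin_lifetime(b_half):
    """Lifetime (s) from the field (T) where the spin signal halves."""
    return HBAR / (G * MU_B * b_half)

tau = spin_lifetime(20.6e-3)   # a ~21 mT half-width would imply ~276 ps
print(tau)
```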

A 328 Nanometer, 276 Picosecond Step for Spintronics

For spintronic devices to be practical, electrons with aligned spins need to be able to move adequate distances and retain their spin alignments for an adequate time.

During the new study, the electrons retained their spins for 276 picoseconds, or 276 trillionths of a second. Based on that lifetime, the researchers calculate that the spin-aligned electrons moved 328 nanometers through the silicon, which is 328 billionths of a meter or about 13 millionths of an inch.
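The article does not say how the distance was derived from the lifetime, but under the common assumption of diffusive transport the spin diffusion length is L = sqrt(D·τ); working backwards from the reported numbers gives the implied diffusion constant:

```python
# Inference, not from the paper: assuming the simple diffusion relation
# L = sqrt(D * tau), the implied diffusion constant is D = L^2 / tau.
TAU = 276e-12          # reported spin lifetime, s
LENGTH = 328e-9        # reported transport distance, m

d_m2_per_s = LENGTH**2 / TAU
d_cm2_per_s = d_m2_per_s * 1e4
print(round(d_cm2_per_s, 1))   # ~3.9 cm^2/s, a plausible magnitude for doped silicon
```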

"It's a tiny distance for us, but in transistor technology, it is huge," Gray says. "Transistors are so small today that that's more than enough to get the electron where we need it to go."

"Those are very good numbers," Tiwari says. "These numbers are almost 10 times bigger than what we need (for spintronic devices) and two times bigger than if you use aluminum oxide" instead of the magnesium oxide in his study.

He says Dutch researchers previously were able to inject aligned spins into silicon using aluminum oxide as the "tunneling medium," but the new study shows magnesium oxide works better.

The new study's use of electronic spin injection is much more practical than using optical methods such as lasers because lasers are too big for chips in consumer electronic devices, Tiwari says.

He adds that spintronic computer processors require little power compared with electronic devices, so a battery that may power an electronic computer for eight hours might last more than 24 hours on a spintronic computer.

Gray says spintronics is "the next big step to push the limits of semiconductor technology that we see in every aspect of our lives: computers, cell phones, GPS (navigation) devices, iPods, TVs."


Source

Monday, March 14, 2011

Real March Madness Is Relying on Seedings to Determine Final Four

According to an operations research analysis model developed by Sheldon H. Jacobson, a professor of computer science and the director of the simulation and optimization laboratory at the University of Illinois, you're better off picking a combination of two top-seeded teams, a No. 2 seed and a No. 3 seed.

"There are patterns that exist in the seeds," Jacobson says. "As much as we like to believe otherwise, the fact of the matter is that we've uncovered a model that captures this pattern. As a result of that, in spite of what we emotionally feel about teams or who's going to win, the reality is that the numbers trump all of these things. It's more likely to be 1, 1, 2, 3 in the Final Four than four No. 1's."

Jacobson's model is unique in that it prognosticates not based on who the teams are, but on the seeds they hold. He describes his model in a forthcoming paper in the journal Omega with co-authors Alex Nikolaev, of the University of Buffalo; Adrian Lee, of CITERI (Central Illinois Technology and Education Research Institute); and Douglas King, a graduate student at Illinois.

Jacobson has also integrated the model into a user-friendly website to help March Madness fans determine the relative probability of their chosen team combinations appearing in the final rounds of the NCAA men's basketball tournament.

A number of websites offer assistance to budding bracketologists, such as game-by-game probabilities of certain match-ups or determining the spread on a given team reaching a particular point in the tournament. Jacobson's website is the only one to look at collective groups of seeds within the brackets.

"What we do is use the power of analytics to uncover trends in 'bracketology.' It really is a mathematical science," he said. "What our model enables us to do is look at the likelihood or probability that a certain set of seed combinations will occur as we advance deeper into the tournament."

Jacobson's team applied a statistical method called goodness-of-fit testing to NCAA tournament data from 1985 to 2010, identifying patterns in seed distribution in the Elite Eight, Final Four and national championship rounds. They found that the seeds themselves exhibit certain statistical patterns, independent of the team. They then fit the pattern to a stochastic model they can use to assess probabilities and odds.

Two computer science undergraduates, Ammar Rizwan and Emon Dai, built the website bracketodds.cs.illinois.edu based on Jacobson's model. The publicly accessible website will be up through the entire tournament. Users can evaluate their brackets and also can compare the relative likelihood of two sets of seed combinations.

"For each of the rounds that we have available, you could put in what you have so far and even compare it to other possible sets," Rizwan said.

For example, the probability of the Final Four comprising the four top-seeded teams is 0.026, or once every 39 years. Meanwhile, the probability of a Final Four of all No. 16 seeds -- the lowest-seeded teams in the tournament -- is so small that it has a frequency of happening once every eight hundred trillion years. (The Milky Way contains an estimated one hundred billion stars.)
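These recurrence figures follow directly from the probabilities: with one tournament played per year, an event of probability p recurs on average once every 1/p years. A quick check of the quoted numbers (the 16-seed probability is inferred here from the stated frequency; the article gives only the frequency):

```python
import math

def expected_wait_years(p):
    """Expected years between occurrences, at one tournament per year."""
    return 1.0 / p

# Four No. 1 seeds: p = 0.026 from the article.
print(math.ceil(expected_wait_years(0.026)))    # ~39 years

# Four No. 16 seeds: p = 1.25e-15 is inferred from "once every
# eight hundred trillion years" (an assumption, not a stated value).
print(expected_wait_years(1.25e-15))            # ~8e14 years
```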

"Basically, if every star was given a year, the years it would take for this to occur is 8,000 times all the stars in the galaxy," Jacobson said. "It gives you perspective."

However, sets with long odds do happen. The most unlikely combination in the 26 years studied occurred in 2000, with a Final Four seed combination of 1, 5, 8 and 8. But such a bracket is only predicted to happen once every 32,000 years, so those filling out brackets at home shouldn't hope for a repeat.

What amateur bracketologists can be confident of is upsets. For even the most probable Final Four combination of 1, 1, 2, 3 to occur, two top-seeded schools have to lose.

"In fact, upsets occur with great frequency and great predictability. If you look statistically, there's a certain number of upsets that occur in each round. We just don't know which team they're going to be or when they're going to occur," Jacobson said.

After the 2011 tournament, and in years to come, Jacobson will integrate the new data into the model to continually refine its prediction power. For 2012, Jacobson, Rizwan and Dai hope to integrate a comparative probability feature into the website to allow users to calculate, for example, the probability of a particular set of Final Four seeds if the Elite Eight seeds are given.

Until then, users can find out how likely their picks really are, and compare them against friends' picks -- or even sports commentators'.

"We're not here specifically to say 'Syracuse is going to beat Kentucky in the Elite Eight.' What we're saying is that the seed numbers have patterns," Jacobson said."A 1, 1, 2, 3 is the most likely Final Four. I don't know which two 1's, I don't know which No. 2 and I don't know which No. 3. But I can tell you that if you want to go purely with the odds, choose a Final Four with seeds 1, 1, 2, 3."


Source

Thursday, March 10, 2011

How Do People Respond to Being Touched by a Robotic Nurse?

The research is being presented March 9 at the Human-Robot Interaction conference in Lausanne, Switzerland.

"What we found was that how people perceived the intent of the robot was really important to how they responded. So, even though the robot touched people in the same way, if people thought the robot was doing that to clean them, versus doing that to comfort them, it made a significant difference in the way they responded and whether they found that contact favorable or not," said Charlie Kemp, assistant professor in the Wallace H. Coulter Department of Biomedical Engineering at Georgia Tech and Emory University.

In the study, researchers looked at how people responded when a robotic nurse, known as Cody, touched and wiped a person's forearm. Although Cody touched the subjects in exactly the same way, they reacted more positively when they believed Cody intended to clean their arm versus when they believed Cody intended to comfort them.

These results echo similar studies done with nurses.

"There have been studies of nurses and they've looked at how people respond to physical contact with nurses," said Kemp, who is also an adjunct professor in Georgia Tech's College of Computing. "And they found that, in general, if people interpreted the touch of the nurse as being instrumental, as being important to the task, then people were OK with it. But if people interpreted the touch as being to provide comfort… people were not so comfortable with that."

In addition, Kemp and his research team tested whether people responded more favorably when the robot verbally indicated that it was about to touch them versus touching them without saying anything.

"The results suggest that people preferred when the robot did not actually give them the warning," said Tiffany Chen, doctoral student at Georgia Tech. "We think this might be because they were startled when the robot started speaking, but the results are generally inconclusive."

Since many useful tasks require that a robot touch a person, the team believes that future research should investigate ways to make robot touch more acceptable to people, especially in healthcare. Many important healthcare tasks, such as wound dressing and assisting with hygiene, would require a robotic nurse to touch the patient's body.

"If we want robots to be successful in healthcare, we're going to need to think about how do we make those robots communicate their intention and how do people interpret the intentions of the robot," added Kemp. "And I think people haven't been as focused on that until now. Primarily people have been focused on how can we make the robot safe, how can we make it do its task effectively. But that's not going to be enough if we actually want these robots out there helping people in the real world."

In addition to Kemp and Chen, the research group consists of Andrea Thomaz, assistant professor in Georgia Tech's College of Computing, and postdoctoral fellow Chih-Hung Aaron King.


Source

Wednesday, March 9, 2011

Extremely Fast Magnetic Random Access Memory (MRAM) Computer Data Storage Within Reach

An invention made by the Physikalisch-Technische Bundesanstalt (PTB) changes this situation: a special chip connection, in association with dynamic triggering of the component, reduces the response time from the previous 2 ns to below 500 ps. This corresponds to a data rate of up to 2 Gbit/s (instead of the approx. 400 Mbit/s achieved so far). Power consumption, the thermal load and the bit error rate are all reduced. The European patent is being granted this spring; the US patent was already granted in 2010. An industrial partner for further development and for manufacturing such MRAMs under licence is still being sought.

Fast computer storage chips like the DRAM and SRAM (Dynamic and Static Random Access Memory) commonly used today have one decisive disadvantage: if the power supply is interrupted, the information stored on them is irrevocably lost. The MRAM promises to put an end to this. In the MRAM, digital information is stored not as an electric charge, but via the magnetic alignment of storage cells (magnetic spins). MRAMs are very universal storage chips because, in addition to non-volatile information storage, they also offer fast access, a high integration density and an unlimited number of write and read cycles.

However, the current MRAM models are not yet fast enough to outperform the best competitors. The time for programming a magnetic bit amounts to approx. 2 ns. Attempts to speed this up run into limits rooted in the fundamental physical properties of magnetic storage cells: during the programming process, not only the desired storage cell is magnetically excited, but also a large number of other cells. These excitations -- the so-called magnetic ringing -- are only slightly attenuated; their decay can take up to approx. 2 ns, and during this time no other cell of the MRAM chip can be programmed. As a result, the maximum clock rate of MRAM has so far been limited to approx. 400 MHz.

Until now, all attempts to increase the speed have led to intolerable write errors. Now, PTB scientists have optimized the MRAM design and integrated the so-called ballistic bit triggering, which was also developed at PTB. Here, the magnetic pulses which serve for the programming are selected in such a skilful way that the other cells in the MRAM are hardly magnetically excited at all. The pulse ensures that the magnetization of a cell which is to be switched performs half a precession rotation (180°), while a cell whose storage state is to remain unchanged performs a complete precession rotation (360°). In both cases, the magnetization is in the state of equilibrium after the magnetic pulse has decayed, and no further magnetic excitations occur.
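The timing constraint can be made concrete with illustrative numbers (not taken from the PTB patent): within one pulse, a cell to be switched must precess half a turn while an unselected cell completes a full turn, so the two effective precession frequencies must differ by a factor of two.

```python
# Illustrative timing for ballistic bit triggering (the pulse duration is
# an assumed value, not from the PTB patent): during one pulse of length
# PULSE, a cell to be switched precesses half a turn (180 deg), while an
# unselected cell completes a full turn (360 deg) and so returns to
# equilibrium with no residual ringing.

PULSE = 500e-12   # pulse duration, s

def precession_frequency(turns, pulse=PULSE):
    """Precession frequency (Hz) needed for `turns` rotations in one pulse."""
    return turns / pulse

f_switch = precession_frequency(0.5)   # 180 deg -> 1 GHz
f_keep = precession_frequency(1.0)     # 360 deg -> 2 GHz
print(f_switch, f_keep)
```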

This optimal bit triggering also works with ultra-short switching pulses with a duration below 500 ps. The maximum clock rates of the MRAM are, therefore, above 2 GHz. In addition, several bits can be programmed at the same time, which would allow the effective write rate per bit to be increased again by more than an order of magnitude. This invention allows MRAM to achieve clock rates which can compete with those of the fastest volatile storage components.


Source

Tuesday, March 8, 2011

How Can Robots Get Our Attention?

The research is being presented March 8 at the Human-Robot Interaction conference in Lausanne, Switzerland.

"The primary focus was trying to give Simon, our robot, the ability to understand when a human being seems to be reacting appropriately, or in some sense is interested now in a response with respect to Simon and to be able to do it using a visual medium, a camera," said Aaron Bobick, professor and chair of the School of Interactive Computing in Georgia Tech's College of Computing.

Using the socially expressive robot Simon, from Assistant Professor Andrea Thomaz's Socially Intelligent Machines lab, researchers wanted to see if they could tell when he had successfully attracted the attention of a human who was busily engaged in a task and when he had not.

"Simon would make some form of a gesture, or some form of an action when the user was present, and the computer vision task was to try to determine whether or not you had captured the attention of the human being," said Bobick.

With close to 80 percent accuracy, Simon was able to tell, using only his cameras as a guide, whether someone was paying attention to him or ignoring him.

"We would like to bring robots into the human world. That means they have to engage with human beings, and human beings have an expectation of being engaged in a way similar to the way other human beings would engage with them," said Bobick.

"Other human beings understand turn-taking. They understand that if I make some indication, they'll turn and face someone when they want to engage with them and they won't when they don't want to engage with them. In order for these robots to work with us effectively, they have to obey these same kinds of social conventions, which means they have to perceive the same thing humans perceive in determining how to abide by those conventions," he added.

Researchers plan to go further with their investigations into how Simon can read communication cues, studying whether he can tell from a person's gaze, use of language or other actions whether they are paying attention.

"Previously people would have pre-defined notions of what the user should do in a particular context and they would look for those," said Bobick. "That only works when the person behaves exactly as expected. Our approach, which I think is the most novel element, is to use the user's current behavior as the baseline and observe what changes."
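The baseline idea Bobick describes can be sketched in a few lines (a hedged illustration, not the study's actual vision pipeline): treat the user's recent behavior as the baseline for a generic per-frame signal and flag frames that deviate strongly from it.

```python
# Hedged sketch of the baseline-and-deviation idea above: the per-frame
# "motion score", window size and threshold are all illustrative
# assumptions, not details from the study.
from statistics import mean, pstdev

def reacted(signal, window=5, threshold=3.0):
    """Return indices where the signal departs from its recent baseline."""
    hits = []
    for i in range(window, len(signal)):
        base = signal[i - window:i]          # the user's own recent behavior
        mu, sigma = mean(base), pstdev(base)
        if sigma > 0 and abs(signal[i] - mu) > threshold * sigma:
            hits.append(i)
    return hits

# A steady signal with one abrupt change (e.g. the user looks up):
print(reacted([1.0, 1.1, 0.9, 1.0, 1.1, 1.0, 5.0, 1.0]))   # -> [6]
```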

The research team for this study consisted of Bobick, Thomaz, doctoral student Jinhan Lee and undergraduate student Jeffrey Kiser.


Source

Monday, March 7, 2011

New Technique for Improving Robot Navigation Systems

An autonomous mobile robot is a robot that is able to navigate its environment without colliding or getting lost. Unmanned robots are also able to recover from spatial disorientation. Conducted by Sergio Guadarrama, researcher of the European Centre for Soft Computing, and Antonio Ruiz, assistant professor at the Universidad Politécnica de Madrid's Facultad de Informática, and published in the Information Sciences journal, the research focuses on map building. Map building is one of the skills related to autonomous navigation, where a robot is required to explore an unknown environment (enclosure, plant, buildings, etc.) and draw up a map of the environment. Before it can do this, the robot has to use its sensors to perceive obstacles.

The main sensor types used for autonomous navigation are vision and range sensors. Although vision sensors can capture much more information from the environment, this research used range sensors -- specifically ultrasonic sensors, which are less accurate -- to demonstrate that the model builds accurate maps from few and imprecise input data.

Once it has captured the ranges, the robot has to map these distances to obstacles on the map. Point clouds are used to draw the map, as the imprecision of the range data rules out the use of straight lines or even isolated points. Even so, the resulting map is by no means an architectural blueprint of the site, because not even the robot's location is precisely known, and there is no guarantee that each point cloud is correctly positioned. In actual fact, one and the same obstacle can be viewed properly from one robot position, but not from another. This can produce contradictory information -- obstacle and no obstacle -- about the same area of the map under construction. Which of the two interpretations is correct?

Exploring unknown spaces

The solution is based on linguistic descriptions of the antonyms "vacant" and "occupied," and is inspired by computing with words and the computational theory of perceptions, two theories proposed by L.A. Zadeh of the University of California at Berkeley. Whereas other published research views obstacles and empty spaces as complementary concepts, this research assumes that, rather than being complements, obstacles and vacant spaces are a pair of opposites.

For example, we can infer that an occupied space is not vacant, but we cannot infer that a space not observed to be occupied is vacant: it could be unknown or ambiguous, because the robot has limited information about its environment. The contradictions between "vacant" and "occupied" are also represented explicitly.

This way, the robot is able to make a distinction between two types of unknown spaces: spaces that are unknown because the information is contradictory, and spaces that are unknown because they are unexplored. This leads the robot to navigate with caution through the contradictory spaces and to explore the unexplored spaces. The map is constructed using linguistic rules, such as "If the measured distance is short, then assign a high confidence level to the measurement" or "If an obstacle has been seen several times, then increase the confidence in its presence," where "short," "high" and "several" are fuzzy sets, as defined in fuzzy set theory. Contradictions are resolved by relying more on shorter ranges and by combining multiple measurements.
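The core idea -- tracking "vacant" and "occupied" as independent degrees of confidence rather than as complements -- can be sketched as follows. The thresholds and the crisp classification are illustrative simplifications of the paper's fuzzy machinery:

```python
# Hedged sketch of the paper's key idea: each map cell carries separate
# confidences for "vacant" and "occupied", so the robot can distinguish
# unexplored cells (little evidence for either) from contradictory ones
# (strong evidence for both). The thresholds are illustrative, not from
# the paper.

def classify(vacant, occupied, low=0.3, high=0.7):
    """Classify a cell from its vacancy/occupancy confidences in [0, 1]."""
    if vacant < low and occupied < low:
        return "unexplored"           # no evidence yet: worth exploring
    if vacant > high and occupied > high:
        return "contradictory"        # conflicting evidence: navigate with caution
    return "occupied" if occupied >= vacant else "vacant"

print(classify(0.1, 0.1))   # unexplored
print(classify(0.9, 0.8))   # contradictory
print(classify(0.8, 0.2))   # vacant
print(classify(0.2, 0.9))   # occupied
```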

Compared with the results of other methods, the outcomes show that the maps built using this technique better capture the shape of walls and open spaces, and contain fewer errors from incorrect sensor data. This opens opportunities for improving the current autonomous navigation systems for robots.


Source

Sunday, March 6, 2011

New Kinds of Superconductivity? Physicists Demonstrate Coveted 'Spin-Orbit Coupling' in Atomic Gases

In the researchers' demonstration of spin-orbit coupling, two lasers allow an atom's motion to flip it between a pair of energy states. The new work, published in Nature, demonstrates this effect for the first time in bosons, which make up one of the two major classes of particles. The same technique could be applied to fermions, the other major class of particles, according to the researchers. The special properties of fermions would make them ideal for studying new kinds of interactions between two particles -- for example those leading to novel "p-wave" superconductivity, which may enable a long-sought form of quantum computing known as topological quantum computation.

In an unexpected development, the team also discovered that the lasers modified how the atoms interacted with each other and caused atoms in one energy state to separate in space from atoms in the other energy state.

One of the most important phenomena in quantum physics, spin-orbit coupling describes the interplay that can occur between a particle's internal properties and its external properties. In atoms, it usually describes interactions that only occur within an atom: how an electron's orbit around an atom's core (nucleus) affects the orientation of the electron's internal bar-magnet-like "spin." In semiconductor materials such as gallium arsenide, spin-orbit coupling is an interaction between an electron's spin and its linear motion in a material.

"Spin-orbit coupling is often a bad thing," said JQI's Ian Spielman, senior author of the paper. "Researchers make 'spintronic' devices out of gallium arsenide, and if you've prepared a spin in some desired orientation, the last thing you'd want it to do is to flip to some other spin when it's moving."

"But from the point of view of fundamental physics, spin-orbit coupling is really interesting," he said. "It's what drives these new kinds of materials called 'topological insulators.'"

One of the hottest topics in physics right now, topological insulators are special materials in which location is everything: the ability of electrons to flow depends on where they are located within the material. Most regions of such a material are insulating, and electric current does not flow freely. But in a flat, two-dimensional topological insulator, current can flow freely along the edge in one direction for one type of spin, and the opposite direction for the opposite kind of spin. In 3-D topological insulators, electrons would flow freely on the surface but be inhibited inside the material. While researchers have been making higher and higher quality versions of this special class of material in solids, spin-orbit coupling in trapped ultracold gases of atoms could help realize topological insulators in their purest, most pristine form, as gases are free of impurity atoms and the other complexities of solid materials.

Usually, atoms do not exhibit the same kind of spin-orbit coupling as electrons exhibit in gallium-arsenide crystals. While each individual atom has its own spin-orbit coupling going on between its internal components (electrons and nucleus), the atom's overall motion generally is not affected by its internal energy state.

But the researchers were able to change that. In their experiment, researchers trapped and cooled a gas of about 200,000 rubidium-87 atoms down to 100 nanokelvins, 3 billion times colder than room temperature. The researchers selected a pair of energy states, analogous to the "spin-up" and "spin-down" states in an electron, from the available atomic energy levels. An atom could occupy either of these "pseudospin" states. Then researchers shined a pair of lasers on the atoms so as to change the relationship between the atom's energy and its momentum (its mass times velocity), and therefore its motion. This created spin-orbit coupling in the atom: the moving atom flipped between its two "spin" states at a rate that depended upon its velocity.
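In the cold-atom literature, the laser-engineered coupling described above is commonly summarized by a simple two-level Hamiltonian. The sketch below uses notation that is conventional in that field, not taken from this article:

```latex
% Raman-laser-induced spin-orbit coupling (equal Rashba-Dresselhaus form):
%   k_x    -- atomic momentum along the laser axis
%   k_L    -- laser recoil momentum
%   \Omega -- Raman coupling strength, \delta -- detuning
%   \sigma_{x,z} -- Pauli matrices acting on the two pseudospin states
H = \frac{\hbar^{2}}{2m}\,\bigl(k_x + k_L\,\sigma_z\bigr)^{2}
    + \frac{\Omega}{2}\,\sigma_x
    + \frac{\delta}{2}\,\sigma_z
```

The cross term proportional to \(k_x\,\sigma_z\) is what ties the atom's velocity to its pseudospin, producing the velocity-dependent spin flipping described in the experiment.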

"This demonstrates that the idea of using laser light to create spin-orbit coupling in atoms works. This is all we expected to see," Spielman said. "But something else really neat happened."

They turned up the intensity of their lasers, and atoms of one spin state began to repel the atoms in the other spin state, causing them to separate.

"We changed fundamentally how these atoms interacted with one another," Spielman said. "We hadn't anticipated that and got lucky."

The rubidium atoms in the researchers' experiment were bosons, sociable particles that can all crowd into the same space even when they are in identical quantum states, spin included. But Spielman's calculations show that the same effect could also be created in ultracold gases of fermions. Fermions, the more antisocial type of atom, cannot occupy the same space when they are in an identical state. And compared with other methods for creating new interactions between fermions, the spin states would be easier to control and longer lived.

A spin-orbit-coupled Fermi gas could interact with itself because the lasers effectively split each atom into two distinct components, each with its own spin state, and two such atoms with different velocities could then interact and pair up with one another. This kind of pairing opens up possibilities, Spielman said, for studying novel forms of superconductivity, particularly "p-wave" superconductivity, in which two paired atoms have a quantum-mechanical phase that depends on their relative orientation. Such p-wave superconductors may enable a form of quantum computing known as topological quantum computation.


Source

Saturday, March 5, 2011

Human Cues Used to Improve Computer User-Friendliness

"Our research in computer graphics and computer vision tries to make using computers easier," says Binghamton University computer scientist Lijun Yin. "Can we find a more comfortable, intuitive and intelligent way to use the computer? It should feel like you're talking to a friend. This could also help disabled people use computers the way everyone else does."

Yin's team has developed ways to provide information to the computer based on where a user is looking as well as through gestures or speech. One of the basic challenges in this area is "computer vision." That is, how can a simple webcam work more like the human eye? Can camera-captured data understand a real-world object? Can this data be used to "see" the user and "understand" what the user wants to do?

To some extent, that's already possible. Witness one of Yin's graduate students giving a PowerPoint presentation and using only his eyes to highlight content on various slides. When Yin demonstrated this technology for Air Force experts last year, the only hardware he brought was a webcam attached to a laptop computer.

Yin says the next step would be enabling the computer to recognize a user's emotional state. He works with a well-established set of six basic emotions -- anger, disgust, fear, joy, sadness, and surprise -- and is experimenting with different ways to allow the computer to distinguish among them. Is there enough data in the way the lines around the eyes change? Could focusing on the user's mouth provide sufficient clues? What happens if the user's face is only partially visible, perhaps turned to one side?

"Computers only understand zeroes and ones," Yin says. "Everything is about patterns. We want to find out how to recognize each emotion using only the most important features."
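The "patterns over a few important features" idea can be illustrated with a toy classifier. Everything below -- the three facial measurements and their per-emotion centroid values -- is invented for illustration; Yin's actual system is far more sophisticated:

```python
import math

# Toy nearest-centroid emotion classifier over a few hand-picked facial
# measurements (eye-corner wrinkle depth, mouth-corner lift, brow raise).
# The features and centroid values are made up; a real system would learn
# them from labeled face data.
CENTROIDS = {
    "joy":      [0.2, 0.9, 0.5],
    "sadness":  [0.1, 0.1, 0.3],
    "surprise": [0.3, 0.5, 0.9],
}

def classify(features):
    """Return the emotion whose centroid is nearest in Euclidean distance."""
    return min(CENTROIDS, key=lambda e: math.dist(features, CENTROIDS[e]))

print(classify([0.25, 0.85, 0.55]))  # -> "joy" (closest to the joy centroid)
```

The point of the sketch is only that a handful of well-chosen measurements, compared against learned patterns, can already separate emotions; handling partial or turned faces is where the real research difficulty lies.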

He's partnering with Binghamton University psychologist Peter Gerhardstein to explore ways this work could benefit children with autism. Many people with autism have difficulty interpreting others' emotions; therapists sometimes use photographs of people to teach children how to understand when someone is happy or sad and so forth. Yin could produce not just photographs, but three-dimensional avatars that are able to display a range of emotions. Given the right pictures, Yin could even produce avatars of people from a child's family for use in this type of therapy.

Yin and Gerhardstein's previous collaboration led to the creation of a 3D facial expression database, which includes 100 subjects with 2,500 facial expression models. The database is available at no cost to the nonprofit research community and has become a worldwide test bed for those working on related projects in fields such as biomedicine, law enforcement and computer science.

Once Yin became interested in human-computer interaction, he naturally grew more excited about the possibilities for artificial intelligence.

"We want not only to create a virtual-person model, we want to understand a real person's emotions and feelings," Yin says. "We want the computer to be able to understand how you feel, too. That's hard, even harder than my other work."

Imagine if a computer could understand when people are in pain. Most patients can simply ask a doctor for help. But others -- young children, for instance -- cannot express themselves or are unable to speak for some reason. Yin wants to develop an algorithm that would enable a computer to determine when someone is in pain based on a photograph alone.

Yin describes that health-care application and, almost in the next breath, points out that the same system that could identify pain might also be used to figure out when someone is lying. Perhaps a computer could offer insights like the ones provided by Tim Roth's character, Dr. Cal Lightman, on the television show Lie to Me. The fictional character is a psychologist with an expertise in tracking deception who often partners with law-enforcement agencies.

"This technology," Yin says, "could help us to train the computer to do facial-recognition analysis in place of experts."


Source

Friday, March 4, 2011

Method Developed to Match Police Sketch, Mug Shot: Algorithms and Software Will Match Sketches With Mugshots in Police Databases

A team led by MSU University Distinguished Professor of Computer Science and Engineering Anil Jain and doctoral student Brendan Klare has developed a set of algorithms and created software that will automatically match hand-drawn facial sketches to mug shots that are stored in law enforcement databases.

Once the system is in use, Klare said, the implications will be huge.

"We're dealing with the worst of the worst here," he said. "Police sketch artists aren't called in because someone stole a pack of gum. A lot of time is spent generating these facial sketches, so it only makes sense that they are matched with the available technology to catch these criminals."

Typically, forensic sketches are drawn by artists from information obtained from a witness. Unfortunately, Klare said, "often the facial sketch is not an accurate depiction of what the person looks like."

There are also a few commercial software programs available that produce sketches based on a witness's description. Those programs, however, tend to be less accurate than sketches drawn by a trained forensic artist.

The MSU project is being conducted in the Pattern Recognition and Image Processing lab in the Department of Computer Science and Engineering. It is the first large-scale experiment matching operational forensic sketches with photographs and, so far, results have been promising.

"We improved significantly on one of the top commercial face-recognition systems," Klare said. "Using a database of more than 10,000 mug shot photos, 45 percent of the time we had the correct person."

All of the sketches used were from real crimes where the criminal was later identified.

"We don't match them pixel by pixel," said Jain, director of the PRIP lab. "We match them up by finding high-level features from both the sketch and the photo; features such as the structural distribution and the shape of the eyes, nose and chin."
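As a rough illustration of feature-level (rather than pixel-level) matching, here is a toy ranking of mug shots by cosine similarity of feature vectors. The vectors and names are made up, and this is not the MSU team's actual algorithm -- just a sketch of the general idea of comparing high-level descriptors:

```python
import math

# Illustrative only: rank mug shots against a sketch by cosine similarity
# of high-level feature vectors (imagine measurements of eye, nose and
# chin shape). Real systems extract far richer descriptors than this.
def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

sketch = [0.7, 0.2, 0.9]           # features extracted from the drawn sketch
mugshots = {
    "suspect_A": [0.1, 0.9, 0.2],
    "suspect_B": [0.6, 0.3, 0.8],  # structurally closest to the sketch
}

ranked = sorted(mugshots, key=lambda m: cosine(sketch, mugshots[m]),
                reverse=True)
print(ranked[0])  # -> "suspect_B"
```

In practice the system would return a short ranked candidate list for a human investigator to review, which is why a 45 percent top-match rate over 10,000 photos is already operationally useful.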

This project and its results appear in the March 2011 issue of the journal IEEE Transactions on Pattern Analysis and Machine Intelligence.

The MSU team plans to field test the system in about a year.

The sketches used in this research were provided by forensic artists Lois Gibson and Karen Taylor, and forensic sketch artists working for the Michigan State Police.


Source

Thursday, March 3, 2011

New Developments in Quantum Computing

At the Association for Computing Machinery's 43rd Symposium on Theory of Computing in June, associate professor of computer science Scott Aaronson and his graduate student Alex Arkhipov will present a paper describing an experiment that, if it worked, would offer strong evidence that quantum computers can do things that classical computers can't. Although building the experimental apparatus would be difficult, it shouldn't be as difficult as building a fully functional quantum computer.

Aaronson and Arkhipov's proposal is a variation on an experiment conducted by physicists at the University of Rochester in 1987, which relied on a beam splitter, a device that takes an incoming beam of light and splits it into two beams traveling in different directions. The Rochester researchers demonstrated that if two identical light particles -- photons -- reach the beam splitter at exactly the same time, they will both go either right or left; they won't take different paths. It's another quantum behavior of fundamental particles that defies our physical intuitions.
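The bunching result can be checked with a few lines of arithmetic. For indistinguishable photons, the amplitude of a given input-output configuration is a matrix permanent (like a determinant, but with all plus signs). A minimal sketch, assuming the standard real 50/50 beam-splitter convention:

```python
import math

# 50/50 beam splitter unitary (real, Hadamard-like convention; other
# phase conventions give the same probabilities).
s = 1 / math.sqrt(2)
U = [[s, s],
     [s, -s]]

def permanent2(M):
    """Permanent of a 2x2 matrix: like the determinant, but with a plus sign."""
    return M[0][0] * M[1][1] + M[0][1] * M[1][0]

# One photon enters each input port. The amplitude for one photon to
# leave each output port (a "coincidence") is the permanent of U itself.
coincidence_amp = permanent2(U)
print(abs(coincidence_amp) ** 2)  # 0.0 -- the photons always bunch
```

The vanishing permanent is exactly the Rochester result: identical photons never take different exits. Scaling this up is the hard part -- permanents of large matrices, unlike determinants, have no known efficient classical algorithm, which is what gives the proposed experiment its computational interest.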

The MIT researchers' experiment would use a larger number of photons, which would pass through a network of beam splitters and eventually strike photon detectors. The number of detectors would be somewhere in the vicinity of the square of the number of photons -- about 36 detectors for six photons, 100 detectors for 10 photons.

For any run of the MIT experiment, it would be impossible to predict how many photons would strike any given detector. But over successive runs, statistical patterns would begin to build up. In the six-photon version of the experiment, for instance, it could turn out that there's an 8 percent chance that photons will strike detectors 1, 3, 5, 7, 9 and 11, a 4 percent chance that they'll strike detectors 2, 4, 6, 8, 10 and 12, and so on, for any conceivable combination of detectors.

Calculating that distribution -- the likelihood of photons striking a given combination of detectors -- is a hard problem. The researchers' experiment doesn't solve it outright, but every successful execution of the experiment does take a sample from the solution set. One of the key findings in Aaronson and Arkhipov's paper is that, not only is calculating the distribution a hard problem, but so is simulating the sampling of it. For an experiment with more than, say, 100 photons, it would probably be beyond the computational capacity of all the computers in the world.

The question, then, is whether the experiment can be successfully executed. The Rochester researchers performed it with two photons, but getting multiple photons to arrive at a whole sequence of beam splitters at exactly the right time is more complicated. Barry Sanders, director of the University of Calgary's Institute for Quantum Information Science, points out that in 1987, when the Rochester researchers performed their initial experiment, they were using lasers mounted on lab tables and getting photons to arrive at the beam splitter simultaneously by sending them down fiber-optic cables of different lengths. But recent years have seen the advent of optical chips, in which all the optical components are etched into a silicon substrate, which makes it much easier to control the photons' trajectories.

The biggest problem, Sanders believes, is generating individual photons at predictable enough intervals to synchronize their arrival at the beam splitters. "People have been working on it for a decade, making great things," Sanders says. "But getting a train of single photons is still a challenge."

Sanders points out that even if the problem of getting single photons onto the chip is solved, photon detectors still have inefficiencies that could make their measurements inexact: in engineering parlance, there would be noise in the system. But Aaronson says that he and Arkhipov explicitly consider the question of whether simulating even a noisy version of their optical experiment would be an intractably hard problem. Although they were unable to prove that it was, Aaronson says that "most of our paper is devoted to giving evidence that the answer to that is yes." He's hopeful that a proof is forthcoming, whether from his research group or others'.


Source

Wednesday, March 2, 2011

New Generation of Optical Integrated Devices for Future Quantum Computers

Quantum computers, holding the great promise of tremendous computational power for particular tasks, have been the goal of worldwide efforts by scientists for several years. Tremendous advances have been made but there is still a long way to go.

Building a quantum computer will require a large number of interconnected components -- gates -- which work in a similar way to the microprocessors in current personal computers. Currently, most quantum gates are large structures and the bulky nature of these devices prevents scalability to the large and complex circuits required for practical applications.

Recently, the researchers from the University of Bristol's Centre for Quantum Photonics showed, in several important breakthroughs, that quantum information can be manipulated with integrated photonic circuits. Such circuits are compact (enabling scalability) and stable (with low noise) and could lead in the near future to mass production of chips for quantum computers.

Now the team, in collaboration with Dr Terry Rudolph at Imperial College, London, has demonstrated a new class of integrated devices that promise a further reduction in the number of components needed to build future quantum circuits.

These devices, based on optical multimode interference (and therefore often called MMIs), have been widely employed in classical optics, as they are compact and very robust to fabrication tolerances. "While building a complex quantum network requires a large number of basic components, MMIs can often enable the implementation with much fewer resources," said Alberto Peruzzo, a PhD student working on the experiment.

Until now it was not clear how these devices would work in the quantum regime. Bristol researchers have demonstrated that MMIs can perform quantum interference at the high fidelity required.

Scientists will now be able to implement more compact photonics circuits for quantum computing. MMIs can generate large entangled states, at the heart of the exponential speedup promised by quantum computing.

"Applications will range from new circuits for quantum computation to ultra precise measurement and secure quantum communication," said Professor Jeremy O'Brien, director of the Centre for Quantum Photonics.

The team now plans to build new sophisticated circuits for quantum computation and quantum metrology using MMI devices.


Source

Tuesday, March 1, 2011

Plug-and-Play Multi-Core Voltage Regulator Could Lead to 'Smarter' Smartphones, Slimmer Laptops and Energy-Friendly Data Centers

Today's consumers expect mobile devices that are increasingly small, yet ever-more powerful. All the bells and whistles, however, suck up energy, and a phone that lasts only four hours because it's also a GPS device is of limited use.

To promote energy-efficient multitasking, Harvard graduate student Wonyoung Kim has developed and demonstrated a new device with the potential to reduce the power usage of modern processing chips.

The advance could allow the creation of "smarter" smartphones, slimmer laptops, and more energy-friendly data centers.

Kim's on-chip, multi-core voltage regulator (MCVR) addresses what amounts to a mismatch between power supply and demand.

"If you're listening to music on your MP3 player, you don't need to send power to the image and graphics processors at the same time," Kim says. "If you're just looking at photos, you don't need to power the audio processor or the HD video processor."

"It's like shutting off the lights when you leave the room."

Kim's research at Harvard's School of Engineering and Applied Sciences (SEAS) showed in 2008 that fine-grain voltage control was a theoretical possibility. This month, he presented a paper at the Institute of Electrical and Electronics Engineers' (IEEE) International Solid-State Circuits Conference (ISSCC) showing that the MCVR could actually be implemented in hardware.

Essentially a DC-DC converter, the MCVR can take a 2.4-volt input and scale it down to voltages ranging from 0.4 to 1.4V. Built for speed, it can increase or decrease the output by 1V in under 20 nanoseconds.

The MCVR also uses an algorithm to recognize parts of the processor that are not in use and cuts power to them, saving energy. Kim says it results in a longer battery life (or, in the case of stationary data centers, lower energy bills), while providing the same performance.

The on-chip design means that the power supply can be managed not just for each processor chip, but for each individual core on the chip. The short distance that signals then have to travel between the voltage regulator and the cores allows power scaling to happen quickly -- in a matter of nanoseconds rather than microseconds -- further improving efficiency.
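The payoff of fast per-core scaling comes from the quadratic dependence of dynamic CMOS power on supply voltage, P ≈ C·V²·f. A back-of-the-envelope sketch with invented numbers (not measurements of Kim's MCVR):

```python
# Dynamic power of a CMOS core scales as P = C * V^2 * f, so even a modest
# voltage reduction saves power quadratically. The capacitance, voltages
# and clock rates below are illustrative, not real chip figures.
def dynamic_power(c_farads, v_volts, f_hertz):
    return c_farads * v_volts ** 2 * f_hertz

busy = dynamic_power(1e-9, 1.4, 2e9)  # core at full voltage and clock
idle = dynamic_power(1e-9, 0.8, 1e9)  # same core, scaled down per-core

print(f"power saved on the scaled-down core: {1 - idle / busy:.0%}")
```

Because the MCVR can make such a transition in tens of nanoseconds rather than microseconds, cores can be dropped to low voltage during even very brief idle periods, which is where the claimed efficiency gains come from.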

Kim has obtained a provisional patent for the MCVR with his Ph.D. co-advisers at SEAS, Gu-Yeon Wei, Gordon McKay Professor of Electrical Engineering, and David Brooks, Gordon McKay Professor of Computer Science, who are coauthors on the paper he presented this week.

"Wonyoung Kim's research takes an important step towards a higher level of integration for future chips," says Wei. "Systems today rely on off-chip, board-level voltage regulators that are bulky and slow. Integrating the voltage regulator along with the IC chip to which it supplies power not only reduces board-level size and cost, but also opens up exciting opportunities to improve energy efficiency."

"Kim's three-level design overcomes issues that hamper traditional buck and switch-capacitor converters by merging good attributes of both into a single structure," adds Brooks. "We believe research on integrated voltage regulators like Kim's will be an essential component of future computing devices where energy-efficient performance and low cost are in demand."

Although Kim estimates that the greatest demand for the MCVR right now could be in the market for mobile phones, the device would also have applications in other computing scenarios. Used in laptops, the MCVR might reduce the heat output of the processor, which is currently one barrier to making slimmer notebooks. In stationary scenarios, the rising cost of powering servers of ever-increasing speed and capacity could be reduced.

"This is a plug-and-play device in the sense that it can be easily incorporated into the design of processor chips," says Kim. "Including the MCVR on a chip would add about 10 percent to the manufacturing cost, but with the potential for 20 percent or more in power savings."

The research was supported by the National Science Foundation's Division of Computer and Network Systems and Division of Computing and Communication Foundations.


Source