Wednesday, February 9, 2011

World's First Programmable Nanoprocessor: Nanowire Tiles Can Perform Arithmetic and Logical Functions

The groundbreaking prototype computer system, described in a paper appearing in the journal Nature, represents a significant step forward in the complexity of computer circuits that can be assembled from synthesized nanometer-scale components.

It also represents an advance because these ultra-tiny nanocircuits can be programmed electronically to perform a number of basic arithmetic and logical functions.

"This work represents a quantum jump forward in the complexity and function of circuits built from the bottom up, and thus demonstrates that this bottom-up paradigm, which is distinct from the way commercial circuits are built today, can yield nanoprocessors and other integrated systems of the future," says principal investigator Charles M. Lieber, who holds a joint appointment at Harvard's Department of Chemistry and Chemical Biology and School of Engineering and Applied Sciences.

The work was enabled by advances in the design and synthesis of nanowire building blocks. These nanowire components now demonstrate the reproducibility needed to build functional electronic circuits, and also do so at a size and material complexity difficult to achieve by traditional top-down approaches.

Moreover, the tiled architecture is fully scalable, allowing the assembly of much larger and ever more functional nanoprocessors.

"For the past 10 to 15 years, researchers working with nanowires, carbon nanotubes, and other nanostructures have struggled to build all but the most basic circuits, in large part due to variations in properties of individual nanostructures," says Lieber, the Mark Hyman Professor of Chemistry."We have shown that this limitation can now be overcome and are excited about prospects of exploiting the bottom-up paradigm of biology in building future electronics."

An additional feature of the advance is that the circuits in the nanoprocessor operate using very little power, even allowing for their minuscule size, because their component nanowires contain transistor switches that are "nonvolatile."

This means that unlike transistors in conventional microcomputer circuits, once the nanowire transistors are programmed, they do not require any additional expenditure of electrical power for maintaining memory.

"Because of their very small size and very low power requirements, these new nanoprocessor circuits are building blocks that can control and enable an entirely new class of much smaller, lighter weight electronic sensors and consumer electronics," says co-author Shamik Das, the lead engineer in MITRE's Nanosystems Group.

"This new nanoprocessor represents a major milestone toward realizing the vision of a nanocomputer that was first articulated more than 50 years ago by physicist Richard Feynman," says James Ellenbogen, a chief scientist at MITRE.

Co-authors on the paper included four members of Lieber's lab at Harvard: Hao Yan (Ph.D. '10), SungWoo Nam (Ph.D. '10), Yongjie Hu (Ph.D. '10), and doctoral candidate Hwan Sung Choe, as well as collaborators at MITRE.

The research team at MITRE comprised Das, Ellenbogen, and nanotechnology laboratory director Jim Klemic. The MITRE Corporation is a not-for-profit company that provides systems engineering, research and development, and information technology support to the government. MITRE's principal locations are in Bedford, Mass., and McLean, Va.

The research was supported by a Department of Defense National Security Science and Engineering Faculty Fellowship, the National Nanotechnology Initiative, and the MITRE Innovation Program.


Source

Monday, February 7, 2011

Engineers Grow Nanolasers on Silicon, Pave Way for on-Chip Photonics

They describe their work in a paper to be published Feb. 6 in an advance online issue of the journal Nature Photonics.

"Our results impact a broad spectrum of scientific fields, including materials science, transistor technology, laser science, optoelectronics and optical physics," said the study's principal investigator, Connie Chang-Hasnain, UC Berkeley professor of electrical engineering and computer sciences.

The increasing performance demands of electronics have sent researchers in search of better ways to harness the inherent ability of light particles to carry far more data than electrical signals can. Optical interconnects are seen as a solution to overcoming the communications bottleneck within and between computer chips.

Because silicon, the material that forms the foundation of modern electronics, is extremely deficient at generating light, engineers have turned to another class of materials known as III-V (pronounced "three-five") semiconductors to create light-based components such as light-emitting diodes (LEDs) and lasers.

But the researchers pointed out that marrying III-V with silicon to create a single optoelectronic chip has been problematic. For one, the atomic structures of the two materials are mismatched.

"Growing III-V semiconductor films on silicon is like forcing two incongruent puzzle pieces together," said study lead author Roger Chen, a UC Berkeley graduate student in electrical engineering and computer sciences."It can be done, but the material gets damaged in the process."

Moreover, the manufacturing industry is set up for the production of silicon-based materials, so for practical reasons, the goal has been to integrate the fabrication of III-V devices into the existing infrastructure, the researchers said.

"Today's massive silicon electronics infrastructure is extremely difficult to change for both economic and technological reasons, so compatibility with silicon fabrication is critical," said Chang-Hasnain."One problem is that growth of III-V semiconductors has traditionally involved high temperatures -- 700 degrees Celsius or more -- that would destroy the electronics. Meanwhile, other integration approaches have not been scalable."

The UC Berkeley researchers overcame this limitation by finding a way to grow nanopillars made of indium gallium arsenide, a III-V material, onto a silicon surface at the relatively cool temperature of 400 degrees Celsius.

"Working at nanoscale levels has enabled us to grow high quality III-V materials at low temperatures such that silicon electronics can retain their functionality," said Chen.

The researchers used metal-organic chemical vapor deposition to grow the nanopillars on the silicon. "This technique is potentially mass manufacturable, since such a system is already used commercially to make thin film solar cells and light emitting diodes," said Chang-Hasnain.

Once the nanopillar was made, the researchers showed that it could generate near infrared laser light -- a wavelength of about 950 nanometers -- at room temperature. The hexagonal geometry dictated by the crystal structure of the nanopillars creates a new, efficient, light-trapping optical cavity. Light circulates up and down the structure in a helical fashion and amplifies via this optical feedback mechanism.

The unique approach of growing nanolasers directly onto silicon could lead to highly efficient silicon photonics, the researchers said. They noted that the minuscule dimensions of the nanopillars -- smaller than one wavelength on each side, in some cases -- make it possible to pack them into small spaces with the added benefit of consuming very little energy.

"Ultimately, this technique may provide a powerful and new avenue for engineering on-chip nanophotonic devices such as lasers, photodetectors, modulators and solar cells," said Chen.

"This is the first bottom-up integration of III-V nanolasers onto silicon chips using a growth process compatible with the CMOS (complementary metal oxide semiconductor) technology now used to make integrated circuits," said Chang-Hasnain."This research has the potential to catalyze an optoelectronics revolution in computing, communications, displays and optical signal processing. In the future, we expect to improve the characteristics of these lasers and ultimately control them electronically for a powerful marriage between photonic and electronic devices."

The Defense Advanced Research Projects Agency and a Department of Defense National Security Science and Engineering Faculty Fellowship helped support this research.


Source

Saturday, February 5, 2011

Physicists Challenge Classical World With Quantum-Mechanical Implementation of 'Shell Game'

In a paper published in the Jan. 30 issue of the journal Nature Physics, UCSB researchers present the first demonstration of coherent control of a multi-resonator architecture. Such control has been a holy grail among physicists studying photons at the quantum-mechanical level for more than a decade.

The UCSB researchers are Matteo Mariantoni, postdoctoral fellow in the Department of Physics; Haohua Wang, postdoctoral fellow in physics; John Martinis, professor of physics; and Andrew Cleland, professor of physics.

According to the paper, the "shell man," the researcher, makes use of two superconducting quantum bits (qubits) to move the photons -- particles of light -- between the resonators. The qubits -- the quantum-mechanical equivalent of the classical bits used in a common PC -- are studied at UCSB for the development of a quantum supercomputer. They constitute one of the key elements for playing the photon shell game.

"This is an important milestone toward the realization of a large-scale quantum register," said Mariantoni."It opens up an entirely new dimension in the realm of on-chip microwave photonics and quantum-optics in general."

The researchers fabricated a chip on which three resonators, each a few millimeters in length, are coupled to two qubits. "The architecture studied in this work resembles a quantum railroad," said Mariantoni. "Two quantum stations -- two of the three resonators -- are interconnected through the third resonator, which acts as a quantum bus. The qubits control the traffic and allow the shuffling of photons among the resonators."

In a related experiment, the researchers played a more complex game inspired by the Towers of Hanoi, a mathematical puzzle that, according to legend, originated in an ancient Indian temple.

The Towers of Hanoi puzzle consists of three posts and a pile of disks of different diameter, which can slide onto any post. The puzzle starts with the disks in a stack in ascending order of size on one post, with the smallest disk at the top. The aim of the puzzle is to move the entire stack to another post, with only one disk being moved at a time, and with no disk being placed on top of a smaller disk.
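
For reference, the classical puzzle has a compact recursive solution. The sketch below (ordinary Python, with nothing quantum about it) simply makes the rules and the minimum move count concrete:

```python
def hanoi(n, source="A", target="C", spare="B", moves=None):
    """Return the list of moves that transfers n disks from source to target
    without ever placing a larger disk on a smaller one."""
    if moves is None:
        moves = []
    if n > 0:
        hanoi(n - 1, source, spare, target, moves)   # clear the n-1 smaller disks
        moves.append((source, target))               # move the largest disk
        hanoi(n - 1, spare, target, source, moves)   # re-stack the smaller disks on top
    return moves

print(len(hanoi(3)))   # 7 moves: the minimum for n disks is 2**n - 1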

In the quantum-mechanical version of the Towers of Hanoi, the three posts are represented by the resonators and the disks by quanta of light with different energy. "This game demonstrates that a truly bosonic excitation can be shuffled among resonators -- an interesting example of the quantum-mechanical nature of light," said Mariantoni.

Mariantoni was supported in this work by an Elings Prize Fellowship in Experimental Science from UCSB's California NanoSystems Institute.


Source

Friday, February 4, 2011

New Mathematical Model of Information Processing in the Brain Accurately Predicts Some of the Peculiarities of Human Vision

At the Society of Photo-Optical Instrumentation Engineers' Human Vision and Electronic Imaging conference on Jan. 27, Ruth Rosenholtz, a principal research scientist in MIT's Department of Brain and Cognitive Sciences, presented a new mathematical model of how the brain summarizes the visual information it takes in. The model accurately predicts the visual system's failure on certain types of image-processing tasks, a good indication that it captures some aspect of human cognition.

Most models of human object recognition assume that the first thing the brain does with a retinal image is identify edges -- boundaries between regions with different light-reflective properties -- and sort them according to alignment: horizontal, vertical and diagonal. Then, the story goes, the brain starts assembling these features into primitive shapes, registering, for instance, that in some part of the visual field, a horizontal feature appears above a vertical feature, or two diagonals cross each other. From these primitive shapes, it builds up more complex shapes -- four L's with different orientations, for instance, would make a square -- and so on, until it's constructed shapes that it can identify as features of known objects.

While this might be a good model of what happens at the center of the visual field, Rosenholtz argues, it's probably less applicable to the periphery, where human object discrimination is notoriously weak. In a series of papers in the last few years, Rosenholtz has proposed that cognitive scientists instead think of the brain as collecting statistics on the features in different patches of the visual field.

Patchy impressions

On Rosenholtz's model, the patches described by the statistics get larger the farther they are from the center. This corresponds with a loss of information, in the same sense that, say, the average income for a city is less informative than the average income for every household in the city. At the center of the visual field, the patches might be so small that the statistics amount to the same thing as descriptions of individual features: A 100-percent concentration of horizontal features could indicate a single horizontal feature. So Rosenholtz's model would converge with the standard model.

But at the edges of the visual field, the models come apart. A large patch whose statistics are, say, 50 percent horizontal features and 50 percent vertical could contain an array of a dozen plus signs, or an assortment of vertical and horizontal lines, or a grid of boxes.

In fact, Rosenholtz's model includes statistics on much more than just orientation of features: There are also measures of things like feature size, brightness and color, and averages of other features -- about 1,000 numbers in all. But in computer simulations, storing even 1,000 statistics for every patch of the visual field requires only one-90th as many virtual neurons as storing visual features themselves, suggesting that statistical summary could be the type of space-saving technique the brain would want to exploit.
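
To make the idea concrete, here is a much-simplified sketch of a summary-statistic representation in Python. It is not Rosenholtz's model -- her model pools roughly 1,000 texture statistics -- and the patch sizes, growth rate and choice of statistics below are arbitrary assumptions; it only illustrates "a few statistics per patch, with patches growing toward the periphery."

```python
import numpy as np

def patch_summary(patch):
    """A few toy statistics for one grayscale patch: mean brightness, contrast,
    and a 4-bin histogram of local gradient orientations."""
    gy, gx = np.gradient(patch.astype(float))
    angles = np.arctan2(gy, gx) % np.pi                 # orientation, sign ignored
    hist, _ = np.histogram(angles, bins=4, range=(0.0, np.pi))
    hist = hist / max(hist.sum(), 1)                    # normalized orientation histogram
    return np.concatenate(([patch.mean(), patch.std()], hist))

def summarize_field(image, fixation, base=8, growth=0.5):
    """Summarize an image with patch statistics; pooling patches grow linearly
    with eccentricity (distance from `fixation`), so the periphery loses detail."""
    fy, fx = fixation
    summaries = {}
    for y in range(0, image.shape[0], base):
        for x in range(0, image.shape[1], base):
            ecc = np.hypot(y - fy, x - fx)              # eccentricity of this patch center
            half = int(base // 2 + growth * ecc)        # patch half-width grows with eccentricity
            patch = image[max(0, y - half): y + half + 1, max(0, x - half): x + half + 1]
            if patch.size:
                summaries[(y, x)] = patch_summary(patch)
    return summaries
```

Keeping only a handful of numbers per coarse patch, rather than every individual feature, is the kind of space saving the article attributes to the real model.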

Rosenholtz's model grew out of her investigation of a phenomenon called visual crowding. If you were to concentrate your gaze on a point at the center of a mostly blank sheet of paper, you might be able to identify a solitary A at the left edge of the page. But you would fail to identify an identical A at the right edge, the same distance from the center, if instead of standing on its own it were in the center of the word "BOARD."

Rosenholtz's approach explains this disparity: The statistics of the lone A are specific enough to A's that the brain can infer the letter's shape; but the statistics of the corresponding patch on the other side of the visual field also factor in the features of the B, O, R and D, resulting in aggregate values that don't identify any of the letters clearly.

Road test

Rosenholtz's group has also conducted a series of experiments with human subjects designed to test the validity of the model. Subjects might, for instance, be asked to search for a target object -- like the letter O -- amid a sea of "distractors" -- say, a jumble of other letters. A patch of the visual field that contains 11 Q's and one O would have very similar statistics to one that contains a dozen Q's, but very different statistics from a patch that contained a dozen plus signs. In experiments, the degree of difference between the statistics of different patches is an extremely good predictor of how quickly subjects can find a target object: It's much easier to find an O among plus signs than it is to find it amid Q's.
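
As a toy stand-in for that predictor (not the measure Rosenholtz's group actually used), one could score how easy a search should be as the distance between the summary vectors of a target-containing patch and a distractor-only patch, reusing the hypothetical patch_summary sketch above:

```python
import numpy as np

def search_ease(target_patch_stats, distractor_patch_stats):
    """Toy predictor: the larger the distance between two patch summaries, the
    easier the search -- an O among plus signs differs more than an O among Q's."""
    return float(np.linalg.norm(np.asarray(target_patch_stats) -
                                np.asarray(distractor_patch_stats)))
```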

Rosenholtz, who has a joint appointment to the Computer Science and Artificial Intelligence Laboratory, is also interested in the implications of her work for data visualization, an active research area in its own right. For instance, designing subway maps with an eye to maximizing the differences between the summary statistics of different regions could make them easier for rushing commuters to take in at a glance.

In vision science, "there's long been this notion that somehow what the periphery is for is texture," says Denis Pelli, a professor of psychology and neural science at New York University. Rosenholtz's work, he says, "is turning it into real calculations rather than just a side comment." Pelli points out that the brain probably doesn't track exactly the 1,000-odd statistics that Rosenholtz has used, and indeed, Rosenholtz says that she simply adopted a group of statistics commonly used to describe visual data in computer vision research. But Pelli also adds that visual experiments like the ones that Rosenholtz is performing are the right way to narrow down the list to "the ones that really matter."


Source

Thursday, February 3, 2011

Future Surgeons May Use Robotic Nurse, 'Gesture Recognition'

Both the hand-gesture recognition and robotic nurse innovations might help to reduce the length of surgeries and the potential for infection, said Juan Pablo Wachs, an assistant professor of industrial engineering at Purdue University.

The"vision-based hand gesture recognition" technology could have other applications, including the coordination of emergency response activities during disasters.

"It's a concept Tom Cruise demonstrated vividly in the film 'Minority Report,'" Wachs said.

Surgeons routinely need to review medical images and records during surgery, but stepping away from the operating table and touching a keyboard and mouse can delay the surgery and increase the risk of spreading infection-causing bacteria.

The new approach is a system that uses a camera and specialized algorithms to recognize hand gestures as commands to instruct a computer or robot.
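
As a purely illustrative sketch (the gesture labels, confidence threshold and commands below are hypothetical, and the real system's camera pipeline and classifier are not shown), the core idea reduces to a dispatcher that maps recognized gestures to commands and ignores everything else:

```python
from typing import Callable, Dict

# Hypothetical vocabulary: each recognized gesture label maps to one command.
COMMANDS: Dict[str, Callable[[], None]] = {
    "swipe_left":  lambda: print("show previous image"),
    "swipe_right": lambda: print("show next image"),
    "zoom_in":     lambda: print("zoom in"),
    "zoom_out":    lambda: print("zoom out"),
}

def dispatch(gesture_label: str, confidence: float, threshold: float = 0.8) -> None:
    """Act only on confident, in-vocabulary gestures; everything else
    (e.g. conversational hand movements) is ignored."""
    if confidence >= threshold and gesture_label in COMMANDS:
        COMMANDS[gesture_label]()

dispatch("swipe_right", 0.93)   # -> show next image
dispatch("wave", 0.95)          # ignored: not in the gesture vocabulary
```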

At the same time, a robotic scrub nurse represents a potential new tool that might improve operating-room efficiency, Wachs said.

Findings from the research will be detailed in a paper appearing in the February issue of Communications of the ACM, the flagship publication of the Association for Computing Machinery. The paper was written by researchers at Purdue, the Naval Postgraduate School in Monterey, Calif., and Ben-Gurion University of the Negev, Israel.

Research into hand-gesture recognition began several years ago in work led by the Washington Hospital Center and Ben-Gurion University, where Wachs was a research fellow and doctoral student, respectively.

He is now working to extend the system's capabilities in research with Purdue's School of Veterinary Medicine and the Department of Speech, Language, and Hearing Sciences.

"One challenge will be to develop the proper shapes of hand poses and the proper hand trajectory movements to reflect and express certain medical functions," Wachs said."You want to use intuitive and natural gestures for the surgeon, to express medical image navigation activities, but you also need to consider cultural and physical differences between surgeons. They may have different preferences regarding what gestures they may want to use."

Other challenges include providing computers with the ability to understand the context in which gestures are made and to discriminate between intended gestures versus unintended gestures.

"Say the surgeon starts talking to another person in the operating room and makes conversational gestures," Wachs said."You don't want the robot handing the surgeon a hemostat."

A scrub nurse assists the surgeon and hands the proper surgical instruments to the doctor when needed.

"While it will be very difficult using a robot to achieve the same level of performance as an experienced nurse who has been working with the same surgeon for years, often scrub nurses have had very limited experience with a particular surgeon, maximizing the chances for misunderstandings, delays and sometimes mistakes in the operating room," Wachs said."In that case, a robotic scrub nurse could be better."

The Purdue researcher has developed a prototype robotic scrub nurse, in work with faculty in the university's School of Veterinary Medicine.

Researchers at other institutions developing robotic scrub nurses have focused on voice recognition. However, little work has been done in the area of gesture recognition, Wachs said.

"Another big difference between our focus and the others is that we are also working on prediction, to anticipate what images the surgeon will need to see next and what instruments will be needed," he said.

Wachs is developing advanced algorithms that isolate the hands and apply "anthropometry," or predicting the position of the hands based on knowledge of where the surgeon's head is. The tracking is achieved through a camera mounted over the screen used for visualization of images.

"Another contribution is that by tracking a surgical instrument inside the patient's body, we can predict the most likely area that the surgeon may want to inspect using the electronic image medical record, and therefore saving browsing time between the images," Wachs said."This is done using a different sensor mounted over the surgical lights."

The hand-gesture recognition system uses a new type of camera developed by Microsoft, called Kinect, which senses three-dimensional space. The camera is found in new consumer electronics games that can track a person's hands without the use of a wand.

"You just step into the operating room, and automatically your body is mapped in 3-D," he said.

Accuracy and gesture-recognition speed depend on advanced software algorithms.

"Even if you have the best camera, you have to know how to program the camera, how to use the images," Wachs said."Otherwise, the system will work very slowly."

The research paper defines a set of requirements, including recommendations that the system should:

  • Use a small vocabulary of simple, easily recognizable gestures.
  • Not require the user to wear special virtual reality gloves or certain types of clothing.
  • Be as low-cost as possible.
  • Be responsive and able to keep up with the speed of a surgeon's hand gestures.
  • Let the user know whether it understands the hand gestures by providing feedback, perhaps just a simple "OK."
  • Use gestures that are easy for surgeons to learn, remember and carry out with little physical exertion.
  • Be highly accurate in recognizing hand gestures.
  • Use intuitive gestures, such as two fingers held apart to mimic a pair of scissors.
  • Be able to disregard unintended gestures by the surgeon, perhaps made in conversation with colleagues in the operating room.
  • Be able to quickly configure itself to work properly in different operating rooms, under various lighting conditions and other criteria.

"Eventually we also want to integrate voice recognition, but the biggest challenges are in gesture recognition," Wachs said."Much is already known about voice recognition."

The work is funded by the U.S. Agency for Healthcare Research and Quality.


Source

Wednesday, February 2, 2011

Computer-Assisted Diagnosis Tools to Aid Pathologists

"The advent of digital whole-slide scanners in recent years has spurred a revolution in imaging technology for histopathology," according to Metin N. Gurcan, Ph.D., an associate professor of Biomedical Informatics at The Ohio State University Medical Center."The large multi-gigapixel images produced by these scanners contain a wealth of information potentially useful for computer-assisted disease diagnosis, grading and prognosis."

Follicular Lymphoma (FL) is one of the most common forms of non-Hodgkin Lymphoma occurring in the United States. FL is a cancer of the human lymph system that usually spreads into the blood, bone marrow and, eventually, internal organs.

A World Health Organization pathological grading system is applied to biopsy samples; doctors usually avoid prescribing severe therapies for lower grades, while they usually recommend radiation and chemotherapy regimens for more aggressive grades.

Accurate grading of the pathological samples generally leads to a promising prognosis, but diagnosis depends solely upon a labor-intensive process that can be affected by human factors such as fatigue, reader variation and bias. Pathologists must visually examine and grade the specimens through high-powered microscopes.

Processing and analysis of such high-resolution images, Gurcan points out, remain non-trivial tasks, not just because of the sheer size of the images, but also due to complexities of underlying factors involving differences in staining, illumination, instrumentation and goals. To overcome many of these obstacles to automation, Gurcan and medical center colleagues, Dr. Gerard Lozanski and Dr. Arwa Shana'ah, turned to the Ohio Supercomputer Center.

Ashok Krishnamurthy, Ph.D., interim co-executive director of the center, and Siddharth Samsi, a computational science researcher there and an OSU graduate student in Electrical and Computer Engineering, put the power of a supercomputer behind the process.

"Our group has been developing tools for grading of follicular lymphoma with promising results," said Samsi."We developed a new automated method for detecting lymph follicles using stained tissue by analyzing the morphological and textural features of the images, mimicking the process that a human expert might use to identify follicle regions. Using these results, we developed models to describe tissue histology for classification of FL grades."

Histological grading of FL is based on the number of large malignant cells counted within tissue samples measuring just 0.159 square millimeters and taken from ten different locations. Based on these counts, FL is assigned to one of three increasing grades of malignancy: Grade I (0-5 cells), Grade II (6-15 cells) and Grade III (more than 15 cells).
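
Expressed as a small function, the grading rule is just a threshold on the cell count per standard field. This is a sketch of the thresholds quoted above, not of the authors' software, and averaging the ten field counts is an assumption of the sketch rather than a statement of the exact protocol:

```python
def follicular_lymphoma_grade(cell_counts):
    """Assign an FL grade from malignant-cell counts taken in ten fields of
    about 0.159 square millimeters each (thresholds as quoted above).
    Averaging across the ten fields is an assumption of this sketch."""
    average = sum(cell_counts) / len(cell_counts)
    if average <= 5:
        return "Grade I"
    if average <= 15:
        return "Grade II"
    return "Grade III"

print(follicular_lymphoma_grade([3, 4, 2, 5, 6, 4, 3, 5, 4, 2]))  # Grade I
```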

"The first step involves identifying potentially malignant regions by combining color and texture features," Samsi explained."The second step applies an iterative watershed algorithm to separate merged regions and the final step involves eliminating false positives."

The large data sizes and complexity of the algorithms led Gurcan and Samsi to leverage the parallel computing resources of OSC's Glenn Cluster in order to reduce the time required to process the images. They used MATLAB® and the Parallel Computing Toolbox™ to achieve significant speed-ups. Speed is the goal of the National Cancer Institute-funded research project, but accuracy is essential. Gurcan and Samsi compared their computer segmentation results with manual segmentation and found an average similarity score of 87.11 percent.

"This algorithm is the first crucial step in a computer-aided grading system for Follicular Lymphoma," Gurcan said."By identifying all the follicles in a digitized image, we can use the entire tissue section for grading of the disease, thus providing experts with another tool that can help improve the accuracy and speed of the diagnosis."


Source

Tuesday, February 1, 2011

Internet Addresses: An Inevitable Shortage, but an Uneven One

There is some good news, according to computer scientist John Heidemann, who heads a team at the USC Viterbi School of Engineering's Information Sciences Institute (ISI) that has just released its results in the form of a detailed outline, including a 10-minute video and an interactive web-based browser that allows users to explore the nooks and crannies of Internet address space themselves.

Heidemann, who is a senior project leader at ISI and a research associate professor in the USC Viterbi School of Engineering's Department of Computer Science, says his group has found that while some of the already allocated address blocks (units of Internet real estate, ranging from 256 to more than 16 million addresses) are heavily used, many are still sparsely used. "Even allowing for undercount," the group finds, "probably only 14 percent of addresses are visible on the public Internet."

Nevertheless,"as full allocation happens, there will be pressure to improve utilization and eventually trade underutilized areas," the video shows. These strategies have limits, the report notes. Better utilization, trading, and other strategies can recover"twice or four times current utilization. But requests for address double every year, so trading will only help for two years. Four billion addresses are just not enough for 7 billion people."

The IPv6 protocol allows many, many more addresses -- 2 to the 128th power, or roughly 340 trillion trillion trillion -- but may involve transition costs.

The report from Heidemann's group comes as the Number Resource Organization (NRO) and the Internet Assigned Numbers Authority (IANA) are preparing to announce that they have given out all the addresses, passing most on to regional authorities.

The ISI video offers a thorough background on the hows and whys of the current IPv4 Internet address system, in which each address is a number between zero and 2 to the 32nd power minus one (4,294,967,295), usually written in "dotted-decimal notation" as four base-10 numbers separated by periods.
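
For illustration, converting between the two forms is straightforward; this small helper (not part of the ISI tools) shows how a dotted-decimal address packs into a single 32-bit number and back:

```python
def ipv4_to_int(dotted: str) -> int:
    """Pack a dotted-decimal IPv4 address, e.g. '192.0.2.1', into its 32-bit integer."""
    a, b, c, d = (int(octet) for octet in dotted.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

def int_to_ipv4(value: int) -> str:
    """Unpack a 32-bit integer back into dotted-decimal notation."""
    return ".".join(str((value >> shift) & 0xFF) for shift in (24, 16, 8, 0))

assert ipv4_to_int("255.255.255.255") == 4_294_967_295   # 2**32 - 1, the largest address
assert int_to_ipv4(ipv4_to_int("192.0.2.1")) == "192.0.2.1"
```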

Heidemann, working with collaborator Yuri Pradkin and ISI colleagues, produced an earlier Internet census in 2007, following on previous work at ISI -- the first complete census since 1982. To do it, they sent a message (a 'ping') to each possible Internet address. The video explains the pinging process.

At the time, some 2.8 billion of the 4.3 billion possible addresses had been allocated; today more than 3.5 billion are allocated. The current effort, funded by the Department of Homeland Security Science and Technology Directorate and the NSF, was carried out by Aniruddh Rao and Xue Cui of ISI, along with Heidemann. A peer-reviewed analysis of their approach appeared at the ACM Internet Measurement Conference in 2008.


Source