Monday, May 23, 2011

World Record in Ultra-Rapid Data Transmission

The advance is reported in the journal Nature Photonics.

In this experiment, KIT scientists led by Professor Jürg Leuthold beat their own 2010 record in high-speed data transmission, when they exceeded the magic limit of 10 terabits per second -- a data rate of 10,000 billion bits per second. The group's success is due to a new data decoding process. The opto-electronic decoding method first performs a purely optical computation at the full data rate in order to break the high-rate stream down into smaller bit rates that can then be processed electronically. This initial optical reduction of the bit rates is required because no electronic processing method exists for a data rate of 26 terabits per second. For the record data encoding, Leuthold's team applies so-called orthogonal frequency division multiplexing (OFDM), a process that has been used successfully in mobile communications for many years and that is based on mathematical routines (the Fast Fourier Transform).
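
The decoding step rests on the fact that an OFDM signal is demultiplexed by a Fourier transform. The sketch below (a digital toy in NumPy, added here for illustration; the KIT experiment performs the transform optically at the full line rate) encodes data on orthogonal subcarriers with an inverse FFT and separates them again with a forward FFT:

```python
import numpy as np

# Toy OFDM round trip. The KIT setup evaluates the demultiplexing
# Fourier transform optically; here the same mathematics runs digitally.
rng = np.random.default_rng(0)
n_subcarriers = 64

# One QPSK symbol per subcarrier.
bits = rng.integers(0, 4, n_subcarriers)
qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * bits))

# Transmitter: the inverse FFT superimposes all subcarriers into one
# time-domain OFDM symbol in which the carriers overlap orthogonally.
time_signal = np.fft.ifft(qpsk)

# Receiver: the forward FFT separates the overlapping subcarriers again.
recovered = np.fft.fft(time_signal)

assert np.allclose(recovered, qpsk)  # exact recovery on a noiseless channel
print("all", n_subcarriers, "subcarriers recovered")
```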

"The challenge was to increase the process speed not only by a factor of 1,000, but by a factor of nearly a million for data processing at 26 terabits per second," explains Leuthold, who heads the Institutes of Photonics and Quantum Electronics and Microstructure Technology at KIT."The decisive innovative idea was optical implementation of the mathematical routine." Calculation in the optical range turned out to be not only extremely fast, but also highly energy-efficient, because energy is required for the laser and a few process steps only.

"Our result shows that physical limits are not yet exceeded even at extremely high data rates," Leuthold says, noting the constantly growing data volume on the internet. According to Leuthold, transmission of 26 terabits per second confirms that even high data rates can be handled today, while energy consumption is minimized."A few years ago, data rates of 26 terabits per second were deemed utopian even for systems with many lasers." Leuthold adds,"and there would not have been any applications. With 26 terabits per second, it would have been possible to transmit up to 400 million telephone calls at the same time. Nobody needed this at that time. Today, the situation is different."

Video transmissions consume much of the Internet's bandwidth and require extremely high bit rates, and the need is growing constantly. In communication networks, the first lines with channel data rates of 100 gigabits per second (corresponding to 0.1 terabit per second) have already been taken into operation. Research now concentrates on developing systems for transmission lines in the range of 400 gigabits per second to 1 terabit per second. The Karlsruhe invention is therefore ahead of the ongoing development. Companies and scientists from all over Europe were involved in the experimental implementation of ultra-rapid data transmission at KIT. Among them were staff members of Agilent and Micram Deutschland, Time-Bandwidth Switzerland, Finisar Israel, and the University of Southampton in Great Britain.


Source

Tuesday, May 17, 2011

Single Atom Stores Quantum Information

Quantum computers will one day be able to cope in no time with computational tasks for which current computers would take years. They will derive their enormous computing power from their ability to simultaneously process the many pieces of information that are stored in the quantum states of microscopic physical systems, such as single atoms and photons. In order to operate, quantum computers must exchange these pieces of information between their individual components. Photons are particularly suitable for this, as no matter needs to be transported with them. Particles of matter, however, will be used for information storage and processing. Researchers are therefore looking for methods whereby quantum information can be exchanged between photons and matter. Although this has already been done with ensembles of many thousands of atoms, physicists at the Max Planck Institute of Quantum Optics in Garching have now proved that quantum information can also be exchanged between single atoms and photons in a controlled way.

Using a single atom as a storage unit has several advantages -- the extreme miniaturization being only one, says Holger Specht from the Garching-based Max Planck Institute, who was involved in the experiment. The stored information can be processed by direct manipulation of the atom, which is important for the execution of logical operations in a quantum computer. "In addition, it offers the chance to check whether the quantum information stored in the photon has been successfully written into the atom without destroying the quantum state," says Specht. It is thus possible to ascertain at an early stage that a computing process must be repeated because of a storage error.

The fact that no one had succeeded until very recently in exchanging quantum information between photons and single atoms was because the interaction between the particles of light and the atoms is very weak. Atom and photon do not take much notice of each other, as it were, like two party guests who hardly talk to each other, and can therefore exchange only a little information. The researchers in Garching have enhanced the interaction with a trick. They placed a rubidium atom between the mirrors of an optical resonator, and then used very weak laser pulses to introduce single photons into the resonator. The mirrors of the resonator reflected the photons to and fro several times, which strongly enhanced the interaction between photons and atom. Figuratively speaking, the party guests thus meet more often and the chance that they talk to each other increases.

The photons carried the quantum information in the form of their polarization, which can be left-handed (the direction of rotation of the electric field is anti-clockwise) or right-handed (clockwise). The quantum state of the photon can contain both polarizations simultaneously as a so-called superposition state. In the interaction with the photon, the rubidium atom is usually excited and then loses the excitation again through the probabilistic emission of a further photon. The Garching-based researchers did not want this to happen. On the contrary, the absorption of the photon was to bring the rubidium atom into a definite, stable quantum state. The researchers achieved this with the aid of a further laser beam, the so-called control laser, which they directed onto the rubidium atom at the same time as it interacted with the photon.

The spin orientation of the atom contributes decisively to the stable quantum state generated by the control laser and the photon. Spin gives the atom a magnetic moment, so the stable quantum state used for storage is determined by the orientation of that magnetic moment. The state is characterized by the fact that it reflects the photon's polarization state: the direction of the magnetic moment corresponds to the rotational direction of the photon's polarization, and a mixture of both rotational directions is stored by a corresponding mixture of magnetic moments.
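
In schematic notation (an addition here, not spelled out in the article), the storage step maps the photon's polarization superposition onto a superposition of the two spin orientations, amplitudes intact:

```latex
|\psi\rangle_{\mathrm{photon}} = \alpha\,|L\rangle + \beta\,|R\rangle
\quad\longmapsto\quad
|\psi\rangle_{\mathrm{atom}} = \alpha\,|\!\uparrow\rangle + \beta\,|\!\downarrow\rangle,
\qquad |\alpha|^2 + |\beta|^2 = 1
```

Here |L⟩ and |R⟩ denote the two circular polarizations, and |↑⟩, |↓⟩ the two orientations of the atom's magnetic moment.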

This state is read out by the reverse process: irradiating the rubidium atom with the control laser again causes it to re-emit the photon which was originally incident. In the vast majority of cases, the quantum information in the read-out photon agrees with the information originally stored, as the physicists in Garching discovered. The quantity that describes this relationship, the so-called fidelity, was more than 90 percent. This is significantly higher than the 67 percent fidelity that can be achieved with classical methods, i.e. those not based on quantum effects. The method developed in Garching is therefore a real quantum memory.

The physicists measured the storage time, i.e. the time the quantum information in the rubidium can be retained, as around 180 microseconds. "This is comparable with the storage times of all previous quantum memories based on ensembles of atoms," says Stephan Ritter, another researcher involved in the experiment. Nevertheless, a significantly longer storage time is necessary for the method to be used in a quantum computer or a quantum network. There is also a further quality characteristic of the single-atom quantum memory from Garching which could be improved: the so-called efficiency, a measure of how many of the irradiated photons are stored and then read out again. This was just under 10 percent.

The storage time is mainly limited by magnetic field fluctuations from the laboratory surroundings, says Ritter. "It can therefore be increased by storing the quantum information in quantum states of the atoms which are insensitive to magnetic fields." The efficiency is limited by the fact that the atom does not sit still at the centre of the resonator, but moves, which causes the strength of the interaction between atom and photon to decrease. The researchers can thus also improve the efficiency: by cooling the atom further, i.e. by further reducing its kinetic energy.

The researchers at the Max Planck Institute in Garching now want to work on these two improvements. "If this is successful, the prospects for the single-atom quantum memory would be excellent," says Stephan Ritter. The interface between light and individual atoms would make it possible to network more atoms in a quantum computer with each other than would be possible without such an interface, which would make such a computer more powerful. Moreover, the exchange of photons would make it possible to entangle atoms quantum mechanically across large distances. Entanglement is a kind of quantum mechanical link between particles which is necessary to transport quantum information across large distances. The technique now being developed at the Max Planck Institute of Quantum Optics could thus some day become an essential component of a future "quantum Internet."


Source

Monday, May 16, 2011

Beyond Smart Phones: Sensor Network to Make 'Smart Cities' Envisioned

Computer scientists, electrical and computer engineers, and mathematicians at the TU Darmstadt and the University of Kassel have joined forces and are working on implementing that vision under their "Cocoon" project. The backbone of a "smart" city is a communications network consisting of sensors that receive streams of data, or signals, analyze them, and transmit them onward. Such sensors thus act as both receivers and transmitters, i.e., they represent transceivers. The networked communication involved operates wirelessly via radio links, and yields added value to all participants by analyzing the input data involved. For example, the "Smart Home" control system already on the market allows networking all sorts of devices and automatically regulating them to suit demands, thereby allegedly yielding energy savings of as much as fifteen percent.

"Smart Home" might soon be followed by"Smart Hospital,""Smart Indus­try," or"Smart Farm," and even"smart" systems tailored to suit mobile net­works are feasible. Traffic jams may be avoided by, for example, car-to-car or car-to-environment (car-to-X) communications. Health-service sys­tems might also benefit from mobile, sensor communications whenever patients need to be kept supplied with information tailored to suit their health­care needs while underway. Furthermore, sensors on their bodies could assess the status of their health and automatically transmit calls for emergency medical assistance, whenever necessary.

"Smart" and mobile, thanks to beam forming

The researchers regard the ceaseless travels of sensors on mobile systems and their frequent entries into and exits from instrumented areas as the major hurdle to be overcome in implementing their vision of "smart" cities. Sensor-aided devices will have to deal with that by responding to subtle changes in their environments and flexibly and efficiently regulating the qualities of received and transmitted signals. Beam forming, a field in which the TU Darmstadt's Institute for Communications Technology is active, should help out there. On that subject, Prof. Rolf Jakoby of the TU Darmstadt's Electrical Engineering and Information Technology Dept. remarked that, "Current types of antennae radiate omnidirectionally, like light bulbs. We intend to create conditions under which antennae will, in the future, behave like spotlights that, once they have located a sought device, will track it, while suppressing interference from stray electromagnetic radiation from other devices that might also be present in the area."
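
At its core, the "spotlight" behavior Jakoby describes is phased-array beam forming: phasing each antenna element so that signals add up in one chosen direction and cancel elsewhere. The sketch below is a minimal delay-and-sum illustration with hypothetical parameters (array size, spacing, and angles are not from the Cocoon project):

```python
import numpy as np

# Delay-and-sum beam forming with an 8-element array (all parameters are
# hypothetical, for illustration only). Phasing the elements toward one
# direction boosts the gain there and suppresses other directions.
n_elements = 8
spacing = 0.5      # element spacing in wavelengths
target_deg = 25.0  # direction the array should "spotlight"

def steering_vector(angle_deg):
    n = np.arange(n_elements)
    phase = 2 * np.pi * spacing * n * np.sin(np.radians(angle_deg))
    return np.exp(1j * phase)

# Weights matched to the target direction.
weights = steering_vector(target_deg) / n_elements

for probe in (target_deg, 0.0, -60.0):
    gain = abs(np.vdot(weights, steering_vector(probe)))
    print(f"gain toward {probe:+5.1f} deg: {gain:.2f}")
```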

Such antennae, along with the transceivers equipped with them, are thus reconfigurable, i.e., adjustable to suit ambient conditions by means of onboard electronic circuitry or remote controls. Working in collaboration with an industrial partner, Jakoby has already equipped terrestrial digital-television (TDTV) transmitters with reconfigurable amplifiers that allow amplifying transmitted-signal levels by as much as ten percent. He added that, "If all of Germany's TDTV transmitters were equipped with such amplifiers, we could shut down one nuclear power plant."

Frequency bands are a scarce resource

Reconfigurable devices also make much more efficient use of a scarce resource: frequency bands. Users have thus far been allocated rigorously defined frequency bands, of which only fifteen to twenty percent of the capacity of even the more popular ones is actually utilized. Beam forming might allow making more efficient use of them. Jakoby noted that, "This is an area that we are still taking a close look at, but we are well along the way toward understanding the system better." However, only a few uses of beam forming have emerged to date, since currently available systems are too expensive for mass applications.

Small, model networks are targeted

Yet another fundamental problem remains to be solved before "smart" cities may become realities. Sensor communications requires the cooperation of all devices involved, across all communications protocols, such as "Bluetooth," and across all networks, such as the European Global System for Mobile Communications (GSM) mobile-telephone network or wireless local-area networks (WLAN), which cannot be achieved with current devices, communications protocols, and networks. Jakoby explained that, "Converting all devices to a common communications protocol is infeasible, which is why we are seeking a new protocol that would be superimposed upon everything and allow them to communicate via several protocols." Transmission channels would also have to be capable of handling a massive flood of data, since, as Prof. Abdelhak Zoubir of the TU Darmstadt's Electrical Engineering and Information Technology Dept., the "Cocoon" project's coordinator, put it, "A 'smart' Darmstadt alone would surely involve a million sensors communicating with one another via satellites, mobile telephones, computers, and all of the other types of devices that we already have available." Furthermore, since a single mobile sensor is readily capable of generating several hundred megabytes of data annually, new models for handling the communications of millions of such sensors, models that compress data more densely in order to provide for error-free communications, will be needed. Several hurdles will thus have to be overcome before "smart" cities become reality. Nevertheless, the scientists working on the "Cocoon" project are convinced that they will be able to simulate a "smart" city incorporating various types of devices employing early versions of small, model networks.

Over the next three years, scientists at the TU Darmstadt will be receiving a total of 4.5 million Euros from the State of Hesse's Offensive for Developing Scientific-Economic Excellence for their research in conjunction with their "Cocoon -- Cooperative Sensor Communications" project.


Source

Sunday, May 15, 2011

Crowdsourcing Science: Researcher Uses Facebook to Identify Thousands of Fish

In January and February, Bloom helped conduct the first ichthyological survey on Guyana's Cuyuni River. The trip was funded through the Biological Diversity of the Guiana Shield program at the Smithsonian Institution's National Museum of Natural History and was led by Dr. Brian Sidlauskas, assistant professor of fisheries at Oregon State University (OSU). The goal was to find out which species of fish live in the Cuyuni and get a good estimate of their abundance.

The Cuyuni is bisected by the Guyana/Venezuela border and extends 210 kilometres into the thick jungles of western Guyana. The region is under intense ecological pressure from the artisanal gold mining operations that pepper the Guyanese hinterland. This mining has terrible impacts on the surrounding environment, chief among them increased sedimentation in the rivers and the release of elemental mercury directly into the food chain. "That's why it's important we get there now, to find out what's there," says Bloom. "Because in 30 years, who knows what the Cuyuni will look like?"

For two weeks, Bloom, Sidlauskas and the rest of the team spent day and night catching as many fish as they could with various nets. They slept in makeshift jungle camps. In two weeks, the team had collected more than 5,000 fish specimens. Then they realized they had a big problem.

"In order to get the fish out of the country," says Bloom,"we needed an accurate count of each species." The team's research permit required them to report this information to the Guyanese government."We couldn't leave the country until we turned over our data to the authorities."

Time was of the essence, as Sidlauskas, Bloom and OSU graduate student Whit Bronaugh had to return to North America as soon as possible. But how could a handful of people possibly identify 5,000 fish in just a few days? "A lot of people think fish experts know hundreds and hundreds of species," says Bloom. "But they really don't. We're all specialists on one particular group or another." The last thing the team wanted was to fudge the data, because the whole point of the project was to gather accurate information for the Guyanese government to use in its conservation and development planning.

That's when Bloom made a great suggestion. "Let's just put them up on Facebook and see if our friends can help." Sidlauskas loved the idea, so he uploaded the photos that Bronaugh had meticulously taken of each species. "The network of fish experts is pretty small," says Bloom, "and fish people can be real fanatics. Once a fish pops up on Facebook, they get very excited and start arguing. So next thing we knew, we had a really interesting intellectual debate going on between various world experts on fish, sort of like a real-time peer review that reached across continents and around the world." In less than 24 hours, their network of friends -- many of whom hold Ph.D.s in ichthyology and whom Bloom refers to as "diehard fish-heads" -- had identified almost every specimen.

With 5,000 identifications in hand, the team was able to deliver their results to the government and return home on schedule. The National Museum of Natural History's blog ran a story on the team's novel use of social networking to crowdsource their data. Then the Smithsonian Institution's blog, Smithsonian Science, and Smithsonian magazine's blog did the same. Not long after that, employees at Facebook caught wind of the story and chose it as a "Facebook Story of the Week" on the company's page. In less than a few weeks, more than 9,000 people had "liked" the story, and more than 2,500 comments were registered.

"Bloom's elegant approach to solving this particular scientific and logistical problem is reflective of the ingenuity and inventiveness that one finds amongst UTSC researchers," says vice-principal of research at UTSC, Malcolm Campbell."Combining his passion for research, with the preparedness and cutting-edge thinking that are part-and-parcel of his UTSC graduate degree, Bloom devised a particularly effective solution in a tight spot," says Campbell."Bloom and his supervisor, assistant professor Nate Lovejoy, are superb examples of how the best minds are conducting the best research at UTSC."

The results of the biodiversity survey on the Cuyuni River were somewhat discouraging. Bloom says 5,000 fish is not many; he can remember similar trips on different Guyanese rivers where the team pulled in up to 20,000 specimens. "Species diversity and abundance were very low," he says. "We need to continue monitoring, but this isn't good news for the region."

But the team's use of Facebook to crowdsource accurate scientific data has had an unexpected consequence: it's led Bloom to change his mind about the value of online tools. "Social networking is so powerful, and scientists should be using it more to connect with the world-at-large," he says. "I can't take credit for the idea, though." Bloom's friend Nathan Lujan, an ichthyologist at Texas A&M, has been using Facebook to identify fish for years. "And Nathan?" says Bloom. "Nathan is a real fish-head."


Source

Saturday, May 14, 2011

Toward Faster Transistors: Physicists Discover Physical Phenomenon That Could Boost Computers' Clock Speed

In this week's issue of the journal Science, MIT researchers and their colleagues at the University of Augsburg in Germany report the discovery of a new physical phenomenon that could yield transistors with greatly enhanced capacitance -- a measure of how much charge a device stores for a given voltage. And that, in turn, could lead to the revival of clock speed as the measure of a computer's power.

In today's computer chips, transistors are made from semiconductors, such as silicon. Each transistor includes an electrode called the gate; applying a voltage to the gate causes electrons to accumulate underneath it. The electrons constitute a channel through which an electrical current can pass, turning the semiconductor into a conductor.

Capacitance measures how much charge accumulates below the gate for a given voltage. The power that a chip consumes, and the heat it gives off, are roughly proportional to the square of the gate's operating voltage. So lowering the voltage could drastically reduce the heat, creating new room to crank up the clock.
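
That voltage argument follows from the standard dynamic-power relation for CMOS chips (the formula is an addition here, not quoted from the researchers):

```latex
P_{\mathrm{dynamic}} \approx \alpha\, C\, V^{2} f
```

where α is the activity factor, C the switched capacitance, V the operating voltage, and f the clock frequency. Halving V at a fixed f cuts switching power to roughly a quarter -- headroom that could instead be spent on raising f.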

MIT Professor of Physics Raymond Ashoori and Lu Li, a postdoc and Pappalardo Fellow in his lab -- together with Christoph Richter, Stefan Paetel, Thilo Kopp and Jochen Mannhart of the University of Augsburg -- investigated the unusual physical system that results when lanthanum aluminate is grown on top of strontium titanate. Lanthanum aluminate consists of alternating layers of lanthanum oxide and aluminum oxide. The lanthanum-based layers have a slight positive charge; the aluminum-based layers, a slight negative charge. The result is a series of electric fields that all add up in the same direction, creating an electric potential between the top and bottom of the material.

Ordinarily, both lanthanum aluminate and strontium titanate are excellent insulators, meaning that they don't conduct electrical current. But physicists had speculated that if the lanthanum aluminate gets thick enough, its electrical potential would increase to the point that some electrons would have to move from the top of the material to the bottom, to prevent what's called a "polarization catastrophe." The result is a conductive channel at the juncture with the strontium titanate -- much like the one that forms when a transistor is switched on. So Ashoori and his collaborators decided to measure the capacitance between that channel and a gate electrode on top of the lanthanum aluminate.

They were amazed by what they found: Although their results were somewhat limited by their experimental apparatus, it may be that an infinitesimal change in voltage will cause a large amount of charge to enter the channel between the two materials. "The channel may suck in charge -- shoomp! Like a vacuum," Ashoori says. "And it operates at room temperature, which is the thing that really stunned us."

Indeed, the material's capacitance is so high that the researchers don't believe it can be explained by existing physics. "We've seen the same kind of thing in semiconductors," Ashoori says, "but that was a very pure sample, and the effect was very small. This is a super-dirty sample and a super-big effect." It's still not clear, Ashoori says, just why the effect is so big: "It could be a new quantum-mechanical effect or some unknown physics of the material."

There is one drawback to the system that the researchers investigated: While a lot of charge will move into the channel between materials with a slight change in voltage, it moves slowly -- much too slowly for the type of high-frequency switching that takes place in computer chips. That could be because the samples of the material are, as Ashoori says, "super dirty"; purer samples might exhibit less electrical resistance. But it's also possible that, if researchers can understand the physical phenomena underlying the material's remarkable capacitance, they may be able to reproduce them in more practical materials.

Triscone cautions that wholesale changes to the way computer chips are manufactured will inevitably face resistance. "So much money has been injected into the semiconductor industry for decades that to do something new, you need a really disruptive technology," he says.

"It's not going to revolutionize electronics tomorrow," Ashoori agrees."But this mechanism exists, and once we know it exists, if we can understand what it is, we can try to engineer it."


Source

Friday, May 13, 2011

Scientists Afflict Computers With 'Schizophrenia' to Better Understand the Human Brain

The researchers used a virtual computer model, or "neural network," to simulate the excessive release of dopamine in the brain. They found that the network recalled memories in a distinctly schizophrenic-like fashion.

Their results were published in April in Biological Psychiatry.

"The hypothesis is that dopamine encodes the importance-the salience-of experience," says Uli Grasemann, a graduate student in the Department of Computer Science at The University of Texas at Austin."When there's too much dopamine, it leads to exaggerated salience, and the brain ends up learning from things that it shouldn't be learning from."

The results bolster a hypothesis known in schizophrenia circles as the hyperlearning hypothesis, which posits that people suffering from schizophrenia have brains that lose the ability to forget or ignore as much as they normally would. Without forgetting, they lose the ability to extract what's meaningful out of the immensity of stimuli the brain encounters. They start making connections that aren't real, or drowning in a sea of so many connections they lose the ability to stitch together any kind of coherent story.

The neural network used by Grasemann and his adviser, Professor Risto Miikkulainen, is called DISCERN. Designed by Miikkulainen, DISCERN is able to learn natural language. In this study it was used to simulate what happens to language as the result of eight different types of neurological dysfunction. The results of the simulations were compared by Ralph Hoffman, professor of psychiatry at the Yale School of Medicine, to what he saw when studying human schizophrenics.

In order to model the process, Grasemann and Miikkulainen began by teaching a series of simple stories to DISCERN. The stories were assimilated into DISCERN's memory in much the way the human brain stores information -- not as distinct units, but as statistical relationships of words, sentences, scripts and stories.

"With neural networks, you basically train them by showing them examples, over and over and over again," says Grasemann."Every time you show it an example, you say, if this is the input, then this should be your output, and if this is the input, then that should be your output. You do it again and again thousands of times, and every time it adjusts a little bit more towards doing what you want. In the end, if you do it enough, the network has learned."

In order to model hyperlearning, Grasemann and Miikkulainen ran the system through its paces again, but with one key parameter altered. They simulated an excessive release of dopamine by increasing the system's learning rate -- essentially telling it to stop forgetting so much.

"It's an important mechanism to be able to ignore things," says Grasemann."What we found is that if you crank up the learning rate in DISCERN high enough, it produces language abnormalities that suggest schizophrenia."

After being re-trained with the elevated learning rate, DISCERN began putting itself at the center of fantastical, delusional stories that incorporated elements from other stories it had been told to recall. In one answer, for instance, DISCERN claimed responsibility for a terrorist bombing.

In another instance, DISCERN began showing evidence of "derailment" -- replying to requests for a specific memory with a jumble of dissociated sentences, abrupt digressions and constant leaps from the first- to the third-person and back again.

"Information processing in neural networks tends to be like information processing in the human brain in many ways," says Grasemann."So the hope was that it would also break down in similar ways. And it did."

The parallel between their modified neural network and human schizophrenia isn't absolute proof the hyperlearning hypothesis is correct, says Grasemann. It is, however, support for the hypothesis, and also evidence of how useful neural networks can be in understanding the human brain.

"We have so much more control over neural networks than we could ever have over human subjects," he says."The hope is that this kind of modeling will help clinical research."


Source

Tuesday, May 10, 2011

Original Versus Copy: Researchers Develop Forgery-Proof Prototypes for Product Authentication

For this, all the data has to be checked electronically. In the framework of the "Crypta" project supported by the Federal Ministry for Transport, Innovation and Technology (BMVIT), scientists from Graz University of Technology have now developed a prototype which safeguards objects according to new standards.

Whether for checking the origin of foodstuffs or as proof of authenticity of drugs, the future will bring an increased use of electronic assistants to make sure that the quality is right. RFID (Radio Frequency Identification) technology enables objects to be identified wirelessly. "You need a reading device and an RFID tag which communicate with each other," explains project leader Jörn-Marc Schmidt from the Institute of Applied Information Processing and Communications at Graz University of Technology. There is a difference between active and passive tags: the former are connected to a power source, whereas the latter draw the required power directly from the field of the reading unit, which makes them particularly suitable, for instance, for applications in supermarkets.

Private Key

For a long time, the same electronic keys were used for these energy-efficient passive tags and their readers -- what experts call symmetrical methods. "In asymmetrical methods, the transmitter and receiver possess different keys. Secure digital signatures are thus made possible," adds Jörn-Marc Schmidt. Together with the semiconductor manufacturer austriamicrosystems and RF-iT Solutions GmbH, an RFID software and services provider from Graz, the researchers have now developed a prototype which, for the first time, uses such a standard asymmetrical method for passive tags.

"For every tag there is a public key and a private key which remains secret," explains Jörn-Marc Schmidt. This is a development which could be made use of everywhere where proof of authenticity is important. The research results are the gratifying outcome of the Crypta research project of the FIT-IT funding line of the BMVIT, which supports application-oriented research in information technology in particular.


Source

Monday, May 9, 2011

Toward Optical Computing in Handheld Electronics: Graphene Optical Modulators Could Lead to Ultrafast Communications

The team of researchers, led by UC Berkeley engineering professor Xiang Zhang, built a tiny optical device that uses graphene, a one-atom-thick layer of crystallized carbon, to switch light on and off. This switching ability is the fundamental characteristic of a network modulator, which controls the speed at which data packets are transmitted. The faster the data pulses are sent out, the greater the volume of information that can be sent. Graphene-based modulators could soon allow consumers to stream full-length, high-definition, 3-D movies onto a smartphone in a matter of seconds, the researchers said.

"This is the world's smallest optical modulator, and the modulator in data communications is the heart of speed control," said Zhang, who directs a National Science Foundation (NSF) Nanoscale Science and Engineering Center at UC Berkeley."Graphene enables us to make modulators that are incredibly compact and that potentially perform at speeds up to ten times faster than current technology allows. This new technology will significantly enhance our capabilities in ultrafast optical communication and computing."

In this latest work, described in the May 8 advance online publication of the journal Nature, researchers were able to tune the graphene electrically to absorb light in wavelengths used in data communication. This advance adds yet another advantage to graphene, which has gained a reputation as a wonder material since 2004, when it was first extracted from graphite, the same material found in pencil lead. That achievement earned University of Manchester scientists Andre Geim and Konstantin Novoselov the Nobel Prize in Physics last year.

Zhang worked with fellow faculty member Feng Wang, an assistant professor of physics and head of the Ultrafast Nano-Optics Group at UC Berkeley. Both Zhang and Wang are faculty scientists at Lawrence Berkeley National Laboratory's Materials Science Division.

"The impact of this technology will be far-reaching," said Wang."In addition to high-speed operations, graphene-based modulators could lead to unconventional applications due to graphene's flexibility and ease in integration with different kinds of materials. Graphene can also be used to modulate new frequency ranges, such as mid-infrared light, that are widely used in molecular sensing."

Graphene is the thinnest, strongest crystalline material yet known. It can be stretched like rubber, and it has the added benefit of being an excellent conductor of heat and electricity. This last quality of graphene makes it a particularly attractive material for electronics.

"Graphene is compatible with silicon technology and is very cheap to make," said Ming Liu, post-doctoral researcher in Zhang's lab and co-lead author of the study."Researchers in Korea last year have already produced 30-inch sheets of it. Moreover, very little graphene is required for use as a modulator. The graphite in a pencil can provide enough graphene to fabricate 1 billion optical modulators."

It is the behavior of photons and electrons in graphene that first caught the attention of the UC Berkeley researchers.

The researchers found that the energy of the electrons, referred to as the Fermi level, can be easily altered depending upon the voltage applied to the material. The graphene's Fermi level in turn determines whether the light is absorbed or not.

When a sufficiently negative voltage is applied, electrons are drawn out of the graphene and are no longer available to absorb photons. The light is "switched on" because the graphene becomes totally transparent as the photons pass through.

Graphene is also transparent at certain positive voltages because, in that situation, the electrons become packed so tightly that they cannot absorb the photons.

The researchers found a sweet spot in the middle where there is just enough voltage applied that the electrons can prevent the photons from passing, effectively switching the light "off."

"If graphene were a hallway, and electrons were people, you could say that, when the hall is empty, there's no one around to stop the photons," said Xiaobo Yin, co-lead author of the Nature paper and a research scientist in Zhang's lab."In the other extreme, when the hall is too crowded, people can't move and are ineffective in blocking the photons. It's in between these two scenarios that the electrons are allowed to interact with and absorb the photons, and the graphene becomes opaque."

In their experiment, the researchers layered graphene on top of a silicon waveguide to fabricate optical modulators. The researchers were able to achieve a modulation speed of 1 gigahertz, but they noted that the speed could theoretically reach as high as 500 gigahertz for a single modulator.

While components based upon optics have many advantages over those that use electricity, including the ability to carry denser packets of data more quickly, attempts to create optical interconnects that fit neatly onto a computer chip have been hampered by the relatively large amount of space required in photonics.

Light waves are less agile in tight spaces than their electrical counterparts, the researchers noted, so photon-based applications have been primarily confined to large-scale devices, such as fiber optic lines.

"Electrons can easily make an L-shaped turn because the wavelengths in which they operate are small," said Zhang."Light wavelengths are generally bigger, so they need more space to maneuver. It's like turning a long, stretch limo instead of a motorcycle around a corner. That's why optics require bulky mirrors to control their movements. Scaling down the optical device also makes it faster because the single atomic layer of graphene can significantly reduce the capacitance -- the ability to hold an electric charge -- which often hinders device speed."

Graphene-based modulators could overcome the space barrier of optical devices, the researchers said. They successfully shrank a graphene-based optical modulator down to a relatively tiny 25 square microns, roughly 400 times smaller than the cross-section of a human hair. The footprint of a typical commercial modulator can be as large as a few square millimeters.

Even at such a small size, graphene packs a punch in bandwidth capability. Graphene can absorb a broad spectrum of light, ranging over thousands of nanometers from ultraviolet to infrared wavelengths. This allows graphene to carry more data than current state-of-the-art modulators, which operate at a bandwidth of up to 10 nanometers, the researchers said.

"Graphene-based modulators not only offer an increase in modulation speed, they can enable greater amounts of data packed into each pulse," said Zhang."Instead of broadband, we will have 'extremeband.' What we see here and going forward with graphene-based modulators are tremendous improvements, not only in consumer electronics, but in any field that is now limited by data transmission speeds, including bioinformatics and weather forecasting. We hope to see industrial applications of this new device in the next few years."

Other UC Berkeley co-authors of this paper are graduate student Erick Ulin-Avila and post-doctoral researcher Thomas Zentgraf in Zhang's lab; and visiting scholar Baisong Geng and graduate student Long Ju in Wang's lab.

This work was supported through the Center for Scalable and Integrated Nano-Manufacturing (SINAM), an NSF Nanoscale Science and Engineering Center. Funding from the Department of Energy's Basic Energy Science program at Lawrence Berkeley National Laboratory also helped support this research.


Source

Sunday, May 8, 2011

Robot Engages Novice Computer Scientists

A product of CMU's famed Robotics Institute, Finch was designed specifically to make introductory computer science classes an engaging experience once again.

A white plastic, two-wheeled robot with bird-like features, Finch can quickly be programmed by a novice to say "Hello, World," do a little dance, or make its beak glow blue in response to cold temperature or some other stimulus. But the simple look of the tabletop robot is deceptive. Based on four years of educational research sponsored by the National Science Foundation, Finch includes a number of features that could keep students busy for a semester or more thinking up new things to do with it.

"Students are more interested and more motivated when they can work with something interactive and create programs that operate in the real world," said Tom Lauwers, who earned his Ph.D. in robotics at CMU in 2010 and is now an instructor in the Robotics Institute's CREATE Lab."We packed Finch with sensors and mechanisms that engage the eyes, the ears -- as many senses as possible."

Lauwers has launched a startup company, BirdBrain Technologies, to produce Finch and now sells them online at www.finchrobot.com for $99 each.

"Our vision is to make Finch affordable enough that every student can have one to take home for assignments," said Lauwers, who developed the robot with Illah Nourbakhsh, associate professor of robotics and director of the CREATE Lab. Less than a foot long, Finch easily fits in a backpack and is rugged enough to survive being hauled around and occasionally dropped.

Finch includes temperature and light sensors, a three-axis accelerometer and a bump sensor. It has color-programmable LED lights, a beeper and speakers. With a pencil inserted in its tail, Finch can be used to draw pictures. It can be programmed to be a moving, noise-making alarm clock. It even has uses beyond robotics: its accelerometer enables it to be used as a 3-D mouse to control a computer display.

Robot kits suitable for students as young as 12 are commercially available, but often cost more than the Finch, Lauwers said. What's more, the idea is to use the robot to make computer programming lessons more interesting, not to use precious instructional time to first build a robot.

Finch is a plug-and-play device, so no drivers or other software need to be installed beyond what is used in typical computer science courses. Finch connects with and receives power from the computer over a 15-foot USB cable, eliminating batteries and off-loading its computation to the computer. Support for a wide range of programming languages and environments is coming, including graphical languages appropriate for young students. Finch currently can be programmed with the Java and Python languages widely used by educators.

A number of assignments are available on the Finch Robot website to help teachers drop Finch into their lesson plans, and the website allows instructors to upload their own assignments or ideas in return for company-provided incentives. The robot has been classroom-tested at the Community College of Allegheny County, Pa., and by instructors in high school, university and after-school programs.

"Computer science now touches virtually every scientific discipline and is a critical part of most new technologies, yet U.S. universities saw declining enrollments in computer science through most of the past decade," Nourbakhsh said."If Finch can help motivate students to give computer science a try, we think many more students will realize that this is a field that they would enjoy exploring."


Source

Saturday, May 7, 2011

Unique Norwegian Nano-Product: Processor Chips With a Global Market

"Actually, we are the only ones who have succeeded in developing radar transceivers like these," says Dag T. Wisland, CEO of Novelda AS.

Small company, heavyweight technology

With just 20 employees, Novelda develops high-performance nano-electronics that pave the way for new, advanced radar technology.

Although the company is small, its technology is absolutely cutting-edge. Novelda's silicon chips, which measure just 2 x 2 mm, have made an international breakthrough. Each chip contains nearly two million transistors and 512 radars that simultaneously sense and transmit information.

Unlike conventional radar devices, which must be placed some metres away from the object to be measured, Novelda's can be located directly on the object. This capability opens up opportunities for product development with all sorts of exciting applications.

"We have customers located all over the world who are developing applications based on our technology," explains Chief Marketing Officer Aage Kalsæg."In the health care sector alone, our sensors are used in solutions being developed for monitoring heart rate, taking wireless ECG readings, and measuring fluid in the lungs."

"Some of the other exciting development projects are snow depth radars that combine GPS with water content measurement, as well as radars that can penetrate walls and rubble and find people trapped in collapsed buildings. The possibilities are endless."

Intensive R&D is crucial

Novelda's path -- from start-up company in 2004 to technological market leader -- has been an arduous one, and continuity in research has been a critical element of the company's success. Novelda has received public funding from the Research Council of Norway and its programmes, such as User-driven Research-based Innovation (BIA) and Core Competence and Growth in ICT (VERDIKT), as well as from EUREKA's Eurostars Programme, whose funding and support are specifically dedicated to SMEs.


Source

Friday, May 6, 2011

EEG Headset With Flying Harness Lets Users 'Fly' by Controlling Their Thoughts

Creative director and Rensselaer MFA candidate Yehuda Duenyas describes the "Infinity Simulator" as a platform similar to a gaming console -- like the Wii or the Kinect -- writ large.

"Instead of you sitting and controlling gaming content, it's a whole system that can control live elements -- so you can control 3-D rigging, sound, lights, and video," said Duenyas, who works under the moniker"xxxy.""It's a system for creating hybrids of theater, installation, game, and ride."

Duenyas created the "Infinity Simulator" with a team of collaborators, including Michael Todd, a Rensselaer 2010 graduate in computer science. Duenyas will exhibit the new system in the art installation "The Ascent" on May 12 at the Curtis R. Priem Experimental Media and Performing Arts Center (EMPAC).

Ten computer programs running simultaneously link the commercially available EEG headset to the computer-controlled 3-D flying harness and various theater systems, said Todd.

Within the theater, the rigging -- including the harness -- is controlled by a Stage Tech NOMAD console; lights are controlled by an ION console running MIDI show control; sound through MAX/MSP; and video through Isadora and Jitter. The "Infinity Simulator," a series of three C programs written by Todd, acts as intermediary between the headset and the theater systems, connecting and conveying all input and output.

"We've built a software system on top of the rigging control board and now have control of it through an iPad, and since we have the iPad control, we can have anything control it," said Duenyas."The 'Infinity Simulator' is the center; everything talks to the 'Infinity Simulator.'"

The May 12 "The Ascent" installation is only one experience made possible by the new platform, Duenyas said.

"'The Ascent' embodies the maiden experience that we'll be presenting," Duenyas said."But we've found that it's a versatile platform to create almost any type of experience that involves rigging, video, sound, and light. The idea is that it's reactive to the users' body; there's a physical interaction."

Duenyas, a Brooklyn-based artist and theater director, specializes in experiential theater performances.

"The thing that I focus on the most is user experience," Duenyas said."All the shows I do with my theater company and on my own involve a lot of set and set design -- you're entering into a whole world. You're having an experience that is more than going to a show, although a show is part of it."

The"Infinity Simulator" stemmed from an idea Duenyas had for such a theatrical experience.

"It started with an idea that I wanted to create a simulator that would give people a feeling of infinity," Duenyas said. His initial vision was that of a room similar to a Cave Automated Virtual Environment -- a room paneled with projection screens -- in which participants would be able to float effortlessly in an environment intended to evoke a glimpse into infinity.

At Rensselaer, Duenyas took advantage of the technology at hand to explore his idea, first with a video game he developed in 2010, then -- working through the Department of the Arts -- with EMPAC's computer-controlled 3-D theatrical flying harness.

"The charge of the arts department is to allow the artists that they bring into the department to use technology to enhance what they've been doing already," Duenyas said."In coming here (EMPAC), and starting to translate our ideas into a physical space, so many different things started opening themselves up to us."

The 2010 video game, also developed with Todd, tracked the movements -- pitch and yaw -- of players suspended in a custom-rigged harness, allowing players to soar through simulated landscapes. Duenyas said that that game (also called the "Infinity Simulator") and the new platform are part of the same vision.

EMPAC Director Johannes Goebel saw the game on display at the 2010 GameFest and discussed the custom-designed 3-D theatrical flying rig in EMPAC with Duenyas. Working through the Arts Department, Duenyas submitted a proposal to work with the rig, and his proposal was accepted.

Duenyas and his team experimented -- first gaining peripheral control over the system, and then linking it to the EEG headset -- and created the Ascent installation as an initial project. In the installation, the Infinity Simulator is programmed to respond to relaxation.

"We're measuring two brain states -- alpha and theta -- waking consciousness and everyday brain computational processing," said Duenyas."If you close your eyes and take a deep breath, that processing power decreases. When it decreases below a certain threshold, that is the trigger for you to elevate."

As a user rises, their ascent triggers a changing display of lights, sound, and video. Duenyas said he wants to hint at transcendental experience, while keeping the door open for a more circumspect interpretation.

"The point is that the user is trying to transcend the everyday and get into this meditative state so they can have this experience. I see it as some sort of iconic spiritual simulator. That's the serious side," he said."There's also a real tongue-in-cheek side of my work: I want clouds, I want Terry Gilliam's animated fist to pop out of a cloud and hit you in the face. It's mixing serious religious symbology, but not taking it seriously."

The humor is prompted, in part, by the limitations of this earliest iteration of Duenyas' vision.

"It started with, 'I want to have a glimpse of infinity,' 'I want to float in space.' Then you get in the harness and you're like 'man, this harness is uncomfortable,'" he said."In order to achieve the original vision, we had to build an infrastructure, and I still see development of the infinity experience is a ways off; but what we can do with the infrastructure in a realistic time frame is create 'The Ascent,' which is going to be really fun, and totally other."

Creating the"Infinity Simulator" has prompted new possibilities.

"The vision now is to play with this fun system that we can use to build any experience," he said."It's sort of overwhelming because you could do so many things -- you could create a flight through cumulus clouds, you could create an augmented physicality parkour course where you set up different features in the room and guide yourself to different heights. It's limitless."


Source

Thursday, May 5, 2011

Transistors Reinvented Using New 3-D Structure

The three-dimensional Tri-Gate transistors represent a fundamental departure from the two-dimensional planar transistor structure that has powered not only all computers, mobile phones and consumer electronics to date, but also the electronic controls within cars, spacecraft, household appliances, medical devices and virtually thousands of other everyday devices for decades.

"Intel's scientists and engineers have once again reinvented the transistor, this time utilizing the third dimension," said Intel President and CEO Paul Otellini."Amazing, world-shaping devices will be created from this capability as we advance Moore's Law into new realms."

Scientists have long recognized the benefits of a 3-D structure for sustaining the pace of Moore's Law as device dimensions become so small that physical laws become barriers to advancement. The key to this latest breakthrough is Intel's ability to deploy its novel 3-D Tri-Gate transistor design into high-volume manufacturing, ushering in the next era of Moore's Law and opening the door to a new generation of innovations across a broad spectrum of devices.

Moore's Law is a forecast for the pace of silicon technology development which states that, roughly every two years, transistor density will double while functionality and performance increase and costs decrease. It has been the basic business model for the semiconductor industry for more than 40 years.

Unprecedented Power Savings and Performance Gains

Intel's 3-D Tri-Gate transistors enable chips to operate at lower voltage with lower leakage, providing an unprecedented combination of improved performance and energy efficiency compared to previous state-of-the-art transistors. The capabilities give chip designers the flexibility to choose transistors targeted for low power or high performance, depending on the application.

The 22nm 3-D Tri-Gate transistors provide up to a 37 percent performance increase at low voltage versus Intel's 32nm planar transistors. This gain means that they are ideal for use in small handheld devices, which need to use as little energy as possible to "switch" back and forth. Alternatively, the new transistors consume less than half the power of 2-D planar transistors on 32nm chips at the same performance.

"The performance gains and power savings of Intel's unique 3-D Tri-Gate transistors are like nothing we've seen before," said Mark Bohr, Intel Senior Fellow."This milestone is going further than simply keeping up with Moore's Law. The low-voltage and low-power benefits far exceed what we typically see from one process generation to the next. It will give product designers the flexibility to make current devices smarter and wholly new ones possible. We believe this breakthrough will extend Intel's lead even further over the rest of the semiconductor industry."

Continuing the Pace of Innovation -- Moore's Law

Transistors continue to get smaller, cheaper and more energy efficient in accordance with Moore's Law -- named for Intel co-founder Gordon Moore. Because of this, Intel has been able to innovate and integrate, adding more features and computing cores to each chip, increasing performance, and decreasing manufacturing cost per transistor.

Sustaining the progress of Moore's Law becomes even more complex with the 22nm generation. Anticipating this, Intel research scientists in 2002 invented what they called a Tri-Gate transistor, named for the three sides of the gate. This announcement follows further years of development in Intel's highly coordinated research-development-manufacturing pipeline, and marks the implementation of this work for high-volume manufacturing.

The 3-D Tri-Gate transistors are a reinvention of the transistor. The traditional "flat" two-dimensional planar gate is replaced with an incredibly thin three-dimensional silicon fin that rises up vertically from the silicon substrate. Control of current is accomplished by implementing a gate on each of the three sides of the fin -- two on the sides and one across the top -- rather than just one on top, as is the case with the 2-D planar transistor. The additional control enables as much transistor current flowing as possible when the transistor is in the "on" state (for performance), as close to zero as possible when it is in the "off" state (to minimize power), and enables the transistor to switch very quickly between the two states (again, for performance).

Just as skyscrapers let urban planners optimize available space by building upward, Intel's 3-D Tri-Gate transistor structure provides a way to manage density. Since these fins are vertical in nature, transistors can be packed closer together, a critical component to the technological and economic benefits of Moore's Law. For future generations, designers also have the ability to continue growing the height of the fins to get even more performance and energy-efficiency gains.

"For years we have seen limits to how small transistors can get," said Moore."This change in the basic structure is a truly revolutionary approach, and one that should allow Moore's Law, and the historic pace of innovation, to continue."

World's First Demonstration of 22nm 3-D Tri-Gate Transistors

The 3-D Tri-Gate transistor will be implemented in the company's upcoming manufacturing process, called the 22nm node, in reference to the size of individual transistor features. More than 6 million 22nm Tri-Gate transistors could fit in the period at the end of this sentence.

Intel has demonstrated the world's first 22nm microprocessor, codenamed "Ivy Bridge," working in a laptop, server and desktop computer. Ivy Bridge-based Intel® Core™ family processors will be the first high-volume chips to use 3-D Tri-Gate transistors. Ivy Bridge is slated for high-volume production readiness by the end of this year.

This silicon technology breakthrough will also aid in the delivery of more highly integrated Intel® Atom™ processor-based products that scale the performance, functionality and software compatibility of Intel® architecture while meeting the overall power, cost and size requirements for a range of market segment needs.


Source

Wednesday, May 4, 2011

Evolutionary Lessons for Wind Farm Efficiency

Senior Lecturer Dr Frank Neumann, from the School of Computer Science, is using a "selection of the fittest" step-by-step approach called "evolutionary algorithms" to optimise wind turbine placement. This takes into account wake effects, the minimum amount of land needed, wind factors and the complex aerodynamics of wind turbines.

"Renewable energy is playing an increasing role in the supply of energy worldwide and will help mitigate climate change," says Dr Neumann."To further increase the productivity of wind farms, we need to exploit methods that help to optimise their performance."

Dr Neumann says the question of exactly where wind turbines should be placed to gain maximum efficiency is highly complex. "An evolutionary algorithm is a mathematical process where potential solutions keep being improved a step at a time until the optimum is reached," he says.

"You can think of it like parents producing a number of offspring, each with differing characteristics," he says."As with evolution, each population or 'set of solutions' from a new generation should get better. These solutions can be evaluated in parallel to speed up the computation."

Other biology-inspired algorithms to solve complex problems are based on ant colonies.

"Ant colony optimisation" uses the principle of ants finding the shortest way to a source of food from their nest.

"You can observe them in nature, they do it very efficiently communicating between each other using pheromone trails," says Dr Neumann."After a certain amount of time, they will have found the best route to the food -- problem solved. We can also solve human problems using the same principles through computer algorithms."

Dr Neumann has come to the University of Adelaide this year from Germany where he worked at the Max Planck Institute. He is working on wind turbine placement optimisation in collaboration with researchers at the Massachusetts Institute of Technology.

"Current approaches to solving this placement optimisation can only deal with a small number of turbines," Dr Neumann says."We have demonstrated an accurate and efficient algorithm for as many as 1000 turbines."

The researchers are now looking to fine-tune the algorithms even further using different models of wake effect and complex aerodynamic factors.


Source

Tuesday, May 3, 2011

College Students' Use of Kindle DX Points to E-Reader's Role in Academia

The UW last year was one of seven U.S. universities that participated in a pilot study of the Kindle DX, a larger version of the popular e-reader. UW researchers who study technology looked at how students involved in the pilot project did their academic reading.

"There is no e-reader that supports what we found these students doing," said first author Alex Thayer, a UW doctoral student in Human Centered Design and Engineering."It remains to be seen how to design one. It's a great space to get into, there's a lot of opportunity."

Thayer is presenting the findings in Vancouver, B.C. at the Association for Computing Machinery's Conference on Human Factors in Computing Systems, where the study received an honorable mention for best paper.

"Most e-readers were designed for leisure reading -- think romance novels on the beach," said co-author Charlotte Lee, a UW assistant professor of Human Centered Design and Engineering."We found that reading is just a small part of what students are doing. And when we realize how dynamic and complicated a process this is, it kind of redefines what it means to design an e-reader."

Some of the other schools participating in the pilot project conducted shorter studies, generally looking at the e-reader's potential benefits and drawbacks for course use. The UW study looked more broadly at how students did their academic reading, following both those who incorporated the e-reader into their routines and those who did not.

"We were not trying to evaluate the device, per se, but wanted to think long term, really looking to the future of e-readers, what are students trying to do, how can we support that," Lee said.

The researchers interviewed 39 first-year graduate students in the UW's Department of Computer Science & Engineering, 7 women and 32 men, ranging from 21 to 53 years old.

By spring quarter of 2010, seven months into the study, less than 40 percent of the students were regularly doing their academic reading on the Kindle DX. Reasons included the device's lack of support for taking notes and difficulty in looking up references. (Amazon, which makes the Kindle DX, has since improved some of these features.)

UW researchers continued to interview all the students over the nine-month period to find out more about their reading habits, with or without the e-reader. They found:

  • Students did most of the reading in fixed locations: 47 percent of reading was at home, 25 percent at school, 17 percent on a bus and 11 percent in a coffee shop or office.
  • The Kindle DX was more likely to replace students' paper-based reading than their computer-based reading.
  • Of the students who continued to use the device, some read near a computer so they could look up references or do other tasks that were easier to do on a computer. Others tucked a sheet of paper into the case so they could write notes.
  • With paper, three quarters of students marked up texts as they read. This included highlighting key passages, underlining, drawing pictures and writing notes in margins.
  • A drawback of the Kindle DX was the difficulty of switching between reading techniques, such as skimming an article's illustrations or references just before reading the complete text. Students frequently made such switches as they read course material.
  • The digital text also disrupted a technique called cognitive mapping, in which readers used physical cues such as the location on the page and the position in the book to go back and find a section of text or even to help retain and recall the information they had read.

Lee predicts that over time software will help address some of these issues. She even envisions niche software that could support reading styles specific to certain disciplines.

"You can imagine that a historian going through illuminated texts is going to have very different navigation needs than someone who is comparing algorithms," Lee said.

It's likely that desktop computers, laptops, tablet computers and yes, even paper, will play a role in academic reading's future. But the authors say e-readers will also find their place. Thayer imagines the situation will be similar to today's music industry, where mp3s, CDs and LPs all coexist in music-lovers' listening habits.

"E-readers are not where they need to be in order to support academic reading," Lee concludes. But asked when e-readers will reach that point, she predicts:"It's going to be sooner than we think."

Other co-authors are Linda Hwang, Heidi Sales, Pausali Sen and Ninad Dalal of the UW.


Source

Thursday, April 28, 2011

Good Eggs: Nanomagnets Offer Food for Thought About Computer Memories

For a study described in a new paper, NIST researchers used electron-beam lithography to make thousands of nickel-iron magnets, each about 200 nanometers (billionths of a meter) in diameter. Most of the magnets are shaped like ellipses, slightly flattened circles; the researchers also made some in three different egg-like shapes with increasingly pointy ends. It's all part of NIST research on nanoscale magnetic materials, devices and measurement methods to support development of future magnetic data storage systems.

It turns out that even small distortions in magnet shape can lead to significant changes in magnetic properties. Researchers discovered this by probing the magnets with a laser and analyzing what happens to the "spins" of the electrons, a quantum property responsible for magnetic orientation. Changes in the spin orientation can propagate through the magnet like waves at different frequencies. The more egg-like the magnet, the more complex the wave patterns and their related frequencies. (Something similar happens when you toss a pebble in an asymmetrically shaped pond.) The shifts are most pronounced at the ends of the magnets.
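
The wave picture can be loosely illustrated with a coupled-oscillator toy model: breaking the symmetry of a chain shifts its normal-mode frequencies, much as the egg shape shifts the spin-wave frequencies at the magnet's ends. This is only an analogy; the chain, couplings and numbers below are illustrative assumptions, not NIST's micromagnetic model.

```python
# Normal-mode frequencies of a chain of coupled oscillators, computed
# from the eigenvalues of the (symmetric) stiffness matrix.
import numpy as np

def mode_frequencies(stiffness: np.ndarray) -> np.ndarray:
    """Frequencies of a chain with per-site stiffness and unit neighbour
    coupling; omega = sqrt(eigenvalue) for unit masses."""
    n = len(stiffness)
    k = np.diag(stiffness + 2.0)  # on-site term plus two neighbour bonds
    k -= np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    return np.sqrt(np.linalg.eigvalsh(k))

symmetric = np.ones(8)                # "ellipse": uniform chain
egg = np.ones(8); egg[-2:] = 2.0      # "egg": stiffer, pointier end

print(np.round(mode_frequencies(symmetric), 3))
print(np.round(mode_frequencies(egg), 3))  # spectrum shifts near the end
```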

To confirm localized magnetic effects and "color" the eggs, scientists simulated various magnets using NIST's object-oriented micromagnetic framework (OOMMF). Lighter colors indicate stronger frequency signals.

The egg effects explain erratic behavior observed in large arrays of nanomagnets, which may be imperfectly shaped by the lithography process. Such distortions can affect switching in magnetic devices. The egg study results may be useful in developing random-access memories (RAM) based on interactions between electron spins and magnetized surfaces. Spin-RAM is one approach to making future memories that could provide high-speed access to data while reducing processor power needs by storing data permanently in ever-smaller devices. Shaping magnets like eggs breaks up a symmetric frequency pattern found in ellipse structures and thus offers an opportunity to customize and control the switching process.

"For example, intentional patterning of egg-like distortions into spinRAM memory elements may facilitate more reliable switching," says NIST physicist Tom Silva, an author of the new paper.

"Also, this study has provided the Easter Bunny with an entirely new market for product development."


Source

Monday, April 25, 2011

Creative, Online Learning Tool Helps Students Tackle Real-World Problems

A new computer interface developed at Iowa State University is helping students use what they've learned in the horticulture classroom and apply it to problems they'll face when they are on the job site.

The project, called ThinkSpace, is led by a group of ISU faculty including Ann Marie VanDerZanden, professor of horticulture and associate director of ISU's Center for Excellence in Learning and Teaching.

ThinkSpace has many different features that make it an effective way to teach using ill-structured problems. This type of problem allows students to choose from multiple paths to arrive at a solution.

By contrast, well-structured problems have a straight path to the one, clear solution.

In horticulture, the ThinkSpace platform is being used in upper-level classes and requires students to draw on what they've learned throughout their time studying horticulture and apply it to real-world problems.

In these classes, VanDerZanden gives students computer-delivered information about a residential landscape.

That information includes illustrations of the work site, descriptions of the trees on the property, explanations of the problems the homeowner is experiencing, mock audio interviews with the property owner, and just about anything else a horticulture professional would encounter when approaching a homeowner with a landscape problem.

Also, just like in real life, some of the information is relevant to the problem, and some information is not.

"It forces students to take this piece of information, and that piece of information, and another piece of information, and then figure out what is wrong -- in this case with a plant," said VanDerZanden.

When the students think they have determined the problem, they enter their responses into the online program.

VanDerZanden can then check the responses.

For those students on the right track, she allows them to continue toward a solution.

For those who may have misdiagnosed the situation, VanDerZanden steers the students toward the right track before allowing them to move forward.

So far, the response from students has been very positive.

"The students like the variety," said VanDerZanden."They like struggling with real-world problems, rather that something that is just made up. On the other hand, they can get frustrated because there is not a clear-cut answer."

The entire process leverages the classroom experience into something the students can use at work.

"I think this really enhances student learning," said VanDerZanden."Students apply material from previous classes to a plausible, real-world situation. For instance students see what happens when a tree was pruned really hard to allow a piece of equipment to get into the customer's yard. As a result, the tree sends out a lot of new succulent shoots, and then there is an aphid infestation in the tree. It helps students start making all of those connections."

The ThinkSpace interface was developed from existing technologies already being used in ISU's College of Veterinary Medicine, College of Engineering and department of English.

VanDerZanden and her group recently received a $446,000 grant from the United States Department of Agriculture Higher Education Challenge Grant program to further develop ThinkSpace so it can be more useful to other academic areas and universities.

As part of this research, VanDerZanden is also working with faculty members at University of Pennsylvania, Philadelphia; University of Wisconsin, Madison; and Kansas State University, Manhattan.


Source

Sunday, April 24, 2011

Decoding Human Genes Is Goal of New Open-Source Encyclopedia

In a paper that will be published in the journal PLoS Biology on 19 April 2011, the project -- called ENCODE (Encyclopedia Of DNA Elements) -- provides an overview of the team's ongoing efforts to interpret the human genome sequence, as well as a guide for using the vast amounts of data and resources produced so far by the project.

Ross Hardison, the T. Ming Chu Professor of Biochemistry and Molecular Biology at Penn State University and one of the principal investigators of the ENCODE Project team, explained that the philosophy behind the project is one of scientific openness, transparency, and collaboration across sub-disciplines. ENCODE comes on the heels of the now-complete Human Genome Project -- a 13-year effort aimed at identifying all the approximately 20,000 to 25,000 genes in human DNA -- which also was based on the belief in open-source data sharing to further scientific discovery and public understanding of science.

The ENCODE Project has accomplished this goal by publishing its database at genome.ucsc.edu/ENCODE, and by posting tools to facilitate data use at encodeproject.org. "ENCODE resources are already being used by scientists for discovery," Hardison said. "But what's kind of revolutionary is that they also are being used in classes to train students in all areas of biology. Our classes here at Penn State are using real data on genomic variation and function in classroom problem sets, shortly after the labs have generated them."

Hardison explained that there are about 3 billion base pairs in the human genome, making the cataloging and interpretation of the information a monumental task. "We have a very lofty goal: to identify the function of every nucleotide of the human genome," he said. "Not only are we discovering the genes that give information to cells and make proteins, but we also want to know what determines that the proteins are made in the right cells, and at the appropriate time. Finding the DNA elements that govern this regulated expression of genes is a major goal of ENCODE." Hardison explained that ENCODE's job is to identify the human genome's functional regions, many of which are quite esoteric. "The human DNA sequence often is described as a kind of language, but without a key to interpret it, without a full understanding of the 'grammar,' it might as well be a big jumble of letters." Hardison added that the ENCODE Project supplies data such as where proteins bind to DNA and where parts of DNA are augmented by additional chemical markers. These proteins and chemical additions are keys to understanding how different cells within the human body interpret the language of DNA.

In the soon-to-be-published paper, the team shows how the ENCODE data can be immediately useful in interpreting associations between disease and DNA sequences that can vary from person to person -- single nucleotide polymorphisms (SNPs). For example, scientists know that DNA variants located upstream of a gene called MYC are associated with multiple cancers, but until recently the mechanism behind this association was a mystery. ENCODE data already have been used to confirm that the variants can change binding of certain proteins, leading to enhanced expression of the MYC gene and, therefore, to the development of cancer. ENCODE also has made similar studies possible for thousands of other DNA variants that may be associated with susceptibility to a variety of human diseases.

Another of the principal investigators of the project, Richard Myers, president and director of the HudsonAlpha Institute for Biotechnology, explained that the ENCODE Project is unique because it requires collaboration from multiple people all over the world at the cutting edge of their fields. "People are working in a coordinated manner to figure out the function of our human genome," he said. "The importance of the project extends beyond basic knowledge of who and what we are as humans, and into an understanding of human health and disease."

Scientists with the ENCODE Project also are applying up to 20 different tests in 108 commonly used cell lines to compile important data. John Stamatoyannopoulos, an assistant professor of genome sciences and medicine at the University of Washington and another principal investigator, explained that the ENCODE Project has been responsible for producing many assays -- molecular-biology procedures for measuring the activity of biochemical agents -- that are now fundamental to biology. "Widely used computational tools for processing and interpreting large-scale functional genomic data also have been developed by the project," Stamatoyannopoulos added. "The depth, quality, and diversity of the ENCODE data are unprecedented."

Hardison said that the portion of the human genome that actually codes for protein is about 1.1 percent. "That's still a lot of data," he said. "And to complicate matters even more, most mechanisms for gene expression and regulation lie outside what we call the 'coding' region of DNA." Hardison explained that scientists have a limited number of tools with which to explore the genome, and one that has been used widely is inter-species comparison. "For example, we can compare humans and chimpanzees and glean some fascinating information," Hardison said. "But very few proteins and other DNA products differ in any fundamental way between humans and chimps. The important difference between us and our close cousins lies in gene expression -- the basic level at which genes give rise to traits such as eye color, height, and susceptibility to a particular disease. ENCODE is helping to map the very proteins involved in gene regulation and gene expression. Our paper not only explains how to find the data, but it also explains how to apply the data to interpret the human genome."

The ENCODE Project is funded primarily by the National Human Genome Research Institute of the U.S. National Institutes of Health.


Source

Saturday, April 23, 2011

'3-D Towers' of Information Double Data Storage Areal Density

This greatly enhances the amount of data that can be stored in a magnetic storage device and provides a way past the physical limits that the currently used technology is approaching. The team presents their findings in the American Institute of Physics' Journal of Applied Physics.

"Over the past 50 years, with the rise of multimedia devices, the worldwide Internet, and the general growth in demand for greater data storage capacity, the areal density of information in magnetic hard disk drives has exponentially increased by 7 orders of magnitude," says Jerome Moritz, a researcher at SPINTEC, in Grenoble."This areal density is now about 500Gbit/in2, and the technology presently used involves writing the information on a granular magnetic material. This technology is now reaching some physical limits because the grains are becoming so small that their magnetization becomes unstable and the information written on them is gradually lost."

Therefore, new approaches are needed for magnetic data storage densities exceeding 1 Tbit/in2.

"Our new approach involves using bit-patterned media, which are made of arrays of physically separated magnetic nanodots, with each nanodot carrying one bit of information. To further extend the storage density, it's possible to increase the number of bits per dots by stacking several magnetic layers to obtain a multilevel magnetic recording device," explains Moritz.

In that context, Moritz and colleagues were able to demonstrate that the best way to achieve a 2-bit-per-dot medium involves stacking in-plane and perpendicular-to-plane magnetic media atop each dot. The perpendicularly magnetized layer can be read right above the dot, whereas the in-plane magnetized layer can be read between dots. This enables doubling of the areal density for a given dot size by taking better advantage of the whole patterned media area.
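
The doubling is easy to see in terms of dot pitch: lithography fixes the number of dots per unit area, so carrying two bits per dot doubles the areal density at the same pitch. A small Python sketch with an assumed pitch (the number below is illustrative, not from the paper):

```python
# Areal density of bit-patterned media as a function of bits per dot.
DOT_PITCH_NM = 25.0  # centre-to-centre dot spacing in nm (assumed)

def areal_density_tbit_per_in2(bits_per_dot: int, pitch_nm: float) -> float:
    dots_per_in2 = (25.4e6 / pitch_nm) ** 2  # 1 inch = 25.4e6 nm
    return dots_per_in2 * bits_per_dot / 1e12

print(areal_density_tbit_per_in2(1, DOT_PITCH_NM))  # ~1.0 Tbit/in2, single level
print(areal_density_tbit_per_in2(2, DOT_PITCH_NM))  # ~2.1 Tbit/in2, two levels
```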


Source

Friday, April 22, 2011

'Time Machine' Made to Visually Explore Space and Time in Videos: Time-Lapse GigaPans Provide New Way to Access Big Data

Viewers, for instance, can use the system to focus in on the details of a booth within a panorama of a carnival midway, but also reverse time to see how the booth was constructed. Or they can watch a group of plants sprout, grow and flower, shifting perspective to watch some plants move wildly as they grow while others get eaten by caterpillars. Or they can view a computer simulation of the early universe, watching as gravity works across 600 million light-years to condense matter into filaments and finally into stars that can be seen by zooming in for a close-up.

"With GigaPan Time Machine, you can simultaneously explore space and time at extremely high resolutions," said Illah Nourbakhsh, associate professor of robotics and head of the CREATE Lab."Science has always been about narrowing your point of view -- selecting a particular experiment or observation that you think might provide insight. But this system enables what we call exhaustive science, capturing huge amounts of data that can then be explored in amazing ways."

The system is an extension of the GigaPan technology developed by the CREATE Lab and NASA, which can capture a mosaic of hundreds or thousands of digital pictures and stitch those frames into a panorama that can be interactively explored via computer. To extend GigaPan into the time dimension, image mosaics are repeatedly captured at set intervals, and then stitched across both space and time to create a video in which each frame can be hundreds of millions, or even billions, of pixels.

An enabling technology for time-lapse GigaPans is a feature of the HTML5 language that has been incorporated into such browsers as Google's Chrome and Apple's Safari. HTML5, the latest revision of the HyperText Markup Language (HTML) standard that is at the core of the Internet, makes browsers capable of presenting video content without use of plug-ins such as Adobe Flash or Quicktime.

Using HTML5, CREATE Lab computer scientists Randy Sargent, Chris Bartley and Paul Dille developed algorithms and software architecture that make it possible to shift seamlessly from one video portion to another as viewers zoom in and out of Time Machine imagery. To keep bandwidth manageable, the GigaPan site streams only those video fragments that pertain to the segment and/or time frame being viewed.
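
A minimal sketch of that kind of view-dependent streaming logic: cut each gigapixel frame into a pyramid of fixed-size video tiles, and fetch only the tile covering the current viewport and zoom level. The tile size, path scheme and function names are assumptions for illustration, not the actual GigaPan implementation.

```python
# Map a view centre and zoom level to the single tile the client needs,
# so only that video fragment has to be streamed.
TILE = 512  # tile width/height in pixels (assumed)

def tile_for_view(centre_x: float, centre_y: float, zoom: int) -> str:
    """centre_x/centre_y are in full-resolution pixel coordinates;
    zoom 0 is full resolution, each higher level halves the resolution."""
    scale = 2 ** zoom
    col = int(centre_x / scale) // TILE
    row = int(centre_y / scale) // TILE
    return f"/tiles/z{zoom}/r{row}/c{col}.mp4"  # hypothetical layout

# The same scene centre maps to different tiles at coarse and fine zooms:
print(tile_for_view(1_500_000, 800_000, zoom=8))  # zoomed out, coarse tile
print(tile_for_view(1_500_000, 800_000, zoom=2))  # zoomed in, fine tile
```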

"We were crashing the browsers early on," Sargent recalled."We're really pushing the browser technology to the limits."

Guidelines on how individuals can capture time-lapse images using GigaPan cameras are included on the site created for hosting the new imagery's large data files, http://timemachine.gigapan.org. Sargent explained that the CREATE Lab is eager to work with people who want to capture Time Machine imagery with GigaPan, or use the visualization technology for other applications.

Once a Time Machine GigaPan has been created, viewers can annotate and save their explorations of it in the form of video "Time Warps."

Though the time-lapse mode is an extension of the original GigaPan concept, scientists already are applying the visualization techniques to other types of Big Data. Carnegie Mellon's Bruce and Astrid McWilliams Center for Cosmology, for instance, has used it to visualize a simulation of the early universe performed at the Pittsburgh Supercomputing Center by Tiziana Di Matteo, associate professor of physics.

"Simulations are a huge bunch of numbers, ugly numbers," Di Matteo said."Visualizing even a portion of a simulation requires a huge amount of computing itself." Visualization of these large data sets is crucial to the science, however."Discoveries often come from just looking at it," she explained.

Rupert Croft, associate professor of physics, said cosmological simulations are so massive that only a segment can be visualized at a time using standard techniques. Yet whatever is happening within that segment is being affected by forces elsewhere in the simulation that cannot be readily accessed. By converting the entire simulation into a time-lapse GigaPan, however, Croft and his Ph.D. student, Yu Feng, were able to create an image that provided both the big picture of what was happening in the early universe and the ability to look in detail at any region of interest.

Using a conventional GigaPan camera, Janet Steven, an assistant professor of biology at Sweet Briar College in Virginia, has created time-lapse imagery of rapid-growing brassicas, known as Wisconsin Fast Plants. "This is such an incredible tool for plant biology," she said. "It gives you the advantage of observing individual plants, groups of plants and parts of plants, all at once."

Steven, who has received GigaPan training through the Fine Outreach for Science program, said time-lapse photography has long been used in biology, but the GigaPan technology makes it possible to observe a number of plants in detail without having separate cameras for each plant. Even as one plant is studied in detail, it's possible to also see what neighboring plants are doing and how that might affect the subject plant, she added.

Steven said creating time-lapse GigaPans of entire landscapes could be a powerful tool for studying seasonal change in plants and ecosystems, an area of increasing interest for understanding climate change. Time-lapse GigaPan imagery of biological experiments also could be an educational tool, allowing students to make independent observations and develop their own hypotheses.

Google Inc. supported development of GigaPan Time Machine.


Source

Thursday, April 21, 2011

CAPTCHAs With Chaos: Strong Protection for Weak Passwords

Researchers at the Max Planck Institute for the Physics of Complex Systems in Dresden have been inspired by the physics of critical phenomena in their attempts to significantly improve password protection. The researchers split a password into two sections. With the first, easy-to-memorize section they encrypt a CAPTCHA ("completely automated public Turing test to tell computers and humans apart") -- an image that computer programs per se have difficulty deciphering. The researchers also make it more difficult for password-cracking programs to read the passwords without authorization: they use images of a simulated physical system, which they additionally render unrecognizable with a chaotic process. These p-CAPTCHAs enable the Dresden physicists to achieve a high level of password protection, even though the user need only remember a weak password.

Computers sometimes use brute force. Hacking programs use so-called brute-force attacks to try out all possible character combinations to guess passwords. CAPTCHAs are therefore intended as an additional safeguard to ensure that input originates from a human being and not from a machine. They pose a task for the user that is simple for any human, yet very difficult for a program. Users must enter a distorted text displayed on the screen, for example. CAPTCHAs are increasingly being bypassed, however. Personal data of members of the "SchülerVZ" social network for school pupils have already been stolen in this way.

Researchers at the Max Planck Institute for the Physics of Complex Systems in Dresden have now developed a new type of password protection that is based on a combination of characters and a CAPTCHA. They also use mathematical methods from the physics of critical phenomena to protect the CAPTCHA from being accessed by computers. "We thus make the password protection both more effective and simpler," says Konstantin Kladko, who had the idea for this interdisciplinary approach during his time at the Dresden Max Planck Institute; he is currently a researcher at Axioma Research in Palo Alto/USA.

The Dresden-based researchers first combine password and CAPTCHA in a completely novel way. The CAPTCHA is no longer generated anew each time to distinguish a human user from a computer on a case-by-case basis. Rather, the physicists use the codeword in the image, which only humans can decipher, as the real password that provides access to a social network or an online bank account, for example. The researchers additionally encrypt this password using a combination of characters.

However, that's not all: the CAPTCHA is a snapshot of a dynamic, chaotic Hamiltonian system in two dimensions. For the sake of simplicity, this image can be imagined as a grey-scale pixel matrix, where every pixel represents an oscillator. The oscillators are coupled in a network. Every oscillator oscillates between two states and is affected by its neighbours as it does so, thus producing the grey scales.

Chaotic development makes password unreadable

The physicists then leave the system to evolve chaotically for a period of time. The grey-scale matrix changes the colour of its pixels, and the result is an image that no longer contains a recognizable word. The researchers subsequently encrypt this image with the combination of characters and save the result. "We therefore talk of a password-protected CAPTCHA, or p-CAPTCHA," says Sergej Flach, who teamed up with Tetyana Laptyeva to achieve the decisive research results at the Max Planck Institute for the Physics of Complex Systems. Since the chaotic evolution of the initial image is deterministic, i.e. reversible, the whole procedure can be undone using the combination of characters, so that the user can again read the password hidden in the CAPTCHA.
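
The reversibility is the crucial point: the scramble is chaotic but deterministic, so the rightful user can run it backwards. A minimal Python sketch of the same principle, using Arnold's cat map as a stand-in for the Hamiltonian dynamics of the study (the map, image size and step count are illustrative assumptions):

```python
# Arnold's cat map: a deterministic, invertible, chaotic scramble of a
# square grey-scale image. Knowing the step count lets you undo it exactly.
import numpy as np

def cat_map(img: np.ndarray, steps: int) -> np.ndarray:
    n = img.shape[0]
    out = img
    for _ in range(steps):
        scrambled = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                # (x, y) -> (x + y, x + 2y) modulo n; determinant 1, so invertible
                scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scrambled
    return out

def inverse_cat_map(img: np.ndarray, steps: int) -> np.ndarray:
    n = img.shape[0]
    out = img
    for _ in range(steps):
        restored = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                restored[x, y] = out[(x + y) % n, (x + 2 * y) % n]
        out = restored
    return out

image = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # toy "CAPTCHA"
assert np.array_equal(inverse_cat_map(cat_map(image, 10), 10), image)
```

After a few iterations the image is visually unrecognizable, yet the inverse map restores it bit for bit, which is the property the p-CAPTCHA scheme relies on.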

"The character combination we use to encrypt the password in the CAPTCHA can be very easy to remember," explains Konstantin Kladko."We thus take account of the fact that most people only want to, or can only, remember simple passwords." The fact that the passwords are correspondingly weak is now no longer important, because the real protection comes from the encrypted password in the CAPTCHA.

On the one hand, the password hidden in the CAPTCHA is too long for computers to guess with a brute-force attack in a reasonable length of time. On the other, the physicists use a critical system to generate the password image. This system is close to a phase transition: at a phase transition, the system changes from one physical state to another, from the paramagnetic to the ferromagnetic state, for example. Close to the transition, regions repeatedly form that have temporarily already completed the transition. "The resulting image is always very grainy. Therefore, a computer cannot distinguish it from the original it is searching for," explains Sergej Flach.

"Although the study has just been submitted to a specialist journal and is only available online in an archive, it has already provoked a large number of responses in the community -- and not only in Hacker News," says Sergej Flach."I was very impressed by the depth of some comments in certain forums -- in Slashdot, for example." The specialists are obviously impressed by the ingenuity of the approach, which means passwords could be very difficult to crack in the future. Moreover, the method is easy and quick to implement in conventional computer systems."An expansion to several p-CAPTCHA levels is obvious," says Sergej Flach. Hoiwever, this requires increased computing power to reverse the chaotic development in a reasonable time:"We therefore want to investigate various Hamiltonian and non-Hamiltonian systems in the future to see whether they provide faster and even more effective protection."


Source

Wednesday, April 20, 2011

New Kid on the Plasmonic Block: Researchers Find Plasmonic Resonances in Semiconductor Nanocrystals

"We have demonstrated well-defined localized surface plasmon resonances arising from p-type carriers in vacancy-doped semiconductor quantum dots that should allow for plasmonic sensing and manipulation of solid-state processes in single nanocrystals," says Berkeley Lab director Paul Alivisatos, a nanochemistry authority who led this research."Our doped semiconductor quantum dots also open up the possibility of strongly coupling photonic and electronic properties, with implications for light harvesting, nonlinear optics, and quantum information processing."

Alivisatos is the corresponding author of a paper in the journal Nature Materials titled "Localized surface plasmon resonances arising from free carriers in doped quantum dots." Co-authoring the paper were Joseph Luther and Prashant Jain, along with Trevor Ewers.

The term"plasmonics" describes a phenomenon in which the confinement of light in dimensions smaller than the wavelength of photons in free space make it possible to match the different length-scales associated with photonics and electronics in a single nanoscale device. Scientists believe that through plasmonics it should be possible to design computer chip interconnects that are able to move much larger amounts of data much faster than today's chips. It should also be possible to create microscope lenses that can resolve nanoscale objects with visible light, a new generation of highly efficient light-emitting diodes, and supersensitive chemical and biological detectors. There is even evidence that plasmonic materials can be used to bend light around an object, thereby rendering that object invisible.

The plasmonic phenomenon was discovered in nanostructures at the interfaces between a noble metal, such as gold or silver, and a dielectric, such as air or glass. Directing an electromagnetic field at such an interface generates electronic surface waves that roll through the conduction electrons on a metal, like ripples spreading across the surface of a pond that has been plunked with a stone. Just as the energy in an electromagnetic field is carried in a quantized particle-like unit called a photon, the energy in such an electronic surface wave is carried in a quantized particle-like unit called a plasmon. The key to plasmonic properties is a match between the oscillation frequency of the plasmons and that of the incident photons, a phenomenon known as localized surface plasmon resonance (LSPR). Conventional scientific wisdom has held that LSPRs require a metal nanostructure, where the conduction electrons are not strongly attached to individual atoms or molecules. This has proved not to be the case, as Prashant Jain, a member of the Alivisatos research group and one of the lead authors of the Nature Materials paper, explains.

"Our study represents a paradigm shift from metal nanoplasmonics as we've shown that, in principle, any nanostructure can exhibit LSPRs so long as the interface has an appreciable number of free charge carriers, either electrons or holes," Jain says."By demonstrating LSPRs in doped quantum dots, we've extended the range of candidate materials for plasmonics to include semiconductors, and we've also merged the field of plasmonic nanostructures, which exhibit tunable photonic properties, with the field of quantum dots, which exhibit tunable electronic properties."

Jain and his co-authors made their quantum dots from the semiconductor copper sulfide, a material that is known to support numerous copper-deficient stoichiometries. Initially, the copper sulfide nanocrystals were synthesized using a common hot injection method. While this yielded nanocrystals that were intrinsically self-doped with p-type charge carriers, there was no control over the amount of charge vacancies or carriers.

"We were able to overcome this limitation by using a room-temperature ion exchange method to synthesize the copper sulfide nanocrystals," Jain says."This freezes the nanocrystals into a relatively vacancy-free state, which we can then dope in a controlled manner using common chemical oxidants."

By introducing enough free electrical charge carriers via dopants and vacancies, Jain and his colleagues were able to achieve LSPRs in the near-infrared range of the electromagnetic spectrum. The extension of plasmonics to include semiconductors as well as metals offers a number of significant advantages, as Jain explains.

"Unlike a metal, the concentration of free charge carriers in a semiconductor can be actively controlled by doping, temperature, and/or phase transitions," he says."Therefore, the frequency and intensity of LSPRs in dopable quantum dots can be dynamically tuned. The LSPRs of a metal, on the other hand, once engineered through a choice of nanostructure parameters, such as shape and size, is permanently locked-in."

Jain envisions quantum dots being integrated into a variety of future film and chip-based photonic devices that can be actively switched or controlled, and also being applied to optical applications such as in vivo imaging. In addition, the strong coupling that is possible between photonic and electronic modes in such doped quantum dots holds exciting potential for applications in solar photovoltaics and artificial photosynthesis.

"In photovoltaic and artificial photosynthetic systems, light needs to be absorbed and channeled to generate energetic electrons and holes, which can then be used to make electricity or fuel," Jain says."To be efficient, it is highly desirable that such systems exhibit an enhanced interaction of light with excitons. This is what a doped quantum dot with an LSPR mode could achieve."

The potential for strongly coupled electronic and photonic modes in doped quantum dots arises from the fact that semiconductor quantum dots allow for quantized electronic excitations (excitons), while LSPRs serve to strongly localize or confine light of specific frequencies within the quantum dot. The result is an enhanced exciton-light interaction. Since the LSPR frequency can be controlled by changing the doping level, and excitons can be tuned by quantum confinement, it should be possible to engineer doped quantum dots for harvesting the richest frequencies of light in the solar spectrum.

Quantum dot plasmonics also hold intriguing possibilities for future quantum communication and computation devices.

"The use of single photons, in the form of quantized plasmons, would allow quantum systems to send information at nearly the speed of light, compared with the electron speed and resistance in classical systems," Jain says."Doped quantum dots by providing strongly coupled quantized excitons and LSPRs and within the same nanostructure could serve as a source of single plasmons."

Jain and others in Alivisatos' research group are now investigating the potential of doped quantum dots made from other semiconductors, such as copper selenide and germanium telluride, which also display tunable plasmonic or photonic resonances. Germanium telluride is of particular interest because it has phase change properties that are useful for memory storage devices.

"A long term goal is to generalize plasmonic phenomena to all doped quantum dots, whether heavily self-doped or extrinsically doped with relatively few impurities or vacancies," Jain says.

This research was supported by the DOE Office of Science.


Source