Monday, May 23, 2011

World Record in Ultra-Rapid Data Transmission

The advance is reported in the journal Nature Photonics.

In this experiment, KIT scientists led by Professor Jürg Leuthold beat their own 2010 record in high-speed data transmission, when they first exceeded the threshold of 10 terabits per second -- a data rate of 10,000 billion bits per second. The group's success rests on a new decoding process: an opto-electric method that performs the first processing steps purely optically, at the highest data rates, in order to break the incoming stream down into smaller bit rates that can then be handled electronically. This initial optical reduction is essential, because no electronic processing method exists for a data rate of 26 terabits per second. For the record-setting encoding, Leuthold's team applies orthogonal frequency division multiplexing (OFDM), a process that has been used successfully in mobile communications for many years and that rests on a mathematical routine, the fast Fourier transform (FFT).
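The OFDM principle itself is easy to demonstrate in software. Below is a minimal NumPy sketch of an OFDM round trip; the subcarrier count and QPSK data are illustrative, and the KIT experiment of course performs the receive-side FFT optically rather than numerically:

```python
import numpy as np

# OFDM round trip: data symbols ride on N orthogonal subcarriers.
# An inverse FFT multiplexes them into one time-domain signal;
# an FFT at the receiver separates the subcarriers again.
N = 64                                       # number of subcarriers (illustrative)
qpsk = np.array([1+1j, 1-1j, -1+1j, -1-1j])  # QPSK constellation
symbols = np.random.default_rng(1).choice(qpsk, size=N)

time_signal = np.fft.ifft(symbols)           # transmitter: multiplex
recovered = np.fft.fft(time_signal)          # receiver: demultiplex

assert np.allclose(recovered, symbols)       # every subcarrier recovered exactly
```

Because the subcarriers are mathematically orthogonal, the FFT separates them without mutual interference; the Karlsruhe innovation was carrying out exactly this routine with optics at rates electronics cannot reach.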

"The challenge was to increase the process speed not only by a factor of 1,000, but by a factor of nearly a million for data processing at 26 terabits per second," explains Leuthold, who heads the Institutes of Photonics and Quantum Electronics and Microstructure Technology at KIT."The decisive innovative idea was optical implementation of the mathematical routine." Calculation in the optical range turned out to be not only extremely fast, but also highly energy-efficient, because energy is required for the laser and a few process steps only.

"Our result shows that physical limits are not yet exceeded even at extremely high data rates," Leuthold says, noting the constantly growing data volume on the internet. According to Leuthold, transmission of 26 terabits per second confirms that even high data rates can be handled today, while energy consumption is minimized."A few years ago, data rates of 26 terabits per second were deemed utopian even for systems with many lasers." Leuthold adds,"and there would not have been any applications. With 26 terabits per second, it would have been possible to transmit up to 400 million telephone calls at the same time. Nobody needed this at that time. Today, the situation is different."

Video transmissions consume much of the internet's bandwidth and demand extremely high bit rates, and the need is growing constantly. In communication networks, the first lines with channel data rates of 100 gigabits per second (corresponding to 0.1 terabit per second) have already entered operation, and research now concentrates on systems for transmission lines in the range of 400 gigabits per second to 1 terabit per second. The Karlsruhe experiment is thus well ahead of this development. Companies and scientists from all over Europe were involved in its experimental implementation at KIT, among them staff of Agilent and Micram in Germany, Time-Bandwidth in Switzerland, Finisar in Israel, and the University of Southampton in Great Britain.


Source

Tuesday, May 17, 2011

Single Atom Stores Quantum Information

Quantum computers will one day be able to solve in no time computational tasks that would take today's computers years. They will draw their enormous computing power from their ability to simultaneously process the many pieces of information stored in the quantum states of microscopic physical systems, such as single atoms and photons. To operate, quantum computers must exchange these pieces of information between their individual components. Photons are particularly suitable for this, as no matter needs to be transported with them; particles of matter, however, will be used for information storage and processing. Researchers are therefore looking for methods by which quantum information can be exchanged between photons and matter. Although this has already been done with ensembles of many thousands of atoms, physicists at the Max Planck Institute of Quantum Optics in Garching have now shown that quantum information can also be exchanged between single atoms and photons in a controlled way.

Using a single atom as a storage unit has several advantages -- the extreme miniaturization being only one, says Holger Specht of the Garching-based Max Planck Institute, who was involved in the experiment. The stored information can be processed by directly manipulating the atom, which is important for executing logical operations in a quantum computer. "In addition, it offers the chance to check whether the quantum information stored in the photon has been successfully written into the atom without destroying the quantum state," says Specht. It is thus possible to ascertain at an early stage that a computing process must be repeated because of a storage error.

The fact that no one had succeeded until very recently in exchanging quantum information between photons and single atoms was because the interaction between the particles of light and the atoms is very weak. Atom and photon do not take much notice of each other, as it were, like two party guests who hardly talk to each other, and can therefore exchange only a little information. The researchers in Garching have enhanced the interaction with a trick. They placed a rubidium atom between the mirrors of an optical resonator, and then used very weak laser pulses to introduce single photons into the resonator. The mirrors of the resonator reflected the photons to and fro several times, which strongly enhanced the interaction between photons and atom. Figuratively speaking, the party guests thus meet more often and the chance that they talk to each other increases.

The photons carried the quantum information in the form of their polarization, which can be left-handed (the electric field rotates anti-clockwise) or right-handed (clockwise). The quantum state of the photon can contain both polarizations simultaneously, as a so-called superposition state. When it interacts with a photon, a rubidium atom is usually excited and then loses the excitation again by randomly emitting a further photon. The Garching-based researchers did not want this to happen: instead, the absorption of the photon was to bring the rubidium atom into a definite, stable quantum state. The researchers achieved this with the aid of a further laser beam, the so-called control laser, which they directed onto the rubidium atom at the same time as it interacted with the photon.

The spin orientation of the atom contributes decisively to the stable quantum state generated by the control laser and the photon. The spin gives the atom a magnetic moment, so the stable quantum state used for storage is determined by the orientation of that moment. The state is characterized by the fact that it mirrors the photon's polarization state: the direction of the magnetic moment corresponds to the rotational direction of the photon's polarization, and a mixture of the two rotational directions is stored as a corresponding mixture of magnetic moments.
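In schematic notation (ours, not the paper's), the storage step maps the photon's polarization qubit onto the atom's spin qubit while preserving the superposition amplitudes:

\[
|\psi\rangle_{\text{photon}} = \alpha\,|L\rangle + \beta\,|R\rangle
\;\longrightarrow\;
|\psi\rangle_{\text{atom}} = \alpha\,|{\uparrow}\rangle + \beta\,|{\downarrow}\rangle,
\qquad |\alpha|^2 + |\beta|^2 = 1,
\]

where \(|L\rangle\) and \(|R\rangle\) are the left- and right-handed polarizations and \(|{\uparrow}\rangle\) and \(|{\downarrow}\rangle\) stand for the two orientations of the atomic magnetic moment.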

This state is read out by the reverse process: irradiating the rubidium atom with the control laser again causes it to re-emit the photon which was originally incident. In the vast majority of cases, the quantum information in the read-out photon agrees with the information originally stored, as the physicists in Garching discovered. The quantity that describes this relationship, the so-called fidelity, was more than 90 percent. This is significantly higher than the 67 percent fidelity that can be achieved with classical methods, i.e. those not based on quantum effects. The method developed in Garching is therefore a real quantum memory.
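The 67 percent figure is the standard benchmark for a classical, measure-and-prepare memory applied to an unknown qubit:

\[
F_{\text{classical}} = \frac{2}{3} \approx 67\,\%,
\]

the best average fidelity achievable by measuring an unknown qubit and re-preparing a state from the measurement outcome. Any memory that reliably exceeds this bound must be preserving the quantum state itself.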

The physicists measured the storage time, i.e. the time the quantum information in the rubidium can be retained, as around 180 microseconds. "This is comparable with the storage times of all previous quantum memories based on ensembles of atoms," says Stephan Ritter, another researcher involved in the experiment. Nevertheless, a significantly longer storage time is necessary for the method to be used in a quantum computer or a quantum network. There is also a further quality characteristic of the single-atom quantum memory from Garching which could be improved: the so-called efficiency. It is a measure of how many of the irradiated photons are stored and then read out again. This was just under 10 percent.

The storage time is mainly limited by magnetic field fluctuations from the laboratory surroundings, says Ritter. "It can therefore be increased by storing the quantum information in quantum states of the atoms which are insensitive to magnetic fields." The efficiency is limited by the fact that the atom does not sit still in the centre of the resonator, but moves, which causes the strength of the interaction between atom and photon to decrease. The researchers can thus also improve the efficiency: by cooling the atom further, i.e. by reducing its kinetic energy.

The researchers at the Max Planck Institute in Garching now want to work on both improvements. "If this is successful, the prospects for the single-atom quantum memory would be excellent," says Stephan Ritter. The interface between light and individual atoms would make it possible to link more atoms in a quantum computer with one another than would be possible without such an interface, making the computer more powerful. Moreover, the exchange of photons would make it possible to entangle atoms quantum mechanically across large distances, entanglement being a kind of quantum mechanical link between particles that is needed to transport quantum information across large distances. The technique now being developed at the Max Planck Institute of Quantum Optics could thus one day become an essential component of a future "quantum Internet."


Source

Monday, May 16, 2011

Beyond Smart Phones: Sensor Network to Make 'Smart Cities' Envisioned

Computer scientists, electrical and computer engineers, and mathematicians at the TU Darmstadt and the University of Kassel have joined forces to implement that vision under their "Cocoon" project. The backbone of a "smart" city is a communications network of sensors that receive streams of data, or signals, analyze them, and transmit them onward. Such sensors act as both receivers and transmitters, i.e., they are transceivers. The networked communications operate wirelessly, via radio links, and yield added value to all participants by analyzing the incoming data. For example, the "Smart Home" control system already on the market allows all sorts of devices to be networked and automatically regulated to suit demand, allegedly yielding energy savings of as much as fifteen percent.

"Smart Home" might soon be followed by"Smart Hospital,""Smart Indus­try," or"Smart Farm," and even"smart" systems tailored to suit mobile net­works are feasible. Traffic jams may be avoided by, for example, car-to-car or car-to-environment (car-to-X) communications. Health-service sys­tems might also benefit from mobile, sensor communications whenever patients need to be kept supplied with information tailored to suit their health­care needs while underway. Furthermore, sensors on their bodies could assess the status of their health and automatically transmit calls for emergency medical assistance, whenever necessary.

"Smart" and mobile, thanks to beam forming

The researchers regard the ceaseless movement of sensors on mobile systems, and their frequent entries into and exits from instrumented areas, as the major hurdle to implementing their vision of "smart" cities. Sensor-aided devices will have to cope by responding to subtle changes in their environments and by flexibly and efficiently regulating the quality of received and transmitted signals. Beam forming, a field in which the TU Darmstadt's Institute for Communications Technology is active, should help out there. On that subject, Prof. Rolf Jakoby of the TU Darmstadt's Electrical Engineering and Information Technology Dept. remarked: "Current types of antennae radiate omnidirectionally, like light bulbs. We intend to create conditions under which antennae will, in the future, behave like spotlights that, once they have located a sought device, will track it, while suppressing interference from stray electromagnetic radiation from other devices that might also be present in the area."
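The spotlight analogy corresponds to steering an antenna array's main lobe electronically. Here is a minimal NumPy sketch of narrowband beam steering with a uniform linear array; the element count, spacing, and carrier frequency are illustrative choices, not parameters of the Cocoon project:

```python
import numpy as np

# Narrowband beamforming with a uniform linear array (ULA):
# phase-shift each element so signals from the target direction add in phase.
c, f = 3e8, 2.4e9            # speed of light, carrier frequency (illustrative)
lam = c / f                  # wavelength
d = lam / 2                  # half-wavelength element spacing
n = np.arange(8)             # 8-element array

def steering_vector(theta_deg):
    """Per-element phase factors for a plane wave arriving from theta."""
    theta = np.radians(theta_deg)
    return np.exp(-2j * np.pi * d * n * np.sin(theta) / lam)

w = steering_vector(30)      # weights aimed at 30 degrees

# Array response over all directions, peak marks where the "spotlight" points.
angles = np.linspace(-90, 90, 361)
pattern = np.array([abs(w.conj() @ steering_vector(a)) for a in angles])
print(angles[pattern.argmax()])   # -> 30.0: the beam tracks the target angle
```

Each element's weight is just a phase shift, so re-pointing the beam at a moving device means recomputing a handful of complex numbers rather than physically turning an antenna.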

Such antennae, along with the transceivers equipped with them, are thus reconfigurable, i.e., adjustable to suit ambient conditions by means of onboard electronic circuitry or remote control. Working in collaboration with an industrial partner, Jakoby has already equipped terrestrial digital-television (TDTV) transmitters with reconfigurable amplifiers that boost transmitted-signal levels by as much as ten percent. He added: "If all of Germany's TDTV transmitters were equipped with such amplifiers, we could shut down one nuclear power plant."

Frequency bands are a scarce resource

Reconfigurable devices also make much more efficient use of a scarce resource: frequency bands. Users have thus far been allocated rigidly defined frequency bands, of which even the more popular ones are used to only fifteen to twenty percent of capacity. Beam forming might allow more efficient use of them. Jakoby noted: "This is an area that we are still taking a close look at, but we are well along the way toward understanding the system better." However, only a few uses of beam forming have emerged to date, since currently available systems are too expensive for mass applications.

Small, model networks are targeted

Yet another fundamental problem remains to be solved before "smart" cities can become reality. Sensor communications requires the cooperation of all devices involved, across all communications protocols, such as "Bluetooth," and across all networks, such as the European Global System for Mobile Communications (GSM) mobile-telephone network or wireless local-area networks (WLAN) -- something that cannot be achieved with current devices, protocols, and networks. Jakoby explained: "Converting all devices to a common communications protocol is infeasible, which is why we are seeking a new protocol that would be superimposed upon everything and allow devices to communicate via several protocols."

Transmission channels would also have to be capable of handling a massive flood of data. As Prof. Abdelhak Zoubir of the TU Darmstadt's Electrical Engineering and Information Technology Dept., the "Cocoon" project's coordinator, put it: "A 'smart' Darmstadt alone would surely involve a million sensors communicating with one another via satellites, mobile telephones, computers, and all of the other types of devices that we already have available." Furthermore, since a single mobile sensor can readily generate several hundred megabytes of data annually, new models for handling the communications of millions of such sensors, ones that compress data more densely in order to provide for error-free communications, will be needed. Several hurdles will thus have to be overcome before "smart" cities become reality. Nevertheless, the scientists working on the "Cocoon" project are convinced that they will be able to simulate a "smart" city incorporating various types of devices using early versions of small, model networks.
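Zoubir's figures imply city-scale data volumes that are easy to check with back-of-the-envelope arithmetic; the sketch below assumes 300 MB as a midpoint for "several hundred megabytes" per sensor:

```python
# Back-of-the-envelope data volume for one "smart" city.
# Figures follow the quote above; 300 MB/year per sensor is an assumed midpoint.
sensors = 1_000_000                  # one million sensors in the city
mb_per_sensor_per_year = 300         # "several hundred megabytes" annually

total_mb = sensors * mb_per_sensor_per_year
print(f"{total_mb / 1e6:.0f} TB per year")   # -> 300 TB per year, before overhead
```

Hundreds of terabytes per city per year, before any protocol overhead, is the flood the new compression and communication models would have to absorb.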

Over the next three years, scientists at the TU Darmstadt will receive a total of 4.5 million euros from the State of Hesse's Offensive for Developing Scientific-Economic Excellence for their research under the "Cocoon -- Cooperative Sensor Communications" project.


Source

Sunday, May 15, 2011

Crowdsourcing Science: Researcher Uses Facebook to Identify Thousands of Fish

In January and February, Bloom helped conduct the first ichthyological survey on Guyana's Cuyuni River. The trip was funded through the Biological Diversity of the Guiana Shield program at the Smithsonian Institution's National Museum of Natural History and was led by Dr. Brian Sidlauskas, assistant professor of fisheries at Oregon State University (OSU). The goal was to find out which species of fish live in the Cuyuni and get a good estimate of their abundance.

The Cuyuni is bisected by the Guyana/Venezuela border and extends 210 kilometres into the thick jungles of western Guyana. The region is under intense ecological pressure from the artisanal gold mining operations that pepper the Guyanese hinterland. This mining has terrible impacts on the surrounding environment. Chief among these are the increase in sedimentation in the rivers and the release of elemental mercury directly into the food chain. "That's why it's important we get there now, to find out what's there," says Bloom. "Because in 30 years, who knows what the Cuyuni will look like?"

For two weeks, Bloom, Sidlauskas and the rest of the team spent day and night catching as many fish as they could with various nets, sleeping in makeshift jungle camps. By the end, they had collected more than 5,000 fish specimens. Then they realized they had a big problem.

"In order to get the fish out of the country," says Bloom,"we needed an accurate count of each species." The team's research permit required them to report this information to the Guyanese government."We couldn't leave the country until we turned over our data to the authorities."

Time was of the essence, as Sidlauskas, Bloom and OSU graduate student Whit Bronaugh had to return to North America as soon as possible. But how could a handful of people possibly identify 5,000 fish in just a few days? "A lot of people think fish experts know hundreds and hundreds of species," says Bloom. "But they really don't. We're all specialists on one particular group or another." The last thing the team wanted was to fudge the data, because the whole point of the project was to gather accurate information for the Guyanese government to use in its conservation and development planning.

That's when Bloom made a great suggestion. "Let's just put them up on Facebook and see if our friends can help." Sidlauskas loved the idea, so he uploaded the photos that Bronaugh had meticulously taken of each species. "The network of fish experts is pretty small," says Bloom, "and fish people can be real fanatics. Once a fish pops up on Facebook, they get very excited and start arguing. So next thing we knew, we had a really interesting intellectual debate going on between various world experts on fish, sort of like a real-time peer review that reached across continents and around the world." In less than 24 hours, their network of friends -- many of whom hold Ph.D.s in ichthyology and whom Bloom refers to as "diehard fish-heads" -- had identified almost every specimen.

With 5,000 identifications in hand, the team was able to deliver their results to the government and return home on schedule. The National Museum of Natural History's blog ran a story on the team's novel use of social networking to crowdsource their data. Then the Smithsonian Institution's blog, Smithsonian Science, and Smithsonian magazine's blog did the same. Not long after that, employees at Facebook caught wind of the story and chose it as a "Facebook Story of the Week" on the company's page. In less than a few weeks, more than 9,000 people had "liked" the story, and more than 2,500 comments were registered.

"Bloom's elegant approach to solving this particular scientific and logistical problem is reflective of the ingenuity and inventiveness that one finds amongst UTSC researchers," says vice-principal of research at UTSC, Malcolm Campbell."Combining his passion for research, with the preparedness and cutting-edge thinking that are part-and-parcel of his UTSC graduate degree, Bloom devised a particularly effective solution in a tight spot," says Campbell."Bloom and his supervisor, assistant professor Nate Lovejoy, are superb examples of how the best minds are conducting the best research at UTSC."

The results of the biodiversity survey on the Cuyuni River were somewhat discouraging. Bloom says 5,000 fish is not many; he can remember similar trips on different Guyanese rivers where the team pulled in up to 20,000 specimens. "Species diversity and abundance were very low," he says. "We need to continue monitoring, but this isn't good news for the region."

But the team's use of Facebook to crowdsource accurate scientific data has had an unexpected consequence: it's led Bloom to change his mind about the value of online tools. "Social networking is so powerful, and scientists should be using it more to connect with the world-at-large," he says. "I can't take credit for the idea, though." Bloom's friend, an ichthyologist at Texas A&M named Nathan Lujan, has been using Facebook to identify fish for years. "And Nathan?" says Bloom. "Nathan is a real fish-head."


Source

Saturday, May 14, 2011

Toward Faster Transistors: Physicists Discover Physical Phenomenon That Could Boost Computers' Clock Speed

In this week's issue of the journal Science, MIT researchers and their colleagues at the University of Augsburg in Germany report the discovery of a new physical phenomenon that could yield transistors with greatly enhanced capacitance -- a measure of how much charge they can store at a given voltage. And that, in turn, could lead to the revival of clock speed as the measure of a computer's power.

In today's computer chips, transistors are made from semiconductors, such as silicon. Each transistor includes an electrode called the gate; applying a voltage to the gate causes electrons to accumulate underneath it. The electrons constitute a channel through which an electrical current can pass, turning the semiconductor into a conductor.

Capacitance measures how much charge accumulates below the gate for a given voltage. The power that a chip consumes, and the heat it gives off, are roughly proportional to the square of the gate's operating voltage. So lowering the voltage could drastically reduce the heat, creating new room to crank up the clock.
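The scaling behind that claim is the standard dynamic-power relation for CMOS logic -- a textbook formula, not a result of the paper:

\[
P_{\text{dynamic}} \approx \alpha\, C\, V^{2} f,
\]

where \(C\) is the switched capacitance, \(V\) the operating voltage, \(f\) the clock frequency, and \(\alpha\) the fraction of gates switching per cycle. Halving \(V\) cuts dynamic power roughly fourfold, and that saving is the headroom that could be spent on raising \(f\).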

MIT Professor of Physics Raymond Ashoori and Lu Li, a postdoc and Pappalardo Fellow in his lab -- together with Christoph Richter, Stefan Paetel, Thilo Kopp and Jochen Mannhart of the University of Augsburg -- investigated the unusual physical system that results when lanthanum aluminate is grown on top of strontium titanate. Lanthanum aluminate consists of alternating layers of lanthanum oxide and aluminum oxide. The lanthanum-based layers have a slight positive charge; the aluminum-based layers, a slight negative charge. The result is a series of electric fields that all add up in the same direction, creating an electric potential between the top and bottom of the material.

Ordinarily, both lanthanum aluminate and strontium titanate are excellent insulators, meaning that they don't conduct electrical current. But physicists had speculated that if the lanthanum aluminate gets thick enough, its electrical potential would increase to the point that some electrons would have to move from the top of the material to the bottom, to prevent what's called a "polarization catastrophe." The result is a conductive channel at the juncture with the strontium titanate -- much like the one that forms when a transistor is switched on. So Ashoori and his collaborators decided to measure the capacitance between that channel and a gate electrode on top of the lanthanum aluminate.

They were amazed by what they found: Although their results were somewhat limited by their experimental apparatus, it may be that an infinitesimal change in voltage will cause a large amount of charge to enter the channel between the two materials. "The channel may suck in charge -- shoomp! Like a vacuum," Ashoori says. "And it operates at room temperature, which is the thing that really stunned us."

Indeed, the material's capacitance is so high that the researchers don't believe it can be explained by existing physics. "We've seen the same kind of thing in semiconductors," Ashoori says, "but that was a very pure sample, and the effect was very small. This is a super-dirty sample and a super-big effect." It's still not clear, Ashoori says, just why the effect is so big: "It could be a new quantum-mechanical effect or some unknown physics of the material."

There is one drawback to the system that the researchers investigated: While a lot of charge will move into the channel between materials with a slight change in voltage, it moves slowly -- much too slowly for the type of high-frequency switching that takes place in computer chips. That could be because the samples of the material are, as Ashoori says, "super dirty"; purer samples might exhibit less electrical resistance. But it's also possible that, if researchers can understand the physical phenomena underlying the material's remarkable capacitance, they may be able to reproduce them in more practical materials.

Jean-Marc Triscone, a physicist at the University of Geneva who studies oxide interfaces, cautions that wholesale changes to the way computer chips are manufactured will inevitably face resistance. "So much money has been injected into the semiconductor industry for decades that to do something new, you need a really disruptive technology," he says.

"It's not going to revolutionize electronics tomorrow," Ashoori agrees."But this mechanism exists, and once we know it exists, if we can understand what it is, we can try to engineer it."


Source

Friday, May 13, 2011

Scientists Afflict Computers With 'Schizophrenia' to Better Understand the Human Brain

The researchers used a virtual computer model, or "neural network," to simulate the excessive release of dopamine in the brain. They found that the network recalled memories in a distinctly schizophrenic-like fashion.

Their results were published in April in Biological Psychiatry.

"The hypothesis is that dopamine encodes the importance-the salience-of experience," says Uli Grasemann, a graduate student in the Department of Computer Science at The University of Texas at Austin."When there's too much dopamine, it leads to exaggerated salience, and the brain ends up learning from things that it shouldn't be learning from."

The results bolster a hypothesis known in schizophrenia circles as the hyperlearning hypothesis, which posits that people suffering from schizophrenia have brains that lose the ability to forget or ignore as much as they normally would. Without forgetting, they lose the ability to extract what's meaningful from the immensity of stimuli the brain encounters. They start making connections that aren't real, or drown in a sea of so many connections that they lose the ability to stitch together any kind of coherent story.

The neural network used by Grasemann and his adviser, Professor Risto Miikkulainen, is called DISCERN. Designed by Miikkulainen, DISCERN is able to learn natural language. In this study it was used to simulate what happens to language as the result of eight different types of neurological dysfunction. The results of the simulations were compared by Ralph Hoffman, professor of psychiatry at the Yale School of Medicine, to what he saw when studying human schizophrenics.

In order to model the process, Grasemann and Miikkulainen began by teaching a series of simple stories to DISCERN. The stories were assimilated into DISCERN's memory in much the way the human brain stores information -- not as distinct units, but as statistical relationships of words, sentences, scripts and stories.

"With neural networks, you basically train them by showing them examples, over and over and over again," says Grasemann."Every time you show it an example, you say, if this is the input, then this should be your output, and if this is the input, then that should be your output. You do it again and again thousands of times, and every time it adjusts a little bit more towards doing what you want. In the end, if you do it enough, the network has learned."

In order to model hyperlearning, Grasemann and Miikkulainen ran the system through its paces again, but with one key parameter altered. They simulated an excessive release of dopamine by increasing the system's learning rate -- essentially telling it to stop forgetting so much.
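DISCERN itself is not reproduced here, but the manipulation is easy to illustrate on any gradient-trained model. The toy sketch below trains the same linear network twice, once with a moderate learning rate and once with an excessive one; all values are made up for illustration and stand in for the far richer DISCERN system:

```python
import numpy as np

# Toy stand-in for the "hyperlearning" manipulation: identical model and data,
# trained with a normal vs. an excessive learning rate.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))          # toy input patterns
y = X @ rng.normal(size=8)             # toy target associations

def final_error(lr, steps=100):
    w = np.zeros(8)
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(X)   # mean-squared-error gradient
        w -= lr * grad                      # one learning step
    return float(np.mean((X @ w - y) ** 2))

print(final_error(lr=0.05))   # moderate rate: error shrinks toward zero
print(final_error(lr=1.5))    # excessive rate: updates overshoot, error explodes
```

In DISCERN the analogous change corrupts stored narratives rather than a regression fit, but the mechanism -- updates so large that previously learned structure gets overwritten -- is the same in spirit.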

"It's an important mechanism to be able to ignore things," says Grasemann."What we found is that if you crank up the learning rate in DISCERN high enough, it produces language abnormalities that suggest schizophrenia."

After being re-trained with the elevated learning rate, DISCERN began putting itself at the center of fantastical, delusional stories that incorporated elements from other stories it had been told to recall. In one answer, for instance, DISCERN claimed responsibility for a terrorist bombing.

In another instance, DISCERN began showing evidence of "derailment" -- replying to requests for a specific memory with a jumble of dissociated sentences, abrupt digressions and constant leaps from the first- to the third-person and back again.

"Information processing in neural networks tends to be like information processing in the human brain in many ways," says Grasemann."So the hope was that it would also break down in similar ways. And it did."

The parallel between their modified neural network and human schizophrenia isn't absolute proof the hyperlearning hypothesis is correct, says Grasemann. It is, however, support for the hypothesis, and also evidence of how useful neural networks can be in understanding the human brain.

"We have so much more control over neural networks than we could ever have over human subjects," he says."The hope is that this kind of modeling will help clinical research."


Source

Tuesday, May 10, 2011

Original Versus Copy: Researchers Develop Forgery-Proof Prototypes for Product Authentication

For all this, the data has to be checked electronically. In the framework of the "Crypta" project, supported by the Federal Ministry for Transport, Innovation and Technology (BMVIT), scientists from Graz University of Technology have now developed a prototype which safeguards objects according to new standards.

Whether for checking the origin of foodstuffs or as proof of authenticity of drugs, the future will bring an increased use of electronic assistants to make sure that the quality is right. RFID (Radio Frequency Identification) technology enables objects to be identified wirelessly. "You need a reading device and an RFID tag which communicate with each other," explains project leader Jörn-Marc Schmidt from the Institute of Applied Information Processing and Communications at Graz University of Technology. There is a difference between active and passive tags. The former are connected to a power source whereas the latter draw the required power directly from the field of the reading unit, which makes them particularly suitable, for instance, for applications in supermarkets.

Private Key

For a long time, the same electronic key was used by these energy-efficient passive tags and their readers -- what experts call a symmetrical method. "In asymmetrical methods, the transmitter and receiver possess different keys. Secure digital signatures are thus made possible," adds Jörn-Marc Schmidt. Together with the semiconductor manufacturer austriamicrosystems and RF-iT Solutions GmbH, an RFID software and services provider from Graz, the researchers have now developed a prototype which brings such a standardized asymmetric method to passive tags for the first time.
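The article does not name the signature scheme running on the tag; as an illustration of the asymmetric principle only, here is a sketch of challenge-response tag authentication with ECDSA via Python's `cryptography` package (the challenge framing and key handling are our assumptions, not the Graz design):

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Asymmetric tag authentication, sketched with ECDSA. The tag holds the
# private key; any reader verifies with the public key. (Illustrative only;
# the article does not specify the actual scheme.)
tag_private = ec.generate_private_key(ec.SECP256R1())   # stays on the tag
tag_public = tag_private.public_key()                   # freely distributed

challenge = b"reader-nonce-42"                          # fresh per query
signature = tag_private.sign(challenge, ec.ECDSA(hashes.SHA256()))

# Reader side: verify() raises InvalidSignature for a forged tag.
tag_public.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
print("tag authenticated")
```

The point of the asymmetry is that readers only ever hold the public key, so compromising a reader -- unlike in a symmetric scheme -- reveals nothing that would let an attacker clone tags.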

"For every tag there is a public key and a private key which remains secret," explains Jörn-Marc Schmidt. This is a development which could be made use of everywhere where proof of authenticity is important. The research results are the gratifying outcome of the Crypta research project of the FIT-IT funding line of the BMVIT, which supports application-oriented research in information technology in particular.


Source