Monday, May 23, 2011

World Record in Ultra-Rapid Data Transmission

The advance is reported in the journal Nature Photonics.

In this experiment, KIT scientists led by Professor Jürg Leuthold beat their own 2010 record in high-speed data transmission, when they exceeded the magic limit of 10 terabits per second -- i.e. a data rate of 10,000 billion bits per second. The group's success is due to a new data decoding process. The opto-electric decoding method first performs a purely optical calculation at the highest data rates, in order to break the high-rate stream down into smaller bit rates that can then be processed electrically. This initial optical reduction of the bit rates is required, as no electronic processing methods are available for a data rate of 26 terabits per second. For the record-setting data encoding, Leuthold's team applies so-called orthogonal frequency division multiplexing (OFDM). This process, based on mathematical routines (the Fast Fourier Transform), has been used successfully in mobile communications for many years.
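As a rough illustration of the OFDM principle (a minimal NumPy sketch, not the KIT team's optical implementation), the example below encodes data onto many low-rate subcarriers with an inverse FFT and recovers them with a forward FFT. In the KIT experiment, the receiver-side Fourier transform was performed optically, so that each recovered subcarrier runs slowly enough for electronics to handle.

```python
import numpy as np

# Minimal OFDM sketch: one high-rate stream is carried on many
# low-rate subcarriers; an (I)FFT converts between the two views.
rng = np.random.default_rng(0)
n_subcarriers = 64                       # illustrative; real systems use hundreds

# QPSK symbols, one per subcarrier (each subcarrier is a slow channel)
bits = rng.integers(0, 2, size=(n_subcarriers, 2))
symbols = (2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)

# Transmitter: the IFFT multiplexes all subcarriers into one fast burst
time_signal = np.fft.ifft(symbols)

# Receiver: the FFT demultiplexes the burst back into slow parallel
# subchannels. At 26 Tbit/s this step had to be done optically, since
# no electronics can compute an FFT at the full line rate.
recovered = np.fft.fft(time_signal)

assert np.allclose(recovered, symbols)   # each subcarrier decoded independently
```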

"The challenge was to increase the process speed not only by a factor of 1,000, but by a factor of nearly a million for data processing at 26 terabits per second," explains Leuthold, who heads the Institutes of Photonics and Quantum Electronics and Microstructure Technology at KIT."The decisive innovative idea was optical implementation of the mathematical routine." Calculation in the optical range turned out to be not only extremely fast, but also highly energy-efficient, because energy is required for the laser and a few process steps only.

"Our result shows that physical limits are not yet exceeded even at extremely high data rates," Leuthold says, noting the constantly growing data volume on the internet. According to Leuthold, transmission of 26 terabits per second confirms that even high data rates can be handled today, while energy consumption is minimized."A few years ago, data rates of 26 terabits per second were deemed utopian even for systems with many lasers." Leuthold adds,"and there would not have been any applications. With 26 terabits per second, it would have been possible to transmit up to 400 million telephone calls at the same time. Nobody needed this at that time. Today, the situation is different."

Video transmissions consume much Internet bandwidth and require extremely high bit rates; the need is growing constantly. In communication networks, the first lines with channel data rates of 100 gigabits per second (corresponding to 0.1 terabit per second) have already been taken into operation. Research now concentrates on developing systems for transmission lines in the range of 400 gigabits per second to 1 terabit per second. The Karlsruhe experiment is thus well ahead of this ongoing development. Companies and scientists from all over Europe were involved in the experimental implementation of ultra-rapid data transmission at KIT. Among them were staff members of Agilent and Micram Deutschland, Time-Bandwidth Switzerland, Finisar Israel, and the University of Southampton in Great Britain.


Source

Tuesday, May 17, 2011

Single Atom Stores Quantum Information

Quantum computers will one day be able to complete in no time computational tasks at which current computers would labor for years. They will draw their enormous computing power from their ability to simultaneously process the diverse pieces of information stored in the quantum state of microscopic physical systems, such as single atoms and photons. In order to operate, quantum computers must exchange these pieces of information between their individual components. Photons are particularly suitable for this, as no matter needs to be transported with them. Particles of matter, however, will be used for information storage and processing. Researchers are therefore looking for methods whereby quantum information can be exchanged between photons and matter. Although this has already been done with ensembles of many thousands of atoms, physicists at the Max Planck Institute of Quantum Optics in Garching have now shown that quantum information can also be exchanged between single atoms and photons in a controlled way.

Using a single atom as a storage unit has several advantages -- the extreme miniaturization being only one, says Holger Specht from the Garching-based Max Planck Institute, who was involved in the experiment. The stored information can be processed by direct manipulation of the atom, which is important for the execution of logical operations in a quantum computer. "In addition, it offers the chance to check whether the quantum information stored in the photon has been successfully written into the atom without destroying the quantum state," says Specht. It is thus possible to ascertain at an early stage that a computing process must be repeated because of a storage error.

The fact that no one had succeeded until very recently in exchanging quantum information between photons and single atoms was because the interaction between the particles of light and the atoms is very weak. Atom and photon do not take much notice of each other, as it were, like two party guests who hardly talk to each other, and can therefore exchange only a little information. The researchers in Garching have enhanced the interaction with a trick. They placed a rubidium atom between the mirrors of an optical resonator, and then used very weak laser pulses to introduce single photons into the resonator. The mirrors of the resonator reflected the photons to and fro several times, which strongly enhanced the interaction between photons and atom. Figuratively speaking, the party guests thus meet more often and the chance that they talk to each other increases.

The photons carried the quantum information in the form of their polarization. This can be left-handed (the direction of rotation of the electric field is anti-clockwise) or right-handed (clockwise). The quantum state of the photon can contain both polarizations simultaneously as a so-called superposition state. In its interaction with the photon, the rubidium atom is usually excited and then loses the excitation again by probabilistically emitting a further photon. The Garching-based researchers did not want this to happen. Instead, the absorption of the photon was to bring the rubidium atom into a definite, stable quantum state. The researchers achieved this with the aid of a further laser beam, the so-called control laser, which they directed onto the rubidium atom at the same time as it interacted with the photon.

The spin orientation of the atom contributes decisively to the stable quantum state generated by control laser and photon. Spin gives the atom a magnetic moment. The stable quantum state, which the researchers use for the storage, is thus determined by the orientation of the magnetic moment. The state is characterized by the fact that it reflects the photon's polarization state: the direction of the magnetic moment corresponds to the rotational direction of the photon's polarization, a mixture of both rotational directions being stored by a corresponding mixture of the magnetic moments.

This state is read out by the reverse process: irradiating the rubidium atom with the control laser again causes it to re-emit the photon which was originally incident. In the vast majority of cases, the quantum information in the read-out photon agrees with the information originally stored, as the physicists in Garching discovered. The quantity that describes this relationship, the so-called fidelity, was more than 90 percent. This is significantly higher than the 67 percent fidelity that can be achieved with classical methods, i.e. those not based on quantum effects. The method developed in Garching is therefore a real quantum memory.
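In standard notation (an aside for orientation, not from the article), the fidelity compares the photon state that was stored with the state that is read out, and the 67 percent figure is the well-known classical bound for a qubit:

```latex
% Fidelity between the stored qubit |\psi> and the read-out state \rho:
%   F = <\psi| \rho |\psi>
% For a polarization qubit |\psi> = \alpha|L> + \beta|R>, any classical
% measure-and-reprepare strategy achieves at most F_cl = 2/3 ~ 67% on
% average, so the measured F > 0.9 certifies a genuine quantum memory.
\[
  F \;=\; \langle \psi \,|\, \rho \,|\, \psi \rangle ,
  \qquad
  F_{\mathrm{cl}} \;=\; \tfrac{2}{3} \;\approx\; 67\% ,
  \qquad
  F_{\mathrm{meas}} \;>\; 0.9 \;>\; F_{\mathrm{cl}} .
\]
```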

The physicists measured the storage time, i.e. the time the quantum information in the rubidium can be retained, as around 180 microseconds. "This is comparable with the storage times of all previous quantum memories based on ensembles of atoms," says Stephan Ritter, another researcher involved in the experiment. Nevertheless, a significantly longer storage time is necessary for the method to be used in a quantum computer or a quantum network. There is also a further quality characteristic of the single-atom quantum memory from Garching which could be improved: the so-called efficiency. It is a measure of how many of the irradiated photons are stored and then read out again. This was just under 10 percent.

The storage time is mainly limited by magnetic field fluctuations from the laboratory surroundings, says Ritter. "It can therefore be increased by storing the quantum information in quantum states of the atoms which are insensitive to magnetic fields." The efficiency is limited by the fact that the atom does not sit still in the centre of the resonator, but moves, which causes the strength of the interaction between atom and photon to decrease. The researchers can thus also improve the efficiency: by cooling the atom further, i.e. by reducing its kinetic energy even more.

The researchers at the Max Planck Institute in Garching now want to work on both of these improvements. "If this is successful, the prospects for the single-atom quantum memory would be excellent," says Stephan Ritter. The interface between light and individual atoms would make it possible to network more atoms in a quantum computer with each other than would be possible without such an interface, making such a computer more powerful. Moreover, the exchange of photons would make it possible to entangle atoms quantum mechanically across large distances. Entanglement is a kind of quantum mechanical link between particles which is necessary to transport quantum information across large distances. The technique now being developed at the Max Planck Institute of Quantum Optics could thus some day become an essential component of a future "quantum Internet."


Source

Monday, May 16, 2011

Beyond Smart Phones: Sensor Network to Make 'Smart Cities' Envisioned

Computer scientists, electrical and computer engineers, and mathematicians at the TU Darmstadt and the University of Kassel have joined forces and are working on implementing that vision under their "Cocoon" project. The backbone of a "smart" city is a communications network consisting of sensors that receive streams of data, or signals, analyze them, and transmit them onward. Such sensors thus act as both receivers and transmitters, i.e., they represent transceivers. The networked communications involved operate wirelessly via radio links, and yield added value to all participants by analyzing the input data involved. For example, the "Smart Home" control system already on the market allows networking all sorts of devices and automatically regulating them to suit demands, thereby allegedly yielding energy savings of as much as fifteen percent.

"Smart Home" might soon be followed by"Smart Hospital,""Smart Indus­try," or"Smart Farm," and even"smart" systems tailored to suit mobile net­works are feasible. Traffic jams may be avoided by, for example, car-to-car or car-to-environment (car-to-X) communications. Health-service sys­tems might also benefit from mobile, sensor communications whenever patients need to be kept supplied with information tailored to suit their health­care needs while underway. Furthermore, sensors on their bodies could assess the status of their health and automatically transmit calls for emergency medical assistance, whenever necessary.

"Smart" and mobile, thanks to beam forming

The researchers regard the ceaseless travels of sensors on mobile systems and their frequent entries into and exits from instrumented areas as the major hurdle to be overcome in implementing their vision of "smart" cities. Sensor-aided devices will have to deal with that by responding to subtle changes in their environments and flexibly, efficiently regulating the qualities of received and transmitted signals. Beam forming, a field in which the TU Darmstadt's Institute for Communications Technology is active, should help out there. On that subject, Prof. Rolf Jakoby of the TU Darmstadt's Electrical Engineering and Information Technology Dept. remarked, "Current types of antennae radiate omnidirectionally, like light bulbs. We intend to create conditions under which antennae will, in the future, behave like spotlights that, once they have located a sought device, will track it, while suppressing interference from stray electromagnetic radiation from other devices that might also be present in the area."
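The "spotlight" behavior Jakoby describes is what an antenna array achieves by phase-shifting its element signals so that they add constructively in one chosen direction. A minimal NumPy illustration of such delay-and-sum beamforming follows (a generic textbook sketch, not the Darmstadt group's design):

```python
import numpy as np

# Delay-and-sum beamforming for a uniform linear array: phase-align
# the elements toward a steering angle so their fields add up there
# ("spotlight") and tend to cancel in other directions.
n_elements = 8
spacing = 0.5                  # element spacing in wavelengths
steer_deg = 30.0               # direction to aim the beam at

k = 2 * np.pi                  # wavenumber for unit wavelength
positions = np.arange(n_elements) * spacing
weights = np.exp(-1j * k * positions * np.sin(np.radians(steer_deg)))

# Array factor: the array's response as a function of arrival angle
angles = np.radians(np.linspace(-90, 90, 721))
steering = np.exp(1j * k * np.outer(positions, np.sin(angles)))
pattern = np.abs(weights @ steering) / n_elements

print(f"beam peaks at {np.degrees(angles[np.argmax(pattern)]):.1f} degrees")
```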

Such antennae, along with the transceivers equipped with them, are thus reconfigurable, i.e., adjustable to suit ambient conditions by means of onboard electronic circuitry or remote controls. Working in collaboration with an industrial partner, Jakoby has already equipped terrestrial digital-television (TDTV) transmitters with reconfigurable amplifiers that allow amplifying transmitted-signal levels by as much as ten percent. He added, "If all of Germany's TDTV transmitters were equipped with such amplifiers, we could shut down one nuclear power plant."

Frequency bands are a scarce resource

Reconfigurable devices also make much more efficient use of a scarce resource: frequency bands. Users have thus far been allocated rigidly defined frequency bands, and even the more popular ones are used at only fifteen to twenty percent of their capacity. Beam forming might allow making more efficient use of them. Jakoby noted, "This is an area that we are still taking a close look at, but we are well along the way toward understanding the system better." However, only a few uses of beam forming have emerged to date, since currently available systems are too expensive for mass applications.

Small, model networks are targeted

Yet another fundamental problem remains to be solved before "smart" cities may become realities. Sensor communications requires the cooperation of all devices involved, across all communications protocols, such as Bluetooth, and across all networks, such as the European Global System for Mobile Communications (GSM) mobile-telephone network or wireless local-area networks (WLAN), which cannot be achieved with current devices, communications protocols, and networks. Jakoby explained, "Converting all devices to a common communications protocol is infeasible, which is why we are seeking a new protocol that would be superimposed upon everything and allow them to communicate via several protocols." Transmission channels would also have to be capable of handling a massive flood of data, since, as Prof. Abdelhak Zoubir of the TU Darmstadt's Electrical Engineering and Information Technology Dept., the "Cocoon" project's coordinator, put it, "A 'smart' Darmstadt alone would surely involve a million sensors communicating with one another via satellites, mobile telephones, computers, and all of the other types of devices that we already have available." Furthermore, since a single mobile sensor is readily capable of generating several hundred megabytes of data annually, new models for handling the communications of millions of such sensors, models that compress data more densely in order to provide for error-free communications, will be needed. Several hurdles will thus have to be overcome before "smart" cities become reality. Nevertheless, the scientists working on the "Cocoon" project are convinced that they will be able to simulate a "smart" city incorporating various types of devices employing early versions of small, model networks.
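For a rough sense of the scale Zoubir describes, take the article's figures of a million sensors and several hundred megabytes per sensor per year (300 MB is an assumed middle value):

```python
# Back-of-envelope data volume for a "smart" city, using the figures
# quoted in the article; 300 MB/sensor/year is an assumed middle value
# for "several hundred megabytes."
sensors = 1_000_000
mb_per_sensor_year = 300

total_tb_per_year = sensors * mb_per_sensor_year / 1e6
avg_rate_mbps = sensors * mb_per_sensor_year * 8 / (365 * 24 * 3600)

print(f"{total_tb_per_year:.0f} TB per year")     # ~300 TB
print(f"{avg_rate_mbps:.0f} Mbit/s sustained")    # ~76 Mbit/s on average
```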

Over the next three years, scientists at the TU Darmstadt will be receiving a total of 4.5 million Euros from the State of Hesse's Offensive for Developing Scientific-Economic Excellence for their research in conjunction with their "Cocoon -- Cooperative Sensor Communications" project.


Source

Sunday, May 15, 2011

Crowdsourcing Science: Researcher Uses Facebook to Identify Thousands of Fish

In January and February, Bloom helped conduct the first ichthyological survey on Guyana's Cuyuni River. The trip was funded through the Biological Diversity of the Guiana Shield program at the Smithsonian Institution's National Museum of Natural History and was led by Dr. Brian Sidlauskas, assistant professor of fisheries at Oregon State University (OSU). The goal was to find out which species of fish live in the Cuyuni and get a good estimate of their abundance.

The Cuyuni is bisected by the Guyana/Venezuela border and extends 210 kilometres into the thick jungles of western Guyana. The region is under intense ecological pressure from the artisanal gold mining operations that pepper the Guyanese hinterland. This mining has terrible impacts on the surrounding environment, chief among them increased sedimentation in the rivers and the release of elemental mercury directly into the food chain. "That's why it's important we get there now, to find out what's there," says Bloom. "Because in 30 years, who knows what the Cuyuni will look like?"

For two weeks, Bloom, Sidlauskas and the rest of the team spent day and night catching as many fish as they could with various nets. They slept in makeshift jungle camps. In two weeks, the team had collected more than 5,000 fish specimens. Then they realized they had a big problem.

"In order to get the fish out of the country," says Bloom,"we needed an accurate count of each species." The team's research permit required them to report this information to the Guyanese government."We couldn't leave the country until we turned over our data to the authorities."

Time was of the essence, as Sidlauskas, Bloom and OSU graduate student Whit Bronaugh had to return to North America as soon as possible. But how could a handful of people possibly identify 5,000 fish in just a few days? "A lot of people think fish experts know hundreds and hundreds of species," says Bloom. "But they really don't. We're all specialists on one particular group or another." The last thing the team wanted was to fudge the data, because the whole point of the project was to gather accurate information for the Guyanese government to use in its conservation and development planning.

That's when Bloom made a great suggestion: "Let's just put them up on Facebook and see if our friends can help." Sidlauskas loved the idea, so he uploaded the photos that Bronaugh had meticulously taken of each species. "The network of fish experts is pretty small," says Bloom, "and fish people can be real fanatics. Once a fish pops up on Facebook, they get very excited and start arguing. So next thing we knew, we had a really interesting intellectual debate going on between various world experts on fish, sort of like a real-time peer review that reached across continents and around the world." In less than 24 hours, their network of friends -- many of whom hold Ph.D.s in ichthyology and whom Bloom refers to as "diehard fish-heads" -- had identified almost every specimen.

With 5,000 identifications in hand, the team was able to deliver their results to the government and return home on schedule. The National Museum of Natural History's blog ran a story on the team's novel use of social networking to crowdsource their data. Then the Smithsonian Institution's blog, Smithsonian Science, and Smithsonian magazine's blog did the same. Not long after that, employees at Facebook caught wind of the story and chose it as a "Facebook Story of the Week" on the company's page. Within a few weeks, more than 9,000 people had "liked" the story, and more than 2,500 comments were registered.

"Bloom's elegant approach to solving this particular scientific and logistical problem is reflective of the ingenuity and inventiveness that one finds amongst UTSC researchers," says vice-principal of research at UTSC, Malcolm Campbell."Combining his passion for research, with the preparedness and cutting-edge thinking that are part-and-parcel of his UTSC graduate degree, Bloom devised a particularly effective solution in a tight spot," says Campbell."Bloom and his supervisor, assistant professor Nate Lovejoy, are superb examples of how the best minds are conducting the best research at UTSC."

The results of the biodiversity survey on the Cuyuni River were somewhat discouraging. Bloom says 5,000 fish is not many; he can remember similar trips on other Guyanese rivers where the team pulled in up to 20,000 specimens. "Species diversity and abundance were very low," he says. "We need to continue monitoring, but this isn't good news for the region."

But the team's use of Facebook to crowdsource accurate scientific data has had an unexpected consequence: it's led Bloom to change his mind about the value of online tools. "Social networking is so powerful, and scientists should be using it more to connect with the world-at-large," he says. "I can't take credit for the idea, though." Bloom's friend Nathan Lujan, an ichthyologist at Texas A&M, has been using Facebook to identify fish for years. "And Nathan?" says Bloom. "Nathan is a real fish-head."


Source

Saturday, May 14, 2011

Toward Faster Transistors: Physicists Discover Physical Phenomenon That Could Boost Computers' Clock Speed

In this week's issue of the journal Science, MIT researchers and their colleagues at the University of Augsburg in Germany report the discovery of a new physical phenomenon that could yield transistors with greatly enhanced capacitance -- a measure of how much charge a device stores at a given voltage. And that, in turn, could lead to the revival of clock speed as the measure of a computer's power.

In today's computer chips, transistors are made from semiconductors, such as silicon. Each transistor includes an electrode called the gate; applying a voltage to the gate causes electrons to accumulate underneath it. The electrons constitute a channel through which an electrical current can pass, turning the semiconductor into a conductor.

Capacitance measures how much charge accumulates below the gate for a given voltage. The power that a chip consumes, and the heat it gives off, are roughly proportional to the square of the gate's operating voltage. So lowering the voltage could drastically reduce the heat, creating new room to crank up the clock.
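In symbols (standard device relations, not specific to this work), higher capacitance lets a transistor switch the same channel charge at a lower gate voltage, and power falls off quadratically with that voltage:

```latex
% Gate charge and dynamic switching power of a chip:
%   Q = C V                      (charge accumulated under the gate)
%   P \approx \alpha C V^2 f     (activity factor alpha, clock frequency f)
% Raising C allows the same Q at a smaller V; since P scales as V^2,
% the saved power and heat leave headroom to raise the clock rate f.
\[
  Q = C V , \qquad P \;\approx\; \alpha\, C\, V^{2} f .
\]
```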

MIT Professor of Physics Raymond Ashoori and Lu Li, a postdoc and Pappalardo Fellow in his lab -- together with Christoph Richter, Stefan Paetel, Thilo Kopp and Jochen Mannhart of the University of Augsburg -- investigated the unusual physical system that results when lanthanum aluminate is grown on top of strontium titanate. Lanthanum aluminate consists of alternating layers of lanthanum oxide and aluminum oxide. The lanthanum-based layers have a slight positive charge; the aluminum-based layers, a slight negative charge. The result is a series of electric fields that all add up in the same direction, creating an electric potential between the top and bottom of the material.

Ordinarily, both lanthanum aluminate and strontium titanate are excellent insulators, meaning that they don't conduct electrical current. But physicists had speculated that if the lanthanum aluminate gets thick enough, its electrical potential would increase to the point that some electrons would have to move from the top of the material to the bottom, to prevent what's called a "polarization catastrophe." The result is a conductive channel at the junction with the strontium titanate -- much like the one that forms when a transistor is switched on. So Ashoori and his collaborators decided to measure the capacitance between that channel and a gate electrode on top of the lanthanum aluminate.

They were amazed by what they found: Although their results were somewhat limited by their experimental apparatus, it may be that an infinitesimal change in voltage will cause a large amount of charge to enter the channel between the two materials. "The channel may suck in charge -- shoomp! Like a vacuum," Ashoori says. "And it operates at room temperature, which is the thing that really stunned us."

Indeed, the material's capacitance is so high that the researchers don't believe it can be explained by existing physics. "We've seen the same kind of thing in semiconductors," Ashoori says, "but that was a very pure sample, and the effect was very small. This is a super-dirty sample and a super-big effect." It's still not clear, Ashoori says, just why the effect is so big: "It could be a new quantum-mechanical effect or some unknown physics of the material."

There is one drawback to the system that the researchers investigated: While a lot of charge will move into the channel between the materials with a slight change in voltage, it moves slowly -- much too slowly for the type of high-frequency switching that takes place in computer chips. That could be because the samples of the material are, as Ashoori says, "super dirty"; purer samples might exhibit less electrical resistance. But it's also possible that, if researchers can understand the physical phenomena underlying the material's remarkable capacitance, they may be able to reproduce them in more practical materials.

Jean-Marc Triscone, a physicist at the University of Geneva whose group also studies such oxide interfaces, cautions that wholesale changes to the way computer chips are manufactured will inevitably face resistance. "So much money has been injected into the semiconductor industry for decades that to do something new, you need a really disruptive technology," he says.

"It's not going to revolutionize electronics tomorrow," Ashoori agrees."But this mechanism exists, and once we know it exists, if we can understand what it is, we can try to engineer it."


Source

Friday, May 13, 2011

Scientists Afflict Computers With 'Schizophrenia' to Better Understand the Human Brain

The researchers used a virtual computer model, or "neural network," to simulate the excessive release of dopamine in the brain. They found that the network recalled memories in a distinctly schizophrenic-like fashion.

Their results were published in April in Biological Psychiatry.

"The hypothesis is that dopamine encodes the importance-the salience-of experience," says Uli Grasemann, a graduate student in the Department of Computer Science at The University of Texas at Austin."When there's too much dopamine, it leads to exaggerated salience, and the brain ends up learning from things that it shouldn't be learning from."

The results bolster a hypothesis known in schizophrenia circles as the hyperlearning hypothesis, which posits that people suffering from schizophrenia have brains that lose the ability to forget or ignore as much as they normally would. Without forgetting, they lose the ability to extract what's meaningful out of the immensity of stimuli the brain encounters. They start making connections that aren't real, or drown in a sea of so many connections that they lose the ability to stitch together any kind of coherent story.

The neural network used by Grasemann and his adviser, Professor Risto Miikkulainen, is called DISCERN. Designed by Miikkulainen, DISCERN is able to learn natural language. In this study it was used to simulate what happens to language as the result of eight different types of neurological dysfunction. The results of the simulations were compared by Ralph Hoffman, professor of psychiatry at the Yale School of Medicine, to what he saw when studying human schizophrenics.

In order to model the process, Grasemann and Miikkulainen began by teaching a series of simple stories to DISCERN. The stories were assimilated into DISCERN's memory in much the way the human brain stores information -- not as distinct units, but as statistical relationships of words, sentences, scripts and stories.

"With neural networks, you basically train them by showing them examples, over and over and over again," says Grasemann."Every time you show it an example, you say, if this is the input, then this should be your output, and if this is the input, then that should be your output. You do it again and again thousands of times, and every time it adjusts a little bit more towards doing what you want. In the end, if you do it enough, the network has learned."

In order to model hyperlearning, Grasemann and Miikkulainen ran the system through its paces again, but with one key parameter altered. They simulated an excessive release of dopamine by increasing the system's learning rate -- essentially telling it to stop forgetting so much.

"It's an important mechanism to be able to ignore things," says Grasemann."What we found is that if you crank up the learning rate in DISCERN high enough, it produces language abnormalities that suggest schizophrenia."

After being re-trained with the elevated learning rate, DISCERN began putting itself at the center of fantastical, delusional stories that incorporated elements from other stories it had been told to recall. In one answer, for instance, DISCERN claimed responsibility for a terrorist bombing.

In another instance, DISCERN began showing evidence of "derailment" -- replying to requests for a specific memory with a jumble of dissociated sentences, abrupt digressions and constant leaps from the first- to the third-person and back again.

"Information processing in neural networks tends to be like information processing in the human brain in many ways," says Grasemann."So the hope was that it would also break down in similar ways. And it did."

The parallel between their modified neural network and human schizophrenia isn't absolute proof the hyperlearning hypothesis is correct, says Grasemann. It is, however, support for the hypothesis, and also evidence of how useful neural networks can be in understanding the human brain.

"We have so much more control over neural networks than we could ever have over human subjects," he says."The hope is that this kind of modeling will help clinical research."


Source

Tuesday, May 10, 2011

Original Versus Copy: Researchers Develop Forgery-Proof Prototypes for Product Authentication

For this, all the data has to be electronically checked. In the framework of the "Crypta" project supported by the Federal Ministry for Transport, Innovation and Technology (BMVIT), scientists from Graz University of Technology have now developed a prototype which safeguards objects according to new standards.

Whether for checking the origin of foodstuffs or as proof of authenticity of drugs, the future will bring an increased use of electronic assistants to make sure that the quality is right. RFID (Radio Frequency Identification) technology enables objects to be identified wirelessly. "You need a reading device and an RFID tag which communicate with each other," explains project leader Jörn-Marc Schmidt from the Institute of Applied Information Processing and Communications at Graz University of Technology. There is a difference between active and passive tags: the former are connected to a power source, whereas the latter draw the required power directly from the field of the reading unit, which makes them particularly suitable, for instance, for applications in supermarkets.

Private Key

For a long time, the same electronic keys were used for these energy-efficient passive tags and their readers -- what experts call symmetrical methods. "In asymmetrical methods, the transmitter and receiver possess different keys. Secure digital signatures are thus made possible," adds Jörn-Marc Schmidt. Together with the semiconductor manufacturer austriamicrosystems and RF-iT Solutions GmbH, an RFID software and services provider from Graz, the researchers have now developed a prototype which uses such a standard asymmetric method on passive tags for the first time.

"For every tag there is a public key and a private key which remains secret," explains Jörn-Marc Schmidt. This is a development which could be made use of everywhere where proof of authenticity is important. The research results are the gratifying outcome of the Crypta research project of the FIT-IT funding line of the BMVIT, which supports application-oriented research in information technology in particular.


Source

Monday, May 9, 2011

Toward Optical Computing in Handheld Electronics: Graphene Optical Modulators Could Lead to Ultrafast Communications

The team of researchers, led by UC Berkeley engineering professor Xiang Zhang, built a tiny optical device that uses graphene, a one-atom-thick layer of crystallized carbon, to switch light on and off. This switching ability is the fundamental characteristic of a network modulator, which controls the speed at which data packets are transmitted. The faster the data pulses are sent out, the greater the volume of information that can be sent. Graphene-based modulators could soon allow consumers to stream full-length, high-definition, 3-D movies onto a smartphone in a matter of seconds, the researchers said.

"This is the world's smallest optical modulator, and the modulator in data communications is the heart of speed control," said Zhang, who directs a National Science Foundation (NSF) Nanoscale Science and Engineering Center at UC Berkeley."Graphene enables us to make modulators that are incredibly compact and that potentially perform at speeds up to ten times faster than current technology allows. This new technology will significantly enhance our capabilities in ultrafast optical communication and computing."

In this latest work, described in the May 8 advance online publication of the journal Nature, researchers were able to tune the graphene electrically to absorb light in wavelengths used in data communication. This advance adds yet another advantage to graphene, which has gained a reputation as a wonder material since 2004, when it was first extracted from graphite, the same material in pencil lead. That achievement earned University of Manchester scientists Andre Geim and Konstantin Novoselov the Nobel Prize in Physics last year.

Zhang worked with fellow faculty member Feng Wang, an assistant professor of physics and head of the Ultrafast Nano-Optics Group at UC Berkeley. Both Zhang and Wang are faculty scientists at Lawrence Berkeley National Laboratory's Materials Science Division.

"The impact of this technology will be far-reaching," said Wang."In addition to high-speed operations, graphene-based modulators could lead to unconventional applications due to graphene's flexibility and ease in integration with different kinds of materials. Graphene can also be used to modulate new frequency ranges, such as mid-infrared light, that are widely used in molecular sensing."

Graphene is the thinnest, strongest crystalline material yet known. It can be stretched like rubber, and it has the added benefit of being an excellent conductor of heat and electricity. This last quality of graphene makes it a particularly attractive material for electronics.

"Graphene is compatible with silicon technology and is very cheap to make," said Ming Liu, post-doctoral researcher in Zhang's lab and co-lead author of the study."Researchers in Korea last year have already produced 30-inch sheets of it. Moreover, very little graphene is required for use as a modulator. The graphite in a pencil can provide enough graphene to fabricate 1 billion optical modulators."

It is the behavior of photons and electrons in graphene that first caught the attention of the UC Berkeley researchers.

The researchers found that the energy of the electrons, referred to as the Fermi level, can be easily altered depending upon the voltage applied to the material. The graphene's Fermi level in turn determines whether the light is absorbed or not.

When a sufficiently negative voltage is applied, electrons are drawn out of the graphene and are no longer available to absorb photons. The light is "switched on" because the graphene becomes totally transparent as the photons pass through.

Graphene is also transparent at certain positive voltages because, in that situation, the electrons become packed so tightly that they cannot absorb the photons.

The researchers found a sweet spot in the middle, where there is just enough voltage applied that the electrons can prevent the photons from passing, effectively switching the light "off."
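In semiconductor-physics terms (background the article implies rather than states), this on/off behavior is Pauli blocking of interband absorption: a photon of energy E can only be absorbed while the Fermi level sits within E/2 of the neutrality point. A toy sketch of that condition:

```python
# Toy model of gate-tunable absorption in graphene (Pauli blocking):
# an interband transition absorbs a photon of energy E_ph only while
# |E_F| < E_ph / 2. A strongly negative gate empties the states that
# would absorb; a strongly positive gate fills the states the excited
# electron would land in; both extremes make the sheet transparent.
E_PH = 0.8                      # photon energy in eV (~1550 nm telecom light)

def absorbing(fermi_level_ev):
    return abs(fermi_level_ev) < E_PH / 2

for e_f in (-0.6, 0.0, +0.6):   # illustrative Fermi levels set by the gate
    state = "opaque -> light 'off'" if absorbing(e_f) else "transparent -> light 'on'"
    print(f"E_F = {e_f:+.1f} eV: {state}")
```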

"If graphene were a hallway, and electrons were people, you could say that, when the hall is empty, there's no one around to stop the photons," said Xiaobo Yin, co-lead author of the Nature paper and a research scientist in Zhang's lab."In the other extreme, when the hall is too crowded, people can't move and are ineffective in blocking the photons. It's in between these two scenarios that the electrons are allowed to interact with and absorb the photons, and the graphene becomes opaque."

In their experiment, the researchers layered graphene on top of a silicon waveguide to fabricate optical modulators. The researchers were able to achieve a modulation speed of 1 gigahertz, but they noted that the speed could theoretically reach as high as 500 gigahertz for a single modulator.

While components based upon optics have many advantages over those that use electricity, including the ability to carry denser packets of data more quickly, attempts to create optical interconnects that fit neatly onto a computer chip have been hampered by the relatively large amount of space required in photonics.

Light waves are less agile in tight spaces than their electrical counterparts, the researchers noted, so photon-based applications have been primarily confined to large-scale devices, such as fiber optic lines.

"Electrons can easily make an L-shaped turn because the wavelengths in which they operate are small," said Zhang."Light wavelengths are generally bigger, so they need more space to maneuver. It's like turning a long, stretch limo instead of a motorcycle around a corner. That's why optics require bulky mirrors to control their movements. Scaling down the optical device also makes it faster because the single atomic layer of graphene can significantly reduce the capacitance -- the ability to hold an electric charge -- which often hinders device speed."

Graphene-based modulators could overcome the space barrier of optical devices, the researchers said. They successfully shrank a graphene-based optical modulator down to a relatively tiny 25 square microns, roughly 400 times smaller than the cross-section of a human hair. The footprint of a typical commercial modulator can be as large as a few square millimeters.

Even at such a small size, graphene packs a punch in bandwidth capability. Graphene can absorb a broad spectrum of light, ranging over thousands of nanometers from ultraviolet to infrared wavelengths. This allows graphene to carry more data than current state-of-the-art modulators, which operate at a bandwidth of up to 10 nanometers, the researchers said.

"Graphene-based modulators not only offer an increase in modulation speed, they can enable greater amounts of data packed into each pulse," said Zhang."Instead of broadband, we will have 'extremeband.' What we see here and going forward with graphene-based modulators are tremendous improvements, not only in consumer electronics, but in any field that is now limited by data transmission speeds, including bioinformatics and weather forecasting. We hope to see industrial applications of this new device in the next few years."

Other UC Berkeley co-authors of this paper are graduate student Erick Ulin-Avila and post-doctoral researcher Thomas Zentgraf in Zhang's lab; and visiting scholar Baisong Geng and graduate student Long Ju in Wang's lab.

This work was supported through the Center for Scalable and Integrated Nano-Manufacturing (SINAM), an NSF Nanoscale Science and Engineering Center. Funding from the Department of Energy's Basic Energy Science program at Lawrence Berkeley National Laboratory also helped support this research.


Source

Sunday, May 8, 2011

Robot Engages Novice Computer Scientists

A product of CMU's famed Robotics Institute, Finch was designed specifically to make introductory computer science classes an engaging experience once again.

A white plastic, two-wheeled robot with bird-like features, Finch can quickly be programmed by a novice to say "Hello, World," or do a little dance, or make its beak glow blue in response to cold temperature or some other stimulus. But the simple look of the tabletop robot is deceptive. Based on four years of educational research sponsored by the National Science Foundation, Finch includes a number of features that could keep students busy for a semester or more thinking up new things to do with it.

"Students are more interested and more motivated when they can work with something interactive and create programs that operate in the real world," said Tom Lauwers, who earned his Ph.D. in robotics at CMU in 2010 and is now an instructor in the Robotics Institute's CREATE Lab."We packed Finch with sensors and mechanisms that engage the eyes, the ears -- as many senses as possible."

Lauwers has launched a startup company, BirdBrain Technologies, to produce Finch and now sells the robots online at www.finchrobot.com for $99 each.

"Our vision is to make Finch affordable enough that every student can have one to take home for assignments," said Lauwers, who developed the robot with Illah Nourbakhsh, associate professor of robotics and director of the CREATE Lab. Less than a foot long, Finch easily fits in a backpack and is rugged enough to survive being hauled around and occasionally dropped.

Finch includes temperature and light sensors, a three-axis accelerometer and a bump sensor. It has color-programmable LED lights, a beeper and speakers. With a pencil inserted in its tail, Finch can be used to draw pictures. It can be programmed to be a moving, noise-making alarm clock. It even has uses beyond a robot; its accelerometer enables it to be used as a 3-D mouse to control a computer display.

Robot kits suitable for students as young as 12 are commercially available, but often cost more than the Finch, Lauwers said. What's more, the idea is to use the robot to make computer programming lessons more interesting, not to use precious instructional time to first build a robot.

Finch is a plug-and-play device, so no drivers or other software must be installed beyond what is used in typical computer science courses. Finch connects with and receives power from the computer over a 15-foot USB cable, eliminating batteries and off-loading its computation to the computer. Support for a wide range of programming languages and environments is coming, including graphical languages appropriate for young students. Finch currently can be programmed with the Java and Python languages widely used by educators.
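To give a flavor of what a first Finch assignment might look like in Python, here is a hypothetical sketch; the `finch_robot` module and every method name in it are illustrative assumptions, not the official BirdBrain API:

```python
# Hypothetical beginner exercise for a Finch-style robot: glow blue
# when cold and drive in a gentle arc until something is bumped.
# NOTE: `finch_robot`, `Finch`, and all methods below are assumed
# names for illustration, not BirdBrain's documented interface.
from finch_robot import Finch

robot = Finch()                               # opens the USB connection
try:
    while not robot.bumped():                 # assumed bump-sensor query
        if robot.temperature_c() < 18.0:      # assumed temperature read
            robot.set_beak_led(0, 0, 255)     # cold: beak glows blue
        else:
            robot.set_beak_led(0, 255, 0)     # otherwise: green
        robot.set_wheels(left=0.4, right=0.5) # slightly unequal -> arc
finally:
    robot.halt()                              # stop motors, LEDs off
```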

A number of assignments are available on the Finch Robot website to help teachers drop Finch into their lesson plans, and the website allows instructors to upload their own assignments or ideas in return for company-provided incentives. The robot has been classroom-tested at the Community College of Allegheny County, Pa., and by instructors in high school, university and after-school programs.

"Computer science now touches virtually every scientific discipline and is a critical part of most new technologies, yet U.S. universities saw declining enrollments in computer science through most of the past decade," Nourbakhsh said."If Finch can help motivate students to give computer science a try, we think many more students will realize that this is a field that they would enjoy exploring."


Source

Saturday, May 7, 2011

Unique Norwegian Nano-Product: Processor Chips With a Global Market

"Actually, we are the only ones who have succeeded in developing radar transceivers like these," says Dag T. Wisland, CEO of Novelda AS.

Small company, heavyweight technology

With just 20 employees, Novelda develops high-performance nano-electronics that pave the way for new, advanced radar technology.

Although the company is small, its technology is absolutely cutting-edge. Novelda's silicon chips, which measure just 2 x 2 mm, have made an international breakthrough. Each chip contains nearly two million transistors and 512 radars that simultaneously sense and transmit information.

Unlike conventional radar devices, which must be placed some metres away from the object to be measured, Novelda's can be located directly on the object. This capability opens up opportunities for product development with all sorts of exciting applications.

"We have customers located all over the world who are developing applications based on our technology," explains Chief Marketing Officer Aage Kalsæg."In the health care sector alone, our sensors are used in solutions being developed for monitoring heart rate, taking wireless ECG readings, and measuring fluid in the lungs."

"Some of the other exciting development projects are snow depth radars that combine GPS with water content measurement, as well as radars that can penetrate walls and rubble and find people trapped in collapsed buildings. The possibilities are endless."

Intensive R&D is crucial

Novelda's path -- from start-up company in 2004 to technological market leader -- has been an arduous one. Continuity in research is a critical element of the company's success. Novelda has received public funding from the Research Council of Norway and its programmes, such as User-driven Research-based Innovation (BIA) and Core Competence and Growth in ICT (VERDIKT), as well as from EUREKA's Eurostars Programme, with its funding and support specifically dedicated to SMEs.


Source

Friday, May 6, 2011

EEG Headset With Flying Harness Lets Users 'Fly' by Controlling Their Thoughts

Creative director and Rensselaer MFA candidate Yehuda Duenyas describes the "Infinity Simulator" as a platform similar to a gaming console -- like the Wii or the Kinect -- writ large.

"Instead of you sitting and controlling gaming content, it's a whole system that can control live elements -- so you can control 3-D rigging, sound, lights, and video," said Duenyas, who works under the moniker"xxxy.""It's a system for creating hybrids of theater, installation, game, and ride."

Duenyas created the"Infinity Simulator" with a team of collaborators, including Michael Todd, a Rensselaer 2010 graduate in computer science. Duenyas will exhibit the new system in the art installation"The Ascent" on May 12 at Curtis R. Priem Experimental Media and Performing Arts Center (EMPAC).

Ten computer programs running simultaneously link the commercially available EEG headset to the computer-controlled 3-D flying harness and various theater systems, said Todd.

Within the theater, the rigging -- including the harness -- is controlled by a Stage Tech NOMAD console; lights are controlled by an ION console running MIDI show control; sound through MAX/MSP; and video through Isadora and Jitter. The "Infinity Simulator," a series of three C programs written by Todd, acts as intermediary between the headset and the theater systems, connecting and conveying all input and output.

"We've built a software system on top of the rigging control board and now have control of it through an iPad, and since we have the iPad control, we can have anything control it," said Duenyas."The 'Infinity Simulator' is the center; everything talks to the 'Infinity Simulator.'"

The May 12"The Ascent" installation is only one experience made possible by the new platform, Duenyas said.

"'The Ascent' embodies the maiden experience that we'll be presenting," Duenyas said."But we've found that it's a versatile platform to create almost any type of experience that involves rigging, video, sound, and light. The idea is that it's reactive to the users' body; there's a physical interaction."

Duenyas, a Brooklyn-based artist and theater director, specializes in experiential theater performances.

"The thing that I focus on the most is user experience," Duenyas said."All the shows I do with my theater company and on my own involve a lot of set and set design -- you're entering into a whole world. You're having an experience that is more than going to a show, although a show is part of it."

The"Infinity Simulator" stemmed from an idea Duenyas had for such a theatrical experience.

"It started with an idea that I wanted to create a simulator that would give people a feeling of infinity," Duenyas said. His initial vision was that of a room similar to a Cave Automated Virtual Environment -- a room paneled with projection screens -- in which participants would be able to float effortlessly in an environment intended to evoke a glimpse into infinity.

At Rensselaer, Duenyas took advantage of the technology at hand to explore his idea, first with a video game he developed in 2010, then -- working through the Department of the Arts -- with EMPAC's computer-controlled 3-D theatrical flying harness.

"The charge of the arts department is to allow the artists that they bring into the department to use technology to enhance what they've been doing already," Duenyas said."In coming here (EMPAC), and starting to translate our ideas into a physical space, so many different things started opening themselves up to us."

The 2010 video game, also developed with Todd, tracked the movements -- pitch and yaw -- of players suspended in a custom-rigged harness, allowing players to soar through simulated landscapes. Duenyas said that that game (also called the "Infinity Simulator") and the new platform are part of the same vision.

EMPAC Director Johannes Goebel saw the game on display at the 2010 GameFest and discussed the custom-designed 3-D theatrical flying rig in EMPAC with Duenyas. Working through the Arts Department, Duenyas submitted a proposal to work with the rig, and his proposal was accepted.

Duenyas and his team experimented -- first gaining peripheral control over the system, and then linking it to the EEG headset -- and created the Ascent installation as an initial project. In the installation, the Infinity Simulator is programmed to respond to relaxation.

"We're measuring two brain states -- alpha and theta -- waking consciousness and everyday brain computational processing," said Duenyas."If you close your eyes and take a deep breath, that processing power decreases. When it decreases below a certain threshold, that is the trigger for you to elevate."

As a user rises, their ascent triggers a changing display of lights, sound, and video. Duenyas said he wants to hint at transcendental experience, while keeping the door open for a more circumspect interpretation.

"The point is that the user is trying to transcend the everyday and get into this meditative state so they can have this experience. I see it as some sort of iconic spiritual simulator. That's the serious side," he said."There's also a real tongue-in-cheek side of my work: I want clouds, I want Terry Gilliam's animated fist to pop out of a cloud and hit you in the face. It's mixing serious religious symbology, but not taking it seriously."

The humor is prompted, in part, by the limitations of this earliest iteration of Duenyas' vision.

"It started with, 'I want to have a glimpse of infinity,' 'I want to float in space.' Then you get in the harness and you're like 'man, this harness is uncomfortable,'" he said."In order to achieve the original vision, we had to build an infrastructure, and I still see development of the infinity experience is a ways off; but what we can do with the infrastructure in a realistic time frame is create 'The Ascent,' which is going to be really fun, and totally other."

Creating the"Infinity Simulator" has prompted new possibilities.

"The vision now is to play with this fun system that we can use to build any experience," he said."It's sort of overwhelming because you could do so many things -- you could create a flight through cumulus clouds, you could create an augmented physicality parkour course where you set up different features in the room and guide yourself to different heights. It's limitless."


Source

Thursday, May 5, 2011

Transistors Reinvented Using New 3-D Structure

The three-dimensional Tri-Gate transistors represent a fundamental departure from the two-dimensional planar transistor structure that has powered not only all computers, mobile phones and consumer electronics to date, but also the electronic controls within cars, spacecraft, household appliances, medical devices and virtually thousands of other everyday devices for decades.

"Intel's scientists and engineers have once again reinvented the transistor, this time utilizing the third dimension," said Intel President and CEO Paul Otellini."Amazing, world-shaping devices will be created from this capability as we advance Moore's Law into new realms."

Scientists have long recognized the benefits of a 3-D structure for sustaining the pace of Moore's Law as device dimensions become so small that physical laws become barriers to advancement. The key to this latest breakthrough is Intel's ability to deploy its novel 3-D Tri-Gate transistor design into high-volume manufacturing, ushering in the next era of Moore's Law and opening the door to a new generation of innovations across a broad spectrum of devices.

Moore's Law is a forecast for the pace of silicon technology development which states that, roughly every two years, transistor density will double while functionality and performance increase and costs decrease. It has been the basic business model for the semiconductor industry for more than 40 years.
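Stated as a formula (a standard paraphrase of the doubling rule, not Intel's wording):

```latex
% Moore's Law as quoted here: density doubles roughly every two years.
%   N(t) = N_0 * 2^{t/2},  with t in years.
% Example: over one decade, t = 10 gives 2^5 = 32 times the density.
\[
  N(t) \;=\; N_0 \cdot 2^{\,t/2}
\]
```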

Unprecedented Power Savings and Performance Gains

Intel's 3-D Tri-Gate transistors enable chips to operate at lower voltage with lower leakage, providing an unprecedented combination of improved performance and energy efficiency compared to previous state-of-the-art transistors. The capabilities give chip designers the flexibility to choose transistors targeted for low power or high performance, depending on the application.

The 22nm 3-D Tri-Gate transistors provide up to a 37 percent performance increase at low voltage versus Intel's 32nm planar transistors. This incredible gain means that they are ideal for use in small handheld devices, which operate using less energy to "switch" back and forth. Alternatively, the new transistors consume less than half the power of 2-D planar transistors on 32nm chips when operating at the same performance.

"The performance gains and power savings of Intel's unique 3-D Tri-Gate transistors are like nothing we've seen before," said Mark Bohr, Intel Senior Fellow."This milestone is going further than simply keeping up with Moore's Law. The low-voltage and low-power benefits far exceed what we typically see from one process generation to the next. It will give product designers the flexibility to make current devices smarter and wholly new ones possible. We believe this breakthrough will extend Intel's lead even further over the rest of the semiconductor industry."

Continuing the Pace of Innovation -- Moore's Law

Transistors continue to get smaller, cheaper and more energy efficient in accordance with Moore's Law -- named for Intel co-founder Gordon Moore. Because of this, Intel has been able to innovate and integrate, adding more features and computing cores to each chip, increasing performance, and decreasing manufacturing cost per transistor.

Sustaining the progress of Moore's Law becomes even more complex with the 22nm generation. Anticipating this, Intel research scientists in 2002 invented what they called a Tri-Gate transistor, named for the three sides of the gate. This announcement follows further years of development in Intel's highly coordinated research-development-manufacturing pipeline, and marks the implementation of this work for high-volume manufacturing.

The 3-D Tri-Gate transistors are a reinvention of the transistor. The traditional "flat" two-dimensional planar gate is replaced with an incredibly thin three-dimensional silicon fin that rises up vertically from the silicon substrate. Control of the current is accomplished by implementing a gate on each of the three sides of the fin -- one on each vertical side and one across the top -- rather than just one on top, as is the case with the 2-D planar transistor. The additional control enables as much transistor current as possible to flow when the transistor is in the "on" state (for performance), and as close to zero as possible when it is in the "off" state (to minimize power), and enables the transistor to switch very quickly between the two states (again, for performance).

Just as skyscrapers let urban planners optimize available space by building upward, Intel's 3-D Tri-Gate transistor structure provides a way to manage density. Since these fins are vertical in nature, transistors can be packed closer together, a critical component to the technological and economic benefits of Moore's Law. For future generations, designers also have the ability to continue growing the height of the fins to get even more performance and energy-efficiency gains.

"For years we have seen limits to how small transistors can get," said Moore."This change in the basic structure is a truly revolutionary approach, and one that should allow Moore's Law, and the historic pace of innovation, to continue."

World's First Demonstration of 22nm 3-D Tri-Gate Transistors

The 3-D Tri-Gate transistor will be implemented in the company's upcoming manufacturing process, called the 22nm node, in reference to the size of individual transistor features. More than 6 million 22nm Tri-Gate transistors could fit in the period at the end of this sentence.
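
That figure is easy to sanity-check with rough arithmetic. The sketch below assumes a printed period of about 0.3 mm in diameter -- an assumption on our part; only the transistor count comes from the article -- and derives the implied area per transistor, which works out to roughly a 110nm pitch, plausible for a 22nm-class process.

```python
import math

# Sanity check of the "6 million transistors in a period" claim.
# The period's size is an assumption (~0.3 mm diameter for typical
# print); the transistor count is from the article.

period_diameter_mm = 0.3                       # assumed
period_area_um2 = math.pi * (period_diameter_mm * 1000 / 2) ** 2
transistors = 6e6                              # from the article

area_per_transistor_um2 = period_area_um2 / transistors
pitch_nm = math.sqrt(area_per_transistor_um2) * 1000

print(f"period area:       {period_area_um2:,.0f} um^2")
print(f"area / transistor: {area_per_transistor_um2:.4f} um^2")
print(f"implied pitch:     {pitch_nm:.0f} nm")  # ~110 nm
```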

Intel has demonstrated the world's first 22nm microprocessor, codenamed "Ivy Bridge," working in a laptop, server and desktop computer. Ivy Bridge-based Intel® Core™ family processors will be the first high-volume chips to use 3-D Tri-Gate transistors. Ivy Bridge is slated for high-volume production readiness by the end of this year.

This silicon technology breakthrough will also aid the delivery of more highly integrated Intel® Atom™ processor-based products that scale the performance, functionality and software compatibility of Intel® architecture to the power, cost and size requirements of a range of market segments.


Source

Wednesday, May 4, 2011

Evolutionary Lessons for Wind Farm Efficiency

Senior Lecturer Dr Frank Neumann, from the School of Computer Science, is using a "selection of the fittest" step-by-step approach called "evolutionary algorithms" to optimise wind turbine placement. This takes into account wake effects, the minimum amount of land needed, wind factors and the complex aerodynamics of wind turbines.

"Renewable energy is playing an increasing role in the supply of energy worldwide and will help mitigate climate change," says Dr Neumann."To further increase the productivity of wind farms, we need to exploit methods that help to optimise their performance."

Dr Neumann says the question of exactly where wind turbines should be placed to gain maximum efficiency is highly complex. "An evolutionary algorithm is a mathematical process where potential solutions keep being improved a step at a time until the optimum is reached," he says.

"You can think of it like parents producing a number of offspring, each with differing characteristics," he says."As with evolution, each population or 'set of solutions' from a new generation should get better. These solutions can be evaluated in parallel to speed up the computation."

Other biology-inspired algorithms for solving complex problems are based on ant colonies. "Ant colony optimisation" uses the principle of ants finding the shortest way from their nest to a source of food.

"You can observe them in nature, they do it very efficiently communicating between each other using pheromone trails," says Dr Neumann."After a certain amount of time, they will have found the best route to the food -- problem solved. We can also solve human problems using the same principles through computer algorithms."

Dr Neumann has come to the University of Adelaide this year from Germany where he worked at the Max Planck Institute. He is working on wind turbine placement optimisation in collaboration with researchers at the Massachusetts Institute of Technology.

"Current approaches to solving this placement optimisation can only deal with a small number of turbines," Dr Neumann says."We have demonstrated an accurate and efficient algorithm for as many as 1000 turbines."

The researchers are now looking to fine-tune the algorithms even further using different models of wake effect and complex aerodynamic factors.


Source

Tuesday, May 3, 2011

College Students' Use of Kindle DX Points to E-Reader’s Role in Academia

The UW last year was one of seven U.S. universities that participated in a pilot study of the Kindle DX, a larger version of the popular e-reader. UW researchers who study technology looked at how students involved in the pilot project did their academic reading.

"There is no e-reader that supports what we found these students doing," said first author Alex Thayer, a UW doctoral student in Human Centered Design and Engineering."It remains to be seen how to design one. It's a great space to get into, there's a lot of opportunity."

Thayer is presenting the findings in Vancouver, B.C. at the Association for Computing Machinery's Conference on Human Factors in Computing Systems, where the study received an honorable mention for best paper.

"Most e-readers were designed for leisure reading -- think romance novels on the beach," said co-author Charlotte Lee, a UW assistant professor of Human Centered Design and Engineering."We found that reading is just a small part of what students are doing. And when we realize how dynamic and complicated a process this is, it kind of redefines what it means to design an e-reader."

Some of the other schools participating in the pilot project conducted shorter studies, generally looking at the e-reader's potential benefits and drawbacks for course use. The UW study looked more broadly at how students did their academic reading, following both those who incorporated the e-reader into their routines and those who did not.

"We were not trying to evaluate the device, per se, but wanted to think long term, really looking to the future of e-readers, what are students trying to do, how can we support that," Lee said.

The researchers interviewed 39 first-year graduate students in the UW's Department of Computer Science & Engineering -- 7 women and 32 men, ranging from 21 to 53 years old.

By spring quarter of 2010, seven months into the study, fewer than 40 percent of the students were regularly doing their academic reading on the Kindle DX. Reasons included the device's lack of support for taking notes and the difficulty of looking up references. (Amazon, which makes the Kindle DX, has since improved some of these features.)

UW researchers continued to interview all the students over the nine-month period to find out more about their reading habits, with or without the e-reader. They found:

  • Students did most of the reading in fixed locations: 47 percent of reading was at home, 25 percent at school, 17 percent on a bus and 11 percent in a coffee shop or office.
  • The Kindle DX was more likely to replace students' paper-based reading than their computer-based reading.
  • Of the students who continued to use the device, some read near a computer so they could look up references or do other tasks that were easier to do on a computer. Others tucked a sheet of paper into the case so they could write notes.
  • With paper, three quarters of students marked up texts as they read. This included highlighting key passages, underlining, drawing pictures and writing notes in margins.
  • A drawback of the Kindle DX was the difficulty of switching between reading techniques, such as skimming an article's illustrations or references just before reading the complete text. Students frequently made such switches as they read course material.
  • The digital text also disrupted a technique called cognitive mapping, in which readers used physical cues such as the location on the page and the position in the book to go back and find a section of text or even to help retain and recall the information they had read.

Lee predicts that over time software will help address some of these issues. She even envisions niche software that could support reading styles specific to certain disciplines.

"You can imagine that a historian going through illuminated texts is going to have very different navigation needs than someone who is comparing algorithms," Lee said.

It's likely that desktop computers, laptops, tablet computers and yes, even paper, will play a role in academic reading's future. But the authors say e-readers will also find their place. Thayer imagines the situation will be similar to today's music industry, where mp3s, CDs and LPs all coexist in music-lovers' listening habits.

"E-readers are not where they need to be in order to support academic reading," Lee concludes. But asked when e-readers will reach that point, she predicts:"It's going to be sooner than we think."

Other co-authors are Linda Hwang, Heidi Sales, Pausali Sen and Ninad Dalal of the UW.


Source