Wednesday, March 30, 2011

Physicists Rotate Beams of Light

Light waves can oscillate in different directions -- much like a string, which can vibrate up and down or left and right depending on the direction in which it is plucked. This is called the polarization of light. Physicists at the Vienna University of Technology have now, together with researchers at Würzburg University, developed a method to control and manipulate the polarization of light using ultra-thin layers of semiconductor material.

For future research on light and its polarization, this is an important step forward -- and this breakthrough could even open up possibilities for completely new computer technology. The experiment can be viewed as the optical version of an electronic transistor. The results of the experiment have now been published in the journal Physical Review Letters.

Controlling light with magnetic fields

The polarization of light can change when it passes through a material in a strong magnetic field. This phenomenon is known as the "Faraday effect." "So far, however, this effect has only been observed in materials in which it was very weak," professor Andrei Pimenov explains. He carried out the experiments at the Institute for Solid State Physics of the TU Vienna, together with his assistant Alexey Shuvaev. Using light of the right wavelength and extremely clean semiconductors, scientists in Vienna and Würzburg were able to achieve a Faraday effect that is orders of magnitude stronger than ever measured before.

Now light waves can be rotated into arbitrary directions -- the direction of the polarization can be tuned with an external magnetic field. Surprisingly, an ultra-thin layer of less than a thousandth of a millimeter is enough to achieve this. "Such thin layers made of other materials could only change the direction of polarization by a fraction of one degree," says professor Pimenov. If the beam of light is then sent through a polarization filter, which only allows light of a particular direction of polarization to pass, the scientists can, by rotating the polarization appropriately, decide whether the beam should pass or not.
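The switching logic in that last step follows the standard polarizer relation (Malus's law), I = I0 cos²θ: rotate the polarization with the magnetic field, and the filter either passes or blocks the beam. A minimal sketch of that arithmetic, with purely illustrative values:

```python
import math

def transmitted_intensity(i0, polarization_deg, polarizer_axis_deg=0.0):
    """Malus's law for an ideal polarization filter.

    i0                 -- incoming intensity (arbitrary units)
    polarization_deg   -- polarization angle after the Faraday rotation
    polarizer_axis_deg -- transmission axis of the analyzing filter
    """
    theta = math.radians(polarization_deg - polarizer_axis_deg)
    return i0 * math.cos(theta) ** 2

# Illustrative only: rotate the polarization by 0 degrees ("on") or 90 degrees ("off").
print(transmitted_intensity(1.0, 0.0))   # ~1.0 -> beam passes the filter
print(transmitted_intensity(1.0, 90.0))  # ~0.0 -> beam is blocked
```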

The key to this astonishing effect lies in the behavior of the electrons in the semiconductor. The beam of light sets the electrons oscillating, and the magnetic field deflects their vibrating motion. This complicated motion of the electrons in turn affects the beam of light and changes its direction of polarization.

An optical transistor

In the experiment, a layer of the semiconductor mercury telluride was irradiated with light in the infrared spectral range. "The light has a frequency in the terahertz domain -- those are the frequencies future generations of computers may operate with," professor Pimenov believes. "For years, the clock rates of computers have not really increased, because a domain has been reached in which material properties just don't play along anymore." A possible solution is to complement electronic circuits with optical elements. In a transistor, the basic element of electronics, an electric current is controlled by an external signal. In the experiment at TU Vienna, a beam of light is controlled by an external magnetic field. The two systems are very much alike. "We could call our system a light-transistor," Pimenov suggests.

Before optical circuits for computers can be considered, the newly discovered effect will prove useful as a tool for further research. In optics labs, it will play an important role in research on new materials and the physics of light.


Source

Thursday, March 24, 2011

BrainGate Neural Interface System Reaches 1,000-Day Performance Milestone

Results from five consecutive days of device use by a trial participant, surrounding her 1,000th day in the device trial, appeared online March 24 in the Journal of Neural Engineering.

"This proof of concept -- that after 1,000 days a woman who has no functional use of her limbs and is unable to speak can reliably control a cursor on a computer screen using only the intended movement of her hand -- is an important step for the field," said Dr. Leigh Hochberg, a Brown engineering associate professor, VA rehabilitation researcher, visiting associate professor of neurology at Harvard Medical School, and director of the BrainGate pilot clinical trial at MGH.

The woman, identified in the paper as S3, performed two "point-and-click" tasks each day by thinking about moving the cursor with her hand. In both tasks she averaged greater than 90 percent accuracy. Some on-screen targets were as small as the effective area of a Microsoft Word menu icon.

In each of S3's two tasks, performed in 2008, she controlled the cursor movement and click selections continuously for 10 minutes. The first task was to move the cursor to targets arranged in a circle and in the center of the screen, clicking to select each one in turn. The second required her to follow and click on a target as it sequentially popped up with varying size at random points on the screen.

From fundamental neuroscience to clinical utility

Under development since 2002, the investigational BrainGate system is a combination of hardware and software that directly senses electrical signals produced by neurons in the brain that control movement. By decoding those signals and translating them into digital instructions, the system is being evaluated for its ability to give people with paralysis control of external devices such as computers, robotic assistive devices, or wheelchairs. The BrainGate team is also engaged in research toward control of advanced prosthetic limbs and toward direct intracortical control of functional electrical stimulation devices for people with spinal cord injury, in collaboration with researchers at the Cleveland FES Center.
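The article does not spell out BrainGate's decoding algorithm; as an illustration only, the sketch below shows the general flavor of one widely used approach in the neuroprosthetics literature, a linear filter that maps binned spike counts from the electrode array to a two-dimensional cursor velocity. The channel count, weights, and data here are hypothetical, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 96 recording channels, spike counts binned every 100 ms.
n_channels = 96
firing_rates = rng.poisson(lam=5.0, size=n_channels).astype(float)

# A linear decoder. In practice the weights would be fit to calibration data
# recorded while the participant imagines moving a cursor toward known targets.
W = rng.normal(scale=0.01, size=(2, n_channels))   # maps rates -> (vx, vy)
baseline = firing_rates.mean()

velocity = W @ (firing_rates - baseline)   # intended cursor velocity (arbitrary units)
cursor_position = np.zeros(2)
cursor_position += velocity * 0.1          # integrate over the 100 ms bin

print("decoded velocity:", velocity)
print("updated cursor position:", cursor_position)
```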

The system is currently in pilot clinical trials, directed by Hochberg at MGH.

BrainGate uses a tiny (4x4 mm, about the size of a baby aspirin) silicon electrode array to read neural signals directly within brain tissue. Although external sensors placed on the brain or skull surface can also read neural activity, they are believed to be far less precise. In addition, many prototype brain implants have eventually failed because of moisture or other perils of the internal environment.

"Neuroengineers have often wondered whether useful signals could be recorded from inside the brain for an extended period of time," Hochberg said."This is the first demonstration that this microelectrode array technology can provide useful neuroprosthetic signals allowing a person with tetraplegia to control an external device for an extended period of time."

Moving forward

Device performance was not the same at 2.7 years as it was earlier on, Hochberg added. At 33 months fewer electrodes were recording useful neural signals than after only six months. But John Donoghue -- VA senior research career scientist, Henry Merritt Wriston Professor of Neuroscience, director of the Brown Institute for Brain Science, and original developer of the BrainGate system -- said no evidence has emerged of any fundamental incompatibility between the sensor and the brain. Instead, it appears that decreased signal quality over time can largely be attributed to engineering, mechanical or procedural issues. Since S3's sensor was built and implanted in 2005, the sensor's manufacturer has reported continual quality improvements. The data from this study will be used to further understand and modify the procedures or device to further increase durability.

"None of us will be fully satisfied with an intracortical recording device until it provides decades of useful signals," Hochberg said."Nevertheless, I'm hopeful that the progress made in neural interface systems will someday be able to provide improved communication, mobility, and independence for people with locked-in syndrome or other forms of paralysis and eventually better control over prosthetic, robotic, or functional electrical stimulation systems {stimulating electrodes that have already returned limb function to people with cervical spinal cord injury}, even while engineers continue to develop ever-better implantable sensors."

In addition to demonstrating the very encouraging longevity of the BrainGate sensor, the paper also presents an advance in how the performance of a brain-computer interface can be measured, Simeral said. "As the field continues to evolve, we'll eventually be able to compare and contrast technologies effectively."

As for S3, who had a brainstem stroke in the mid-1990s and is now in her late 50s, she continues to participate in trials with the BrainGate system, which continues to record useful signals, Hochberg said. However, data beyond the 1,000th day in 2008 has thus far only been presented at scientific meetings, and Hochberg can only comment on data that has already completed the scientific peer review process and appeared in publication.

In addition to Simeral, Hochberg, and Donoghue, other authors are Brown computer scientist Michael Black and former Brown computer scientist Sung-Phil Kim.

About the BrainGate collaboration

This advance is the result of the ongoing collaborative BrainGate research at Brown University, Massachusetts General Hospital, and Providence VA Medical Center. The BrainGate research team is focused on developing and testing neuroscientifically inspired technologies to improve the communication, mobility, and independence of people with neurologic disorders, injury, or limb loss.

For more information, visit www.braingate2.org.

The implanted microelectrode array and associated neural recording hardware used in the BrainGate research are manufactured by BlackRock Microsystems, LLC (Salt Lake City, UT).

This research was funded in part by the Rehabilitation Research and Development Service, Department of Veterans Affairs; The National Institutes of Health (NIH), including NICHD-NCMRR, NINDS/NICHD, NIDCD/ARRA, NIBIB, NINDS-Javits; the Doris Duke Charitable Foundation; MGH-Deane Institute for Integrated Research on Atrial Fibrillation and Stroke; and the Katie Samson Foundation.

The BrainGate pilot clinical trial was previously directed by Cyberkinetics Neurotechnology Systems, Inc., Foxborough, MA (CKI). CKI ceased operations in 2009. The clinical trials of the BrainGate2 Neural Interface System are now administered by Massachusetts General Hospital, Boston, Mass. Donoghue is a former chief scientific officer and a former director of CKI; he held stocks and received compensation. Hochberg received research support from Massachusetts General and Spaulding Rehabilitation Hospitals, which in turn received clinical trial support from Cyberkinetics. Simeral received compensation as a consultant to CKI.


Source

Wednesday, March 23, 2011

Toward Computers That Fit on a Pen Tip: New Technologies Usher in the Millimeter-Scale Computing Era

A prototype implantable eye pressure monitor for glaucoma patients is believed to contain the first complete millimeter-scale computing system. And a compact radio that needs no tuning to find the right frequency could be a key enabler for organizing millimeter-scale systems into wireless sensor networks. These networks could one day track pollution, monitor structural integrity, perform surveillance, or make virtually any object smart and trackable.

Both developments at the University of Michigan are significant milestones in the march toward millimeter-scale computing, believed to be the next electronics frontier.

Researchers are presenting papers on each at the International Solid-State Circuits Conference (ISSCC) in San Francisco. The work is being led by three faculty members in the U-M Department of Electrical Engineering and Computer Science: professors Dennis Sylvester and David Blaauw, and assistant professor David Wentzloff.

Bell's Law and the promise of pervasive computing

Nearly invisible millimeter-scale systems could enable ubiquitous computing, and the researchers say that's the future of the industry. They point to Bell's Law, a corollary to Moore's Law. (Moore's says that the number of transistors on an integrated circuit doubles every two years, roughly doubling processing power.)

Bell's Law says there's a new class of smaller, cheaper computers about every decade. With each new class, the volume shrinks by two orders of magnitude and the number of systems per person increases. The law has held from 1960s' mainframes through the '80s' personal computers, the '90s' notebooks and the new millennium's smart phones.
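As a rough illustration of that scaling (the starting volume and dates below are assumptions, not figures from the article), shrinking by two orders of magnitude per class takes a cabinet-sized 1960s machine to roughly cubic-millimeter scale within about five class generations:

```python
# Illustrative Bell's Law scaling: volume drops ~100x per computer class (~per decade).
volume_litres = 1000.0   # a very rough stand-in for a 1960s mainframe cabinet
year = 1960
while volume_litres > 1e-6:          # 1e-6 litres = 1 cubic millimetre
    year += 10
    volume_litres /= 100.0
    print(f"{year}: ~{volume_litres:g} litres per system")
```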

"When you get smaller than hand-held devices, you turn to these monitoring devices," Blaauw said."The next big challenge is to achieve millimeter-scale systems, which have a host of new applications for monitoring our bodies, our environment and our buildings. Because they're so small, you could manufacture hundreds of thousands on one wafer. There could be 10s to 100s of them per person and it's this per capita increase that fuels the semiconductor industry's growth."

The first complete millimeter-scale system

Blaauw and Sylvester's new system is targeted toward medical applications. The work they present at ISSCC focuses on a pressure monitor designed to be implanted in the eye to conveniently and continuously track the progress of glaucoma, a potentially blinding disease. (The device is expected to be commercially available several years from now.)

In a package that's just over 1 cubic millimeter, the system fits an ultra low-power microprocessor, a pressure sensor, memory, a thin-film battery, a solar cell and a wireless radio with an antenna that can transmit data to an external reader device that would be held near the eye.

"This is the first true millimeter-scale complete computing system," Sylvester said.

"Our work is unique in the sense that we're thinking about complete systems in which all the components are low-power and fit on the chip. We can collect data, store it and transmit it. The applications for systems of this size are endless."

The processor in the eye pressure monitor is the third generation of the researchers' Phoenix chip, which uses a unique power gating architecture and an extreme sleep mode to achieve ultra-low power consumption. The newest system wakes every 15 minutes to take measurements and consumes an average of 5.3 nanowatts. To keep the battery charged, it requires exposure to 10 hours of indoor light each day or 1.5 hours of sunlight. It can store up to a week's worth of information.
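A quick back-of-the-envelope check of those figures (only the 5.3-nanowatt average and the 15-minute wake interval come from the article; everything else is an assumption for illustration):

```python
# Rough energy budget for the eye-pressure monitor; illustrative numbers only.
AVG_POWER_W = 5.3e-9            # average consumption reported in the article
SECONDS_PER_DAY = 24 * 3600

energy_per_day_J = AVG_POWER_W * SECONDS_PER_DAY
print(f"energy used per day: {energy_per_day_J * 1e6:.0f} microjoules")

# If the solar cell must replace that energy during 10 hours of indoor light,
# the required average harvesting power is modest:
harvest_window_s = 10 * 3600
required_harvest_W = energy_per_day_J / harvest_window_s
print(f"required harvesting power: {required_harvest_W * 1e9:.1f} nW")

# One measurement every 15 minutes, stored for a week:
samples_per_week = 7 * 24 * 4
print(f"stored samples per week: {samples_per_week}")
```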

While this system is miniscule and complete, its radio doesn't equip it to talk to other devices like it. That's an important feature for any system targeted toward wireless sensor networks.

A unique compact radio to enable wireless sensor networks

Wentzloff and doctoral student Kuo-Ken Huang have taken a step toward enabling such node-to-node communication. They've developed a consolidated radio with an on-chip antenna that doesn't need the bulky external crystal that engineers rely on today when two isolated devices need to talk to each other. The crystal reference keeps time and selects a radio frequency band. Integrating the antenna and eliminating this crystal significantly shrinks the radio system. Wentzloff's radio is less than 1 cubic millimeter in size.

Their key innovation is to engineer the new antenna to keep time on its own and serve as its own reference. By integrating the antenna through an advanced CMOS process, they can precisely control its shape and size, and therefore how it oscillates in response to electrical signals.

"Antennas have a natural resonant frequency for electrical signals that is defined by their geometry, much like a pure audio tone on a tuning fork," Wentzloff said."By designing a circuit to monitor the signal on the antenna and measure how close it is to the antenna's natural resonance, we can lock the transmitted signal to the antenna's resonant frequency."

"This is the first integrated antenna that also serves as its own reference. The radio on our chip doesn't need external tuning. Once you deploy a network of these, they'll automatically align at the same frequency."

The researchers are now working on lowering the radio's power consumption so that it's compatible with millimeter-scale batteries.

Greg Chen, a doctoral student in the Department of Electrical Engineering and Computer Science, presents "A Cubic-Millimeter Energy-Autonomous Wireless Intraocular Pressure Monitor." The researchers are collaborating with Ken Wise, the William Gould Dow Distinguished University Professor of Electrical Engineering and Computer Science, on the packaging of the sensor, and with Paul Lichter, chair of the Department of Ophthalmology and Visual Sciences at the U-M Medical School, on the implantation studies. Huang presents "A 60GHz Antenna-Referenced Frequency-Locked Loop in 0.13μm CMOS for Wireless Sensor Networks." This research is funded by the National Science Foundation. The university is pursuing patent protection for the intellectual property and is seeking commercialization partners to help bring the technology to market.


Source

Tuesday, March 22, 2011

Simulating Tomorrow's Accelerators at Near the Speed of Light

But realizing the promise of laser-plasma accelerators crucially depends on being able to simulate their operation in three-dimensional detail. Until now such simulations have challenged or exceeded even the capabilities of supercomputers.

A team of researchers led by Jean-Luc Vay of Berkeley Lab's Accelerator and Fusion Research Division (AFRD) has borrowed a page from Einstein to perfect a revolutionary new method for calculating what happens when a laser pulse plows through a plasma in an accelerator like BELLA. Using their "boosted-frame" method, Vay's team has achieved full 3-D simulations of a BELLA stage in just a few hours of supercomputer time, calculations that would have been beyond the state of the art just two years ago.

Not only are the recent BELLA calculations tens of thousands of times faster than conventional methods, they overcome problems that plagued previous attempts to achieve the full capacity of the boosted-frame method, such as violent numerical instabilities. Vay and his colleagues, Cameron Geddes of AFRD, Estelle Cormier-Michel of the Tech-X Corporation in Denver, and David Grote of Lawrence Livermore National Laboratory, publish their latest findings in the March 2011 issue of the journal Physics of Plasmas.

Space, time, and complexity

The boosted-frame method, first proposed by Vay in 2007, exploits Einstein's Theory of Special Relativity to overcome difficulties posed by the huge range of space and time scales in many accelerator systems. Vast discrepancies of scale are what made simulating these systems too costly.

"Most researchers assumed that since the laws of physics are invariable, the huge complexity of these systems must also be invariable," says Vay."But what are the appropriate units of complexity? It turns out to depend on how you make the measurements."

Laser-plasma wakefield accelerators are particularly challenging: they send a very short laser pulse through a plasma measuring a few centimeters or more, many orders of magnitude longer than the pulse itself (or the even-shorter wavelength of its light). In its wake, like a speedboat on water, the laser pulse creates waves in the plasma. These alternating waves of positively and negatively charged particles set up intense electric fields. Bunches of free electrons, shorter than the laser pulse,"surf" the waves and are accelerated to high energies.

"The most common way to model a laser-plasma wakefield accelerator in a computer is by representing the electromagnetic fields as values on a grid, and the plasma as particles that interact with the fields," explains Geddes, a member of the BELLA science staff who has long worked on laser-plasma acceleration."Since you have to resolve the finest structures -- the laser wavelength, the electron bunch -- over the relatively enormous length of the plasma, you need a grid with hundreds of millions of cells."

The laser period must also be resolved in time, and calculated over millions of time steps. As a result, while much of the important physics of BELLA is three-dimensional, direct 3-D simulation was initially impractical. Just a one-dimensional simulation of BELLA required 5,000 hours of supercomputer processor time at Berkeley Lab's National Energy Research Scientific Computing Center (NERSC).
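A rough sense of scale, using assumed numbers rather than BELLA's actual design parameters: the pulse has to be tracked across tens of thousands of laser wavelengths, and each wavelength has to be resolved by tens of cells and time steps, which is where the millions of steps come from.

```python
# Rough lab-frame cost estimate; all numbers here are illustrative assumptions.
laser_wavelength_m = 0.8e-6    # near-infrared laser pulse
plasma_length_m = 0.03         # "a few centimeters" of plasma

wavelengths_across_plasma = plasma_length_m / laser_wavelength_m
print(f"laser wavelengths across the plasma: {wavelengths_across_plasma:,.0f}")

# Each wavelength must itself be resolved by a few tens of cells and time steps.
steps_per_wavelength = 30
time_steps = wavelengths_across_plasma * steps_per_wavelength
print(f"time steps needed in the laboratory frame: {time_steps:.2e}")
```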

Choosing the right frame

The key to reducing complexity and cost lies in choosing the right point of view, or"reference frame." When Albert Einstein was 16 years old he imagined riding along in a frame moving with a beam of light -- a thought experiment that, 10 years later, led to his Theory of Special Relativity, which establishes that there is no privileged reference frame. Observers moving at different velocities may experience space and time differently and even see things happening in a different order, but calculations from any point of view can recover the same physical result.

Among the consequences are that the speed of light in a vacuum is always the same; compared to a stationary observer's experience, time moves more slowly while space contracts for an observer traveling near light speed. These different points of view are called Lorentz frames, and changing one for another is called a Lorentz transformation. The "boosted frame" of the laser pulse is the key to enabling calculations of laser-plasma wakefield accelerators that would otherwise be inaccessible.

A laser pulse pushing through a tenuous plasma moves only a little slower than light through a vacuum. An observer in the stationary laboratory frame sees it as a rapid oscillation of electromagnetic fields moving through a very long plasma, whose simulation requires high resolution and many time steps. But for an observer moving with the pulse, time slows, and the frequency of the oscillations is greatly reduced; meanwhile space contracts, and the plasma becomes much shorter. Thus relatively few time steps are needed to model the interaction between the laser pulse, the plasma waves formed in its wake, and the bunches of electrons riding the wakefield through the plasma. Fewer steps mean less computer time.
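In rough quantitative terms (a sketch under assumed numbers, not the exact figures used for BELLA), boosting to a frame with Lorentz factor γ stretches the laser wavelength by about (1 + β)γ while contracting the plasma by γ, so the range of scales that must be resolved, and with it the number of time steps, shrinks by roughly (1 + β)²γ², the scaling commonly quoted for the boosted-frame method:

```python
import math

def scale_range_reduction(gamma):
    """Illustrative estimate of how much the range of space/time scales shrinks
    in a frame boosted with Lorentz factor gamma."""
    beta = math.sqrt(1.0 - 1.0 / gamma ** 2)
    return (1.0 + beta) ** 2 * gamma ** 2

# Example boost factors (illustrative choices, not the values used for BELLA):
for gamma in (10, 100, 1000):
    print(f"gamma = {gamma:5d}: scales compressed ~{scale_range_reduction(gamma):,.0f}x")
```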

Eliminating instability

Early attempts to apply the boosted-frame method to laser-plasma wakefield simulations encountered numerical instabilities that limited how much the calculation frame could be boosted. Calculations could still be speeded up tens or even hundreds of times, but the full promise of the method could not be realized.

Vay's team showed that using a particular boosted frame, that of the wakefield itself -- in which the laser pulse is almost stationary -- realizes near-optimal speedup of the calculation. And it fundamentally modifies the appearance of the laser in the plasma. In the laboratory frame the observer sees many oscillations of the electromagnetic field in the laser pulse; in the frame of the wake, the observer sees just a few at a time.

Not only is speedup possible because of the coarser resolution, but at the same time numerical instabilities due to short wavelengths can be suppressed without affecting the laser pulse. Combined with special techniques for interpreting the data between frames, this allows the full potential of the boosted-frame principle to be reached.

"We produced the first full multidimensional simulation of the 10 billion-electron-volt design for BELLA," says Vay."We even ran simulations all the way up to a trillion electron volts, which establishes our ability to model the behavior of laser-plasma wakefield accelerator stages at varying energies. With this calculation we achieved the theoretical maximum speedup of the boosted-frame method for such systems -- a million times faster than similar calculations in the laboratory frame."

Simulations will still be challenging, especially those needed to tailor applications of high-energy laser-plasma wakefield accelerators to such uses as free-electron lasers for materials and biological sciences, or for homeland security or other research. But the speedup achieves what might otherwise have been virtually impossible: it puts the essential high-resolution simulations within reach of new supercomputers.

This work was supported by the U.S. Department of Energy's Office of Science, including calculations with the WARP beam-simulation code and other applications at the National Energy Research Scientific Computing Center (NERSC).


Source

Monday, March 21, 2011

Running on a Faster Track: Researchers Develop Scheduling Tool to Save Time on Public Transport

Dr. Tal Raviv and his graduate student Mor Kaspi of Tel Aviv University's Department of Industrial Engineering in the Iby and Aladar Fleischman Faculty of Engineering have developed a tool that makes passenger train journeys shorter, especially when transfers are involved -- a computer-based system to shave precious travel minutes off a passenger's journey.

Dr. Raviv's solution, the "Service Oriented Timetable," relies on computers and complicated algorithms to do the scheduling. "Our solution is useful for any metropolitan region where passengers are transferring from one train to another, and where train service providers need to ensure that the highest number of travellers can make it from Point A to Point B as quickly as possible," says Dr. Raviv.

Saves time and resources

In the recent economic downturn, more people are seeking to scale back their monthly transportation costs. Public transportation is a win-win -- good for both the bank account and the environment. But when travel routes are complicated by transfers, it becomes a hard job to manage who can wait -- and who can't -- between trains.

Another factor is consumer preference. Ideally, each passenger would like a direct train to his destination, with no stops en route. But passengers with different itineraries must compete for the system's resources. Adding a stop at a certain station will improve service for passengers for whom the station is the final destination, but will cause a delay for passengers who are only passing through it. The question is how to devise a schedule which is fair for everyone. What are the decisions that will improve the overall condition of passengers in the train system?

It's not about adding more resources to the system, but more intelligently managing what's already there, Dr. Raviv explains.

More time on the train, less time on the platform

In their train timetabling system, Dr. Raviv and Kaspi study the timetables to find places in the train scheduling system that can be optimized so passengers make it to their final destination faster.

Traditionally, train planners looked for solutions based on the frequency of trains passing through certain stops. Dr. Raviv and Kaspi, however, are developing a high-tech solution for scheduling trains that considers the total travel time of passengers, including their waiting time at transfer stations.
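As a toy illustration only, and not Raviv and Kaspi's actual model, the sketch below shows the kind of objective such a tool optimizes: choose the departure offset of a connecting train so that the total travel time of all passengers, riding plus waiting at the transfer station, is as small as possible. All figures are made up.

```python
# Toy timetable decision: when should the connecting train depart so that
# total passenger travel time (riding + waiting at the transfer) is minimal?
arrivals_at_transfer = [12, 15, 18]      # minutes past the hour feeder trains arrive
passengers_per_feeder = [200, 350, 150]
onward_ride_minutes = 25
headway = 30                             # the connecting train runs every 30 minutes

def total_travel_time(offset):
    """Total passenger-minutes if the connecting train leaves `offset` minutes
    past the hour and every `headway` minutes thereafter."""
    total = 0
    for arrival, passengers in zip(arrivals_at_transfer, passengers_per_feeder):
        wait = (offset - arrival) % headway      # wait until the next departure
        total += passengers * (wait + onward_ride_minutes)
    return total

best_offset = min(range(headway), key=total_travel_time)
print(f"best departure: {best_offset} min past the hour, "
      f"{total_travel_time(best_offset)} total passenger-minutes")
```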

"Let's say you commute to Manhattan from New Jersey every day. We can find a way to synchronize trains to minimize the average travel time of passengers," says Dr. Raviv."That will make people working in New York a lot happier."

The project has already been simulated on the Israel Railway, reducing the average travel time per commuter from 60 to 48 minutes. The tool can be most useful in countries and cities, he notes, where train schedules are robust and very complicated.

The researchers won a competition of the Railway Applications Section of the Institute for Operations Research and the Management Sciences (INFORMS) last November for their computer program that optimizes a refuelling schedule for freight trains. Dr. Raviv also works on optimizing other forms of public transport, including the bike-sharing programs found in over 400 cities around the world today.


Source

Sunday, March 20, 2011

Teaching Robots to Move Like Humans

The research was presented March 7 at the Human-Robot Interaction conference in Lausanne, Switzerland.

"It's important to build robots that meet people's social expectations because we think that will make it easier for people to understand how to approach them and how to interact with them," said Andrea Thomaz, assistant professor in the School of Interactive Computing at Georgia Tech's College of Computing.

Thomaz, along with Ph.D. student Michael Gielniak, conducted a study in which they asked how easily people can recognize what a robot is doing by watching its movements.

"Robot motion is typically characterized by jerky movements, with a lot of stops and starts, unlike human movement which is more fluid and dynamic," said Gielniak."We want humans to interact with robots just as they might interact with other humans, so that it's intuitive."

Using a series of human movements taken in a motion-capture lab, they programmed the robot, Simon, to perform the movements. They also optimized that motion to allow for more joints to move at the same time and for the movements to flow into each other in an attempt to be more human-like. They asked their human subjects to watch Simon and identify the movements he made.
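The paper's optimization itself is not detailed in this article; purely as a stand-in for the general idea of turning stop-and-start motion into something more fluid, the sketch below smooths a jerky joint-angle trajectory with a moving average. The trajectory values are made up.

```python
# Illustrative only: smooth a jerky joint-angle trajectory so the motion flows
# instead of stopping and starting. Not the optimization described in the paper.
def smooth(trajectory, window=5):
    half = window // 2
    smoothed = []
    for i in range(len(trajectory)):
        lo, hi = max(0, i - half), min(len(trajectory), i + half + 1)
        smoothed.append(sum(trajectory[lo:hi]) / (hi - lo))
    return smoothed

# A motion-capture-style joint angle with abrupt stops (degrees, made up):
jerky = [0, 0, 0, 30, 30, 30, 30, 60, 60, 60, 90, 90, 90]
print([round(angle, 1) for angle in smooth(jerky)])
```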

"When the motion was more human-like, human beings were able to watch the motion and perceive what the robot was doing more easily," said Gielniak.

In addition, they tested the algorithm they used to create the optimized motion by asking humans to perform the movements they saw Simon making. The thinking was that if the movement created by the algorithm was indeed more human-like, then the subjects should have an easier time mimicking it. Turns out they did.

"We found that this optimization we do to create more life-like motion allows people to identify the motion more easily and mimic it more exactly," said Thomaz.

The research that Thomaz and Gielniak are doing is part of a broader effort to get robots to move more like humans do. In future work, the pair plan to look at how to get Simon to perform the same movements in various ways.

"So, instead of having the robot move the exact same way every single time you want the robot to perform a similar action like waving, you always want to see a different wave so that people forget that this is a robot they're interacting with," said Gielniak.


Source

Saturday, March 19, 2011

Mini Disks for Data Storage

Tiny magnets organize themselves in vortices in the researchers' mini disks. The individual magnets can twist either in a clockwise or a counterclockwise direction in the disk. These two different states can be used in data processing just like switching the electricity "on" and "off" in conventional computers. In contrast to conventional memory storage systems, these magnetic vortices can be switched by the electrons' intrinsic spin and with far less power consumption.

In the exterior section of a vortex the magnetic particles align nearly parallel to one another while the space in the disk's center is insufficient for such a parallel arrangement. Therefore, the elementary magnets in the center of a vortex twist away from the surface of the disk in order to gain space and thus, orient themselves once again next to one another without consuming much energy.

The formation of a vortex only works smoothly if the individual magnetic disks keep some distance from one another or are relatively big. In order to achieve a high data storage density for compact and efficient devices, manufacturers and users ask for the smallest possible data processing units, which in turn also feature small magnetic vortices and require a closely packed structure. Then, however, the tiny magnets in each disk "feel" their neighbors in the adjacent disks and start to interact. This interaction, though, is a poor basis for a memory storage system.

Therefore Norbert Martin and Jeffrey McCord abandoned the cylindrical shape of the small magnetic disks and instead prepared them with slanted edges. The tiny magnets at the edges are thus forced in the direction of the slant. This orientation in turn creates a magnetic field perpendicular to the disk surface, which then lies along the preferred direction set by the slant. This requires far less energy than establishing the same perpendicular field in disks with vertical outer edges. Accordingly, magnetic vortices form more easily with slanted edges.

To create these vortices, Norbert Martin places tiny glass spheres with a diameter of 0.3 thousandths of a millimeter (300 nanometers) on top of a thin magnetic layer. Under the right conditions, the glass spheres arrange themselves next to one another, forming a mask of tiny hexagons with small gaps. When the scientists direct argon ions at this layer, these atomic, electrically charged projectiles penetrate the gaps between the glass spheres and knock particles out of the magnetic layer beneath. The array of glass spheres thus functions as a mask: one magnetic disk remains below each glass sphere, while the magnetic layer under the gaps erodes.

During the bombardment, however, the argon ions also remove material from the glass spheres, which therefore shrink continuously. By the end of the process the diameter of the glass spheres is only 260 nanometers instead of the original 300. This allows the argon ions to also reach areas located further inside the magnetic disks that are emerging beneath the glass spheres. Because these inner areas are exposed to the bombardment for a shorter time, less material is removed there. The desired slanted edge is therefore created virtually on its own.


Source