
iCampus Student Competition yields online tools for improved on-campus experiences

The MIT Council on Educational Technology (MITCET) and the Office of Educational Innovation and Technology (OEIT) announced the winner and runners-up for the 2013 iCampus Student Prize competition at the Office of Digital Learning retreat held on May 17. The competition is offered each year to all current MIT undergraduate and graduate students (both individuals and groups) to encourage the development of technology that improves aspects of MIT’s education and student life.

The grand prize was awarded to Aakanksha Sarda, a rising senior in the Department of Electrical Engineering and Computer Science (EECS), for her work on "WhichClass," an online exploration tool to aid students in planning their selection of classes. WhichClass will enable students to filter classes and visualize connections between classes within and across departments. Beyond its value to its primary audience of students, the iCampus judges saw WhichClass’s potential to illuminate the relationships between courses across departments, insights that are especially important as MIT continues to explore all aspects of digital learning. Sarda will continue developing WhichClass with the OEIT in the fall.
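
For illustration only, here is a minimal sketch in Python of the kind of course-graph filtering WhichClass describes; the course numbers, links and function name are hypothetical and not taken from Sarda's tool.

```python
# Illustrative sketch only: a toy course graph in the spirit of WhichClass.
# The course numbers and links below are hypothetical, not data from the tool.
def courses_in_department(links, dept_prefix):
    """Keep only courses (and links) whose numbers start with dept_prefix."""
    return {
        course: [p for p in prereqs if p.startswith(dept_prefix)]
        for course, prereqs in links.items()
        if course.startswith(dept_prefix)
    }

course_links = {                 # course -> related/prerequisite courses
    "6.046": ["6.006", "18.062"],
    "6.006": ["6.042"],
    "18.06": ["18.02"],
}
print(courses_in_department(course_links, "6."))
# {'6.046': ['6.006'], '6.006': ['6.042']}
```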

EECS junior Abubakar Abid received the runner-up award for the project he and his EECS teammates, sophomores Abdulrahman Alfozan and Aziz Alghunaim, developed: "Lounge," an electronic platform that speeds up and automates the on-campus housing process while giving dorms the flexibility to preserve their individual traditions. Abid, who accepted the award at the May 17 retreat, announced that Lounge had already been used to successfully run Maseeh Hall’s fall 2013 room assignment process. OEIT expects the Lounge team to continue refining its software and to work with more dorms on future implementations.

The other runner-up project, titled "EduCase," was created by EECS graduate student Sara Itani and EECS senior Adin Schmahmann. Itani and Schmahmann described EduCase as the easiest, quickest and cheapest way to record video lectures: no cameraman, and no hours wasted editing. A professor walks into class, folds open the EduCase, and presses a button for a hassle-free lecture-recording experience. The judges were very interested in the potential of EduCase to help streamline the process of recording lecture videos as MIT expands further into digital and online learning. OEIT will work with the EduCase team as they continue to develop the project.

Building on the entrepreneurial spirit of service exhibited by MIT students to solve the world’s problems, the iCampus Student Competition encourages projects that are developed to the point where MIT can adopt them for integration into its educational and student life programs. Support for the iCampus Student Prize comes from a fund endowed by Microsoft Research. 

"All the projects were simply terrific," said OEIT Strategic Advisor for the Office of Digital Learning (ODL) and OEIT Director Vijay Kumar. "The iCampus Student Prize activity is a wonderful example of the innovative and creative engagement of our students in developing creative and constructive opportunities for the application of digital technology at MIT. This is particularly significant at a time when the MIT community is so deeply engaged in understanding the impact of these technologies on the future of teaching and learning at MIT."

MIT visiting scientist Kanako Miura, 36, dies while bicycling in Boston

On Sunday, May 19, MIT visiting scientist Dr. Kanako Miura, 36, died after being struck by a motor vehicle while riding her bicycle in Boston’s Back Bay neighborhood.

Miura, an expert in humanoid robotics, came to MIT last fall during a yearlong sabbatical from the National Institute of Advanced Industrial Science and Technology (AIST) in Japan, where she worked as a senior researcher at the Intelligent Systems Research Institute. At MIT, she worked with Professor Russ Tedrake’s Robot Locomotion Group at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL).

In an email sent to the MIT community on May 19, MIT President L. Rafael Reif wrote, “Our hearts go out to her friends and colleagues at MIT, and especially the Miura family, who must absorb this terrible loss from so far away.”

A native of Japan, Miura received her B.E. in aerospace engineering and her M.E. and Ph.D. in information science from Tohoku University in Japan. She also received a Ph.D. in Électronique, Électrotechnique, Automatique from l’Université Louis Pasteur in France in 2004. She was a postdoctoral fellow at Tohoku University from 2004 to 2005, and a researcher with NTT DoCoMo (Japan’s largest mobile service provider) from 2005 to 2007. In 2007, she joined the Intelligent Systems Research Institute at AIST, where she worked on the HRP-4C, known as Miim, a humanoid robot designed to mimic the features of a human female.

“I am shocked and saddened by this tragic loss, and my heart goes out to Dr. Miura’s friends and colleagues in the Robot Locomotion Group and, most importantly, to her family,” said Professor Daniela Rus, director of CSAIL. “Dr. Miura’s research in humanoid robotics was essential to the field, and she made many important contributions during her time at MIT. She truly became an integral part of the CSAIL community during her time here and her loss will be felt by all.”

Miura’s research focused on human motion analysis and, in particular, motion generation for humanoid robots. For Tedrake and his research group, the opportunity to learn from Miura about her approach to walking, planning and control for humanoid robots was an unparalleled chance to gain insight from one of the field’s leading experts. Tedrake and his team were excited to apply Miura’s experience to the techniques his group has developed in dynamics and motion control.

“We learned so much from her from the time she arrived, not only about her specific approach to humanoids, but also about the field of humanoid robots in general. As we were trying to enter this new field of humanoid robotics, she was our expert,” Tedrake said.

With the Robot Locomotion Group, Miura used the optimization-based language for control design developed in Tedrake’s group to reinterpret the successful results she had achieved in her work at AIST in Japan. Through this collaboration, Miura and Tedrake hoped to develop new, state-of-the-art techniques for walking and motion control in humanoid robots.

The timing of Miura’s arrival at MIT was serendipitous, according to Tedrake, as his research group was just starting a new research project with the group’s first humanoid, a robot named HUBO.

During her time at CSAIL, Miura became an essential part of the Robot Locomotion Group, both for her technical expertise and for the warmth, care and friendship she offered all her colleagues. Tedrake recalled how Miura was an enthusiastic participant in the runs his research group frequently takes together, hosted one of the Robot Locomotion Group parties at her apartment and even became one of the leaders of the team’s group coffee breaks. During Robot Day at the Cambridge Science Festival, Miura spent hours explaining the technology behind humanoid robots to crowds of young children.

“She was really part of the fabric of our group. She was not just a visitor in our group, she became a close friend and a member of our family,” said Tedrake. “The energy she brought to her work was contagious, and her enthusiasm was easy to see. She loved giving tours, and showing off the lab, and she had an unfailing optimism in the future and importance of humanoid robots.”

Mark Pearrow, a software engineer in the Robot Locomotion Group, worked closely with Miura on integrating her knowledge of bipedal locomotion into Tedrake’s research group's software and getting it to run on HUBO. “She was bright, patient, very giving of her time and knowledge, humble and funny. It was an honor to be able to work with her and I will miss her terribly,” Pearrow said. “I just hope we can find the wisdom and strength to carry her spirit forward in our lives and work.”

All members of the MIT community who feel affected by this death are encouraged to contact Mental Health Services at 617-253-2916.

This article will be updated with information about plans to honor Miura’s memory as details become available.

Engineering course ‘demystifies’ entrepreneurship

Undergraduates at MIT have been known to spend countless hours conceiving and creating marketable innovations in labs and classrooms. Sometimes, however, they may struggle with turning their ideas into commercial reality.

“One reason is that these students tend to focus all their time on inventing, as opposed to exploring the journey that awaits a startup founder,” says Ken Zolot, a seasoned entrepreneur and a senior lecturer in the Department of Electrical Engineering and Computer Science.

For the past four years, Zolot has led course 6.933 (The Founder’s Journey), in which engineering undergraduates — along with some students from other disciplines — focus on entrepreneurship and business strategy. Along the way, Zolot says, the course aims to “demystify and inspire entrepreneurship” so students will be equipped to continue with entrepreneurial pursuits — and maybe bring their inventions to fruition.

MIT offers many other entrepreneurship classes for undergraduates — many of them in the MIT Sloan School of Management — but Zolot’s course is uniquely geared toward students studying engineering and the sciences.

For the course, student teams develop startup concepts, refining those ideas through a variety of class projects throughout the semester. They also receive advice from other MIT professors, guest entrepreneurs and others. For their final projects, the teams must pitch their startup ideas to visiting judges, including venture capitalists.

The gradual learning process, which mimics the early stages of building a company, is especially valuable for MIT’s engineering students, says Zolot, who has co-founded several companies, including Egenera and Rethink Robotics.

“Many engineering students came to MIT hoping to learn what it takes to turn an idea into a reality,” he says. “They’ve stumbled upon a great innovation and want to be the next Steve Jobs. So the course answers their initial questions: ‘Who do you need to team up with?’ ‘How do you validate an idea?’ ‘Is entrepreneurship something you even want to do?’”

In the course’s relatively short history, a few successful companies have arisen from student concepts conceived and developed in 6.933 — such as Hipmunk, a popular travel website, and Rest Devices, a company that designs health-monitoring clothing.

“Engineering students may come to MIT with different skills than, say, our MBA students — who are already looking to do business courses and don’t need to be inspired into entrepreneurship — but they have great ideas,” Zolot says. “Not only do we want to teach our engineers how to build great things, but we also want for them to experience taking a core technology and transforming [it] into something that has meaningful impact on the world.”

‘Their own clubhouse’


It’s not only the course’s content that’s beneficial for students; being plugged into a community of engineering entrepreneurs also pays dividends, Zolot says. “It’s good for engineers to have their own clubhouse,” he says. “It gives them a chance to meet like-minded co-founders of startups and be inspired by role models.”

Additionally, the course offers fledgling entrepreneurs a controlled and risk-free environment for product and startup experimentation, Zolot says. “It’s like a safe sandbox where they can test out ideas and meet some of those inevitable early failures,” he says.

In that way, Zolot says, students learn valuable entrepreneurial skills, primarily iteration and improvisation — conducting experiments with their inventions and refining as they go.

“It’s about seeing design as a living process, with multiple stakeholders,” Zolot says. “Some engineers mistakenly think they should work diligently behind closed doors devising the perfect plan, then release that plan to the world.”

But that’s not how startups rise, Zolot says. “I want them instead to launch lots of small experiments, get out in the world, figure out what assumptions they can easily test, and make data-driven decisions that direct each stage of the design of the startup,” he says.

That philosophy seems to have resonated with Latif Alam, a junior majoring in economics. “If it’s one thing that this course taught me, it’s coming up with a product to commercialize quickly,” says Alam, who was part of Dolphin Health, a team working on a smartphone app that tracks users’ vital signs and other measures of health. “It’s about coming up with the simplest viable product and getting it into customers’ hands.”

Guest speakers over the course’s five-year history have included Drew Houston, founder and CEO of Dropbox; Khan Academy founder Sal Khan; Chris Hughes, co-founder of Facebook; Robin Chase, co-founder of Zipcar; Scott Kirsner, who writes the “Innovation Economy” column for The Boston Globe; and Dharmesh Shah, co-founder of Hubspot, among dozens of others.

Beginning the journey

This semester’s students demonstrated their newfound business knowledge during their final pitches to visiting judges: They enthusiastically fleshed out their startups’ projected revenues, manufacturing costs, and potential competitors and corporate partnerships, among other things.

During the class, junior Claire O’Connell worked on ChariTweet — a startup that aims to connect Twitter accounts to charities for easy donations. She said the course sparked her entrepreneurial spirit not only because of its curriculum, but also because of the mentors she met along the way — namely, Joe Chung, co-founder and managing director of Red Star Ventures.

“It’s definitely one of the most valuable classes I’ve taken,” says O’Connell, who is majoring in brain and cognitive sciences. “I’ve been exposed to so many people I wouldn’t have [met] otherwise. I almost feel like I’m now part of the Boston entrepreneurship community.”

Echoing her classmate’s sentiments was Elizabeth Phillips, a senior majoring in chemical engineering, who worked on Sense-a-Mold, which aims to manufacture food-storage containers equipped with sensors to detect spoiled food.

Phillips may not pursue her startup — she has graduate school and other interests on her plate — but she says the course was inspiring for her. “I felt like I left each class wanting to go out and start a business,” she says.

Those who wish to further develop their startups can enroll in a companion course, 6.S078 (Entrepreneurship Project), that Zolot co-teaches with David Gifford, a professor of electrical engineering and computer science. In that class, students receive help from outside entrepreneurs and venture capitalists — who may also invest in their startups.

Reinventing invention

An expandable table. A collapsible CNC router. Motorized wheels whose diameter can expand and contract depending on the terrain. These are a few examples of "transformable design" now on display from the course "Mechanical Invention Through Computation," led by visiting designer, engineer and inventor Chuck Hoberman. The seminar, co-taught with MIT professors Erik Demaine and Daniela Rus of the Computer Science and Artificial Intelligence Laboratory (CSAIL), was driven by a simple question: How can you create new transformable objects?

This is the question that has shaped Hoberman's unique 20-year career at the nexus of art and science, design and engineering. In 1990, he patented the Hoberman sphere: a mechanism resembling a geodesic dome, built from a series of scissor-like joints (similar to those found on a cherry picker) that allow the object to expand and contract. Since then, Hoberman has invented a variety of shape-shifting products ranging from toys, shelters, stage sets and medical devices to sculptures, buildings and furniture. There seems to be no end to the potential applications for these kinds of dynamic products, which use kinematics (the geometry of motion) to produce surprising and unique movements.
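
To make the kinematics concrete, here is a minimal sketch assuming a generic "lazy-tongs" scissor linkage rather than Hoberman's patented geometry: the reach of such a mechanism follows directly from the angle of its bars. The dimensions are arbitrary example values.

```python
# A minimal kinematics sketch (not Hoberman's actual design): the reach of a
# simple "lazy-tongs" scissor linkage, where N units of crossed bars of length
# L (pivoted at their midpoints) open or close as the bar angle changes.
import math

def scissor_extent(n_units, bar_length, angle_deg):
    """Horizontal reach and height of the linkage for a given bar angle.

    angle_deg is measured between each bar and the direction of extension:
    near 90 degrees the mechanism is folded, near 0 it is fully extended.
    """
    theta = math.radians(angle_deg)
    reach = n_units * bar_length * math.cos(theta)
    height = bar_length * math.sin(theta)
    return reach, height

for angle in (80, 45, 10):
    reach, height = scissor_extent(n_units=6, bar_length=0.3, angle_deg=angle)
    print(f"angle {angle:2d} deg -> reach {reach:.2f} m, height {height:.2f} m")
```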

With Demaine and Rus, the course investigated how this kind of mechanical invention could be further optimized using mathematical analysis and computational processes. According to the exhibit text, "The inventive process itself is ripe for innovation." Like Hoberman, both Demaine and Rus have also built their careers researching reconfigurable forms, making breakthroughs in the areas of programmable matter and computational origami respectively. Together, they have developed a sheet that can assemble into three-dimensional shapes of its own accord — such as a boat or a paper airplane — similar to the way that proteins found in nature can fold and refold into complex shapes to achieve different behaviors.

The resulting student projects ranged in scale and function, from pin joints that could only be seen with a microscope to large retractable tables. There was a DIY expandable lamp created completely from off-the-shelf materials; a foldable trapezoidal kite modeled after one designed by Alexander Graham Bell; a winged Phoenix-like sculpture based on a J.G. Ballard science fiction story; and a skirt that used "inflatable origami" to change size and shape. Every project was the result of continual prototyping with different types of designs and materials, employing methods both digital and physical.

This kind of mechanical intelligence is the wave of the future, as programming meets material to create supple and versatile new forms. In a world defined by flux, we increasingly require products and structures to be flexible, dynamic, and responsive to their changing environments. By transforming the process of invention — a creative act at the intersection of the technical, the scientific and the expressive — the course helped prepare the next generation of inventors for this challenge. "We're really just beginning to scratch the surface," Hoberman said at the exhibit's opening.

The exhibit will be on view through June 20 in the Ray and Maria Stata Center, third floor, Room 32-338. The exhibit was co-presented by the Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Center for Art, Science & Technology (CAST).

An electrical switch for magnetism

Researchers at MIT have developed a new way of controlling the motion of magnetic domains — the key technology in magnetic memory systems, such as a computer’s hard disk. The new approach requires little power to write and no power to maintain the stored information, and could lead to a new generation of extremely low-power data storage.

The new approach controls magnetism by applying a voltage, rather than a magnetic field. It could lead to magnetic storage devices in which data is written on microscopic nanowires or tracks, with magnetic “bits” of data hurtling along them like cars on a racetrack.

The new findings are described in a paper published this week in the journal Nature Nanotechnology, written by assistant professor of materials science and engineering Geoffrey Beach and graduate students Uwe Bauer and Satoru Emori.

“For hundreds of years, if you had a magnetic material and you wanted to change the direction in which the material was magnetized, you needed another magnet,” Beach explains. His team’s work represents an entirely new way to switch magnetic states using just a change in voltage, with no magnetic field — a much lower-power process. What’s more, once the magnetic state is switched, it holds that change, providing stable data storage that requires no power except during reading and writing.

The researchers show that this effect can be used to enable new concepts such as “racetrack memory,” with magnetic bits speeding along a magnetic track. While there have been laboratory demonstrations of such devices, none have come close to viability for data storage: The missing piece has been a means to precisely control the position and to electrically select individual magnetic bits racing along the magnetic track.

“Magnetic fields are very hard to localize,” Beach says: If you’re trying to create tiny magnetic bits on a nanowire or track, the magnetic fields from the electromagnets used to read and write data tend to spread out, making it difficult to prevent interaction with adjacent strips, especially as devices get smaller and smaller.

But the new system can precisely select individual magnetic bits represented by tiny domains in a nanowire. The MIT device can stop the movement of magnetic domains hurtling at 20 meters per second, or about 45 mph, “on a dime,” Beach says. They can then be released on demand simply by toggling the applied voltage.

To achieve this feat, the MIT team built a new type of device that controls magnetism in much the same way that a transistor controls a flow of electricity. The key ingredient is a layer of ion-rich material in which atoms have been stripped of electrons, leaving them with an electric charge. A voltage applied to a small electrode above this thin layer can either attract or repel those ions; the ions, in turn, can modify the properties of an underlying magnet and halt the flow of magnetic domains. This could lead to a new family of “magneto-ionic” devices, the researchers suggest.
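
As a rough illustration of the gating idea, the sketch below is a toy cartoon, not the device physics reported in the paper: a single domain wall glides along a track and is trapped at an electrode whenever a gate voltage is applied. The gate position and time step are made-up values; only the 20-meters-per-second speed comes from the article.

```python
# A toy 1-D cartoon, not the paper's device physics: a magnetic domain wall
# glides along a track at constant speed, and a gate electrode at a fixed
# position traps it whenever the gate voltage is applied.
def step_domain_wall(position, speed, dt, gate_position, gate_on):
    """Advance the wall one time step; hold it at the gate while the gate is on."""
    new_position = position + speed * dt
    if gate_on and position <= gate_position <= new_position:
        return gate_position               # wall is pinned at the energized electrode
    return new_position

pos, speed, dt = 0.0, 20.0, 1e-3           # 20 m/s (from the article), 1 ms steps
gate_at = 0.05                             # electrode 5 cm down the track (made-up value)
for t in range(10):
    gate_on = t < 6                        # gate energized at first, then released
    pos = step_domain_wall(pos, speed, dt, gate_at, gate_on)
    print(f"t={t} ms  gate={'on ' if gate_on else 'off'}  wall at {pos:.3f} m")
```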

The effect depends on chemical interactions at the boundary between thin layers of magnetic metal and solid-state electrolyte materials that are sandwiched together, Beach says. “So it’s really the interfacial chemistry that determines the magnetic properties,” he says.

In practice, such a system would use a wire or strip of ferromagnetic material with a series of regularly spaced, small electrodes on top of it. The magnetic bits between these electrodes can then be selectively written or read.

Once the orientation of the magnetic bit between two electrodes has been set by this device, “it inherently will retain its direction and position even in the absence of power,” Beach says. So, in practice, you could set a magnetic bit, “then turn the power off until you need to read it back,” he says.

Because the magnetic switching requires no magnetic field, “there is next to no energy dissipation,” Beach says. What’s more, the resulting pinning of the magnetic bits is extremely strong, resulting in a stable storage system.

The key ingredients of the system are “very simple oxide materials,” Bauer says. In particular, these tests used gadolinium oxide, which is already used in making capacitors and in semiconductor manufacturing.

Dan Allwood, a researcher in materials physics at the University of Sheffield who was not involved in this research, says that it “not only offers a novel technical path to control dynamic magnetization processes in patterned nanostructures, but in doing so also presents new physical processes in how voltage can influence magnetic behavior more generally. Understanding the detailed origins of these effects could allow the creation of simple, low-power information-technology devices.”

In addition to magnetic storage systems, the MIT team says, this technology could also be used to create new electronic devices based on spintronics, in which information is carried by the spin orientation of the atoms. “It opens up a whole new domain,” Beach says. “You can do both data storage and computation, potentially at much lower power.”

The work was supported by the National Science Foundation.


Dennis Freeman appointed dean for undergraduate education

Dennis Freeman, professor of electrical engineering, has been appointed as MIT’s next dean for undergraduate education, effective July 1, Chancellor Eric Grimson announced today. Freeman succeeds Daniel Hastings, the Cecil and Ida Green Education Professor of Aeronautics and Astronautics and Engineering Systems, who has served as dean for undergraduate education since 2006.

In describing Freeman’s commitment to undergraduate education, Grimson said, “He brings to the job many years of dedication to undergraduate teaching, curricular innovation, thoughtful advising and mentoring and creative experiments in lab-based, lecture-based and online-based education.” Freeman served as education officer in the Department of Electrical Engineering and Computer Science (EECS) from 2008 to 2011, and currently serves as the department’s undergraduate officer.

Freeman has a lengthy record of leadership on committees charged with supporting and governing the undergraduate experience at MIT. He has served on the Committee on Curricula, the Task Force on the Undergraduate Commons, the Educational Commons Subcommittee, the Committee on Global Educational Opportunities for MIT Undergraduate Education, the Corporation Joint Advisory Committee and the Institute-wide Planning Task Force, and has chaired the Committee on the Undergraduate Program.

An exceptional educator, Freeman has received numerous teaching awards, including the Ruth and Joel Spira Award for Distinguished Teaching, the Irving M. London Teaching Award and the Bose Award for Excellence in Teaching. He has been a MacVicar Faculty Fellow since 2006, and has on three occasions been the students’ selection as the best academic advisor in EECS.

He recently contributed to the development and teaching of 6.01 (Introduction to EECS I), which introduces software engineering, feedback and control, circuits, probability and planning in a series of hands-on activities involving a mobile robot. Last fall, he worked with EECS department head Anantha Chandrakasan to launch the department’s new “SuperUROP” program, which attracted 77 students to complete yearlong research projects.

Freeman is a member of the Research Laboratory of Electronics, where his group studies cochlear micromechanics. His group was the first to directly measure sound-induced motions of cells and accessory structures in the inner ear. Earlier this spring, the group discovered a new mechanism that could help explain the sensitivity and frequency selectivity of human hearing.

Through leadership of the offices that make up the Office of the Dean for Undergraduate Education (DUE) — the Admissions Office, Global Education and Career Development, the Office of Experiential Learning, the Office of Faculty Support, the Office of Minority Education, the Office of the Registrar, the Office of Undergraduate Advising and Academic Programming, ROTC Programs, Student Financial Services and the Teaching and Learning Laboratory — the dean for undergraduate education holds a wide range of responsibilities that support and enhance integrated student learning inside and outside of the classroom.

Grimson described the importance of the dean’s role in reimagining the future of an MIT education. “With the evolution of MITx under the new Office of Digital Learning,” Grimson said, “I expect Professor Freeman to play a key collaborative role in furthering the development of new approaches to residential education.”

Freeman will also work to bring a recent faculty resolution on freshman advising to fruition. At last month’s Institute Faculty Meeting, the faculty approved a motion that calls on the administration — and the dean for undergraduate education, in particular — to partner with the faculty governance system to ensure that every freshman is paired with an MIT faculty member.

Freeman believes that technological advances are enabling entirely new ways to think about teaching and learning. “The potential of MITx is not only improved access to content,” he said, “but also the development of entirely new modes of interaction for members of the MIT community.” A strong advocate of MIT’s Undergraduate Research Opportunities Program (UROP), Freeman envisions a future in which MITx can enable even more opportunities to learn by doing. "If we can rely on a solid foundation built on MITx, then faculty will have more freedom to interact with students in projects and UROPs.”

Freeman also believes that technological advances could transform other important aspects of the undergraduate experience. “Students want and deserve information about the best possible subjects to achieve their career goals,” he said. “Imagine a system in which students and faculty could access not only comments from end-of-term surveys, but also access advice from successful alumni about the kinds of experiences that were helpful in their careers. Such a system is not technologically difficult, but could add new dimensions to student advising.”

Freeman is a fellow of the Acoustical Society of America. He received an SB degree in electrical engineering from Pennsylvania State University, and SM and PhD degrees in electrical engineering and computer science from MIT.

Professor of Biology Graham C. Walker chaired the search committee, whose members also included faculty members Steve Graves, Anette “Peko” Hosoi, John Ochsendorf, Christine Ortiz, Julie Soriero, Gigliola Staffilani, Collin Stultz and Kai von Fintel; students Chris Smith, Grace Young and Anu Sinha; and senior associate dean Elizabeth Reed.

Amar Bose ’51, SM ’52, ScD ’56, Bose Corporation’s founder, has died at 83

Amar Bose ’51, SM ’52, ScD ’56, a former member of the MIT faculty and the founder of Bose Corporation, has died. He was 83.

Dr. Bose received his bachelor’s degree, master’s degree and doctorate from MIT, all in electrical engineering. He was asked to join the faculty in 1956, and he accepted with the intention of teaching for no more than two years. He continued as a member of the MIT faculty until 2001.

During his long tenure at MIT, Dr. Bose made his mark both in research and in teaching. In 1956, he started a research program in physical acoustics and psychoacoustics that led to many patents in acoustics, electronics, nonlinear systems and communication theory.

Throughout his career, he was cited for excellent teaching. In a 1969 letter to the faculty, then-dean of the School of Engineering R. L. Bisplinghoff wrote, “Dr. Bose is known and respected as one of M.I.T.’s great teachers and for his imaginative and forceful research in the areas of acoustics, loudspeaker design, two-state amplifier-modulators, and nonlinear systems.”

Paul Penfield Jr., professor emeritus of electrical engineering, was a colleague of Dr. Bose, and he recalls what made Dr. Bose different. “Amar was personally creative,” he said, “but unlike so many other creative people, he was also introspective. He could understand and explain his own thinking processes and offer them as guides to others. I’ve seen him do this for several engineering and management problems. At some deep level, that is what teaching is really all about. Perhaps that helps explain why he was such a beloved teacher.”

Dr. Bose received the Baker Teaching Award in 1963-64, and would receive further awards in later years. In 1989, the School of Engineering established the Bose Award for Excellence in Teaching as a tribute to the quality of Dr. Bose’s teaching; it recognizes outstanding contributions to undergraduate education by members of the School’s faculty and is its highest award for teaching. In 1995, the School established another teaching award, the Junior Bose Award, whose recipients are chosen from among School of Engineering faculty members being proposed for promotion to associate professor without tenure.

“Amar Bose was an exceptional human being and an extraordinarily gifted leader,” MIT President L. Rafael Reif said. “He made quality mentoring and a joyful pursuit of excellence, ideas and possibilities the hallmark of his career in teaching, research and business. I learned from him, and was inspired by him, every single time I met with him. Over the years, I have seen the tremendous impact he has had on the lives of many students and fellow faculty at MIT. This proud MIT graduate, professor and innovator was a true giant who over decades enriched the Institute he loved with his energy, dedication, motivation and wisdom. I have never known anyone like him. I will miss him. MIT will miss him. The world will miss him.”

In 1964, Dr. Bose started Bose Corporation based on research he conducted at MIT. From its inception, the company has remained privately owned, with a focus on long-term research.

“Dr. Bose founded Bose Corporation almost 50 years ago with a set of guiding principles centered on research and innovation. That focus has never changed, and never will,” said Bob Maresca, president of Bose Corporation. “Bose Corporation will remain privately held, and stay true to Dr. Bose’s ideals. We are as committed to this as he was to us. Today and every day going forward, our hearts are with Dr. Bose; and we will do everything we can to make him proud of the company he built.”

In 2011, to fulfill his lifelong dream to support MIT education, Dr. Bose gave to MIT the majority of the stock of Bose Corporation in the form of nonvoting shares. Under the terms of the gift, dividends from those shares will be used by MIT to sustain and advance MIT’s education and research mission. MIT cannot sell its Bose shares, and does not participate in the management or governance of the company.

In expressing appreciation to Dr. Bose on the occasion of the gift, MIT’s then-president, Susan Hockfield, said of him, “His insatiable curiosity propelled remarkable research, both at MIT and within the company he founded. Dr. Bose has always been more concerned about the next two decades than about the next two quarters.”

“Amar Bose was a legend at MIT,” said MIT Chancellor Eric Grimson, who served as a faculty colleague of Dr. Bose in the Department of Electrical Engineering and Computer Science. “He was an incredible teacher, an inspiring mentor, a deep and insightful researcher. He has influenced multiple generations of students, both directly through the classroom and laboratory, and through the many students he influenced who have themselves pursued careers as faculty, propagating Professor Bose’s approach to mentorship and teaching.”

Dr. Bose was given many awards and honors during his lifetime. He was a Fulbright Postdoctoral Scholar, an elected member of the National Academy of Engineering and of the American Academy of Arts and Sciences, and a fellow of the Institute of Electrical and Electronics Engineers.

Vanu G. Bose ’87, SM ’94, PhD ’99, son of Dr. Bose, said, “Personally, my single greatest educational experience at MIT was being a teaching assistant for my father in his acoustics course (6.312). While my father is well known for his success as an inventor and businessman, he was first and foremost a teacher. I could not begin to count the number of people I’ve met who’ve told me that my father was the best professor they ever had and how taking 6.01 from him changed their life.

“My father’s 66-year relationship with MIT was an integral part of his life. He would often talk about his mentors, professors Ernst Guillemin, Norbert Wiener, Y. W. Lee and Jerome Wiesner, as having played critical roles in shaping his life and work. It was because of everything that MIT did for him that my father was so pleased to be able to give back to MIT through his gift.”

Gifts in memory of Dr. Amar G. Bose may be made to MIT and will be directed to undergraduate scholarship support, building upon Dr. Bose's legacy of support for students: contact Bonny Kellermann '72, Director of Memorial Gifts, at bonnyk@mit.edu, for details. Gifts may also be directed to the Lahey Clinic.

This article will be updated with information about plans to honor Dr. Bose’s memory as details become available.

CSAIL researchers develop new ways to streamline, simplify 3-D printing

With recent advances in three-dimensional (3-D) printing technology, it is now possible to produce a wide variety of 3-D objects, utilizing computer graphics models and simulations. But while the hardware exists to reproduce complex, multi-material objects, the software behind the printing process is cumbersome, slow and difficult to use, and needs to improve substantially if 3-D technology is to become more mainstream.

On July 25, a team of researchers from the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) will present two papers at the SIGGRAPH computer graphics conference in Anaheim, California, that propose new methods for streamlining and simplifying the 3-D printing process using more efficient, intuitive and accessible technologies.

“Our goal is to make 3-D printing much easier and less computationally complex,” said Associate Professor Wojciech Matusik, co-author of the papers and a leader of the Computer Graphics Group at CSAIL. “Ours is the first work that unifies design, development and implementation into one seamless process, making it possible to easily translate an object from a set of specifications into a fully operational 3-D print.”

3-D printing poses enormous computational challenges to existing software. For starters, in order to fabricate complex surfaces containing bumps, color gradations and other intricacies, printing software must produce an extremely high-resolution model of the object, with detailed information on each surface that is to be replicated. Such models often amount to petabytes of data, which current programs have difficulty processing and storing.
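
A quick back-of-the-envelope calculation suggests why. The object size, resolution and bytes per voxel below are illustrative assumptions, not figures from the paper, but they show how a dense voxel model reaches petabyte scale.

```python
# Back-of-the-envelope sketch with illustrative numbers (not the paper's): a
# dense voxel model of a 10 cm cube at 1-micron resolution reaches petabytes.
side_m = 0.10                 # object size: 10 cm per side
voxel_m = 1e-6                # print resolution: 1 micron
bytes_per_voxel = 2           # e.g., a material index plus per-voxel attributes

voxels_per_side = side_m / voxel_m            # about 100,000 per side
total_voxels = voxels_per_side ** 3           # about 1e15 voxels
total_bytes = total_voxels * bytes_per_voxel
print(f"{total_bytes / 1e15:.1f} petabytes")  # 2.0 petabytes
```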

To address these challenges, Matusik and his team developed OpenFab, a programmable “pipeline” architecture. Inspired by RenderMan, the software used to design computer-generated imagery commonly seen in movies, OpenFab allows for the production of complex structures with varying material properties. To specify intricate surface details and the composition of a 3-D object, OpenFab uses “fablets”, programs written in a new programming language that allow users to modify the look and feel of an object easily and efficiently.
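
The snippet below is a hypothetical sketch written in Python rather than OpenFab's actual fablet language: it shows the kind of per-point rule a fablet-style procedure might express, grading an object from soft at one end to rigid at the other. The function name and parameters are invented for illustration.

```python
# Hypothetical sketch, not the actual fablet DSL: a fablet-style procedure maps
# each point of the object to a material mix, here a stiffness gradient along
# the object's length.
def gradient_fablet(x, y, z, length=0.1):
    """Return (soft_fraction, rigid_fraction) for the point at (x, y, z)."""
    t = min(max(x / length, 0.0), 1.0)   # 0 at one end of the object, 1 at the other
    return (1.0 - t, t)                  # soft at x=0, fully rigid at x=length

# Evaluated on demand per point, so the full model never has to be stored at once.
for x in (0.0, 0.05, 0.1):
    print(x, gradient_fablet(x, 0.0, 0.0))
```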

“Our software pipeline makes it easier to design and print new materials and to continuously vary the properties of the object you are designing,” said Kiril Vidimče, lead author of one of the two papers and a PhD student at CSAIL. “In traditional manufacturing most objects are composed of multiple parts made out of the same material. With OpenFab, the user can change the material consistency of an object, for example designing the object to transition from stiff at one end to flexible and compressible at the other end.”

Thanks to OpenFab’s streaming architecture, data about the design of the 3-D object is computed on demand and sent to the printer as it becomes available, with little start-up delay. So far, Matusik’s research team has been able to replicate a wide array of objects using OpenFab, including an insect embedded in amber, a marble table and a squishy teddy bear.

To create lifelike objects that are hard or soft, reflect light and conform to touch, users must currently specify the material composition of the object they wish to replicate. This is no easy task: it is often easier to define the desired end state of an object (for example, that it needs to be soft) than to determine which materials should be used in its production.

To simplify this process, Matusik and his colleagues developed a new methodology called Spec2Fab. Instead of requiring explicit design specifications for each region of a print, and testing every possible combination, Spec2Fab employs a “reducer tree”, which breaks the object down into more manageable chunks. Spec2Fab’s “tuner network” then uses the reducer tree to automatically determine the material composition of an object.
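
The following sketch illustrates the reducer-tree idea in schematic form; the class and function names are invented for this example and do not reflect Spec2Fab's actual API.

```python
# A schematic sketch of the reducer-tree idea (data structures are invented for
# illustration, not the paper's API): interior nodes split the object into
# regions, leaves hold a tunable material choice, and a "tuner" walks the tree
# to set those choices.
class Reducer:
    def __init__(self, name, children=None, material=None):
        self.name = name
        self.children = children or []   # empty for leaf regions
        self.material = material         # filled in by the tuner for leaves

def tune(node, choose_material):
    """Assign a material to every leaf region using a caller-supplied rule."""
    if not node.children:
        node.material = choose_material(node.name)
    for child in node.children:
        tune(child, choose_material)

bear = Reducer("teddy bear", [Reducer("outer shell"), Reducer("inner core")])
tune(bear, lambda region: "flexible" if "outer" in region else "rigid")
print([(c.name, c.material) for c in bear.children])
# [('outer shell', 'flexible'), ('inner core', 'rigid')]
```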

By combining existing computer graphics algorithms, Matusik’s team has used Spec2Fab to create a wide range of 3-D prints, including objects that produce optical effects like caustic images and objects with specific deformation and textural properties.

“Spec2Fab is a small but powerful toolbox for building algorithms that can produce an endless array of complex, printable objects,” said Desai Chen, a PhD student at CSAIL and lead author of one of the papers presented at SIGGRAPH.

The two papers to be presented at SIGGRAPH are “OpenFab: A Programmable Pipeline for Multi-Material Fabrication,” authored by Kiril Vidimče, Szu-Po Wang, Jonathan Ragan-Kelley and Wojciech Matusik; and “Spec2Fab: A Reducer-Tuner Model for Translating Specifications to 3-D Prints,” authored by Desai Chen, David I. W. Levin, Piotr Didyk, Pitchaya Sitthi-Amorn and Wojciech Matusik.

For more information on OpenFab, please visit: http://openfab.mit.edu/. For more information on Spec2Fab, please visit: http://spec2fab.mit.edu.

New energy source for future medical implants: sugar

MIT engineers have developed a fuel cell that runs on the same sugar that powers human cells: glucose. This glucose fuel cell could be used to drive highly efficient brain implants of the future, which could help paralyzed patients move their arms and legs again.

The fuel cell, described in the June 12 edition of the journal PLoS ONE, strips electrons from glucose molecules to create a small electric current. The researchers, led by Rahul Sarpeshkar, an associate professor of electrical engineering and computer science at MIT, fabricated the fuel cell on a silicon chip, allowing it to be integrated with other circuits that would be needed for a brain implant.

The idea of a glucose fuel cell is not new: In the 1970s, scientists showed they could power a pacemaker with a glucose fuel cell, but the idea was abandoned in favor of lithium-ion batteries, which could provide significantly more power per unit area than glucose fuel cells. These glucose fuel cells also utilized enzymes that proved to be impractical for long-term implantation in the body, since they eventually ceased to function efficiently.

The new twist to the MIT fuel cell described in PLoS ONE is that it is fabricated from silicon, using the same technology used to make semiconductor electronic chips. The fuel cell has no biological components: It consists of a platinum catalyst that strips electrons from glucose, mimicking the activity of cellular enzymes that break down glucose to generate ATP, the cell’s energy currency. (Platinum has a proven record of long-term biocompatibility within the body.) So far, the fuel cell can generate up to hundreds of microwatts — enough to power an ultra-low-power and clinically useful neural implant.
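
For rough context, using assumed round numbers (the article says only "up to hundreds of microwatts"), a 100-microwatt output at an assumed 0.5-volt operating point corresponds to a few hundred microamps and several joules of harvested energy per day.

```python
# Rough arithmetic sketch with assumed round numbers, not measured values from
# the study: what a steady 100-microwatt harvest amounts to.
harvested_power_w = 100e-6        # assumed steady output: 100 microwatts
cell_voltage_v = 0.5              # assumed operating voltage for illustration

current_a = harvested_power_w / cell_voltage_v
energy_per_day_j = harvested_power_w * 24 * 3600
print(f"{current_a * 1e6:.0f} microamps, {energy_per_day_j:.1f} joules per day")
# 200 microamps, 8.6 joules per day
```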

“It will be a few more years into the future before you see people with spinal-cord injuries receive such implantable systems in the context of standard medical care, but those are the sorts of devices you could envision powering from a glucose-based fuel cell,” says Benjamin Rapoport, a former graduate student in the Sarpeshkar lab and the first author on the new MIT study.

Rapoport calculated that in theory, the glucose fuel cell could get all the sugar it needs from the cerebrospinal fluid (CSF) that bathes the brain and protects it from banging into the skull. There are very few cells in the CSF, so it’s highly unlikely that an implant located there would provoke an immune response. There is also significant glucose in the CSF, which does not generally get used by the body. Since only a small fraction of the available power is utilized by the glucose fuel cell, the impact on the brain’s function would likely be small.

Karim Oweiss, an associate professor of electrical engineering, computer science and neuroscience at Michigan State University, says the work is a good step toward developing implantable medical devices that don’t require external power sources.

“It’s a proof of concept that they can generate enough power to meet the requirements,” says Oweiss, adding that the next step will be to demonstrate that it can work in a living animal.

A team of researchers at Brown University, Massachusetts General Hospital and other institutions recently demonstrated that paralyzed patients could use a brain-machine interface to move a robotic arm; those implants have to be plugged into a wall outlet.

Mimicking biology with microelectronics

Sarpeshkar’s group is a leader in the field of ultra-low-power electronics, having pioneered such designs for cochlear implants and brain implants. “The glucose fuel cell, when combined with such ultra-low-power electronics, can enable brain implants or other implants to be completely self-powered,” says Sarpeshkar, author of the book “Ultra Low Power Bioelectronics.” This book discusses how the combination of ultra-low-power and energy-harvesting design can enable self-powered devices for medical, bio-inspired and portable applications.

Sarpeshkar’s group has worked on all aspects of implantable brain-machine interfaces and neural prosthetics, including recording from nerves, stimulating nerves, decoding nerve signals and communicating wirelessly with implants. One such neural prosthetic is designed to record electrical activity from hundreds of neurons in the brain’s motor cortex, which is responsible for controlling movement. That data is amplified and converted into a digital signal so that computers — or in the Sarpeshkar team’s work, brain-implanted microchips — can analyze it and determine which patterns of brain activity produce movement.
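
As a generic toy illustration of that decoding step (not the Sarpeshkar group's actual algorithm or hardware), a fixed linear decoder can map a vector of recorded firing rates to an intended movement command; every number and weight below is made up.

```python
# Generic toy illustration of neural decoding, not the group's actual method:
# map a vector of firing rates to a 2-D movement command with fixed weights.
import random

NUM_NEURONS = 100
random.seed(0)
# Hypothetical decoder weights: one (x, y) contribution per recorded neuron.
weights = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(NUM_NEURONS)]

def decode_velocity(firing_rates):
    """Weighted sum of per-neuron firing rates -> (vx, vy) movement command."""
    vx = sum(w_x * r for (w_x, _), r in zip(weights, firing_rates))
    vy = sum(w_y * r for (_, w_y), r in zip(weights, firing_rates))
    return vx, vy

rates = [random.uniform(0, 50) for _ in range(NUM_NEURONS)]   # spikes per second
print(decode_velocity(rates))
```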

The fabrication of the glucose fuel cell was done in collaboration with Jakub Kedzierski at MIT’s Lincoln Laboratory. “This collaboration with Lincoln Lab helped make a long-term goal of mine — to create glucose-powered bioelectronics — a reality,” Sarpeshkar says. Although he has just begun working on bringing ultra-low-power and medical technology to market, he cautions that glucose-powered implantable medical devices are still many years away.

Communication scheme makes popular applications ‘gracefully mobile’

The Secure Shell, or SSH, is a popular program that lets computer users log onto remote machines. Software developers use it for large collaborative projects, students use it to work from university servers, customers of commercial cloud-computing services use it to access their accounts, and system administrators use it to manage computers on their networks.

First released in 1995, SSH was designed for an Internet consisting of stationary machines, and it hasn’t evolved with the mobile Internet. Among other problems, it can’t handle roaming: If you close your laptop at the office and reopen it at home, your SSH session will have died; the same goes for an SSH session on a tablet computer that switches from a Wi-Fi connection to the cellular network.

At the Usenix Annual Technical Conference in Boston this month, researchers at MIT’s Computer Science and Artificial Intelligence Laboratory presented a paper describing a new remote-login program called Mosh, for mobile shell, which solves many of SSH’s problems. The researchers also believe that the communication scheme underlying Mosh could improve the performance of a host of other mobile applications.

Even before they presented the paper, they made Mosh freely available on a number of different websites; it’s now been downloaded at least 70,000 times. “That’s from the ones that we’re able to track,” says Keith Winstein, a graduate student in MIT’s Department of Electrical Engineering and Computer Science and lead developer of Mosh.

Echo location

Besides roaming, another of the problems that Mosh addresses is the delayed “echoing” of keystrokes in SSH. During a standard SSH session, when a user strikes a key on the keyboard, nothing appears onscreen until information about the keystroke travels to the remote machine, which performs a computation and sends back the result. That’s because, in many applications commonly run through SSH, keystrokes don’t necessarily correspond directly to displayed symbols: In an email program, for instance, the “N” key might call up the next email; similarly, when a user enters a password, it shouldn’t appear onscreen.

Mosh has an algorithm running in the background that deduces when keystrokes should be displayed and when they shouldn’t. Until the remote computer confirms Mosh’s predictions, the characters onscreen are underlined. “I have never seen it display anything wrong,” says Hari Balakrishnan, a professor in the Department of Electrical Engineering and Computer Science and Winstein’s coauthor on the Usenix paper.
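
A simplified sketch of the speculative-echo idea follows; it is not Mosh's actual prediction engine, just a toy model in which predicted characters are shown underlined until the server's authoritative reply confirms them.

```python
# Simplified sketch of speculative local echo (not Mosh's prediction engine):
# predicted characters are shown underlined until the server's reply confirms
# them, at which point the underline is dropped.
class PredictiveEcho:
    def __init__(self):
        self.confirmed = ""      # text the server has acknowledged
        self.pending = ""        # locally predicted, not yet confirmed

    def keystroke(self, ch):
        self.pending += ch       # echo immediately instead of waiting a round trip

    def server_update(self, screen_text):
        # The server's screen state is authoritative; drop predictions it confirms.
        self.confirmed = screen_text
        if screen_text.endswith(self.pending):
            self.pending = ""    # all predictions confirmed

    def render(self):
        underlined = "".join(c + "\u0332" for c in self.pending)  # combining underline
        return self.confirmed + underlined

term = PredictiveEcho()
for ch in "ls":
    term.keystroke(ch)
print(term.render())             # still speculative: shown underlined
term.server_update("ls")
print(term.render())             # confirmed: underline removed
```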

The reason Mosh handles roaming so much better than SSH does is that it abandons the Transmission Control Protocol, or TCP — the framework that governs almost all the traffic in today’s Internet.

“TCP has some wonderful ideas embedded in it — congestion control, ways of doing reliability and so forth,” Balakrishnan says. “But it has this one big, big problem: It provides a reliable, in-order byte-stream abstraction between two fixed endpoints. If you were to pick the worst possible abstraction for the mobile world, it would be that.”

With mobile applications, Balakrishnan explains, it’s not as crucial that every byte of information be displayed in exactly the order in which it was sent. If you’ve lost connectivity while using the map application on a smartphone, for instance, when the network comes back up, you probably want an accurate map of your immediate surroundings; you don’t want to wait while the phone reloads data about where you were when the network went down.

State currency

Winstein and Balakrishnan developed their own communications protocol, which they call SSP, for state synchronization protocol. SSP, Balakrishnan says, works more like the protocols that govern videoconferencing, where getting timely data about the most recent state of the application is more important than getting exhaustive data about previous states.
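
To make the contrast with TCP concrete, here is a schematic sketch of state synchronization; the message format and class names are invented and are not SSP's actual wire protocol. Each datagram carries the sender's complete latest state plus a sequence number, so the receiver simply keeps the newest state it has seen, regardless of loss, reordering or a change of network address.

```python
# Schematic sketch of state synchronization, not SSP's actual wire format: each
# datagram carries the sender's complete latest state plus a sequence number,
# so nothing ever has to be replayed in order; the newest state simply wins.
import json

class StateSyncReceiver:
    def __init__(self):
        self.seq = -1
        self.state = {}

    def on_datagram(self, payload: bytes):
        msg = json.loads(payload)
        if msg["seq"] > self.seq:          # ignore anything older than what we have
            self.seq = msg["seq"]
            self.state = msg["state"]

receiver = StateSyncReceiver()
# Packets may arrive late, duplicated, or from a new address after roaming.
updates = [(1, {"screen": "ls"}), (3, {"screen": "ls\nfoo.txt"}), (2, {"screen": "l"})]
for seq, state in updates:
    receiver.on_datagram(json.dumps({"seq": seq, "state": state}).encode())
print(receiver.state)                      # the latest state: {'screen': 'ls\nfoo.txt'}
```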

Mosh is already proving itself useful: At his computer in his office, Balakrishnan pulls up the connection log for one of the servers in MIT’s Athena network; a third of the people logged into it are using Mosh. But in ongoing research, Winstein and Balakrishnan are investigating how SSP can be improved and extended so that other applications can use it as well.

“We have sort of a broader agenda here,” Winstein says. “Mosh is a gracefully mobile application. But there’s a lot of even more popular network applications that have the same problems, like Gmail, or Google Chat, or Skype. None of these programs gracefully handle mobility, even though they’re intended for mobile devices.”

“Mosh is a fine bit of engineering, focused on the precise requirements of the application at hand,” says Jon Howell, a researcher at Microsoft Research who specializes in web applications. Winstein, Howell says, “spent a lot of effort on the gritty practical details of correct terminal behavior.”

Howell points out that there’s a good deal of existing research on state synchronization, the technique underlying SSP, and he’s not sure that SSP offers any clear advantages over its predecessors. “If [Winstein] pushes the SSP idea farther, I suspect he'll find himself covering some of the [same] territory,” Howell says. “Perhaps he'll discover a new strategy or a new class of applications for which synchronization is particularly useful.”

New chip captures power from multiple sources

Researchers at MIT have taken a significant step toward battery-free monitoring systems — which could ultimately be used in biomedical devices, environmental sensors in remote locations and gauges in hard-to-reach spots, among other applications.

Previous work from the lab of MIT professor Anantha Chandrakasan has focused on the development of computer and wireless-communication chips that can operate at extremely low power levels, and on a variety of devices that can harness power from natural light, heat and vibrations in the environment. The latest development, carried out with doctoral student Saurav Bandyopadhyay, is a chip that could harness all three of these ambient power sources at once, optimizing power delivery.

The energy-combining circuit is described in a paper being published this summer in the IEEE Journal of Solid-State Circuits.

“Energy harvesting is becoming a reality,” says Chandrakasan, the Keithley Professor of Electrical Engineering and head of MIT’s Department of Electrical Engineering and Computer Science. Low-power chips that can collect data and relay it to a central facility are under development, as are systems to harness power from environmental sources. But the new design achieves efficient use of multiple power sources in a single device, a big advantage since many of these sources are intermittent and unpredictable.

“The key here is the circuit that efficiently combines many sources of energy into one,” Chandrakasan says. The individual devices needed to harness these tiny sources of energy — such as the difference between body temperature and outside air, or the motions and vibrations of anything from a person walking to a bridge vibrating as traffic passes over it — have already been developed, many of them in Chandrakasan’s lab.

Combining the power from these variable sources requires a sophisticated control system, Bandyopadhyay explains: Typically each energy source requires its own control circuit to meet its specific requirements. For example, circuits to harvest thermal differences typically produce only 0.02 to 0.15 volts, while low-power photovoltaic cells can generate 0.2 to 0.7 volts and vibration-harvesting systems can produce up to 5 volts. Coordinating these disparate sources of energy in real time to produce a constant output is a tricky process.

So far, most efforts to harness multiple energy sources have simply switched among them, taking advantage of whichever one is generating the most energy at a given moment, Bandyopadhyay says, but that can waste the energy being delivered by the other sources. “Instead of that, we extract power from all the sources,” he says. Rather than committing to whichever source happens to be strongest, the new approach switches among all of them rapidly enough to draw energy from each.
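
In rough pseudocode (a purely conceptual sketch; the actual chip does this in mixed-signal hardware built around one shared inductor), the time-sharing loop amounts to the following:

```python
# Conceptual sketch of time-shared energy combining (not the actual chip design).
# Instead of committing to the single strongest source, the controller cycles
# rapidly through all of them and banks whatever each one can deliver.

def harvest(sources, slot_seconds=1e-4, cycles=1000):
    """`sources` is a list of callables returning instantaneous power in watts,
    a hypothetical interface used only for this illustration."""
    banked_joules = 0.0
    for _ in range(cycles):
        for available_power in sources:
            p = available_power()
            if p > 0:                      # skip a source that is momentarily dead
                banked_joules += p * slot_seconds
    return banked_joules

# Example: a thermoelectric source at 50 microwatts, a photovoltaic cell at
# 200 microwatts, and a vibration harvester currently producing nothing.
total_energy = harvest([lambda: 50e-6, lambda: 200e-6, lambda: 0.0])
```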

Another challenge for the researchers was to minimize the power consumed by the control circuit itself, to leave as much as possible for the actual devices it’s powering — such as sensors to monitor heartbeat, blood sugar, or the stresses on a bridge or a pipeline. The control circuits optimize the amount of energy extracted from each source.

The system uses an innovative dual-path architecture. Typically, power sources would be used to charge up a storage device, such as a battery or a supercapacitor, which would then power an actual sensor or other circuit. But in this control system, the sensor can either be powered from a storage device or directly from the source, bypassing the storage system altogether. “That makes it more efficient,” Bandyopadhyay says. Rather than dedicating a separate inductor to each source, the chip time-shares a single inductor, a crucial component that supports the multiple converters needed in this design.

David Freeman, chief technologist for power-supply solutions at Texas Instruments, who was not involved in this work, says, “The work being done at MIT is very important to enabling energy harvesting in various environments. The ability to extract energy from multiple different sources helps maximize the power for more functionality from systems like wireless sensor nodes.”

Only recently, Freeman says, have companies such as Texas Instruments developed very low-power microcontrollers and wireless transceivers that could be powered by such sources. “With innovations like these that combine multiple sources of energy, these systems can now start to increase functionality,” he says. “The benefits from operating from multiple sources not only include maximizing peak energy, but also help when only one source of energy may be available.”

The work has been funded by the Interconnect Focus Center, a combined program of the Defense Advanced Research Projects Agency and companies in the defense and semiconductor industries.

Searching genomic data faster

In 2001, the Human Genome Project and Celera Genomics announced that after 10 years of work at a cost of some $400 million, they had completed a draft sequence of the human genome. Today, sequencing a human genome is something that a single researcher can do in a couple of weeks for less than $10,000.

Since 2002, the rate at which genomes can be sequenced has been doubling every four months or so, whereas computing power doubles only every 18 months. Without the advent of new analytic tools, biologists’ ability to generate genomic data will soon outstrip their ability to do anything useful with it.
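
As a back-of-the-envelope illustration of how quickly that gap opens (using the doubling times quoted above), over a three-year span sequencing capacity grows by a factor of

\[
2^{36/4} = 2^{9} = 512,
\]

while computing power grows by only

\[
2^{36/18} = 2^{2} = 4,
\]

leaving a roughly 128-fold shortfall that better algorithms have to absorb.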

In the latest issue of Nature Biotechnology, MIT and Harvard University researchers describe a new algorithm that drastically reduces the time it takes to find a particular gene sequence in a database of genomes. Moreover, the more genomes it’s searching, the greater the speedup it affords, so its advantages will only compound as more data is generated.

In some sense, this is a data-compression algorithm — like the one that allows computer users to compress data files into smaller zip files. “You have all this data, and clearly, if you want to store it, what people would naturally do is compress it,” says Bonnie Berger, a professor of applied math and computer science at MIT and senior author on the paper. “The problem is that eventually you have to look at it, so you have to decompress it to look at it. But our insight is that if you compress the data in the right way, then you can do your analysis directly on the compressed data. And that increases the speed while maintaining the accuracy of the analyses.”

Exploiting redundancy

The researchers’ compression scheme exploits the fact that evolution is stingy with good designs. There’s a great deal of overlap in the genomes of closely related species, and some overlap even in the genomes of distantly related species: That’s why experiments performed on yeast cells can tell us something about human drug reactions.

Berger; her former graduate student Michael Baym PhD ’09, who’s now a research affiliate in the MIT math department and a postdoc in systems biology at Harvard Medical School; and her current grad student Po-Ru Loh developed a way to mathematically represent the genomes of different species — or of different individuals within a species — such that the overlapping data is stored only once. A search of multiple genomes can thus concentrate on their differences, saving time.

“If I want to run a computation on my genome, it takes a certain amount of time,” Baym explains. “If I then want to run the same computation on your genome, the fact that we’re so similar means that I’ve already done most of the work.”
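
In miniature, the compress-then-search idea looks something like the sketch below (a toy simplification of the concept, not the published algorithm): shared sequence is stored once as a reference, each genome keeps only its differences, and the shared portion is scanned a single time for the entire collection.

```python
# Toy illustration of compressive genomic search (a simplification of the idea,
# not the algorithm described in Nature Biotechnology).

def compress(genomes, reference):
    """Store each genome as a list of (position, base) edits to a shared reference.
    Assumes, for simplicity, that all sequences have the same length."""
    return {
        name: [(i, b) for i, (a, b) in enumerate(zip(reference, seq)) if a != b]
        for name, seq in genomes.items()
    }

def search(query, reference, compressed):
    """Return the genomes containing `query`; shared data is scanned only once."""
    ref_hit = query in reference            # coarse search, performed a single time
    hits = []
    for name, edits in compressed.items():
        if not edits:                       # identical to the reference: reuse result
            if ref_hit:
                hits.append(name)
            continue
        seq = list(reference)
        for i, base in edits:
            seq[i] = base
        # This toy rescans the whole edited genome; the real method needs to
        # re-examine only the neighborhoods of the differences.
        if query in "".join(seq):
            hits.append(name)
    return hits

genomes = {"strainA": "ACGTACGTAC", "strainB": "ACGTACGTAC", "strainC": "ACGAACGTAC"}
db = compress(genomes, reference="ACGTACGTAC")
print(search("GTAC", "ACGTACGTAC", db))     # ['strainA', 'strainB', 'strainC']
```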

In experiments on a database of 36 yeast genomes, the researchers compared their algorithm to one called BLAST, for Basic Local Alignment Search Tool, one of the most commonly used genomic-search algorithms in biology. In a search for a particular genetic sequence in only 10 of the yeast genomes, the new algorithm was twice as fast as BLAST; but in a search of all 36 genomes, it was four times as fast. That discrepancy will only increase as genomic databases grow larger, Berger explains.

Matchmaking

The new algorithm would be useful in any application where the central question is, as Baym puts it: “I have a sequence; what is it similar to?” Identifying microbes is one example. The new algorithm could help clinicians determine causes of infections, or it could help biologists characterize “microbiomes,” collections of microbes found in animal tissue or particular microenvironments; variations in the human microbiome have been implicated in a range of medical conditions. It could be used to characterize the microbes in particularly fertile or infertile soil, and it could even be used in forensics, to determine the geographical origins of physical evidence by its microbial signatures.

“The problem that they’re looking at — which is, given a sequence, trying to determine what known sequences are similar to it — is probably the oldest problem in computational biology, and it’s perhaps the most commonly asked question in computational biology,” says Mona Singh, a professor of computer science at Princeton University and a faculty member at Princeton’s Lewis-Sigler Institute for Integrative Genomics. “And the problem, just for that reason, is of central importance.”

In the last 10 years, Singh says, biologists have tended to think in terms of “reference genomes” — genomes, such as the draft human sequence released in 2001, that try to generalize across individuals within a species and even across species. “But as we’re getting more and more individuals even within a species, and more very closely related sequenced distinct species, I think we’re starting to move away from the idea of a single reference genome,” Singh says. “Their approach is really going to shine when you have many closely related organisms.”

Berger’s group is currently working to extend the technique to information on proteins and RNA sequences, where it could pay even bigger dividends. Now that the human genome has been mapped, the major questions in biology are what genes are active when, and how the proteins they code for interact. Searches of large databases of biological information are crucial to answering both questions.

‘Rising Stars in EECS’ convene at MIT

On Nov. 1 and 2, nearly three dozen of the world’s top young female electrical engineers and computer scientists gathered at MIT to experience something rare: outnumbering men in the room.

The Institute’s Department of Electrical Engineering and Computer Science (EECS) invited the women to its inaugural “Rising Stars in EECS” workshop. Attendees came from MIT, Stanford University, the University of California at Berkeley, Cornell University, Carnegie Mellon University, the Max Planck Institute in Munich, École Polytechnique Fédérale de Lausanne in Switzerland and other research institutions to network with one another and with faculty from MIT and elsewhere. On the cusp of entering the workforce, these PhD candidates and postdocs came for guidance on launching careers as professors and to raise their visibility in the field of electrical engineering and computer science.

“You see so few women [in electrical engineering and computer science], it’s nice to see them all together,” said Lydia Chilton, a PhD candidate at the University of Washington who studies crowdsourcing and other aspects of human-computer interaction. “As I’ve gotten older I really value the female colleagues that I have. I feel like I interact with them more naturally.”

Floraine Berthouzoz, who works in computer graphics at UC Berkeley, is the only female doctoral candidate in a research group of about 20 people. “I’m at the end of my PhD, and I thought this was an amazing opportunity to meet other women interested in education and to get some advice,” she said.

While women’s representation in most science and engineering fields has increased substantially over the past 25 years, their participation in the fields of electrical engineering and computer science has been halting. A recent National Science Foundation survey shows that women make up just 22 percent of PhDs in computer science. The numbers become even more stark higher up the ranks of academia: Women constitute a mere 10 percent of tenure-track faculty in the top electrical engineering academic departments, according to the National Academies.

“When you talk to search committees at different universities, one of the complaints is that there aren’t many applications from women,” said MIT’s Polina Golland, an associate professor in EECS who organized Rising Stars with the head of her department, Anantha P. Chandrakasan, the Keithley Professor of Electrical Engineering. They plan to hold the workshop annually.

Departments of electrical engineering and computer science at MIT and peer institutions are keen to add more women to academia’s pipeline by identifying potential candidates and encouraging them to apply for faculty positions, Golland said. She believes that the workshop was a significant step in the right direction.

It afforded a valuable opportunity, she said, for elite researchers to meet peers in the broad field of electrical engineering and computer science — outside the confines of their respective subdisciplines — and to be inspired by established women faculty.

Kristen Dorsey, a PhD candidate studying microelectromechanical systems at Carnegie Mellon, said: “Before attending [the workshop], I thought, ‘Well, I’d like to be a faculty member, but I don’t have what it takes. I don’t have the publication record, I don’t know what I’m doing.’” However, being invited to MIT and meeting women who have successful careers in academia gave her a measure of confidence, she said.

Dorsey and other attendees, including Jenna Wiens of MIT, added that meeting talented researchers from the wide variety of fields in electrical engineering and computer science exposed them to new possibilities for collaboration and professional support.

“It’s a great networking opportunity to meet other women who are interested in pursuing careers in academia whom I could forge relationships with early on — and then hopefully tap into that network later,” said Wiens, whose research focuses on machine learning and data mining.

That was precisely Chandrakasan’s vision when he began planning Rising Stars with Golland as part of his department’s 2012 strategic plan. He was inspired by the success that MIT’s Department of Aeronautics and Astronautics has had with a similar annual workshop for women. In his welcoming remarks, he told the invitees, “As you start thinking about applying for faculty positions, I hope you can use each other as a resource. This is a group I hope will stay together.”

MIT School of Engineering Dean Ian Waitz, who launched the AeroAstro women’s workshop in 2009 when he was head of that department, was also there to welcome the Rising Stars invitees, as was Cynthia Barnhart, MIT’s associate dean of engineering.

The workshop’s attendees shared their research during formal presentations and poster sessions on topics ranging from improving online video-streaming rates to tamper-proofing circuits to modeling the risk of infection among hospital patients. They networked over breakfast with senior women faculty from MIT, including National Medal of Science honoree Mildred Dresselhaus and Turing Award recipient Barbara Liskov. They also got a primer on how the promotion process works from a panel of faculty representing Harvard University, MIT, Boston University and the University of Rochester.

The issue of work-life balance arose at the panel on promotions and at occasional moments throughout the workshop. The panelists noted that it has become common policy at universities to adjust the “tenure clock” for women who have to take time off to care for infants, and that male faculty can and do take advantage of family-leave policies.

During the Q&A session with junior faculty, MIT’s Dana Weinstein, the Steve and Renee Finn Career Development Assistant Professor in EECS, pointed out that while being in a high-profile academic position keeps one very busy, “There is a lot of flexibility in your schedule. You can have a life outside of work.”

Looking ahead to a career as a faculty member, Chilton was enthusiastic about other ways in which academia offers flexibility. “I like that it doesn’t preclude me from doing other things, such as a startup. And I really like being on the cutting edge. I feel like, even though I’m making things that aren’t immediately practical, they will be in five years.”

It is Chandrakasan’s and Golland’s bet that a cohort of passionate and talented women like Chilton will advance electrical engineering and computer science — not only through groundbreaking research, but by paving the way for a next, substantially larger generation of female engineers and scientists.

Ozdaglar selected as the inaugural Steven and Renee Finn Innovation Fellow

Department of Electrical Engineering and Computer Science (EECS) Professor Asu Ozdaglar has been named the inaugural Steven and Renee Finn Innovation Fellow. Made possible by a gift from EECS alumnus Steven Finn '68, SM '69, EE '70, ScD '75 and his wife, Renee, the fellowship provides tenured, mid-career faculty in EECS with resources for up to three years to pursue new research and development paths, and to make potentially important discoveries through early-stage research.

Ozdaglar has been a member of the EECS faculty since 2003, and is also a member of the Laboratory for Information and Decision Systems and the Operations Research Center. Her research interests include optimization theory, with emphasis on nonlinear programming and convex analysis; game theory, with applications in communication, social and economic networks; and distributed optimization and control. She is co-author of the book “Convex Analysis and Optimization” (Athena Scientific, 2003). She also co-directs (with Sandy Pentland, the Toshiba Professor of Media Arts and Sciences) the Center for Connection Science and Engineering, a virtual center at MIT that brings together network initiatives from across the Institute.

Ozdaglar is the recipient of a Microsoft fellowship, the MIT Graduate Student Council Teaching Award, an NSF CAREER Award, the Class of 1943 Career Development Chair, and the 2008 Donald P. Eckman Award of the American Automatic Control Council, and she is a 2011 Kavli Fellow of the National Academy of Sciences. She served on the Board of Governors of the IEEE Control Systems Society in 2010. She is currently the area co-editor for a new area of the journal Operations Research, titled "Games, Information and Networks"; an associate editor for IEEE Transactions on Automatic Control; and the chair of the IEEE Control Systems Society Technical Committee on “Networks and Communications Systems."

Chips that can steer light

If you want to create a moving light source, you have a few possibilities. One is to mount a light emitter in some kind of mechanical housing — the approach used in, say, theatrical spotlights, which stagehands swivel and tilt to track performers.

Another possibility, however, is to create an array of light emitters and vary their “phase” — the alignment of the light waves they produce. The out-of-phase light waves interfere with one another, reinforcing each other in some directions but annihilating each other in others. The result is a light source that doesn’t move, but can project a beam in any direction.
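
The geometry behind that steering is the standard phased-array relation (a textbook result, not a formula quoted from the Nature paper): with emitters spaced a distance \(d\) apart and driven with a constant phase step \(\Delta\phi\) between neighbors, constructive interference points the beam at the angle \(\theta\) satisfying

\[
\sin\theta = \frac{\lambda\,\Delta\phi}{2\pi d},
\]

and the far-field pattern of an \(N\)-element row follows the array factor

\[
AF(\theta) = \sum_{n=0}^{N-1} e^{\,jn\left(\frac{2\pi d}{\lambda}\sin\theta \,-\, \Delta\phi\right)}.
\]

Sweeping \(\Delta\phi\) electronically therefore sweeps the beam with no moving parts.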

Such “phased arrays” have been around for more than a century, used most commonly in radar transmitters, which can be as much as 100 feet tall. But in this week’s issue of Nature, researchers from MIT’s Research Laboratory of Electronics (RLE) describe a 4,096-emitter array that fits on a single silicon chip. Chips that can steer beams of light could enable a wide range of applications, including cheaper, more efficient, and smaller laser rangefinders; medical-imaging devices that can be threaded through tiny blood vessels; and even holographic televisions that emit different information when seen from different viewing angles.

In their Nature paper, the MIT authors — Michael Watts, an associate professor of electrical engineering, Jie Sun, a graduate student in Watts’ lab and first author on the paper, Sun’s fellow graduate students Erman Timurdogan and Ami Yaacobi, and Ehsan Shah Hosseini, an RLE postdoc — report on two new chips. Both chips take in laser light and re-emit it via tiny antennas etched into the chip surface.

Calculated incoherence

A wave of light can be thought of as a sequence of crests and troughs, just like those of an ocean wave. Laser light is coherent, meaning that the waves composing it are in phase: Their troughs and crests are perfectly aligned. The antennas in the RLE researchers’ chips knock that coherent light slightly out of phase, producing interference patterns.

In the 4,096-antenna chip — a 64-by-64 grid of antennas — the phase shifts are precalculated to produce rows of images of the MIT logo. The antennas are not simply turned off and on in a pattern that traces the logo, as the pixels in a black-and-white monitor would be. All of the antennas emit light, and if you were close enough to them (and had infrared vision), you would see a regular array of pinpricks of light. Seen from more than a few millimeters away, however, the interference of the antennas’ phase-shifted beams produces a more intricate image.
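
The connection between the per-antenna phases and the projected image is essentially a Fourier transform: in the far field, the pattern is the transform of the field across the chip’s surface. The sketch below (an illustration of that physics using a generic Gerchberg-Saxton-style phase-retrieval loop, not the design procedure actually used for the chip) shows how one could compute a phase mask that projects a chosen pattern.

```python
# Toy far-field calculation for a 64-by-64 phased array. The far-field intensity
# is modeled as the 2-D Fourier transform of the aperture field; the loop is a
# generic Gerchberg-Saxton-style phase retrieval, not the chip's actual design code.

import numpy as np

N = 64
target = np.zeros((N, N))
target[20:44, 30:34] = 1.0                     # stand-in for a desired far-field image

phase = np.random.uniform(0, 2 * np.pi, (N, N))
for _ in range(200):
    far = np.fft.fft2(np.exp(1j * phase))                       # propagate to far field
    far = np.sqrt(target + 1e-6) * np.exp(1j * np.angle(far))   # impose target amplitude
    near = np.fft.ifft2(far)                                    # back to the aperture
    phase = np.angle(near)                     # keep phases; emitters have unit amplitude

far_intensity = np.abs(np.fft.fft2(np.exp(1j * phase))) ** 2    # approximates `target`
```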

In the other chip, which has an eight-by-eight grid of antennas, the phase shift produced by the antennas is tunable, so the chip can steer light in arbitrary directions. In both chips, the design of the antenna is the same; in principle, the researchers could have built tuning elements into the antennas of the larger chip. But “there would be too many wires coming off the chip,” Watts says. “Four thousand wires is more than Jie wanted to solder up.”

Indeed, Watts explains, wiring limitations meant that even the smaller chip is tunable only a row or column at a time. But that’s enough to produce some interesting interference patterns that demonstrate that the tuning elements are working. The large chip, too, largely constitutes a proof of principle, Watts says. “It’s kind of amazing that this actually worked,” he says. “It’s really nanometer precision of the phase, and you’re talking about a fairly large chip.”

Precision engineering

In both chips, laser light is conducted across the chip by silicon ridges known as “waveguides.” Drawing light from the optical signal attenuates it, so antennas close to the laser have to draw less light than those farther away. If the calculation of either the attenuation of the signal or the variation in the antennas’ design is incorrect, the light emitted by the antennas could vary too much to be useful.

Both chips represent the state of the art in their respective classes. No two-dimensional tunable phased array has previously been built on a chip, and the largest previous non-tunable (or “passive”) array had only 16 antennas. Nonetheless, “I think we can go to much, much larger arrays,” Watts says. “It’s now very believable that we could make a 3-D holographic display.”

“I think it’s one of the first clearly competitive applications where photonics wins,” says Michal Lipson, an associate professor of electrical and computer engineering at Cornell University and head of the Cornell Nanophotonics Group. Within the photonics community, Lipson says, most work is geared toward “the promise that if photonics is embedded in electronic systems, it’s going to really improve things. Here, [the MIT team] has developed a complete system. It’s not a small component: This system is ready to go. So it’s very convincing.”

Lipson adds that the tuning limitations of the MIT researchers’ prototype chips are no reason to doubt the practicality of the design. “It’s just physically hard to come up with a very high number of contacts that are external,” she says. “Now, if you were to integrate everything so that it’s all on silicon, there shouldn’t be any problem to integrate those contacts.”

‘Invisible’ particles could enhance thermoelectric devices

Thermoelectric devices — which can either generate an electric current from a difference in temperature or use electricity to produce heating or cooling without moving parts — have been explored in the laboratory since the 19th century. In recent years, their efficiency has improved enough to enable limited commercial use, such as in cooling systems built into the seats of automobiles. But more widespread use, such as to harness waste heat from power plants and engines, calls for better materials.

Now, a new way of enhancing the efficiency of such devices, developed by researchers at MIT and Rutgers University, could lead to such wider applications. The new work, by mechanical engineering professor Gang Chen, Institute Professor Mildred Dresselhaus, graduate student Bolin Liao, and recent postdoc Mona Zebarjadi and research scientist Keivan Esfarjani (both of whom are now on the faculty at Rutgers), has been published in the journal Advanced Materials.

Although thermoelectric devices have been available commercially since the 1950s, their efficiency has been low due to materials limitations. A newer impetus for thermoelectric systems dates to the early 1990s, when Dresselhaus worked on a project, funded by the U.S. Navy, to improve thermoelectric materials for silent cooling systems for submarines. Chen, who was then working on thermal insulating properties of nanostructures, teamed with her to advance thermoelectric materials.

The group’s finding that nanoscale materials could have properties significantly different from those of larger chunks of the same material — work that involved tiny particles of one material embedded in another, forming nanocomposites — ultimately helped improve thermoelectric device efficiency. The latest work continues that research, tuning the composition, dimensions and density of the embedded nanoparticles to maximize the thermoelectric properties of the material.

Detailed computer modeling of the new material shows it could improve the parameters that are key to an effective thermoelectric system: high electrical conductivity (so that electricity flows easily), low thermal conductivity (so as to maintain a temperature gradient), and optimization of a property known as the Seebeck coefficient, which expresses how much heat an electron carries, on average.
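
Those three quantities are conventionally combined into the dimensionless thermoelectric figure of merit (a standard textbook definition, not one spelled out in the paper):

\[
ZT = \frac{S^{2}\,\sigma\,T}{\kappa},
\]

where \(S\) is the Seebeck coefficient, \(\sigma\) the electrical conductivity, \(\kappa\) the thermal conductivity and \(T\) the absolute temperature. The strategy described here aims to raise \(S^{2}\sigma\) while suppressing \(\kappa\).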

The new work also draws upon methods developed by optics researchers who have been attempting to create invisibility cloaks — ways of making objects invisible to certain radio waves or light waves using nanostructured materials that bend light. The MIT team applied similar methods to embed particles that could reduce the material’s thermal conductivity while keeping its electrical conductivity high.

“It’s kind of like a cloak for electrons,” Dresselhaus says. “We got inspiration from the optical people.”

The concept that made the improvements feasible, the researchers explain, is something called anti-resonance — which causes electrons of most energy levels to be blocked by the embedded particles, while those in a narrow range of energies pass through with little resistance.

Zebarjadi, who carried out this work as a postdoc at MIT, and Liao conceived of making the nanoparticles invisible to the flow of electrons using this anti-resonance principle. By tuning the size of the nanoparticles, the researchers made them invisible to the electrons, but not to the phonons — the virtual particles that carry heat.

In addition, they found that the embedded nanoparticles actually enhanced the flow of electrons. “We can increase the electrical conductivity significantly,” Zebarjadi says.

That basic effect had been observed before, she says, but only in gases, not solids. “When we saw that, we said, that would be nice if we could have such a scattering [of electrons] in solids,” Zebarjadi says — a result she and her colleagues were ultimately able to achieve.

The technique is inspired by a concept called modulation doping, which is used in the manufacture of semiconductor devices. So far, the work has been theoretical. The next step will be to build actual test devices, the team members say. “There are a lot of challenges on the experimental side,” Chen says.

Joseph Heremans, a professor of physics at Ohio State University, calls the work “fabulous Harry Potter stuff, yet believable … really new, and totally surprising.” However, he notes that the effect is limited to a narrow range of electron energy, and will require fine-tuning to get the energy level just right. “This may prove impossible to achieve in the lab, we just won’t know until someone tries,” he says.

The research was supported by the MIT Solid-State Solar-Thermal Energy Conversion Center (S3TEC), an Energy Frontier Research Center funded by the U.S. Department of Energy through MIT's Materials Processing Center.

Paul Juodawlkis elevated to Fellow of the Optical Society

Dr. Paul W. Juodawlkis, assistant leader of the Electro-optical Materials and Devices Group at MIT Lincoln Laboratory, was elevated to the rank of Fellow of the Optical Society (OSA) last month. He was recognized for his "significant contributions to optically sampled analog-to-digital conversion and the development of the slab-coupled optical waveguide amplifier."

The Optical Society, originally called the Optical Society of America, was founded in 1916 to expand and disseminate knowledge of optics and to promote collaboration among investigators, designers and users of optical systems. This international association is the leading professional society of optics and photonics, and its membership of more than 18,000 includes optics and photonics scientists, engineers, educators and business leaders.

At Lincoln Laboratory, Juodawlkis's research and leadership efforts since 1999 have been focused on the development of optical sampling techniques for photonic analog-to-digital converters, quantum-well electrorefractive modulators, high-power waveguide photodiodes, and high-power semiconductor optical amplifiers and their application in mode-locked lasers and narrow-linewidth external-cavity lasers. From 1988 to 1993, he served as a radar systems engineer on a multisensor airborne test bed program in the Tactical Defense Systems Group at the Laboratory. Between 1993 and 1999, he was a member of the Ultrafast Optical Communications Laboratory at the Georgia Institute of Technology.

Juodawlkis has authored or co-authored more than 100 peer-reviewed journal and conference publications. A senior member of the Institute of Electrical and Electronics Engineers (IEEE), he is very active in the IEEE Photonics Society. He has served as chair of the IEEE Photonics Society Technical Committee on Microwave Photonics (2003–2006) and as a member of various technical committees for other Photonics Society conferences. Currently, he is completing a three-year term as an elected member of the IEEE Photonics Society’s Board of Governors and is a Technical Steering Committee member of the Society’s Boston Chapter.

Juodawlkis served as general co-chair of the 2012 Conference on Lasers and Electro-Optics (CLEO) and as program co-chair of the 2010 CLEO. At the plenary session of this year's CLEO in June, he will be recognized as a 2013 Fellow. He holds a BS degree from Michigan Technological University, an MS degree from Purdue University, and a PhD degree from the Georgia Institute of Technology, all in electrical engineering.

Special deal on photon-to-electron conversion: Two for one!

Throughout decades of research on solar cells, one formula has been considered an absolute limit to the efficiency of such devices in converting sunlight into electricity: Called the Shockley-Queisser efficiency limit, it posits that the ultimate conversion efficiency can never exceed 34 percent for a single optimized semiconductor junction.

Now, researchers at MIT have shown that there is a way to blow past that limit as easily as today’s jet fighters zoom through the sound barrier — which was also once seen as an ultimate limit.

Their work appears this week in a report in the journal Science, co-authored by graduate students including Daniel Congreve, Nicholas Thompson, Eric Hontz and Shane Yost, alumna Jiye Lee ’12, and professors Marc Baldo and Troy Van Voorhis.

The principle behind the barrier-busting technique has been known theoretically since the 1960s, says Baldo, a professor of electrical engineering at MIT. But it was a somewhat obscure idea that nobody had succeeded in putting into practice. The MIT team was able, for the first time, to perform a successful “proof of principle” of the idea, which is known as singlet exciton fission. (An exciton is the excited state of a molecule after absorbing energy from a photon.)

In a standard photovoltaic (PV) cell, each photon knocks loose exactly one electron inside the PV material. That loose electron then can be harnessed through wires to provide an electrical current.

But in the new technique, each photon can instead knock two electrons loose. This makes the process much more efficient: In a standard cell, any excess energy carried by a photon is wasted as heat, whereas in the new system the extra energy goes into producing two electrons instead of one.
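
In energy terms (standard bookkeeping for singlet fission, not an equation taken from the Science paper), the splitting is possible when the absorbed singlet exciton carries at least twice the triplet energy,

\[
E(S_1) \;\ge\; 2\,E(T_1),
\]

so a sufficiently energetic photon can yield two excitons, and ultimately two electrons, rather than one electron plus waste heat.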

While others have previously “split” a photon’s energy, they have done so using ultraviolet light, a relatively minor component of sunlight at Earth’s surface. The new work represents the first time this feat has been accomplished with visible light, laying a pathway for practical applications in solar PV panels.

This was accomplished using an organic compound called pentacene in an organic solar cell. While that material’s ability to produce two excitons from one photon had been known, nobody had previously been able to incorporate it within a PV device that generated more than one electron per photon.

“Our whole project was directed at showing that this splitting process was effective,” says Baldo, who is also the director of the Center for Excitonics, sponsored by the U.S. Department of Energy. “We showed that we could get through that barrier.”

The theoretical basis for this work was laid long ago, says Congreve, but nobody had been able to realize it in a real, functioning system. “In this system,” he says, “everyone knew you could, they were just waiting for someone to do it.”

“This is the landmark event we had all been waiting to see,” adds Richard Friend, the Cavendish Professor of Physics at the University of Cambridge, who was not involved in this research. “This is really great research.”

Since this was just a first proof of principle, the team has not yet optimized the energy-conversion efficiency of the system, which remains less than 2 percent. But ratcheting up that efficiency through further optimization should be a straightforward process, the researchers say. “There appears to be no fundamental barrier,” Thompson says.

While today’s commercial solar panels typically have an efficiency of at most 25 percent, a silicon solar cell harnessing singlet fission should make it feasible to achieve efficiency of more than 30 percent, Baldo says — a huge leap in a field typically marked by slow, incremental progress. In solar cell research, he notes, people are striving “for an increase of a tenth of a percent.”

Solar panel efficiencies can also be improved by stacking different solar cells together, but combining solar cells is expensive with conventional solar-cell materials. The new technology instead promises to work as an inexpensive coating on solar cells.

The work made use of a known material, but the team is now exploring new materials that might perform the same trick even better. “The field is working on materials that were chanced upon,” Baldo says — but now that the principles are better understood, researchers can begin exploring possible alternatives in a more systematic way.

Christopher Bardeen, a professor of chemistry at the University of California at Riverside who was not involved in this research, calls this work “very important” and says the process used by the MIT team “represents a first step towards incorporating an exotic photophysical process (fission) into a real device. This achievement will help convince workers in the field that this process has real potential for boosting organic solar cell efficiencies by 25 percent or more.”

The research was performed in the Center for Excitonics and supported by the U.S. Department of Energy. MIT has filed for a provisional patent on the technology.

The brains behind research on the brain

While studying physics and electrical engineering as an MIT undergraduate in the late 1990s, Mehmet Fatih Yanik managed to avoid taking any biology classes until his final semester, when he was forced to sign up for a course.

“I believed that biology was very boring, and just about memorizing facts,” he says.

So having no choice but to attend the course, Yanik sat at the back of the lecture hall, expecting the worst. But as time went on, he found he was getting more and more interested in the course, which was run by professor of biology Eric Lander, one of the leaders of the Human Genome Project and founding director of the Broad Institute. “I started to sit closer and closer to the front of the classroom,” he says.

By the end of the semester Yanik was hooked, and had decided he wanted to study biology.

But how could he pursue his newfound passion for human biology while making use of his background in engineering? Returning to MIT in 2006 after completing a PhD in applied physics at Stanford University, Yanik realized he could use his skills to tackle some of the big challenges in neuroscience.


Mehmet Fatih Yanik (Photo: M. Scott Brauer)

Taming the brain’s complexity

“I realized that the complexity of the human brain is so great that our existing efforts were just too slow to discover how the thousands of genes and neuronal circuits, involving thousands to billions of neurons, work,” Yanik says. “What we needed were sophisticated high-throughput screening and intelligent systems that would allow us to do neuroscience more rapidly, so that we could understand how each individual gene, molecule, neuron and neuronal circuit in the brain works, and rapidly search for therapeutics.”

To this end Yanik, now tenured as an associate professor of electrical engineering and biological engineering, founded the High-Throughput Neurotechnology Group at MIT, with the aim of developing advanced analysis, screening and therapeutic technologies for the nervous system.

The group first developed microfluidic chips to manipulate small animals such as the invertebrate Caenorhabditis elegans and the zebrafish, both used extensively as model organisms in biological research. “We have machines that can take these animals from their ‘hotels,’ rotate them, image them, perform microsurgery and regeneration studies, while testing drugs on them very rapidly,” Yanik says.

Previously, researchers studying C. elegans and zebrafish would have to manually place them under a microscope, orient them precisely and then take an image of these animals. “Oftentimes this manual handling was too slow and imprecise to allow scientists to screen large numbers of organisms,” Yanik says.

In contrast, the technologies developed by Yanik’s group can screen organisms at a rate of thousands of animals per day, while still remaining accurate enough to cut even individual nerve fibers with special lasers to test whether they can be repaired by different drugs. “By automating these using microfluidics, high-speed imaging and sophisticated analysis techniques, we have made high-throughput screening of these organisms possible,” Yanik says.

The team has also developed high-throughput techniques to rapidly generate micropatterns of proteins on substrates, upon which individual neurons can be grown. They can then study how these neurons form connections, and administer drugs to discover how they affect this process.

Similarly, the group’s high-throughput screening systems allow them to study neuronal circuits within cultured animal or human brain tissues by probing single neurons.

Paving the way for discoveries

Each of these new technologies has led to exciting biological discoveries, according to Yanik. The C. elegans screening chip, for example, led to the discovery of genes and druglike small molecules that enhance neuronal-fiber regeneration.

The researchers have also used their systems to discover molecules that enhance the wiring of mammalian neurons.

Yanik and his colleagues are now attempting to use their zebrafish-screening technology to study genes known to affect human development, and to play a role in various neurological disorders. They also hope to study and image the way in which the brains of these organisms transmit electrical signals, and how this information is processed under the influence of various neurological drugs.

“We are also developing assays to figure out how we can make specific types of neurons that are lost in different diseases, such as Parkinson’s or epilepsy, by testing large numbers of combinations of regulatory genes,” Yanik says.

Yanik grew up in Antalya, on the Mediterranean coast of southwestern Turkey. While in high school he joined the Turkish Physics Olympics team, winning bronze medals at International Physics Olympiads in China and Australia.

Partly as a result of that success, Yanik was accepted to MIT for his undergraduate studies, and moved to the United States at the age of 17. While studying for his PhD at Stanford he developed a technique to stop and store pulses of light within microchips. He also began taking courses in cellular biology and neuroscience at Stanford Medical School.

Ultimately, he hopes his research will help to improve the lives of people with a range of diseases. “Hopefully the technologies we are developing will help the research community to discover therapeutics for various neurological diseases,” he says.