WebProNews

Tag: MIT

  • New MIT Algorithm Predicts Twitter Trends Hours in Advance

    Researchers at the Massachusetts Institute of Technology (MIT) have announced a new algorithm they say is capable of predicting Twitter trends far in advance.

    The algorithm is claimed to predict with 95% accuracy the topics that will show up on Twitter’s trending topics list. It can make these predictions an average of an hour and a half before Twitter lists the topic as a trend, and can sometimes predict trends as much as four or five hours in advance.

    Devavrat Shah, associate professor in the electrical engineering and computer science department at MIT, and MIT graduate student Stanislav Nikolov will present the algorithm at the Interdisciplinary Workshop on Information and Decision in Social Networks in November.

    Shah stated that the algorithm is a nonparametric machine-learning algorithm, meaning it makes no assumptions about the shape of patterns. It compares changes over time in the number of tweets about a new topic to the changes over time seen in every sample in the training set. Also, training set samples with statistics similar to the new topic are more heavily weighted when determining a prediction. Shah compared it to voting, where each sample gets a vote, but some votes count more than others.

    This method is different from the standard approach to machine learning, where researchers create a model of the pattern whose specifics need to be inferred. In theory, the new approach could apply to any quantity that varies over time (including the stock market), given the right subset of training data.
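
    To make the idea concrete, here is a minimal sketch of that kind of weighted time-series voting. The distance metric, weighting constant, and data shapes are illustrative assumptions, not the researchers’ actual implementation.

    ```python
    # Minimal sketch of weighted "voting" over time series, in the spirit of
    # the approach described above (assumed details, not the MIT code).
    import numpy as np

    def predict_trend(new_series, trending_examples, non_trending_examples, gamma=1.0):
        """Vote on whether a topic's tweet-count series looks like past trends.

        Each series is a sequence of tweet counts over time (same length).
        Training examples closer to the new series get exponentially larger votes.
        """
        new = np.asarray(new_series, dtype=float)

        def weighted_votes(examples):
            votes = 0.0
            for series in examples:
                distance = np.linalg.norm(new - np.asarray(series, dtype=float))
                votes += np.exp(-gamma * distance)  # similar samples count more
            return votes

        trend_votes = weighted_votes(trending_examples)
        non_trend_votes = weighted_votes(non_trending_examples)
        return trend_votes > non_trend_votes  # True -> predict "will trend"

    # toy usage: a sharply rising series votes with the "trending" examples
    print(predict_trend([1, 2, 5, 12, 30],
                        trending_examples=[[1, 3, 6, 14, 28], [0, 2, 4, 10, 25]],
                        non_trending_examples=[[2, 2, 3, 3, 3], [1, 1, 2, 2, 2]]))
    ```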

    For Shah and Nikolov’s initial experiments, they used data from 200 Twitter topics that were listed as trends and 200 that were not. “The training sets are very small, but we still get strong results,” said Shah. In addition to its 95% prediction rate, the algorithm had only a 4% false-positive rate.

    The accuracy of the system can increase with additional training sets, but the computing costs will also increase. However, Shah revealed that the algorithm has been designed to execute across separate machines, such as web servers. “It is perfectly suited to the modern computational framework,” said Shah.

    “It’s very creative to use the data itself to find out what trends look like,” said Ashish Goel, associate professor of management science at Stanford University and a member of Twitter’s technical advisory board. “It’s quite creative and quite timely and hopefully quite useful.

    “People go to social-media sites to find out what’s happening now. So in that sense, speeding up the process is something that is very useful.”

    (Image courtesy MIT)

  • Like-A-Hug Makes Facebook Likes Mean a Little More

    Facebook just announced that they’ve hit one billion monthly active users. As a part of that milestone, the company also told us that in the last three and a half years (approximately), Facebook users have hit the “like” button over 1.13 trillion times.

    Let that sink in for a second. 1.13 trillion likes. With that kind of cultural dominance, the “like” has become a universal emotion, sometimes divorced from any other tangible human emotion. People “like” an engagement announcement the same way they “like” a funny picture. With Facebook users liking billions of posts every day, maybe the “like” has lost its punch? Does a “like” really mean anything anymore?

    What if a like could translate into a physical hug? That way, when you “liked” your sister’s baby photo from across the country, she could feel your “like,” and it may just feel a little bit more like love.

    That’s the idea behind the Like-A-Hug, a project from Melissa Chow, Andy Payne, and Phil Seaton of the MIT Media Lab.

    “Like-A-Hug is a wearable social media vest that allows for hugs to be given via Facebook, bringing us closer despite physical distance. The vest inflates when friends ‘Like’ a photo, video, or status update on the wearer’s wall, thereby allowing us to feel the warmth, encouragement, support, or love that we feel when we receive hugs. Hugs can also be sent back to the original sender by squeezing the vest and deflating it,” says the project site.

    Check out the concept below:

    All that stuff about “likes” turning into love and meaning something is nice, but it’s doubtful that you could get that many people to wear an inflatable vest in public. Of course, the idea is probably not going to translate into an actual product – but that wasn’t really the point.

    “Connecting it to Facebook conceptually was simply a way to explore how social media might push past the traditional graphic user interface (GUI),” says Chow.

    Plus, nothing beats a real hug.

  • This Robot Worm Looks Like A Chinese Finger Trap

    There are multiple robots out there that can perform a variety of functions. Just yesterday, we saw a robot that can perform Jewish mourning rituals. If a robot can perform religious rites, surely it can also move like a worm. Researchers at MIT, Harvard and Seoul National University must have thought the exact same thing.

    Meet Meshworm, a robot encased in a “flexible, meshlike tube that makes up its body.” The inside of the robot is controlled by what the researchers call an “artificial muscle” made of nickel-titanium wire. It moves by using heat to stretch and contract the wires. Here’s the Meshworm in action:

    While I think the robot looks more like a Chinese finger trap, the scientists say the design is meant to mimic the lowly earthworm. They say that earthworms, snails and sea cucumbers move through a process called peristalsis. By that, they mean that the worm moves by “alternately squeezing and stretching muscles along the length of their bodies.”
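
    For a sense of what that looks like in control terms, here is a toy actuation schedule that contracts one body segment at a time, front to back. The segment count, timings, and actuate() stub are invented for illustration; the real Meshworm heats nickel-titanium wire segments to squeeze its mesh body.

    ```python
    # Toy peristaltic wave: contract each segment in turn, then relax it.
    import time

    NUM_SEGMENTS = 5
    CONTRACT_SECONDS = 0.5   # hypothetical time to heat/contract a segment
    RELAX_SECONDS = 0.5      # hypothetical time to cool/relax it

    def actuate(segment, contracted):
        """Placeholder for driving current to one shape-memory wire segment."""
        state = "contract" if contracted else "relax"
        print(f"segment {segment}: {state}")

    def peristaltic_wave(cycles=3):
        for _ in range(cycles):
            for seg in range(NUM_SEGMENTS):
                actuate(seg, contracted=True)
                time.sleep(CONTRACT_SECONDS)
                actuate(seg, contracted=False)
                time.sleep(RELAX_SECONDS)

    if __name__ == "__main__":
        peristaltic_wave()
    ```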

    What makes the Meshworm so unique, beyond its movement, is that it’s more resilient than a regular earthworm. You can step on Meshworm and it will keep on moving. The robot is hard to damage because it’s built from soft materials that are meant to bend under pressure.

    So what could Meshworm mean for the future of robotics? It would make robots more versatile and able to withstand more damage than their counterparts. The scientists at MIT specifically mention these robots as being able to “explore hard-to-reach spaces and traverse bumpy terrain.”

    What may be the most interesting part about this robot is that the scientists say it’s showing signs of “body morphing capability.” For all the futurists terrified of a robot-controlled future, this is where you start to get afraid. If a robot is soft enough, it could potentially mold itself into any object. The robot might even be your next door neighbor – mild-mannered Mr. Johnson. I bet you never thought he was a robot, but he never thought the cute caterpillar was a robot either until it was too late.

  • Here Are The U.S. Airports With The Most Influence In A Pandemic

    Ok, this is cool. And also rather frightening, as I just caught Contagion on HBO the other night. Researchers at MIT’s Department of Civil and Environmental Engineering took a good look at the first few days of a contagious disease outbreak and determined which U.S. airports would be the biggest influencers in its spread.

    They looked at the 40 largest airports in the country, and found that traffic isn’t necessarily the only indicator of how much an airport would be to blame.

    Unlike existing models, the new MIT model incorporates variations in travel patterns among individuals, the geographic locations of airports, the disparity in interactions among airports, and waiting times at individual airports to create a tool that could be used to predict where and how fast a disease might spread.

    “The results from our model are very different from those of a conventional model that relies on the random diffusion of travelers … [and] similar to the advective flow of fluids,” says researcher Christos Nicolaides. “The advective transport process relies on distinctive properties of the substance that’s moving, as opposed to diffusion, which assumes a random flow. If you include diffusion only in the model, the biggest airport hubs in terms of traffic would be the most influential spreaders of disease. But that’s not accurate.”

    Here’s what they mean: Although Honolulu’s airport handles only about a third as much traffic as New York’s Kennedy International, its position in the “air transportation network” and its connections to many distant, well-connected hubs make it almost as influential in the spread of our unknown contagion.
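
    Here is a toy sketch of that idea: seed travelers at one airport, let them hop along traffic-weighted routes, and count how many distinct airports they reach within a few legs. The airports and weights below are invented, and the actual MIT model also factors in individual travel patterns, geography, and waiting times.

    ```python
    # Toy "early spreading power" measure on a made-up air-travel network.
    import random

    routes = {  # weight ~ relative passenger traffic (illustrative only)
        "JFK": {"LAX": 5, "SFO": 4, "ORD": 5, "ATL": 4},
        "HNL": {"LAX": 2, "SFO": 2, "SEA": 1},
        "LAX": {"JFK": 5, "HNL": 2, "ORD": 4, "ATL": 3},
        "SFO": {"JFK": 4, "HNL": 2, "ORD": 3},
        "ORD": {"JFK": 5, "LAX": 4, "ATL": 5, "SFO": 3},
        "ATL": {"JFK": 4, "ORD": 5, "LAX": 3},
        "SEA": {"HNL": 1, "ORD": 2},
    }

    def early_reach(seed, travelers=2000, hops=3):
        """Count distinct airports reached by travelers starting at `seed`."""
        reached = set()
        for _ in range(travelers):
            here = seed
            for _ in range(hops):
                dests = list(routes[here].keys())
                weights = list(routes[here].values())
                here = random.choices(dests, weights=weights)[0]
                reached.add(here)
        return len(reached)

    for airport in routes:
        print(airport, early_reach(airport))
    ```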

    Here are the final rankings in terms of most influential in the spread of disease:

    1. Kennedy International (New York)
    2. Los Angeles
    3. Honolulu
    4. San Francisco
    5. Newark
    6. Chicago (O’Hare)
    7. Washington (Dulles)
    8. Hartsfield-Jackson International (Atlanta)

    Hartsfield-Jackson ranked only 8th in terms of influence, although it serves the most flights of any airport under consideration.

    [via The Verge]

  • New Portable Charger Gives You “Several Weeks” Of Charges

    “It’s like having a plug in your pocket,” says Mouli Ramani of Lilliputian Systems. “This new power system will transform how consumers use their CE devices.”

    For everyone who has run out of juice halfway through the day, and for everyone who has seen their smartphone flicker and fade away just when they needed it most – a new partnership between tech company Lilliputian Systems and specialty retailer Brookstone might make you pretty happy.

    The two companies have just announced a new portable charging system, designed, developed and manufactured by Lilliputian and sold and marketed by Brookstone. The device will be sold under the Brookstone brand.

    The portable fuel cell is about as big as a “thick smartphone,” according to CNET, and contains recyclable cartridges that are about the size of a Zippo. The cartridges are made of plastic and filled with lighter fluid, which is what powers the charging. Although there’s no word yet on the total price of the portable device, the cartridges will run you a few bucks apiece.

    “Today’s Smartphones use much more power, increasing the need for a more efficient way to recharge when on the go,” said Steven Schwartz, Vice President of Merchandising and Product Development at Brookstone. “Lilliputian’s groundbreaking technology provides power, wherever and whenever you need it. This breakthrough aligns well with our commitment to innovative solutions that make life easier.”

    Charging on the go is always a pain, as most everyone is well aware. The good thing about this new portable device is that it obviously does not need an outlet, and it works with any USB-enabled device. That means smartphones, tablets, iPods, ereaders, etc. can all be charged by this new device. Each cartridge will give an iPhone, for instance, around a dozen full charges. Depending on how much you’re playing Words with Friends and surfing the interwebs, that could mean anywhere from 2-3 weeks without ever having to plug your device into the wall.
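
    As a rough sanity check on the “several weeks” figure: with about a dozen full charges per cartridge, the math works out as below. The daily recharge rates are assumptions for illustration, not numbers from the announcement.

    ```python
    # Rough arithmetic behind "2-3 weeks per cartridge" (assumed usage rates).
    charges_per_cartridge = 12          # "around a dozen full charges"

    for charges_per_day in (1.0, 0.6):  # heavy vs. lighter use (assumptions)
        days = charges_per_cartridge / charges_per_day
        print(f"{charges_per_day} charges/day -> ~{days:.0f} days (~{days / 7:.1f} weeks)")
    ```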

    The new technology has already been approved by both the UN International Civil Aviation Organization and the U.S. Dept. of Transportation. Naturally, that means that you’ll be able to carry them on a plane.

    Lilliputian Systems, an MIT spin-off company, just recently got into the manufacturing game. If their product works as well as advertised, they’re going to be incredibly busy.

  • Harvard, MIT Launching Free edX Courses

    Harvard recently announced a new initiative to make the book metadata from all of its libraries available for public search, and now the Ivy League research school has teamed up with MIT to offer free online courses through a new program called edX.

    The edX initiative is a non-profit organization that is funded by both institutions, with each contributing roughly $30 million to the program. Those who sign up for an edX course can get a taste of what it’s like to take a class at Harvard or MIT. The program is free, unless a virtual student seeks a certificate declaring mastery over a specific course. Naturally, no actual credit hours are offered and there is no admissions process, though there are quizzes and exams. The list of courses available will be officially announced by the end of the summer, and classes will begin in the fall.

    Harvard President Drew Faust states, “EdX gives Harvard and MIT an unprecedented opportunity to dramatically extend our collective reach by conducting groundbreaking research into effective education and by extending online access to quality higher education.” MIT President Susan Hockfield adds, “EdX represents a unique opportunity to improve education on our own campuses through online learning, while simultaneously creating a bold new educational path for millions of learners worldwide.”

    In related news, it was recently reported that Sebastian Thrun, former Research Professor of Computer Science at Stanford University and Google Fellow, quit his job to found Udacity, a free online university with a mission to “change the future of education”. Likewise, Stanford, along with Princeton, Michigan and the University of Pennsylvania, has also recently announced free online courses.

  • MIT Researchers Create Glare-Free Glass

    MIT researchers have created a type of glass that is nearly invisible. MIT News reports that the glass and the process of creating it were described in a paper published by the journal ACS Nano. The paper is titled (get ready) “Nanotextured Silica Surfaces with Robust Super-Hydrophobicity and Omnidirectional Broadband Super-Transmissivity.”

    The abstract to the paper describes the structure of the glass:

    Taking cues from nature, we use tapered conical nanotextures to fabricate the multifunctional surfaces; the slender conical features result in large topographic roughness whilst the axial gradient in the effective refractive index minimizes reflection through adiabatic index-matching between air and the substrate.

    Right. What all this means is that the researchers (Kyoo-Chul Park, Hyungryul J. Choi, Chih-Hao Chang, Robert E. Cohen, Gareth H. McKinley, and George Barbastathis) have created a technique to manufacture a glass textured in such a way that it “virtually eliminates glare.” This means the glass is almost perfectly see-through, something that is hard to even imagine. The researchers state the glass is also anti-fogging and self-cleaning, making it perfect for windows.
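
    A rough way to picture the effect: as the cones taper, the fraction of glass in each cross-section falls off gradually, so the effective refractive index ramps smoothly from glass down to air rather than jumping at a hard surface. The sketch below uses a textbook-style effective-medium mixing rule and made-up numbers, not the paper’s actual model.

    ```python
    # Illustrative effective refractive index profile along a tapered nanocone.
    import numpy as np

    n_air, n_glass = 1.0, 1.5
    heights = np.linspace(0.0, 1.0, 11)   # 0 = cone base, 1 = cone tip
    fill = (1.0 - heights) ** 2           # glass area fraction of a cone cross-section

    # simple effective-medium mixing of the two materials (approximation)
    n_eff = np.sqrt(fill * n_glass**2 + (1 - fill) * n_air**2)

    for h, n in zip(heights, n_eff):
        print(f"height {h:.1f}: n_eff ~ {n:.3f}")
    ```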

    All I can think is that this type of glass is going to cause problems and break noses the world over. However, a little more thought reveals what the plan undoubtedly is. Glare is really only an issue for the screens we use daily. Unless you are reading this on an e-reader or in the dark, chances are you are contending with glare right now. Glare-free monitors and touchscreen devices just got closer, thanks to these researchers. Also, a fog-free car windshield would save a lot of time on those cold winter mornings. The researchers also mention photovoltaic solar cells as another possible application.

    Check out how water droplets bounce off the surface of the glass in this video provided by MIT News, and leave a comment about how glare-free glass might be useful, other than for bird murder.

    (image by Hyungryul Choi and Kyoo-Chul Park, courtesy of MIT News)

  • Hack Turns Building Into a Playable Game of Tetris

    Students at MIT converted one side of the school’s Green Building into a full-color, working version of the game Tetris.

    According to IHTFP: Interesting Hacks To Fascinate People: The MIT Gallery of Hacks, hackers at MIT consider Tetris on the Green Building to be the “Holy Grail of hacks”.

    The Green Building is home to the MIT Earth and Planetary Sciences department. With its square, evenly spaced windows, it was the ideal place for the game.

    The hack began with the word “TETRIS” scrolling down the side of the building. When play commenced, each level would get progressively paler, making it difficult to identify which type of block you had to work with. The last level involved colors shifting on-screen, making it even more difficult.

    According to BostInno, this kind of hack has been done before. Students at Delft University of Technology in the Netherlands accomplished a similar hack all the way back in 1995.

  • Nanotechnology Leaps Forward With New Cancer Drug

    A team of scientists, engineers and physicians has found promising effects of a first-in-class targeted cancer drug called BIND-014 in treating solid tumors.

    BIND-014 is the first targeted and programmed nanomedicine to enter human clinical studies. In the study, the researchers demonstrate BIND-014’s ability to effectively target a receptor expressed in tumors to achieve high tumor drug concentrations, as well as show remarkable efficacy, safety and pharmacological properties compared to the parent chemotherapeutic drug, docetaxel (Taxotere).

    “BIND-014 demonstrates for the first time that it is possible to generate medicines with both targeted and programmable properties that can concentrate the therapeutic effect directly at the site of disease, potentially revolutionizing how complex diseases such as cancer are treated,” said Omid Farokhzad, MD, a physician-scientist in the Brigham and Women’s Hospital Department of Anesthesiology, associate professor at Harvard Medical School, and study co-senior author.

    “Previous attempts to develop targeted nanoparticles have not successfully translated into human clinical studies because of the inherent difficulty of designing and scaling up a particle capable of targeting, long-circulation via immune-response evasion, and controlled drug release,” said Robert Langer, ScD, David H. Koch Institute Professor, MIT and study co-senior author.

    According to the researchers, the drug is the first of its kind to reach clinical evaluation and demonstrates a differentially high drug concentration in tumors by targeting drug-encapsulated nanoparticles directly to the site of tumors. This leads to substantially better efficacy and safety.

    “It is wonderful to witness a world-class team of scientists, engineers, physicians, for-profit and non-profit organizations converge to develop this potentially revolutionary technology for treatment of cancers. The effectiveness of this team has been remarkable and serves as a model for translational research,” said Edward J. Benz, Jr. MD, President of Dana-Farber Cancer Institute.

    The research and development of the first targeted programmable nanomedicine to show anti-tumor effects in humans represents the culmination of more than a decade of investigation initially carried out in academic labs at BWH and MIT.

  • MIT Brains Work On “Smart Sand” Robots

    Researchers at MIT are working on a project that could bring sci-fi fantasies to reality. But, then again, when aren’t they?

    Nowadays, if you want something built, you take wood or some other material and cut or assemble it into shape. But what if you could have a computer model of what you want, and have that thing magically appear out of a box of sand?

    That is the very vision the brains at the Distributed Robotics Laboratory (DRL) at MIT’s Computer Science and Artificial Intelligence Laboratory are pursuing. It involves a lot of programming and seemingly simple twiddling, but it could change the way things are made, with the same kind of promise that has people so excited about 3-D printing. The development is called “smart sand”.

    From MIT News:

    At the IEEE International Conference on Robotics and Automation in May — the world’s premier robotics conference — DRL researchers will present a paper describing algorithms that could enable such “smart sand.” They also describe experiments in which they tested the algorithms on somewhat larger particles — cubes about 10 millimeters to an edge, with rudimentary microprocessors inside and very unusual magnets on four of their sides.

    Unlike many other approaches to reconfigurable robots, smart sand uses a subtractive method, akin to stone carving, rather than an additive method, akin to snapping LEGO blocks together. A heap of smart sand would be analogous to the rough block of stone that a sculptor begins with. The individual grains would pass messages back and forth and selectively attach to each other to form a three-dimensional object; the grains not necessary to build that object would simply fall away. When the object had served its purpose, it would be returned to the heap. Its constituent grains would detach from each other, becoming free to participate in the formation of a new shape.

    Of course, ten-millimeter cubes are hardly what you would call “sand”, but the idea is to get the functionality and algorithms in place, then shrink the size over time.
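
    Here is a much-simplified, centralized sketch of the subtractive idea: start with a full block of grains, keep only the grains inside a target shape, and let the rest fall away. The real smart-sand algorithms do this with distributed message passing between neighboring grains; the target shape below is just a made-up example.

    ```python
    # Toy "carving by subtraction": keep grains inside the shape, release the rest.
    BLOCK_SIZE = 8

    def target_shape(x, y):
        """Hypothetical target: a small plus sign in the middle of the block."""
        return (3 <= x <= 4 and 2 <= y <= 5) or (2 <= x <= 5 and 3 <= y <= 4)

    block = {(x, y) for x in range(BLOCK_SIZE) for y in range(BLOCK_SIZE)}
    kept = {grain for grain in block if target_shape(*grain)}
    released = block - kept   # these grains "fall away" back into the heap

    for y in range(BLOCK_SIZE):
        print("".join("#" if (x, y) in kept else "." for x in range(BLOCK_SIZE)))
    ```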

    “Take the core functionalities of their pebbles,” says [Robert] Wood, who directs Harvard’s Microrobotics Laboratory. “They have the ability to latch onto their neighbors; they have the ability to talk to their neighbors; they have the ability to do some computation. Those are all things that are certainly feasible to think about doing in smaller packages.”

    “It would take quite a lot of engineering to do that, of course,” Wood cautions. “That’s a well-posed but very difficult set of engineering challenges that they could continue to address in the future.”

    This video gives you an idea of the “subtractive” methods of building that the MIT folks are working on.

  • MIT Photon Camera Can See 3D Objects Around Corners

    Leave it to the MIT Media Lab to come up with crazy new stuff. Now they’ve made a camera that can see around corners. Well, kind of. It’s not gonna take a picture of you that your Mom would recognize, but it does create 3D images of objects that are in another room.

    According to Nature.com, the camera works by exploiting what researcher Ramesh Raskar calls “echoes of light.” Essentially, a very high-speed camera fires a super quick (50-femtosecond) laser pulse at a wall; that pulse breaks up into photons and scatters, and some of those photons hit objects out of view of the camera, ricochet off the objects, bounce back to the wall they first hit, and re-enter the camera lens. The camera’s time resolution is two picoseconds, the time it takes light to travel 0.6 mm, so when photons reach the camera at different times, positions, and angles, a computer is able to determine how far each photon has traveled with an accuracy of less than a millimeter. To adjust for different photons hitting the camera at the same time, the laser’s position is changed 60 times, and the results, processed through a complicated algorithm, yield a 3D image that would otherwise be hidden from view.
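
    The depth accuracy follows directly from that timing figure; here is the back-of-the-envelope arithmetic. The 2-picosecond resolution is from the article, while the sample round-trip time is just an example.

    ```python
    # Time-of-flight arithmetic: distance = speed of light * travel time.
    C = 299_792_458.0          # speed of light, m/s

    time_resolution = 2e-12    # seconds (2 picoseconds, as quoted)
    distance_per_tick = C * time_resolution
    print(f"light travels ~{distance_per_tick * 1000:.2f} mm per 2 ps tick")

    def path_length(arrival_time_s):
        """Total path length for a photon that took this long to return."""
        return C * arrival_time_s

    print(f"{path_length(1e-9) * 100:.1f} cm of travel for a 1 ns round trip")
    ```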

    Right now the process takes several minutes, but researchers have a goal of speeding up the process to under 10 seconds, reports Nature.

    The implications of such a device are all over the place, especially if researchers meet their goal of speeding up the process to someday create realtime images, even of moving objects. Safety is cited as the major advantage of the technology: Nature suggests the camera could be used in dangerous or inaccessible places, like moving machinery or contaminated areas. Beyond that, far more advanced versions of the technology could in the future help rescuers know what’s going on inside a room before they enter. This could be invaluable in fires, hostage-rescue situations, and even bomb scenarios. Of course, if there’s anything that operates faster than the 50-femtosecond laser pulse, it’s governments’ desire to weaponize new technologies. I wouldn’t be surprised to someday see related technologies mounted on gun barrels or military and police vehicles, or integrated into individual warfighters’ heads-up displays. Of course, it could also be used for spying — both on criminals and law-abiding citizens alike.

    But that’s a long way off for a camera that still takes several minutes to create a single image. Regardless of the potential for future application or abuse, it’s a really impressive technology. Here’s a video from Nature that shows how the camera works:

    [Nature.com]

  • MIT Teaching Drones to Read Hand Gestures

    Presently, the U.S. military is attempting to bring combat drones onto aircraft carriers. One of the major obstacles concerns how the unmanned fliers will interact with carrier personnel on flight decks. Yale Song, a Ph.D. student from the Massachusetts Institute of Technology, is working to solve this problem, as well as to improve general human-robot interaction. Song started by programming a computer to obey hand signals:

    To recognize a hand gesture, a computer must look at the positioning of the human body to discern where the hand is, and also figure out when the gesture begins and when it ends. With aircraft carrier deck crews in constant motion, Song has devised a way for computers to decode hand gestures on the fly. Song’s system calculates the probability that a series of hand positions signifies a specific gesture, and decides which command is most likely being conveyed.
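
    Here is a minimal sketch of scoring a running window of poses against a small gesture library, loosely in the spirit of that description. The pose labels, gesture templates, and per-frame probabilities are invented for illustration; the real system uses learned statistical models.

    ```python
    # Toy probabilistic matching of an observed pose sequence to gesture templates.

    def frame_prob(observed, expected):
        """Crude stand-in for a per-frame match probability."""
        return 1.0 if observed == expected else 0.1

    # hypothetical gesture library: each gesture is a short sequence of poses
    GESTURES = {
        "move_forward": ["arms_up", "arms_up", "arms_forward"],
        "stop":         ["arms_out", "arms_out", "arms_out"],
    }

    def best_gesture(window):
        """Pick the most likely gesture for a window of recent poses."""
        scores = {}
        for name, template in GESTURES.items():
            p = 1.0
            for observed, expected in zip(window, template):
                p *= frame_prob(observed, expected)
            scores[name] = p
        return max(scores, key=scores.get), scores

    print(best_gesture(["arms_up", "arms_up", "arms_forward"]))
    ```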

    With a library of 24 gestures made by 20 different people, the system can correctly identify what commands are being given 76% of the time. The system struggles with quick or erratic gestures, and cannot process “slang” gestures whatsoever. A made-up gesture that might seem obvious to a human would make no sense to a drone. Especially a drone infected with a virus. Also, 76% is an okay score, but not when human lives and very expensive military equipment are at stake.

    MIT said this on the matter:

    Part of the difficulty in training the classification algorithm is that it has to consider so many possibilities for every pose it’s presented with: For every arm position there are four possible hand positions, and for every hand position there are six possible arm positions. In ongoing work, the researchers are retooling the algorithm so that it considers arm position and hand position separately, which drastically cuts down on the computational complexity of its task. As a consequence, it should learn to identify gestures from the training data much more efficiently.

    Song also states that he’s working on a gesture-recognition feedback method – if a drone didn’t understand what was being conveyed, it could nod or shake its camera. This all seems very tricky in regards to a robot plane coming in at roughly 150 mph to land on a 500′-long flight deck.

  • MIT’s Free Online Course Enrollment Begins Today

    The power and knowledge that education brings are priceless. MIT is cutting the price of that priceless education down to nothing – free.

    Bloomberg is reporting that the Massachusetts Institute of Technology will begin offering a free online course today that anybody can take. Students around the world can take the class and upon completion will receive a certificate signifying their accomplishment.

    The class, titled “6.002x: Circuits and Electronics,” is the first free course offered by the school and will begin March 5. The course will focus on the inner workings of smartphones and other “cool gadgets,” according to Anant Agarwal, the lead instructor for the course. Subsequent classes will be offered for a small fee.

    This isn’t the first time that MIT has dabbled in free education and materials for aspiring students. For the past 10 years, the school has provided documents and lecture notes online through its OpenCourseWare program. Its new MITx initiative will assess the performance of non-MIT students and award certificates to those who show mastery of the subject.

    “Anybody anywhere that has the time, motivation, drive to learn this kind of material should be given the opportunity to do so,” MIT Provost L. Rafael Reif said. The new program will feature more interactive features than the current OpenCourseWare does.

    Students will receive video lectures, midterms and final exams, weekly deadlines to complete homework and labs, and access to a discussion forum. The school said that students can expect to spend about 10 hours a week on the course.

    Enrollment in the first course is unlimited, but the school isn’t sure how many students will sign up for the program. It does require that students interested in the “introductory course” have a background in advanced physics and mathematics.

    For those interested in free education, you can sign up here.

  • Wireless Charging As You Drive For Electric Cars

    A Stanford University research team has designed a high-efficiency charging system that uses magnetic fields to wirelessly transmit large electric currents between metal coils placed several feet apart. The long-term goal of the research is to develop an all-electric highway that wirelessly charges cars and trucks as they cruise down the road.

    The new technology has the potential to dramatically increase the driving range of electric vehicles and eventually transform highway travel, according to the researchers. A wireless charging system would address a major drawback of plug-in electric cars – their limited driving range. The all-electric Nissan Leaf, for example, gets less than 100 miles on a single charge, and the battery takes several hours to fully recharge. A charge-as-you-drive system would overcome these limitations. You could potentially drive for an unlimited amount of time without having to recharge.

    The wireless power transfer is based on a technology called magnetic resonance coupling. Two copper coils are tuned to resonate at the same natural frequency – like two wine glasses that vibrate when a specific note is sung. The coils are placed a few feet apart. One coil is connected to an electric current, which generates a magnetic field that causes the second coil to resonate. This magnetic resonance results in the invisible transfer of electric energy through the air from the first coil to the receiving coil. Wireless power transfer will only occur if the two resonators are in tune. Objects tuned at different frequencies will not be affected, including humans.
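
    A quick numerical illustration of what “tuned to the same natural frequency” means for two coils: each coil’s resonant frequency follows f = 1 / (2π√(LC)), and transfer is efficient only when the frequencies match. The component values below are arbitrary examples, not the Stanford or MIT coil parameters.

    ```python
    # Resonant frequency of an LC coil: f = 1 / (2 * pi * sqrt(L * C)).
    import math

    def resonant_frequency(inductance_h, capacitance_f):
        return 1.0 / (2 * math.pi * math.sqrt(inductance_h * capacitance_f))

    coil_tx = resonant_frequency(25e-6, 1e-9)   # transmitter: 25 uH, 1 nF (example)
    coil_rx = resonant_frequency(25e-6, 1e-9)   # receiver tuned identically

    print(f"transmit coil: {coil_tx / 1e6:.2f} MHz")
    print(f"receive coil:  {coil_rx / 1e6:.2f} MHz")
    # efficient transfer only happens when the two frequencies match;
    # an object resonating at a different frequency is largely unaffected
    ```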

    In 2007, researchers at the Massachusetts Institute of Technology used magnetic resonance to light a 60-watt bulb. The experiment demonstrated that power could be transferred between two stationary coils about six feet apart, even when humans and other obstacles are placed in between.

    The Stanford researchers wondered if the MIT system could be modified to charge a car moving at highway speeds. The car battery would provide an additional boost for acceleration or uphill driving.

    Here’s how the system would work: A series of coils connected to an electric current would be embedded in the highway. Receiving coils attached to the bottom of the car would resonate as the vehicle speeds along, creating magnetic fields that continuously transfer electricity to charge the battery.

    To determine the most efficient way to transmit 10 kilowatts of power to a real car, the Stanford team created computer models of systems with metal plates added to the basic coil design.

  • Photonic Chips, Light To Replace Electricity In Our Microchips

    Right now, as you’re reading this, all kinds of electronic processes are running in your computer to make sure the information you desire shows up on your monitor. What you might not know is that electricity isn’t the most efficient means of relaying this information, but it’s all we have. Until now. You can send your thank-you letters to MIT, which has made huge strides toward the development of photonic chips.

    Many modern-day communication systems use fiber optics to transfer information from one location to another. Tiny beams of light transmit much of the information we access on a daily basis. The big hurdle is that when the information reaches a computer, it has to be converted to electronic form so our systems can process it, and then converted back into light so it can be sent on.

    Caroline Ross, the Toyota Professor of Materials Science and Engineering at MIT, has developed a new component she calls a “diode for light“.

    So why haven’t we been using light in our microchips until now?

    The problem has been harnessing light in such a way that the lasers powering the chips don’t lose efficiency when transferring the information. However, the researchers discovered a material, garnet, that they could add to microchips, allowing the light to transfer properly.

    One practical advantage of this discovery is that these photonic chips can be manufactured with the processes we have now. Ross explains, “It simplifies making an all-optical chip. The design of the circuit can be produced just like an integrated-circuit person can design a whole microprocessor. Now, you can do an integrated optical circuit.”

    This means the advancement discovered by MIT could make a much quicker shift to the commercial market. The improvements the technology could bring to our computational devices are enormous. First and foremost, light travels much faster than electrons. Also, the wires required to transmit electronic signals can carry only a single data stream, while light can carry multiple streams of data through a single fiber or circuit.

    The realistic advancements generated from these photonic chips will take us into the next phase of processing power. Meanwhile, the creative dreamer in me is simply asking, “Does this advancement get us one or even two steps closer to having actual lightsabers?”

  • Flying Planes With An iPhone

    I guess all that time spent playing Cube Runner could prove useful after all.

    An associate professor of aeronautics and her students at MIT have developed a system for controlling small unmanned aerial vehicles (UAVs) with your basic iPhone. Professor Missy Cummings and her class teamed up with Boeing’s research and development center in Seattle to pilot a small UAV with a few twists and turns of an Apple smartphone.

    And it’s not your typical remote-controlled plane. During the test, the controller was in Seattle while the UAV was buzzing around a football field on the MIT campus – about 2,500 miles away. Here’s how it works, according to Cummings –

    “We’ve set the system up so that the iPhone connects into the cell network through a Wi-Fi hotspot capability, and on the other end, the vehicle is hooked up to a ground station that’s also hooked into a wireless hotspot. So the vehicle and the iPhone are then communicating over the internet, and this allows us to send whatever commands we want.”

    The iPhone controls the UAV in two ways. First, the “pilot” can set location points on a map and direct the plane to fly to that spot. Or, in a more awesome, video game-like fashion, the pilot can use a built-in camera to fly the plane manually, simply by tilting the iPhone to signal the desired direction. Check it out in action:

    One of the goals of the project is to create a system that is easy to learn. This system is so intuitive, it can be mastered in a matter of minutes.
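
    As a rough illustration of the tilt-to-fly mode described above, here is a toy mapping from the phone’s roll and pitch angles to bounded control commands. The angle source, scaling, and command format are assumptions for the sketch, not the actual MIT/Boeing interface.

    ```python
    # Toy tilt-to-command mapping for manual flight.

    MAX_COMMAND = 1.0      # normalized full-deflection command
    MAX_TILT_DEG = 45.0    # tilting the phone past this gives full deflection

    def tilt_to_command(tilt_deg):
        """Scale a tilt angle in degrees to a command in [-1, 1]."""
        scaled = tilt_deg / MAX_TILT_DEG
        return max(-MAX_COMMAND, min(MAX_COMMAND, scaled))

    def control_message(phone_roll_deg, phone_pitch_deg):
        return {
            "roll_cmd": tilt_to_command(phone_roll_deg),
            "pitch_cmd": tilt_to_command(phone_pitch_deg),
        }

    # tilting the phone 20 degrees right and 10 degrees forward
    print(control_message(20.0, -10.0))
    ```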

    Cummings, who directs the Humans and Automation Lab at MIT, focuses her research on how to make control systems that are easy for people to learn and use. In principle, she says, the control system she and her team have created for smartphones could be used to control any aircraft, even a jumbo jet. In practice, it could easily replace the control systems not only for military drones, but for UAVs used by emergency personnel: for example, to track the progress of a forest fire in a remote area from a safe distance.

    What do you think about the possibility of an iPhone piloting a jet? Let us know in the comments.