WebProNews

Category: AITrends

ArtificialIntelligenceTrends

  • Google Engineer Talks Natural Language Project

Ray Kurzweil joined Google as a director of engineering in December, and he spoke with SingularityHub about a natural language project he’ll be working on at the company (he also recently wrote a book called How To Create A Mind).

    “I envision, some years from now, the majority of search queries will be answered without you actually asking. It’ll just know this is something you’re going to want to see,” he says.

    Of course this is already what Google is trying to accomplish with Google Now.

    He says that his personal vision is that the project will be part of Google’s core technology, as opposed to a standalone product, though notes that it’s premature to speculate.

    He also suggests that people will give the technology permission to listen in on their lives, so they can better serve them (also like Google Now).

    [via reddit]

  • The Algorithm That Leads To The Robotic Revolution Has Been Found

    It was both charming and frightening when Google made the robot brain that learned what a cat was just by watching videos of them on YouTube. The kind of artificial intelligence that makes these feats possible will obviously be the downfall of man at the hands of the robots. Despite the warnings of many paranoid people, science continues to march forward towards our eventual extinction.

It all starts with an algorithm proposed by Dr. Łukasz Kaiser of Université Paris Diderot. The idea is that machines can learn how anything works just by watching it work. It’s as if Google’s cat-loving computer learned what a cat was and then learned how to care for one by watching more videos of cats being cared for.

This algorithm is not being tested on cats, though. The tests currently center on games and learning how to play them. The hope is that a computer can watch people playing a game like Connect 4 and then learn how to beat a human opponent just by watching. Sure, machines can beat humans at chess, but those machines have to be programmed by a human with all the potential moves available to them.

    What makes Dr. Kaiser’s research so fascinating, and terrifying, is that machines would only have to watch to learn. We as humans learn by observing things around us and we’re giving machines that same ability. Of course, now we have to discuss machine rights and whether or not I’m a machinist. Look, I don’t hate machines, but I would not like to be killed by something that can’t feel basic emotions.

    If you want to learn more about the downfall of humanity, check out Dr. Kaiser’s presentation on his research at the Third Conference on Artificial General Intelligence.

    Lukasz Kaiser-Playing General Structure Rewriting Games from Raj Dye on Vimeo.

    [h/t: Wired]

  • 16,000 Computers Find a Cat On the Internet

    [UPDATE] Google has chimed in on the results of the experiment and why it thinks machine learning is important for its future.

    [ORIGINAL]
Google X is the secretive inner lab where Google engineers try to make their wildest science fiction dreams come true. Two of the most famous projects to come out of Google X are the self-driving cars Google is now testing throughout the country and the recently announced Google Glass augmented reality headset. Those projects are public, but there are no doubt other, even more high-tech projects being kept secret.

The lid was lifted on another Google X project this week when Andrew Ng, the director of Stanford’s Artificial Intelligence Lab, told the New York Times about his recent experiments in machine learning with Google. Ng and Google engineers have created one of the largest artificial neural networks in the world, with 16,000 connected processors. Ng recently gave the network an interesting task: find a cat on YouTube.

Ng is an expert on machine learning, a branch of artificial intelligence research concerned with developing learning algorithms. Using state-of-the-art machine learning techniques, the Google X neural network was able to teach itself what a cat looks like using 10 million images from YouTube videos. Ng stated that the result of the simulation surprised researchers, as the network was never told what a cat was.

    Ng and other researchers will be presenting the results of the simulation later this week at the International Conference on Machine Learning. In addition to his machine learning research and teaching at Stanford, Ng is one of the co-founders of Coursera, the free online university that offers classes from professors at universities such as Stanford, the University of Michigan, and Princeton. If you are interested in just how the artificial neural network was able to identify a feline, take a look at the video below. In it, Ng explains to the 2011 Bay Area Vision Meeting the concepts behind the technology that was used to accomplish the feat:

    (Picture via arxiv.org)

  • Baby Robot Learns Its First Words

A team of scientists led by Caroline Lyon at the University of Hertfordshire, UK, has developed a “baby robot” that learns human speech in much the same way a human child does. The robot converses with a human teacher, and soon a string of incoherent babble from the robot begins forming actual words, in the same way a baby’s noises come together to form speech.

The project may lead to robots and AI that form words and sentences on their own, actually learning a language in the same way humans do. This would lead to more natural responses in AI and fewer errors when using programs like Siri.

    It can also be very helpful in understanding how humans learn language in a more profound way. It is known that between the ages of 6 and 14 months, children begin to go from babbling strings of nonsensical syllables to actually forming words. Once they have a few base words, these then provide contextual clues for learning other words, eventually giving children a full vocabulary of words to choose from and form complete sentences.

The team at Hertfordshire was motivated by its understanding of this process to program a humanoid robot called iCub with all of the syllables in the English language. This allows the robot (which they call DeeChee) to babble (rather creepily) like a baby. Researchers then treated DeeChee like a child, speaking to it and attempting to teach the robot rudimentary ideas like color and shape.

DeeChee was programmed to listen to the teacher and then respond with its own syllables. Each time it heard a syllable repeated by the teacher, it would give that syllable a higher score in its 40,000-syllable lexicon. It would then use these syllables more often, and even begin forming words to match what the teacher said.

When the teacher showed approval for recognizable words, those syllables were given even more value in the computer’s programming. As the process continued, more and more recognizable words would begin to show up amid the babble. The scientists believe this closely replicates the early stages of speech development. Babies often become sensitive to repeated words and attempt to replicate them in their own speech.
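The scoring loop described above can be sketched in a few lines. This is a toy illustration, not the Hertfordshire team’s actual code; the class and syllable names are hypothetical, and the reward values are made up:

```python
class BabbleLearner:
    """Toy sketch of DeeChee-style syllable scoring (names hypothetical)."""

    def __init__(self, syllables):
        # Every known syllable starts with the same low score.
        self.scores = {s: 1.0 for s in syllables}

    def hear(self, utterance, approved=False):
        # Reward each syllable the teacher used; explicit approval
        # (praise for a recognizable word) rewards it more strongly.
        boost = 3.0 if approved else 1.0
        for syl in utterance:
            if syl in self.scores:
                self.scores[syl] += boost

    def babble(self, n):
        # Prefer the highest-scoring syllables, mimicking how frequently
        # heard (and praised) sounds crowd out random babble over time.
        ranked = sorted(self.scores, key=self.scores.get, reverse=True)
        return ranked[:n]


learner = BabbleLearner(["da", "ba", "red", "box", "ma"])
learner.hear(["red", "box"])          # teacher describes a toy
learner.hear(["red"], approved=True)  # teacher praises "red"
print(learner.babble(2))              # the praised syllables dominate
```

With enough rounds of this, the highest-scoring syllables are exactly the ones the teacher kept repeating, which is the behavior the researchers report.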

    “That words can emerge from babble using a statistical learning process not specific to language demonstrates that this stage of language acquisition does not require hard-wired grammar faculties”, said Lyon in the research publication PLoS One. In other words, this method is the absolute first step humans take in the process of learning speech. Grammar and the cognitive association of words follow in humans, but not with DeeChee, who will never use those syllables to form complete sentences or understand their meaning.

But it is the first step in what could eventually lead to advanced AI that develops its own language from the ground up. As Lyon puts it: “If you want the robot to work with natural speech, then you might need to teach it from the very beginning.”

    [via: NewScientist]

    [img source: iCub.org]

  • I Like Turtlebot2:  Hobby Robotics Gives Your Laptop Legs

    This motion sensing robot from Willow Garage and Yujin Robot turns your laptop into a moving robot. So the next time you look for your laptop it may just be taking a stroll around your house.

The real reason for this technology is to give robotics nerds a Robot Operating System (ROS)-compatible base for building their own robots. At its core, it is just a basic robot: its brains are the laptop, a power supply is built into the base, it has the start of a body, and a built-in Microsoft Kinect gives the robot eyes and ears.

Students and teachers of robotics are falling in love with the Kinect. As a digital set of eyes and ears, it has some wonderful applications in robotics. Students have used it as a way to control robots, and with Turtlebot, they use it as the robot’s own sensors.

    Turtlebot was designed for students as a low cost, yet highly advanced introduction to robotics. The design of Turtlebot is simple, and it works right out of the box. The challenge is to use ROS to create new and advanced robotics using the basic framework.

If we get people started in robotics from a young age, we will see great things from them as their knowledge progresses. Having them jump right into ROS from the get-go puts them on the right track to learning the current state of robotics without rehashing what has already been done. If they jump right in and see that they can do amazing things from the beginning, students are more likely to stick with a subject that is difficult to get into.

    Follow these links for some cool and/or wacky things going on in robotics today. It will be interesting to see what the students of today will come up with tomorrow.

    Paralyzed Woman Drinks Coffee Using a Robotic Arm

    Guy Builds Functional Portal Turret in Robotics Class

    Flying Robots Synced So Well They Play Music Together

    Amazing Robot Can Jump 30 Feet High

    Meet The Robot That Eats And Poops To Power Itself

    [source: robots.net]

  • Computer Programs That Think Like Humans

    In the 1800s, “intelligence” meant that you were good at memorizing things. Today “intelligence” is measured through IQ tests where the average score for humans is 100. Researchers at the Department of Philosophy, Linguistics and Theory of Science at the University of Gothenburg, Sweden, have created a computer program that can score 150.

    IQ tests are based on two types of problems: progressive matrices, which test the ability to see patterns in pictures, and number sequences, which test the ability to see patterns in numbers. The most common math computer programs score below 100 on IQ tests with number sequences. For Claes Strannegård, researcher at the Department of Philosophy, Linguistics and Theory of Science, this was a reason to try to design “smarter” computer programs.

    “We’re trying to make programs that can discover the same types of patterns that humans can see,” he says.

    The research group believes that number sequence problems are only partly a matter of mathematics – psychology is important too. Strannegård demonstrates this point:

    “1, 2, …, what comes next? Most people would say 3, but it could also be a repeating sequence like 1, 2, 1 or a doubling sequence like 1, 2, 4. Neither of these alternatives is more mathematically correct than the others. What it comes down to is that most people have learned the 1-2-3 pattern.”
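Strannegård’s point can be made concrete with a few lines of code: the snippet below (an illustration, not the Gothenburg group’s program) enumerates three continuations of “1, 2, …” that are all equally defensible mathematically; only learned habit makes “3” feel like the answer.

```python
def continuations(seq):
    """Enumerate a few equally 'mathematical' next terms for a sequence."""
    a, b = seq[-2], seq[-1]
    hypotheses = {"arithmetic": b + (b - a)}   # 1, 2 -> 3 (add the difference)
    if a != 0 and b % a == 0:
        hypotheses["geometric"] = b * (b // a)  # 1, 2 -> 4 (doubling)
    hypotheses["repeating"] = seq[0]            # 1, 2 -> 1 (cycle restarts)
    return hypotheses


print(continuations([1, 2]))  # three defensible answers, one "human" one
```

A purely mathematical solver has no reason to prefer one hypothesis over another; the group’s programs break the tie with a psychological model of which patterns people actually learn.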

The group is therefore using a psychological model of human patterns in their computer programs, integrated with a mathematical model of human-like problem solving. The program that solves progressive matrices scores IQ 100 and has the unique ability to solve the problems without having access to any response alternatives. The group has improved the program that specializes in number sequences to the point where it can now ace the tests, implying an IQ of at least 150.

    “Our programs are beating the conventional math programs because we are combining mathematics and psychology. Our method can potentially be used to identify patterns in any data with a psychological component, such as financial data. But it is not as good at finding patterns in more science-type data, such as weather data, since then the human psyche is not involved,” says Strannegård.

    The research group has recently started collaborating with the Department of Psychology at Stockholm University, with a goal to develop new IQ tests with different levels of difficulty.

    “We have developed a pretty good understanding of how the tests work. Now we want to divide them into different levels of difficulty and design new types of tests, which we can then use to design computer programmes for people who want to practice their problem solving ability,” says Strannegård.

  • Professor Teaches Students How To Create Their Own Google

Sebastian Thrun, a former Stanford professor, decided in January to give up his tenure at the school and reach out to larger audiences. Specializing in machine learning and robotics, Thrun is excited to leave the constraints of formal education and become a key player in a new online university called Udacity.

The professor believes in reaching the people who truly have an aptitude for his material rather than just those who have the financial means. His goal is to reach students all over the world. The professor would also like to cover a wider range of topics than he could offer at Stanford.

    Udacity:

    We believe university-level education can be both high quality and low cost. Using the economics of the Internet, we’ve connected some of the greatest teachers to hundreds of thousands of students in almost every country on Earth.

Starting on February 20th, Udacity will begin offering two online classes:

    CS 101: BUILDING A SEARCH ENGINE
    Learn programming in seven weeks. We’ll teach you enough about computer science that you can build a web search engine like Google or Yahoo!

    CS 373: PROGRAMMING A ROBOTIC CAR
    In seven weeks you’ll learn how to program all the major systems of a robotic car, by the leader of Google and Stanford’s autonomous driving teams.

  • Google Funds AI Project to Implement “Regret”

    Google recently announced that it will help fund groundbreaking research by computer scientists and economists at Tel Aviv University.  The Blavatnik School of Computer Science is attempting to help computers make better decisions using a term they dubbed “regret.”

    Head of the program Professor Yishay Mansour began this project earlier this year at the International Conference on Learning Theory in Haifa, Israel.  He and the other researchers are working on algorithms that would allow computers to learn from their past failures in an effort to make better predictions.  This is referred to as “minimizing virtual regret” by Mansour.

    “If the servers and routing systems of the Internet could see and evaluate all the relevant variables in advance, they could more efficiently prioritize server resource requests, load documents and route visitors to an Internet site, for instance,” says Mansour.

    “Regret” is not really comparable to the human emotion that follows a night of heavy drinking or a bad relationship, but is more along the lines of measuring the distance between the desired outcome and the actual outcome.

    Since the actions of people are wildly unpredictable, the algorithm would need to allow adaptation on the fly, and real-time decision making.
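The article doesn’t publish Mansour’s algorithms, but a standard textbook scheme for minimizing this kind of regret is multiplicative weights: keep a weight per action, play actions in proportion to their weights, and shrink the weight of any action that does badly. The sketch below is that generic technique, not the Tel Aviv group’s code, and all numbers are illustrative:

```python
def multiplicative_weights(losses, eta=0.5):
    """Generic multiplicative-weights sketch (not Mansour's actual method).

    'Regret' here is the gap between the learner's cumulative expected
    loss and the loss of the best single action chosen in hindsight --
    the quantity a regret-minimizing algorithm drives toward zero.
    """
    n = len(losses[0])
    w = [1.0] * n        # one weight per available action
    total = 0.0
    for round_losses in losses:
        s = sum(w)
        # Expected loss of playing action i with probability w[i] / s.
        total += sum(w[i] / s * round_losses[i] for i in range(n))
        # Shrink the weights of actions that performed badly this round.
        w = [w[i] * (1 - eta * round_losses[i]) for i in range(n)]
    best_fixed = min(sum(l[i] for l in losses) for i in range(n))
    return total - best_fixed  # the learner's regret


# Action 0 is bad every round, action 1 is always good: the learner
# quickly shifts its weight to action 1, so regret stays small even
# though it started with no idea which action was better.
regret = multiplicative_weights([[1.0, 0.0]] * 10)
print(regret)
```

Playing uniformly at random over those 10 rounds would cost 5.0; the weighted learner’s regret comes out well under that, which is the whole point of learning from past failures.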

    “We are able to change and influence the decision-making of computers in real-time. Compared to human beings, help systems can much more quickly process all the available information to estimate the future as events unfold – whether it’s a bidding war on an online auction site, a sudden spike of traffic to a media website, or demand for an online product,” says Mansour.

    All of this research greatly interests Google, as would be expected.  As Google grows, their need to be able to process large amounts of data in real-time also grows.  Apparently the search giant is particularly interested in how this new technology can benefit AdWords and AdSense.

Mansour will work with a 20-person team on the project, headed by Professor Noam Nisan of Hebrew University. Also involved will be the head of Google Israel, Professor Yossi Matias, a Tel Aviv University faculty member.

    I, for one, welcome our new computer overlords.

  • 10 Details About How Google Handles Natural Language Search

Google has posted a thought-provoking piece to the Official Google Blog, discussing at length Google’s system for understanding synonyms in search. As author Steven Baker says, "An irony of computer science is that tasks humans struggle with can be performed easily by computer programs, but tasks humans can perform effortlessly remain difficult for computers."

    Google considers understanding human language to be one of the hardest problems in artificial intelligence, and the key to returning the best possible search results. While it is far from perfect now, Google has invested a great deal of time into this (5 years of research to be exact).

    To cut to the chase, here are some things pertaining to Google’s handling of synonyms that you should keep in mind:

1. Google constantly monitors its system for handling synonyms with regard to search result relevance.

    2. Google says synonyms affect 70% of user searches across over 100 languages.

    3. For every 50 queries where synonyms significantly improve search results, Google has only found one "truly bad" synonym.

    4. Google does not normally fix bad synonyms by hand, but rather makes changes to its algorithms to try and correct the problem. "We hope it will be fixed automatically in some future changes," Baker says.

5. Google has recently made a change to how its synonyms are displayed: in SERP snippets, synonym terms are bolded, just like the actual words you searched for.

6. Google uses "many techniques" to extract synonyms. Its systems analyze petabytes of data to build "an intricate understanding of what words can mean in different contexts."

    7. Some words or initials can have tons of different meanings, and Google uses other words in the query to help determine the correct ones. For example, there are over 20 possible meanings for the term "GM" that Google’s system knows something about.

    GM Synonyms

    8. Google includes variants on terms (such as singular and plural versions) within its "umbrella of synonyms".

9. Google still makes mistakes with synonyms.

    10. You can turn off a synonym in a search by adding a "+" before the term or by putting the words in quotation marks.

    Google wants feedback on algorithm mistakes. They’ll take it through the web search help center forum, or through a Twitter hashtag: #googlesyns.

    It will be interesting to see how far Google progresses in the area of natural language search, because Baker is absolutely right in that it is a key to providing more relevant results. If they can understand exactly what we want from our language, without us having to tweak it too much, that will be a tremendous stride for search. Instead of us trying to figure out what Google wants us to say, Google would just understand what we say. Luckily people have gotten much better at searching over the years, learning to enter longer, more specific queries.

    Related Articles:

    Google Launches Social Search Experiment

    Optimizing for Mixed Media Search Results

    Succeeding In SEO Requires Change