WebProNews

Category: AI Trends

Artificial Intelligence Trends

  • The Conversational Computing Revolution is Upon Us

    "We've long dreamed of talking computers," noted Barry Briggs, consultant and former CTO for Microsoft, where he helped lead the company's transition to the cloud; he is widely regarded as a pioneer in the computing industry. Briggs is referring to the advent of talking devices and conversational interfaces, which are just now beginning to reshape how we use computers and, more importantly, how we interact with data.

    Formerly, according to Briggs, talking computers (such as ELIZA) were more or less a trick. "After a time, because of the program's simplicity, the novelty wears off," he said.

    However, things are advancing so fast that conversational, Star Trek-style interaction with computers and devices foreshadows a transformative societal shift. Briggs said in January 2014, "The limitations are really gone. We have built software for decades now thinking about what are the limitations that the hardware or the amount of storage for the network place upon us. Those limitations don't exist anymore."

    Fast-forward to today, and Briggs writes:

    "Because of the nearly limitless computing and storage capacity in the cloud, and because of great advances in AI, machine learning, speech recognition, and data storage and analytics, Weizenbaum's primitive ELIZA program has evolved into something far more magical and useful," says Briggs. "Perhaps, even, we're at the advent of the next big shift in computing, fueled by artificial intelligence and built around a behavior that is most natural to humans: conversations."

    Briggs sees bots at the forefront of this conversational shift. "Want a pizza? Just ask Domino's chatbot. Or Pizza Hut's chatbot. Need to get somewhere? Ask Uber."

    He wonders, “Can bots become the new UI?”

    "For business, the transformation of conversational computing is just beginning. As bots are connected to corporate databases, for example, they'll simplify tasks from onsite repairs to scheduling meetings into simple conversational actions like, 'What parts do I need to fix this?' or 'What time is Customer X available next Monday?'

    Eventually, by taking advantage of the massive data storage and mining capabilities available in the cloud, bots will get to know you, providing intelligent suggestions like, 'While you're on site with the customer, I'd suggest examining the engine gearbox; I'm seeing some early failures in other installs,' and learning from previous experiences: 'Did the fix I suggested last time help?'

    We've come a long way from Weizenbaum's ELIZA. What started as a bit of sleight-of-hand programming has turned into an entirely new, intuitive and efficient way of interacting with computers. Conversational bots built on cloud-based artificial intelligence enable new frontiers in customer intimacy, simplify access to information, and help businesses and consumers make more informed decisions – quicker."

    Read the full article at the Microsoft Transform Blog…
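    The corporate-bot interactions Briggs describes follow a common pattern: classify the user's utterance into an intent, then dispatch to a backend lookup. Here is a minimal sketch of that pattern in Python; the intents, keywords, and parts "database" are all invented for illustration and do not reflect any vendor's actual bot framework.

```python
# Hypothetical sketch of the intent-routing pattern behind such bots:
# map an utterance to an intent via keyword overlap, then dispatch to a
# backend lookup. Intents, keywords, and the parts "database" are all
# invented for illustration.

INTENT_KEYWORDS = {
    "parts_needed": {"parts", "fix", "repair"},
    "availability": {"available", "free", "schedule"},
}

def lookup_parts(job_id):
    # Stand-in for a query against a corporate parts database.
    return ["gasket kit", "drive belt"]

def classify(utterance):
    # Normalize the utterance and score each intent by keyword overlap.
    words = {w.strip("?.,!").lower() for w in utterance.split()}
    scores = {i: len(words & kw) for i, kw in INTENT_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

def handle(utterance):
    # Route the classified intent to the matching backend action.
    intent = classify(utterance)
    if intent == "parts_needed":
        return "You'll need: " + ", ".join(lookup_parts("job-123"))
    if intent == "availability":
        return "Customer X is free Monday at 10:00."
    return "Sorry, I didn't catch that."
```

    Asking `handle("What parts do I need to fix this?")` routes to the parts lookup; production bots replace the keyword scoring with learned language models, but the routing shape is the same.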

  • Google Has a New Messaging App In the Works

    It looks like Google has new plans to make a bigger mark in the messaging space beyond its Messenger and Hangouts apps.

    The Wall Street Journal reports that the company has been working for over a year on a new mobile messaging service that uses artificial intelligence and chatbot technology.

    From the sound of it, Google has Facebook Messenger specifically in its sights. Messenger has been the fastest-growing of the top mobile apps over the past year. It's even catching up to YouTube in the U.S., according to Nielsen.

    Facebook has of course been making a lot of improvements to Messenger, including an assistant-like feature to help people get things done. It has even added functionality to get Uber rides.

    According to the Journal:

    For its new service, Google, a unit of Alphabet Inc., plans to integrate chatbots, software programs that answer questions inside a messaging app, the people familiar with the matter said. Users will be able to text friends or a chatbot, which will scour the Web and other sources for information to answer a question, those people said. It is unclear when Google will launch the service, or what it will be called.

    Naturally, Google isn’t commenting.

    Images via Google, Nielsen

  • Rohinie Bisesar, Canadian Stabbing Suspect, Confessed To Crime Through Email

    Rohinie Bisesar, the woman who randomly stabbed a Toronto woman in the chest, is now facing murder charges after appearing in court yesterday. Bisesar confessed to the crime in an email, saying she "felt the need to be extreme."

    She was initially charged with attempted murder, carrying a concealed weapon, and aggravated assault. On Thursday afternoon, the Toronto Police confirmed that the charges against the 40-year-old former financier had been upgraded to second-degree murder.

    Rosemarie ā€œKimā€ Junor, 28, a healthcare worker, died Wednesday night after sustaining stab wounds in the chest.

    The incident happened on Friday, December 11 at a Shoppers Drug Mart in Toronto's underground PATH network at 66 Wellington Street West. According to investigators, Rohinie Bisesar was clad in business attire and armed with a knife when she marched into the store and stabbed Junor without provocation.

    Junor studied at York University in Toronto, where Rohinie Bisesar earned her MBA at the Schulich School of Business. Authorities see the stabbing as random, stating that there is no evidence the women knew each other.

    Rohinie Bisesar admitted to the crime in an email she sent to The National Post just before her arrest. "I am sorry about the incidence. I felt the need to be extreme to see if it would work. I would normally not do such a thing," Bisesar wrote.

    The woman also asked in her email: "Do you know any top professionals in artificial intelligence, biotechnology, nanotechnology, satellites? Maybe Military. Maybe Government?"

    Bisesar added, "Something has been happening to me and this is not my normal self and I would like to know who and why this is happening. There is either a single person or more responsible and who and why would be nice to know."

    Karl Gutowski, a friend of Rohinie Bisesar's, confirmed that the email address was her personal one. He said Bisesar had been dealing with financial and personal problems for the past few years, adding that her mental state had worsened and she was hospitalized in 2014.

  • Facebook Artificial Intelligence Breakthroughs Detailed In New Paper


    Facebook announced some new milestones in its artificial intelligence research, including the ability to train computers to identify objects in photos, understand natural language, and predict and plan.

    The advancements came from the company's AI Research (FAIR) team, which will present them in a research paper at the AI conference NIPS. The paper details a system that "segments, or distinguishes between, objects in a photo." The system, Facebook says, segments images 30% faster than industry benchmarks while using 10 times less training data.

    Facebook CTO Mike Schroepfer says in a Facebook Engineering post:

    Many people think of Facebook as just the big blue app, or even as the website, but in recent years we've been building a family of apps and services that provide a wide range of ways for people to connect and share. From text to photos, through video and soon VR, the amount of information being generated in the world is only increasing. In fact, the amount of data we need to consider when we serve your News Feed has been growing by about 50 percent year over year – and from what I can tell, our waking hours aren't keeping up with that growth rate. The best way I can think of to keep pace with this growth is to build intelligent systems that will help us sort through the deluge of content.

    To tackle this, Facebook AI Research (FAIR) has been conducting ambitious research in areas like image recognition and natural language understanding.

    You can watch a trio of demo videos here.

  • Google RankBrain: 10 Things We Know About It

    Google revealed in an interview with Bloomberg Business published on Monday that it has a new ranking signal called RankBrain and that it considers it to be the third most important out of hundreds.

    In a nutshell, RankBrain is a machine learning-based signal that helps Google better deal with queries it hasn’t seen before, which actually makes up a substantial amount of the queries the search engine sees day-to-day.

    If this interests you, you’ll definitely want to check out the interview. We also covered it here, but if you’re more interested in an at-a-glance takeaway, here’s a quick list of ten things we know about RankBrain based on what Greg Corrado, a senior research scientist at Google, told Bloomberg.

    10 Things we know about Google’s RankBrain signal

    1. RankBrain is the third most important ranking signal in Google Search.

    2. RankBrain was deployed several months ago.

    3. RankBrain uses artificial intelligence to put written language into mathematical entities (vectors) that computers can understand.

    4. If RankBrain sees a word/phrase it doesn’t know, the machine guesses what words/phrases might have similar meanings.

    5. RankBrain specifically helps with never-before-seen search queries.

    6. RankBrain is better than humans (even Googlers) at guessing which results Google would rank number one for various queries.

    7. RankBrain is the first Google search ranking signal that actually learns on its own.

    8. Turning RankBrain off is as damaging to users as turning off half of Wikipedia pages.

    9. RankBrain is so effective, Google engineers were surprised at how well it worked.

    10. Machine learning is a major focus of Google right now, which probably means we’ll see RankBrain itself and other endeavors in this area improve greatly in the future.

    Image via Google

  • RankBrain: Google’s 3rd Most Important Ranking Signal


    RankBrain is reportedly the third most important signal Google's search algorithms use when determining what content to show users in search results. Out of over 200 signals, this is one of the most powerful. And we had never heard of it until now.

    RankBrain was revealed in a Bloomberg Business interview with Greg Corrado, a senior research scientist at Google. It was introduced into Google's search algorithm on a wide scale earlier this year, and according to Corrado, it quickly became the third most important signal out of "hundreds".

    Do you feel like Google’s search results have become significantly better this year? Have you noticed much difference? Share your thoughts in the comments.

    So what is it, exactly? It's apparently the first Google ranking signal that actually learns.

    For more of a quick “at-a-glance” look at what we know about RankBrain, go here.

    Corrado told Bloomberg, “The other signals, they’re all based on discoveries and insights that people in information retrieval have had, but there’s no learning.”

    According to the article, a “very large fraction” of Google queries are interpreted by the artificial intelligence system known as RankBrain. It also helps Google deal with “the 15 percent of queries a day it gets which its systems have never seen before,” such as “ambiguous queries, like ‘What’s the title of the consumer at the highest level of a food chain?’” the report explains.

    "RankBrain uses artificial intelligence to embed vast amounts of written language into mathematical entities — called vectors — that the computer can understand," it says. "If RankBrain sees a word or phrase it isn't familiar with, the machine can make a guess as to what words or phrases might have a similar meaning and filter the result accordingly, making it more effective at handling never-before-seen search queries."
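    The vector idea Bloomberg describes can be shown with a toy example: represent each word as a list of numbers, measure closeness with cosine similarity, and guess an unfamiliar word's meaning from its nearest neighbor. The words and numbers below are invented purely for illustration; real embedding systems learn hundreds of dimensions from huge text corpora, and none of this is Google's actual RankBrain.

```python
import math

# Toy word "embeddings", invented for illustration; real systems learn
# hundreds of dimensions from large text corpora. This sketches the
# general vector-similarity idea, not Google's actual RankBrain.
VECTORS = {
    "consumer": [0.9, 0.1, 0.3],
    "predator": [0.8, 0.2, 0.4],
    "title":    [0.1, 0.9, 0.2],
    "name":     [0.2, 0.8, 0.1],
}

def cosine(a, b):
    # Cosine similarity: 1.0 means same direction, 0.0 means unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def nearest(vec, exclude=None):
    # Guess which known word is closest in meaning to a given vector.
    return max(
        (w for w in VECTORS if w != exclude),
        key=lambda w: cosine(vec, VECTORS[w]),
    )
```

    Here `nearest(VECTORS["consumer"], exclude="consumer")` returns `"predator"`: the two vectors point in nearly the same direction, which is how a system like this can fall back to a similar known word when it meets an unfamiliar one.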

    According to the report, RankBrain has performed better than Corrado and company had expected, with a ten percent better success rate than humans at Google who were asked to guess which results Google would rank number one for various queries. Corrado even indicated that based on experiments Google has run, turning RankBrain off is as damaging to users as turning off half of Wikipedia pages.

    RankBrain is only one of many ways Google is increasingly turning to machine learning to improve its products. Google CEO Sundar Pichai discussed the company’s efforts several times throughout Alphabet’s Q3 earnings conference call last week.

    In prepared remarks (via a transcript of the call from SeekingAlpha), he told listeners, "Our investments in machine learning and artificial intelligence are a priority for us. Machine learning has long powered things like voice search, translation, and much more. And our machine learning is hard at work in mobile services like Now on Tap, which quickly assist you by providing additional useful information for whatever you're doing, right in the moment, anywhere on your phone. If you're an Android user that runs Marshmallow, try it out by long pressing the home button, when you're in the Map, it's very cool."

    "Another example is the Google Photos app, which leverages powerful machine learning technology to help people discover, organize and share their photos," he added. "It's a great product that people love. In fact, in just a few months since we launched it at Google I/O, Photos is now used by over 100 million users who have collectively uploaded more than 50 billion photos and videos."

    During the Q&A portion of the call, Pichai said, "On mobile search – to me – increasingly we see – we already announced that over 50% of our searches are on mobile. Mobile gives us very unique opportunities in terms of better understanding users, and over time as we use things like machine learning, I think we can make great strides. So my long-term view on this is, it is as compelling or in fact even better than the desktop, but it will take us time to get there, and we're going to be focused on getting there."

    In response to a later question, he said, "Machine learning is core transformative way by which we are rethinking everything we are doing. We've been investing in this area for a while. We believe we are state-of-the-art here. And the progress particularly in the last two years has been pretty dramatic. And so we are – we are thoughtfully applying it across all our products, be it search, be it ads, be it YouTube and Play et cetera. And we are in early days, but you will see us in a systematic manner, think about how we can apply machine learning to all these areas."

    Clearly machine learning is going to permeate more and more of the overall Google experience as time goes on, and with RankBrain having become such an important factor to search in such a short amount of time, we’d have to expect Google’s search experience to continue to improve rapidly.

    RankBrain has reportedly been deployed for a “few months”.

    So as a webmaster/site owner, is there any way you can take advantage of this third most important ranking signal? Unfortunately, there's probably not a lot you can do to directly influence how RankBrain views your content. That said, the signal could very well help Google better point people to your content as it better understands what users are looking for, particularly when it comes to long-tail searches, which still account for a substantial number of the queries Google sees on a regular basis.

    As for which signals are more important to Google than RankBrain, Google won't come out and say, but experts in the field like Danny Sullivan think they're most likely links (the signal that put Google on the map in the first place) and words (as in the words users enter in searches and the words on a website's pages).

    Do you expect RankBrain to have an effect on SEO strategy? Share your thoughts in the comments.

  • ‘M’ Is Facebook’s Personal Assistant Inside Messenger


    Ever since Facebook split Messenger from its flagship app (a move that wasn’t everyone’s favorite), the company has been making addition after addition to its core functionality. It’s pretty clear that Facebook wants Messenger to wear many hats.

    Today, the company has unveiled the robot hat.

    Facebook Messaging head David Marcus has pulled the cover off M, Facebook’s ambitious new addition to Messenger that serves as “a personal digital assistant inside of Messenger that completes tasks and finds information on your behalf.” We heard reports of this last month (when it was codenamed “Moneypenny”).

    According to Marcus, M is powered by AI but “supervised by people.”

    “Unlike other AI-based services in the market, M can actually complete tasks on your behalf. It can purchase items, get gifts delivered to your loved ones, book restaurants, travel arrangements, appointments and way more,” he says.

    M is clearly Facebook’s answer to Siri and Cortana, Apple and Microsoft’s respective personal assistants. But Facebook’s M has the benefit of cross-platform reach, especially considering Facebook Messenger is now globally available to anyone with a phone number.


    Facebook has been hard at work turning Messenger into something way more than a texting app.

    The company turned it into a developer platform, allowing appmakers to build directly for the platform. More germane to the news of M, Facebook is trying to turn Messenger into a platform to connect businesses with their customers – for the purposes of customer service, order tracking, and more.

    A few months ago, Facebook gave Messenger its own web version. Then, Facebook allowed games to be played inside the platform. Messenger also got P2P payments last month.

    Marcus says Facebook is "beginning to test" M. Considering what it is – a massive AI project that's at least for now dependent upon human oversight – and also considering how unbelievably slow Facebook is when it comes to some product rollouts, I wouldn't expect to see M pop up in your Messenger contacts anytime soon.

  • Stephen Hawking, Elon Musk, and Hundreds More Call for Ban on Autonomous Weapons


    According to Stephen Hawking, Elon Musk, Steve Wozniak, Noam Chomsky, and hundreds of AI and robotics researchers, governments should ban autonomous weapons in order to prevent a “military AI arms race.”

    In a letter signed by over 1,000, Musk, Hawking and others say that most AI researchers “have no interest in building AI weapons, and do not want others to tarnish their field by doing so, potentially creating a major public backlash against AI that curtails its future societal benefits.”

    The letter, which will be officially announced at the International Joint Conferences on Artificial Intelligence (IJCAI) in Buenos Aires, is organized by the Future of Life Institute. FLI describes itself as "a volunteer-run research and outreach organization working to mitigate existential risks facing humanity. We are currently focusing on potential risks from the development of human-level artificial intelligence."

    According to the organization, its mission is “to catalyze and support research and initiatives for safeguarding life and developing optimistic visions of the future.”

    And to FLI and the signatories of this open letter, flying death robots do not an optimistic future make.

    Here’s the full text of the letter:

    Autonomous weapons select and engage targets without human intervention. They might include, for example, armed quadcopters that can search for and eliminate people meeting certain pre-defined criteria, but do not include cruise missiles or remotely piloted drones for which humans make all targeting decisions. Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is — practically if not legally — feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.

    Many arguments have been made for and against autonomous weapons, for example that replacing human soldiers by machines is good by reducing casualties for the owner but bad by thereby lowering the threshold for going to battle. The key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity. There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people.

    Just as most chemists and biologists have no interest in building chemical or biological weapons, most AI researchers have no interest in building AI weapons — and do not want others to tarnish their field by doing so, potentially creating a major public backlash against AI that curtails its future societal benefits. Indeed, chemists and biologists have broadly supported international agreements that have successfully prohibited chemical and biological weapons, just as most physicists supported the treaties banning space-based nuclear weapons and blinding laser weapons.

    In summary, we believe that AI has great potential to benefit humanity in many ways, and that the goal of the field should be to do so. Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.

    Elon Musk, Steve Wozniak, and Stephen Hawking have all gone on record plenty of times with concerns about artificial intelligence.

    Image via Stephen Hawking, Facebook

  • Mark Zuckerberg on AI, Poking, and Whether or Not the Machines Win


    Facebook CEO Mark Zuckerberg recently held another Q&A, in which he talked artificial intelligence, virtual reality, and the future of the company. He also had nice exchanges with Stephen Hawking and Arnold Schwarzenegger, and answered a question about Poking.

    Here are some of his most interesting responses.

    On the topic of Facebook’s “real name policy” and its effect on the transgender community:

    Real names are an important part of how our community works for a couple of reasons.

    First, it helps keep people safe. We know that people are much less likely to try to act abusively towards other members of our community when they’re using their real names. There are plenty of cases — for example, a woman leaving an abusive relationship and trying to avoid her violent ex-husband — where preventing the ex-husband from creating profiles with fake names and harassing her is important. As long as he’s using his real name, she can easily block him.

    Second, real names help make the service easier to use. People use Facebook to look up friends and people they meet all the time. This is easy because you can just type their name into search and find them. This becomes much harder if people don’t use their real names.

    That said, there is some confusion about what our policy actually is. Real name does not mean your legal name. Your real name is whatever you go by and what your friends call you. If your friends all call you by a nickname and you want to use that name on Facebook, you should be able to do that. In this way, we should be able to support everyone using their own real names, including everyone in the transgender community. We are working on better and more ways for people to show us what their real name is so we can both keep this policy which protects so many people in our community while also serving the transgender community.

    On the future of technology:

    In 10 years, I hope we’ve improved a lot of how the world connects. We’re doing a few big things:

    First, we’re working on spreading internet access around the world through Internet.org. This is the most basic tool people need to get the benefits of the internet — jobs, education, communication, etc. Today, almost 2/3 of the world has no internet access. In the next 10 years, Internet.org has the potential to help connect hundreds of millions or billions of people who do not have access to the internet today.

    As a side point, research has found that for every 10 people who gain access to the internet, about 1 person is raised out of poverty. So if we can connect the 4 billion people in the world who are unconnected, we can potentially raise 400 million people out of poverty. That’s perhaps one of the greatest things we can do in the world.

    Second, we’re working on AI because we think more intelligent services will be much more useful for you to use. For example, if we had computers that could understand the meaning of the posts in News Feed and show you more things you’re interested in, that would be pretty amazing. Similarly, if we could build computers that could understand what’s in an image and could tell a blind person who otherwise couldn’t see that image, that would be pretty amazing as well. This is all within our reach and I hope we can deliver it in the next 10 years.

    Third, we’re working on VR because I think it’s the next major computing and communication platform after phones. In the future we’ll probably still carry phones in our pockets, but I think we’ll also have glasses on our faces that can help us out throughout the day and give us the ability to share our experiences with those we love in completely immersive and new ways that aren’t possible today.

    Those are just three of the things we're working on for the next 10 years. I'm pretty excited about the future.

    On Facebook’s AI initiatives:

    Most of our AI research is focused on understanding the meaning of what people share.

    For example, if you take a photo that has a friend in it, then we should make sure that friend sees it. If you take a photo of a dog or write a post about politics, we should understand that so we can show that post and help you connect to people who like dogs and politics.

    In order to do this really well, our goal is to build AI systems that are better than humans at our primary senses: vision, listening, etc.

    For vision, we’re building systems that can recognize everything that’s in an image or a video. This includes people, objects, scenes, etc. These systems need to understand the context of the images and videos as well as whatever is in them.

    For listening and language, we’re focusing on translating speech to text, text between any languages, and also being able to answer any natural language question you ask.

    This is a pretty basic overview. There’s a lot more we’re doing and I’m looking forward to sharing more soon.

    From Stephen Hawking:

    I would like to know a unified theory of gravity and the other forces. Which of the big questions in science would you like to know the answer to and why?

    That’s a pretty good one!

    I'm most interested in questions about people. What will enable us to live forever? How do we cure all diseases? How does the brain work? How does learning work, and how can we empower humans to learn a million times more?

    I’m also curious about whether there is a fundamental mathematical law underlying human social relationships that governs the balance of who and what we all care about. I bet there is.

    From Arnold Schwarzenegger:

    Mark, I always tell people that nobody is too busy to exercise, especially if Popes and Presidents find time. You’ve got to be one of the busiest guys on the planet, and younger generations can probably relate to you more than they can the Pope – so tell me how you find time to train and what is your regimen like? And by the way – will the machines win?

    Staying in shape is very important. Doing anything well requires energy, and you just have a lot more energy when you’re fit.

    I make sure I work out at least three times a week — usually first thing when I wake up. I also try to take my dog running whenever I can, which has the added bonus of being hilarious because that's basically like seeing a mop run.

    And no, the machines don’t win

    On his definition of happiness:

    To me, happiness is doing something meaningful that helps people and that I believe in with people I love.

    I think lots of people confuse happiness with fun. I don't believe it is possible to have fun every day. But I do believe it is possible to do something meaningful that helps people every day.

    As I’ve grown up, I’ve gained more appreciation for my close relationships — my wife, my partners at work, my close friends. Nobody builds something by themselves. Long term relationships are very important.

    On why he has a $1 set salary at Facebook:

    I’ve made enough money. At this point, I’m just focused on making sure I do the most possible good with what I have. The main way I can help is through Facebook — giving people the power to share and connecting the world. I’m also focusing on my education and health philanthropy work outside of Facebook as well. Too many people die unnecessarily and don’t get the opportunities they deserve. There are lots of things in the world that need to get fixed and I’m just lucky to have the chance to work on fixing some of them.

    On the future of Facebook:

    There are a few important trends in human communication that we hope to improve.

    First, people are gaining the power to share in richer and richer ways. We used to just share in text, and now we post mainly with photos. In the future video will be even more important than photos. After that, immersive experiences like VR will become the norm. And after that, we’ll have the power to share our full sensory and emotional experience with people whenever we’d like.

    Second, people are gaining the power to communicate more frequently. We used to have to be with someone in person. Then we had these bulky computers at our desks or that we could carry around. Now we have these incredible devices in our pockets all the time, but we only use them periodically throughout the day. In the future, we’ll have AR and other devices that we can wear almost all the time to improve our experience and communication.

    One day, I believe we’ll be able to send full rich thoughts to each other directly using technology. You’ll just be able to think of something and your friends will immediately be able to experience it too if you’d like. This would be the ultimate communication technology.

    Our lives improve as our communication tools get better in many ways. We can build richer relationships with the people we love and care about. We know about what’s going on in the world and can make better decisions in our jobs and lives. We are also more informed and can make better decisions collectively as a society. This increase in the power people have to share is one of the major forces driving the world today.

    And finally …

    Why did you come up with Poking?

    It seemed like a good idea at the time.

    It always does, Mark. It always does.

  • Steve Wozniak Is Also Concerned About Our Future Robot Overlords

    Apple co-founder Steve Wozniak has become the latest tech figure to warn us that the computers are learning and they’re going to destroy us all at some point.

    "Computers are going to take over from humans, no question," said Wozniak in an interview with the Australian Financial Review.

    "Like people including Stephen Hawking and Elon Musk have predicted, I agree that the future is scary and very bad for people. If we build these devices to take care of everything for us, eventually they'll think faster than us and they'll get rid of the slow humans to run companies more efficiently."

    You might recall that others have recently spoken out about the existential threat that unchecked Artificial Intelligence poses to the human race.

    "I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don't understand why some people are not concerned," said Bill Gates in a recent reddit AMA.

    Tesla and SpaceX founder Elon Musk has been the most vocal about his AI concerns.

    "Will we be the gods? Will we be the family pets? Or will we be ants that get stepped on? I don't know about that … But when I got that thinking in my head about if I'm going to be treated in the future as a pet to these smart machines … well I'm going to treat my own pet dog really nice," says Wozniak.

    Be nice to the machines, guys. They might remember.

  • Watson-Powered Toy Uses IBM Supercomputer to Develop Relationship with Your Kid

    We’ve come a long way from Barbie.

    If you’re looking for an interactive toy that can talk with your kid, tell knock-knock jokes, and help them learn by getting to know their specific needs ā€“ there’s a Kickstarter campaign for you. It’s called CogniToys, and it’s a super-smart toy dinosaur with its own personality.

    "Each toy will get to know the child and grow with him/her, interacting directly with them to create an experience around each child's personal interests. The toy will explore favorite colors, favorite toys, interests and use these to customize engagement. Even better, the toy has a personality of its own that changes over time," says its maker, Elemental Path.

    The CogniToys dino will let kids “ask thousands of questions and receive age-appropriate answers, give commands to the toy allowing the child to discover hidden talents, and hear a number of stories and even create their own stories.”

    So, how does it do this? With the help of IBM’s supercomputer, Watson. You may remember it as the one kicking all the ass on Jeopardy a while back.

    Elemental Path recently won the grand prize in IBM's Watson Mobile App Developer Challenge, which granted the company access to the supercomputer and all its knowledge. The company quickly moved into the pre-production/prototype phase, and is now fielding donations via Kickstarter to help get the toy's production into full swing.
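    How the toy actually talks to Watson isn't documented in the article; every name in the sketch below is hypothetical. The general shape, though, is presumably a loop that forwards the child's question to a cloud Q&A backend and screens the answer for age-appropriateness before speaking it:

```python
# Hypothetical sketch only: these function names and the topic blocklist are
# illustrative, not Elemental Path's or IBM's actual API.

def age_appropriate(answer: str, age: int, blocked=("gore", "violence")) -> bool:
    """Reject answers touching topics unsuitable for a young child."""
    if age < 13:
        return not any(topic in answer.lower() for topic in blocked)
    return True

def answer_child(question: str, age: int, ask_cloud) -> str:
    """Forward a question to a Q&A backend; fall back to a safe default."""
    answer = ask_cloud(question)
    if age_appropriate(answer, age):
        return answer
    return "That's a great question! Let's ask a grown-up together."

# Stand-in for the real (unknown) Watson-backed service:
fake_backend = lambda q: "The sky looks blue because air scatters blue light."
print(answer_child("Why is the sky blue?", age=6, ask_cloud=fake_backend))
```

    The interesting design constraint is that filtering step: a general-purpose cloud service will answer anything, so the age gate has to live somewhere between the backend and the child.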

    And they’re doing pretty well. In fewer than three days, the project has already doubled its $50,000 goal and is approaching 1,000 individual backers.

    You can back the Kickstarter right now and a $99 pledge gets you a CogniToy Dino. Elemental Path says it’s going to make additional colors of its product with the extra money it receives. Although, with the current rate of pledges holding, the company is going to have to make up some new stretch goals.

    Image via CogniToys, Kickstarter

  • Bill Gates Is Concerned About the Rise of the Machines, and Why the Hell Aren’t You?

    Billionaire philanthropist Bill Gates, who knows a thing or two about technology, is concerned about the day when our artificial intelligence gets a little too smart for its own good and decides it probably doesn’t have to listen to the dumb humans anymore.

    Also, why aren’t you concerned?, he wonders.

    Gates hosted a reddit AMA on Wednesday, and was asked about the "existential threat" of machine superintelligence. His answer? Hell yeah it's an issue.

    “I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned,” said Gates.

    As of late, Tesla and SpaceX founder Elon Musk has been the most outspoken rich tech dude when it comes to the dangers of AI. He recently called for extensive research to make sure AI doesn't kill us all, and even made a huge donation to the cause. In the past, he's said things like …

    We need to be super careful with AI. Potentially more dangerous than nukes.

    and

    You have no idea how fast [AI] is growing at a pace close to exponential. The risk of something seriously dangerous happening is in the five year timeframe. 10 years at most. This is not a case of crying wolf about something I don't understand.

    Gates also had this to say during his AMA:

    “There will be more progress in the next 30 years than ever. Even in the next 10 problems like vision and speech understanding and translation will be very good. Mechanical robot tasks like picking fruit or moving a hospital patient will be solved. Once computers/robots get to a level of capability where seeing and moving is easy for them then they will be used very extensively.”

    Oh goody.

    Image via thegatesnotes, YouTube

  • Elon Musk Calls for Research to Make Sure Artificial Intelligence Doesn’t Kill Us All

    UPDATED BELOW

    For Tesla and SpaceX CEO Elon Musk, figuring out how to avoid the "potential pitfalls" of artificial intelligence is just as important as – if not more important than – advancing it.

    Musk, who has been warning us about the possible dangers of AI for some time now, is once again calling for more research into AI safety. Musk has signed and is promoting an open letter from the Future of Life Institute that calls for “research not only on making AI more capable, but also on maximizing the societal benefit … ”

    “The adoption of probabilistic and decision-theoretic representations and statistical learning methods has led to a large degree of integration and cross-fertilization among AI, machine learning, statistics, control theory, neuroscience, and other fields. The establishment of shared theoretical frameworks, combined with the availability of data and processing power, has yielded remarkable successes in various component tasks such as speech recognition, image classification, autonomous vehicles, machine translation, legged locomotion, and question-answering systems,” says the letter.

    “There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase. The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable. Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.”

    The Future of Life Institute is "a volunteer-run research and outreach organization working to mitigate existential risks facing humanity." The group's current focus is on "potential risks from the development of human-level artificial intelligence."

    You may be unfamiliar with this specific interest of Musk's, but the billionaire has been rather outspoken about it – especially in the last year or so. In June of last year, Musk pretty much admitted to investing in an up-and-coming AI company to keep an eye on them.

    “Yeah. I mean, I donā€™t think ā€“ in the movie Terminator, they didnā€™t create A.I. to ā€“ they didnā€™t expect, you know some sort of Terminator-like outcome. It is sort of like the Monty Python thing: Nobody expects the Spanish inquisition. Itā€™s just ā€“ you know, but you have to be careful,” he said.

    Soon after, he tweeted that AI was “potentially more dangerous than nukes.”

    Then, a few months later, Musk had this to say as a reply to an article on a futurology site:

    “I am not alone in thinking we should be worried. The leading AI companies have taken great steps to ensure safety. They recognize the danger, but believe that they can shape and control the digital superintelligences and prevent bad ones from escaping into the Internet. That remains to be seen … ”

    Point being – Elon Musk is pretty concerned about the robot apocalypse, and thinks you should be too.

    “We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do,” says the letter.

    Yeah, not what they want to do. That’s when everything goes to hell in a handbasket.

    UPDATE 1: Musk has just donated $10 million to The Future of Life Institute.

  • Elon Musk Once Again Warns of the Looming Robot Apocalypse

    Tesla and SpaceX founder Elon Musk has once again taken to a public forum to warn everyone that they shouldn’t sleep on recent developments in the artificial intelligence field. In short, Musk says that the chance of “something seriously dangerous happening” is likely in five years or so, and a near certainty within a decade.

    Musk posted his warning on science and futurology site edge.org, as a reply to an article titled The Myth of AI. At some point, Musk deleted his comment – but quick redditors over at the futurology subreddit caught it.

    Here’s what he had to say:

    The pace of progress in artificial intelligence (I'm not referring to narrow AI) is incredibly fast. Unless you have direct exposure to groups like Deepmind, you have no idea how fast – it is growing at a pace close to exponential. The risk of something seriously dangerous happening is in the five year timeframe. 10 years at most. This is not a case of crying wolf about something I don't understand.

    I am not alone in thinking we should be worried. The leading AI companies have taken great steps to ensure safety. They recognize the danger, but believe that they can shape and control the digital superintelligences and prevent bad ones from escaping into the Internet. That remains to be seen…

    This is by no means Musk’s first warning of the type.

    In August, he tweeted that AI was potentially more dangerous than nukes.

    A few months ago he vocalized his concerns regarding a possible Skynet scenario. In fact, he pretty much admitted to investing in an AI company so that he could keep an eye on them.

    And barely three weeks ago, speaking at MIT's Aeronautics and Astronautics department's Centennial Symposium, Musk compared harnessing AI to controlling a demon.

    "I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it's probably that. So we need to be very careful with the artificial intelligence.

    "Increasingly scientists think there should be some regulatory oversight maybe at the national and international level, just to make sure that we don't do something very foolish. With artificial intelligence we are summoning the demon. In all those stories where there's the guy with the pentagram and the holy water, it's like yeah he's sure he can control the demon. Didn't work out."

    I don’t know if you’re inclined to buy into the plausibility of a robot apocalypse but if you’re going to listen to someone, Elon Musk has to be near the top of the list. Ignore at your own risk.

  • Elon Musk Makes Bizarre Matrix-Style Prediction

    Elon Musk, Internet darling and CEO of Tesla Motors, was speaking before the MIT Aeronautics and Astronautics department's Centennial Symposium on Friday. In the middle of his session, he mused on some foreboding thoughts about artificial intelligence.

    "I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it's probably that. So we need to be very careful with the artificial intelligence.

    "Increasingly scientists think there should be some regulatory oversight maybe at the national and international level, just to make sure that we don't do something very foolish.

    "With artificial intelligence we are summoning the demon. In all those stories where there's the guy with the pentagram and the holy water, it's like yeah he's sure he can control the demon. Didn't work out."

    This is not the first time that Musk has issued a warning about artificial intelligence. Back in August, he tweeted that AI was potentially more dangerous than nukes.

    In that tweet, Musk referenced "Superintelligence by Bostrom", evidently referring to Nick Bostrom's recent book Superintelligence: Paths, Dangers, Strategies.

    The Amazon description for Bostromā€™s book reads:

    Superintelligence asks the questions: What happens when machines surpass humans in general intelligence? Will artificial agents save or destroy us?

    If machine brains surpassed human brains in general intelligence, then this new superintelligence could become extremely powerful – possibly beyond our control. As the fate of the gorillas now depends more on humans than on the species itself, so would the fate of humankind depend on the actions of the machine superintelligence.

    Most of what lay persons "know" about artificial intelligence and the dangers therein lies in movies like The Matrix and the Terminator films and television show. In these imaginings, artificial intelligence grows under control until it reaches a tipping point where it quickly outpaces man's ability to control it. It progresses on, unfeeling and cold, reasoning that man is too flawed and weak to be depended upon as a facet of its existence. It then seeks to either eliminate or subjugate humans.

  • Google Beefs Up Its AI Work With New Acquisitions And Oxford University Partnership

    Google announced that it has formed a partnership with the Artificial Intelligence research teams at Oxford University to “accelerate” its efforts in AI, and has hired seven founders of two AI startups out of the university.

    Google acquired AI company DeepMind earlier this year for reportedly somewhere in the $400 million to $500 million range, and it is the resulting group – Google DeepMind – that is partnering with Oxford and adding the founders of Dark Blue Labs and Vision Factory to its team.

    “Prof Nando de Freitas, Prof Phil Blunsom, Dr Edward Grefenstette and Dr Karl Moritz Hermann, who teamed up earlier this year to co-found Dark Blue Labs, are four world leading experts in the use of deep learning for natural language understanding,” says Demis Hassabis, co-founder of DeepMind and Vice President of Engineering at Google. “They will be spearheading efforts to enable machines to better understand what users are saying to them.”

    "Also joining the DeepMind team will be Dr Karen Simonyan, Max Jaderberg and Prof Andrew Zisserman, one of the world's foremost experts on computer vision systems, a Fellow of the Royal Society, and the only person to have been awarded the prestigious Marr Prize three times," Hassabis adds. "As co-founders of Vision Factory, their aim was to improve visual recognition systems using deep learning. Dr Simonyan and Prof Zisserman developed one of the winning systems at the recent 2014 ImageNet competition, which is regarded as the most competitive and prestigious image recognition contest in the world."

    While the three professors are joining Google, they’ll also be holding joint appointments at Oxford, and will continue to spend part of their time there. Google will also be making an undisclosed, but “substantial” donation to create a partnership with Oxford’s computer science and engineering departments. This will include an internship program, and a series of lectures and workshops.

    Image via Google

  • Elon Musk Continues to Warn of Impending Robot Apocalypse

    While the rest of the country freaks out about some possible Ebola–Dustin Hoffman–Rene Russo scenario, Tesla and SpaceX founder Elon Musk is keeping his eye on the ball.

    Musk, who knows a thing or two about things, continues to warn us about the greatest threat to human existence – the rise of artificial superintelligence and the subsequent robot apocalypse. According to Musk, AI could wind up being more dangerous to the human race than nuclear weapons.

    You tell ’em, Elon.

    You may recall that Elon Musk has been warning of the possible dangers of artificial intelligence for some time. Back in June, we learned how the tech tycoon is particularly concerned about Skynet. It's worth rehashing this conversation Musk had with two CNBC hosts concerning his investments in a new AI company:

    MUSK: Right. I was also an investor in DeepMind before Google acquired it and Vicarious. Mostly I sort of – it's not from the standpoint of actually trying to make any investment return. It's really, I like to just keep an eye on what's going on with artificial intelligence. I think there is potentially a dangerous outcome there and we need to –

    EVANS: Dangerous? How so?

    MUSK: Potentially, yes. I mean, there have been movies about this, you know, like 'Terminator.'

    EVANS: Well yes, but movies are – even if that is the case, what do you do about it? I mean, what dangers do you see that you can actually do something about?

    MUSK: I don't know.

    BOORSTIN: Well why did you invest in Vicarious? What exactly does Vicarious do? What do you see it doing down the line?

    MUSK: Well, I mean, Vicarious refers to it as recursive cortical networks. Essentially emulating the human brain. And so I think –

    BOORSTIN: So you want to make sure that technology is used for good and not Terminator-like evil?

    MUSK: Yeah. I mean, I don't think – in the movie "Terminator," they didn't create A.I. to – they didn't expect, you know some sort of Terminator-like outcome. It is sort of like the Monty Python thing: Nobody expects the Spanish inquisition. It's just – you know, but you have to be careful. Yeah, you want to make sure that –

    In other words, Elon Musk wants to keep an eye on things. Good lookin out.

    Of course, it’s entirely possible that Elon Musk is a Terminator and is simply trying to gain our trust.

    God creates AI, God destroys AI, God creates idiot humans, idiot humans destroy God, idiots humans create AI, robots brutally annihilate idiot humans, Musk inherits the Earth. I think it’s something like that.

  • Elon Musk Is Pretty Concerned About Skynet

    Tesla and SpaceX CEO Elon Musk, often referred to as the real-life Tony Stark, can envision a world where we are all overtaken by our robot overlords.

    Specifically, the whole Terminator scenario.

    CNBC’s Julia Boorstin and Kelly Evans recently asked Musk about a recent investment he made in Vicarious, a company founded in 2010 that is ā€œbuilding software that thinks and learns like a human.ā€ After noting that Musk rarely invests in companies other than his own, Boorstin asked Musk why Vicarious?

    And he basically went all Sarah Connor on them. You have to read this amazing interaction, courtesy of Business Insider:

    MUSK: Right. I was also an investor in DeepMind before Google acquired it and Vicarious. Mostly I sort of – it's not from the standpoint of actually trying to make any investment return. It's really, I like to just keep an eye on what's going on with artificial intelligence. I think there is potentially a dangerous outcome there and we need to –

    EVANS: Dangerous? How so?

    MUSK: Potentially, yes. I mean, there have been movies about this, you know, like ‘Terminator.’

    EVANS: Well yes, but movies are – even if that is the case, what do you do about it? I mean, what dangers do you see that you can actually do something about?

    MUSK: I don’t know.

    BOORSTIN: Well why did you invest in Vicarious? What exactly does Vicarious do? What do you see it doing down the line?

    MUSK: Well, I mean, Vicarious refers to it as recursive cortical networks. Essentially emulating the human brain. And so I think –

    BOORSTIN: So you want to make sure that technology is used for good and not Terminator-like evil?

    MUSK: Yeah. I mean, I don't think – in the movie "Terminator," they didn't create A.I. to – they didn't expect, you know some sort of Terminator-like outcome. It is sort of like the Monty Python thing: Nobody expects the Spanish inquisition. It's just – you know, but you have to be careful. Yeah, you want to make sure that –

    “I don’t know. But there are some scary outcomes. And we should try to make sure the outcomes are good, not bad,” said Musk.

    Elon Musk invested in an AI company to keep an eye on them. Oh my god that’s incredible.

    Musk was one of a handful of high-profile tech names to participate in a reported $40 million investment in Vicarious this spring. Musk joined Mark Zuckerberg and Ashton Kutcher as investors in the company that's "developing machine learning software based on the computational principles of the human brain." PayPal founder Peter Thiel and Facebook co-founder Dustin Moskovitz are also investors in Vicarious.

  • Turing Test Passed by Chatbot Simulating a 13-Year-Old

    Computer science researchers this week revealed that the road to artificial intelligence (AI) has been paved a bit further. A computer algorithm developed in Saint Petersburg, Russia passed the Turing test at an event at the Royal Society of London this weekend.

    The Turing test, laid out by computer scientist Alan Turing in 1950, is a method by which researchers determine how human-like a computer can be. The test involves humans interacting with the computer in a blind test and evaluating whether they believe it to be human.

    The algorithm that passed the test is a chatbot named Eugene. The program attempts to simulate what it would be like to have a conversation with a 13-year-old boy.

    Eugene convinced around one-third of the 30 judges at the event that it was actually a 13-year-old boy. According to the rules of the event, a score of over 30% is considered a success.
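    The event's scoring rule is simple enough to state in a few lines. A minimal sketch (the function names here are mine, not the event's):

```python
def turing_pass_rate(verdicts):
    """Fraction of judges who believed the program was human."""
    return sum(verdicts) / len(verdicts)

def passes_turing_event(verdicts, threshold=0.30):
    """Event rule: fooling more than 30% of judges counts as a pass."""
    return turing_pass_rate(verdicts) > threshold

# 30 judges, 10 fooled, roughly the result reported for Eugene:
verdicts = [True] * 10 + [False] * 20
print(passes_turing_event(verdicts))  # 10/30 is about 33%, above the 30% bar
```

    Note how low the bar is: the program does not need to fool a majority of judges, only a bit under a third of them, which is part of why the result drew criticism.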

    There is some controversy, however, as to whether Eugene is the first program to successfully pass the Turing test. Critics point out that another chatbot named Cleverbot passed the Turing test back in 2011 – and with nearly 60% of its judges believing it to be human. The fact that Eugene only simulates a 13-year-old is also a point of contention.

    “Some will claim that the Test has already been passed,” said Kevin Warwick, a professor at the University of Reading, which organized the event. “The words Turing Test have been applied to similar competitions around the world. However this event involved the most simultaneous comparison tests than ever before, was independently verified and, crucially, the conversations were unrestricted. A true Turing Test does not set the questions or topics prior to the conversations. We are therefore proud to declare that Alan Turing’s Test was passed for the first time on Saturday.”

    Eugene has been under development since 2001. The program’s lead developer, Vladimir Veselov, stated that the team’s idea was for Eugene to be able to speak about anything, but have its claimed age disguise holes in its knowledge.

    "We spent a lot of time developing a character with a believable personality," said Veselov. "This year we improved the 'dialog controller' which makes the conversation far more human-like when compared to programs that just answer questions. Going forward we plan to make Eugene smarter and continue working on improving what we refer to as 'conversation logic.'"

    Image via the University of Reading

  • Google Is Following Its Robot Purchases With An Artificial Intelligence Acquisition

    Update: DeepMind will reportedly be integrated into Google’s search team. That may be good news for those fearing a robot apocalypse, but who’s to say the robots won’t make use of other technologies employed throughout the company?

    Google has reportedly acquired artificial intelligence technology company DeepMind. Multiple reports indicate that Google has confirmed the acquisition, but not the price. These reports, citing unnamed sources, have it somewhere in the $400 million – $500 million range.

    Not much is known about just what Google has in mind for this acquisition, but according to Liz Gannes at Re/code, it’s essentially a talent acquisition, which includes the talent of Demis Hassabis, who is described as a “games prodigy and neuroscientist.”

    Naturally, given other recent acquisitions by Google, one’s mind tends to gravitate toward self-aware robots, who may or may not destroy us all. DeepMind, however, specializes in games and ecommerce. So far at least.

    This is what the company says about itself on its homepage:

    DeepMind is a cutting edge artificial intelligence company. We combine the best techniques from machine learning and systems neuroscience to build powerful general-purpose learning algorithms.

    Founded by Demis Hassabis, Shane Legg and Mustafa Suleyman, the company is based in London and supported by some of the most iconic technology entrepreneurs and investors of the past decade. Our first commercial applications are in simulations, e-commerce and games.

    According to The Information, Google has agreed to set up an ethics board to make sure its AI technology “isn’t abused.”

    That report also claims Facebook was in talks to acquire the company last year.

    Image via DeepMind

  • IBM’s Watson Cursed Like a Sailor After Being Taught the Urban Dictionary

    IBM’s Watson Cursed Like a Sailor After Being Taught the Urban Dictionary

    It’s a fact that Watson, IBM’s massive AI project, is smarter than the average human. I mean, it kicked Ken Jennings’ ass on Jeopardy that one time. “Smart,” in that respect, meant the ability to pull knowledge from terabytes worth of Wikipedia data based on verbal clues.

    But Ken Jennings (and you and I) have Watson beat in one measure of intelligence: human language. Once that fact no longer holds true, well, we're all in a hell of a lot of trouble.

    Nevertheless, IBM is trying to improve Watson’s human language prowess. And to do that, Watson needs to understand how humans talk – how they really talk. I’m talking about slang, of course. People simply don’t realize just how complicated human language really is. Teaching Watson proper and direct English is nowhere near good enough to turn it into a fully functional conversation partner. I mean, how the hell is it going to know how to respond to YOLO?

    So, to work on that slang element of human language, IBM researchers decided to teach Watson the Urban Dictionary – you know, the online database of anything and everything human beings say – from the inane to the foul.

    Apparently, this led to a problem. Watson developed a mouth like a sailor. From Fortune:

    Watson couldn’t distinguish between polite language and profanity — which the Urban Dictionary is full of. Watson picked up some bad habits from reading Wikipedia as well. In tests it even used the word “bullshit” in an answer to a researcher’s query.

    Ultimately, Brown’s 35-person team developed a filter to keep Watson from swearing and scraped the Urban Dictionary from its memory. But the trial proves just how thorny it will be to get artificial intelligence to communicate naturally. Brown is now training Watson as a diagnostic tool for hospitals. No knowledge of OMG required.

    Suck it, Trebek.

    [Fortune via The Atlantic]