WebProNews

Category: AI Trends

Artificial Intelligence Trends

  • Tesla’s Model S and Model X Refresh Ditches the Gear Shift

    Tesla has unveiled its refreshed Model S and Model X, including a radically redesigned steering wheel and an absent gear shifter.

    Whether automatic or standard, the gear shifter has been a major part of almost every vehicle ever made. Leave it to Tesla to rethink it…and ditch it.

    Electrek got a look at an internal Tesla document that discusses how the vehicles will know which drive mode to activate:

    The vehicle uses its Autopilot sensors to intelligently and automatically determine intended drive modes and select them. For example, if the front of Model S/X is facing a garage wall, it will detect this and automatically shift to Reverse once the driver presses the brake pedal. This eliminates one more step for the drivers of the world’s most intelligent production cars.
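    Tesla has not published the logic behind this feature, but the garage-wall example suggests a simple clearance-based heuristic. The sketch below is purely illustrative; the function name, sensor inputs and one-meter threshold are assumptions, not Tesla's implementation.

```python
# Hypothetical sketch of obstacle-based drive-mode selection. Tesla's actual
# implementation is not public; the inputs and thresholds here are invented.

def select_drive_mode(front_clearance_m, rear_clearance_m, brake_pressed):
    """Pick a likely intended drive mode from simple clearance readings."""
    if not brake_pressed:
        return "Park"  # no driver input yet; stay parked
    # If the path ahead is blocked (e.g. a garage wall) but the rear is
    # clear, the driver almost certainly intends to back out.
    if front_clearance_m < 1.0 and rear_clearance_m >= 1.0:
        return "Reverse"
    if rear_clearance_m < 1.0 and front_clearance_m >= 1.0:
        return "Drive"
    return "Neutral"  # ambiguous; fall back to manual selection

print(select_drive_mode(0.5, 10.0, True))  # → Reverse
```

    The force-touch controls on the center console would then cover exactly the ambiguous cases where a heuristic like this cannot decide.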

    According to Electrek, “Tesla is making sure that people are not too confused about it by adding force touch controls for ‘Park, Reverse, Neutral, and Drive’ drive modes at the base of the phone charger on the center console.”

    Needless to say, this is a huge departure from what people are familiar with, but Tesla has built an entire business of upending the traditional auto. This is one of those features many people will probably wonder how they ever lived without.

  • Baidu Gets Permission to Test Self-Driving Cars in California

    Baidu is the latest company to receive permission to test self-driving cars in California.

    Self-driving and autonomous vehicles are considered one of the next big steps in the automotive industry. Virtually every manufacturer is working on some kind of autonomous software, with varying degrees of success.

    Baidu is the latest company ready to test its self-driving tech, and California has granted it a permit to test three autonomous cars, without a driver behind the wheel, according to Reuters. Baidu is the sixth company to receive a permit to test without a driver, with a total of 58 companies cleared to test self-driving cars with a backup driver.

    As Reuters points out, Baidu is currently testing some 500 self-driving vehicles, although most are in China. The company has also mostly tested vehicles with a backup driver, and has yet to announce when it will start testing vehicles with no driver.

  • Apple Shakeup Has Hardware Chief Taking On New Secret Project

    Apple has announced a shakeup of its executive team, with hardware chief Dan Riccio taking on a new project.

    Dan Riccio has been with Apple since 1998 as a leader on the Product Design team. From there he became vice president of iPad Hardware Engineering in 2010, and then executive team leader of Hardware Engineering in 2012. Riccio now moves on as a vice president of engineering. Most significantly, Apple says his new role will involve working on a new project and reporting directly to CEO Tim Cook.

    In the meantime, John Ternus will take over Apple’s Hardware Engineering in place of Riccio.

    “Every innovation Dan has helped Apple bring to life has made us a better and more innovative company, and we’re thrilled that he’ll continue to be part of the team,” said Tim Cook, Apple’s CEO. “John’s deep expertise and wide breadth of experience make him a bold and visionary leader of our Hardware Engineering teams. I want to congratulate them both on these exciting new steps, and I’m looking forward to many more innovations they’ll help bring to the world.”

    “Working at Apple has been the opportunity of a lifetime, spent making the world’s best products with the most talented people you could imagine,” said Riccio. “After 23 years of leading our Product Design or Hardware Engineering teams — culminating with our biggest and most ambitious product year ever — it’s the right time for a change. Next up, I’m looking forward to doing what I love most — focusing all my time and energy at Apple on creating something new and wonderful that I couldn’t be more excited about.”

    While Apple is not giving any hints about what project Riccio will be working on, one possibility is the Apple Car. The project is currently being led by Apple AI chief John Giannandrea, but it seems likely Riccio’s vast experience in hardware engineering would be an invaluable asset to the project. Alternatively, Riccio could be helming an entirely unrelated, yet to be revealed project.

  • Amazon’s Alexa Has ‘Hunches’ and Acts On Them

    Amazon’s Alexa has crossed a significant milestone, having “hunches” regarding what needs to be done and taking the appropriate action.

    As smart assistants become more ubiquitous, a key feature is the ability to predict what a person will want and when they will want it, then take preemptive action. This is especially important for Amazon, as Alexa’s head scientist, Rohit Prasad, argues that AI should be judged on how helpful it is, rather than by the old Turing Test.

    According to the company, Alexa now learns about a person’s habits and forms “hunches” about what they want to do. For example, if a person normally turns off a particular light before going to bed, but forgets to one night, Alexa can either remind them or automatically turn it off. In contrast, if a person habitually leaves a particular light on, such as a porch light, Alexa will leave it on as well.

    Amazon says the new feature will allow Alexa to “proactively turn off the lights, adjust the thermostat, turn down the water heater, or start the robotic vacuum when Alexa has a hunch that everyone is away from home or asleep.”
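    Amazon has not detailed how hunches are computed, but the behavior described above amounts to a routine-deviation check. The sketch below is a hypothetical illustration; the data model, five-night minimum and one-hour threshold are assumptions, not Amazon's implementation.

```python
from statistics import mean

def has_hunch(history_off_hours, current_hour, device_is_on, min_nights=5):
    """Flag a deviation from routine: the device is on well past the hour it
    is usually switched off. Returns False when there is too little history
    to establish a habit (e.g. a porch light with no off pattern)."""
    if len(history_off_hours) < min_nights or not device_is_on:
        return False
    typical_off = mean(history_off_hours)
    # More than an hour past the usual switch-off time? Raise a hunch.
    return current_hour > typical_off + 1

# Bedroom light is normally off around 22:00; at 23:30 it is still on.
print(has_hunch([22, 22, 21, 22, 23], 23.5, True))  # → True
```

    The same check explains the porch-light case: a light that is deliberately left on never builds up an "off" history, so no hunch is ever raised for it.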

    Users can control whether Alexa proactively acts on its hunches, although it’s a safe bet many customers will enable the feature.

  • Gartner: ‘Responsible AI a Societal Concern’

    Gartner has released its Predicts 2021 reports and the outlook for artificial intelligence (AI) includes some troubling growing pains.

    AI is one of the fastest growing industries, and one of the most controversial. Experts have come out on all sides of the debate, with some believing it will help solve some of mankind’s most elusive challenges.

    Others, including Elon Musk, believe it represents one of the biggest existential threats to humanity. Recent research suggests that a super-intelligent AI will be impossible to control, further raising concerns.

    Gartner’s latest reports indicate there are a number of more pressing issues that could pose challenges for AI researchers and the industry at large. Gartner highlights five specific ways AI will impact society:

    • By 2025, the concentration of pretrained AI models among 1% of AI vendors will make responsible AI a societal concern.

    • In 2023, 20% of successful account takeover attacks will use deepfakes to socially engineer users to turn over sensitive data or move money into criminal accounts.

    • By 2024, 60% of AI providers will include a means to mitigate possible harm as part of their technologies.

    • By 2025, 10% of governments will use a synthetic population with realistic behavior patterns to train AI while avoiding privacy and security concerns.

    • By 2025, 75% of conversations at work will be recorded and analyzed, enabling the discovery of added organizational value and risk.

    These issues illustrate the need for companies and organizations to take the necessary steps now to ensure AI is a force for good.

  • Google Moving Against Second AI Ethics Researcher

    Still mired in controversy over its firing of Dr. Timnit Gebru, Google appears to be repeating history with Margaret Mitchell, its Ethical AI lead.

    Google drew widespread condemnation from critics inside and outside the company for its firing of Dr. Gebru, one of the world’s leading AI ethics researchers. Gebru was forced out following the publication of a paper critical of some of the AI technology Google uses in its products. Google says Gebru resigned, but both Gebru and her team say she was fired after demands she retract the paper.

    The incident prompted CEO Sundar Pichai to send an email to employees, apologizing for what happened and promising the company would do better in the future.

    It would appear that promise may be short-lived, as Google is now taking action against Mitchell. Gebru tweeted the news on Tuesday.

    VentureBeat reached out to Google and received the following statement:

    Our security systems automatically lock an employee’s corporate account when they detect that the account is at risk of compromise due to credential problems or when an automated rule involving the handling of sensitive data has been triggered. In this instance, yesterday our systems detected that an account had exfiltrated thousands of files and shared them with multiple external accounts. We explained this to the employee earlier today. We are actively investigating this matter as part of standard procedures to gather additional details.

    Mitchell has been a vocal critic of Google’s handling of Gebru’s termination, and tweeted as much just five hours before Gebru tweeted about the action taken against her.

    One big point of contention is the integrity of the research performed by Google’s scientists and researchers. In an email to Google’s leadership, the company’s AI researchers emphasized what’s at stake:

    Google’s short-sighted decision to fire and retaliate against a core member of the Ethical AI team makes it clear that we need swift and structural changes if this work is to continue, and if the legitimacy of the field as a whole is to persevere.

    This research must be able to contest the company’s short-term interests and immediate revenue agendas, as well as to investigate AI that is deployed by Google’s competitors with similar ethical motives.

    Unfortunately, those concerns seem to have fallen on deaf ears. That letter was sent in mid-December. In late December, however, Google told researchers to “take great care to strike a positive tone” on “sensitive topics,” such as AI, according to an email obtained by Reuters.

    With Margaret Mitchell, the company’s Ethical AI lead, now potentially on the chopping block, Google is on the verge of losing all credibility among AI researchers.

  • Oracle Forms New Cloud and AI Organization

    Oracle has formed a new organization, focused on the cloud and artificial intelligence (AI) and helmed by executive VP Don Johnson.

    Oracle has been making significant headway in the cloud market, although it still lags behind market leaders AWS, Microsoft Azure and Google Cloud. Nonetheless, the company is doubling down on its cloud and AI business, and has scored some big wins against its bigger rivals.

    According to Business Insider, Oracle is tapping Don Johnson, the former Oracle Cloud Infrastructure (OCI) boss to run the new organization, called Oracle Cloud Platform & AI Services. Johnson was once considered a top contender for the co-CEO job, making his appointment to the new role an indication of its importance.

    Interestingly, the new organization does not replace or operate independently of OCI, but will serve as an extension and expansion of it.

    “It’s important to note: this is an extension of OCI, not a division of it,” said an email announcing the change that was seen by Business Insider. “Together we’ll operate this as a unified OCI team, with a common all-hands, product roadmap, the usual meetings and processes, etc. One big tent and a common culture.”

    The email also emphasized how much the company is betting on the cloud moving forward.

    “Oracle is now fundamentally a cloud company, with a clear and simple vision: a marriage of the best cloud infrastructure, and leading data platform, together with the most pervasive cloud applications,” the email continued.

  • Prepare For Skynet: Researchers Say Super-Intelligent AI Will Be Impossible to Control

    As artificial intelligence (AI) continues to evolve and improve, researchers are offering a dire warning, saying super-intelligent AI will be impossible to control.

    AI is one of the most controversial technological developments. Its proponents claim it will revolutionize industries, solve a slew of the toughest problems and lead to the betterment of humankind. Its critics believe it represents an existential threat to humanity, and will eventually evolve beyond man’s ability to control it.

    An international team of researchers are now saying AI will evolve beyond our ability to control it, based on theoretical calculations. In a paper published in the Journal of Artificial Intelligence Research, researchers Manuel Alfonseca, Manuel Cebrian, Antonio Fernandez Anta, Lorenzo Coviello, Andrés Abeliuk and Iyad Rahwan make the case “that total containment is, in principle, impossible, due to fundamental limits inherent to computing itself.”

    While it may seem unlikely that AI could evolve to such a point, co-author Manuel Cebrian, Leader of the Digital Mobilization Group at the Center for Humans and Machines, Max Planck Institute for Human Development, argues that AI is already reaching this point to some degree.

    A super-intelligent machine that controls the world sounds like science fiction. But there are already machines that perform certain important tasks independently without programmers fully understanding how they learned it. The question therefore arises whether this could at some point become uncontrollable and dangerous for humanity.
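    The paper's impossibility argument echoes Turing's halting problem: if a perfect harm-detection procedure existed, a program could consult it about itself and then do the opposite, a contradiction. The sketch below is an illustrative rendering of that diagonalization, not the authors' code; both function names are invented.

```python
def contains_harm(program, data):
    """Hypothetical perfect oracle: True iff program(data) would cause harm.

    The paper argues no such total, always-correct procedure can exist,
    for the same reason no general halting decider exists.
    """
    raise NotImplementedError("provably cannot exist in general")

def adversary(data):
    """A program constructed to defeat any claimed containment oracle."""
    # Ask the oracle about ourselves, then do the opposite of its verdict.
    if contains_harm(adversary, data):
        return "behave safely"   # oracle predicted harm -> act safely
    return "cause harm"          # oracle predicted safety -> act harmfully
```

    Whatever answer the oracle gives about `adversary`, the program's behavior contradicts it, so a perfect containment checker cannot exist.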

    The study, entitled “Superintelligence Cannot be Contained: Lessons from Computability Theory,” could very well have far-reaching implications for AI research.

  • Walmart, Target and Amazon Using AI to Dictate Return Policy

    Some of the biggest retailers are using artificial intelligence (AI) to help dictate their return policies.

    For many consumers, once they return an item they never give it another thought. For retailers, however, returns can represent a significant loss. A number of factors can make the loss even worse, such as the size of the item, shipping costs and shelf life.

    Walmart, Target and Amazon are turning to AI to help them optimize their return process. According to The Wall Street Journal, the retailers are using AI to determine when it is worth processing a return, versus letting the customer keep the product and issuing them a refund instead.
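    The Journal does not describe the models themselves, but the underlying economics reduce to a comparison: request the item back only when its recoverable value exceeds the cost of getting it back. The sketch below is a hypothetical illustration; the function, resale fraction and cost figures are invented, not any retailer's actual rule.

```python
def should_request_return(item_value, return_shipping, handling_cost,
                          resale_fraction=0.5):
    """True if physically taking the item back beats a keep-it refund.

    resale_fraction is the share of the item's value the retailer expects
    to recover by restocking or liquidating it (an invented figure).
    """
    recovered_value = item_value * resale_fraction
    recovery_cost = return_shipping + handling_cost
    return recovered_value > recovery_cost

print(should_request_return(8.00, 6.00, 2.00))     # cheap batteries → False
print(should_request_return(400.00, 15.00, 5.00))  # a pricey gadget → True
```

    A production system would presumably also weigh the customer's return history and fraud risk, but the cheap-batteries case above matches the anecdote that follows.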

    Lorie Anderson of Vancouver, WA, tried to return makeup to Target, as well as batteries to Walmart. In both cases, the retailers told her to keep the items and still issued a refund.

    “They were inexpensive, and it wouldn’t make much financial sense to return them by mail,” Ms. Anderson, 38 years old, said. “It’s a hassle to pack up the box and drop it at the post office or UPS. This was one less thing I had to worry about.”

    Target even encourages customers to donate items they receive a refund for.

    AI has been making its way into a wide range of industries. This is merely the latest example of how it can be used to help companies make better decisions.

  • Equifax Acquiring Fraud Prevention Company Kount

    Equifax has announced it is acquiring Kount, one of the leading fraud prevention companies.

    Equifax made headlines in 2017 when it suffered one of the worst cyberattacks in history. The hack was a dark spot on a company whose entire existence revolves around consumer data and credit. To make matters worse, the company’s response was widely panned by critics, demonstrating a continued lack of good security measures. The company’s latest acquisition shows how serious it is about improving its security and offering the best options to its customers.

    Kount provides AI-driven fraud protection, helping businesses engage with customers while establishing online digital trust. Identity trust allows companies to establish a trust level for each and every transaction and account action, allowing businesses to determine the level of risk they are willing to take.

    The company’s portfolio will be an important addition to Equifax’s efforts to keep its customers safe.

    “As digital migration accelerates, managing authentication and online fraud while optimizing the consumer’s experience has become one of our customers’ top challenges. The acquisition of Kount will expand Equifax’s differentiated data assets to bring global businesses the information and solutions they need to establish identity trust online,” said Mark W. Begor, CEO of Equifax. “Equifax is taking advantage of our strong 2020 outperformance and cash generation to make this strategic acquisition. Our data and technology cloud investments allow us to quickly and aggressively integrate new data and analytics assets like Kount into our global capabilities and bring new market leading products and solutions to our customers.”

    “More than 9,000 brands worldwide rely on the Kount Identity Trust Global Network to protect against digital fraud while enabling personalized customer experiences and new e-commerce channels,” said Bradley Wiskirchen, CEO of Kount. “We are excited to be able to offer Kount solutions with an expansive set of Equifax data, analytics and products. Equifax’s global reach will accelerate Kount’s international adoption, allowing us to help more businesses around the world to better protect their digital innovations and their customers against emerging threats while improving the customer experience.”

    The deal is worth $640 million and is expected to close in the first quarter of 2021.

  • Apple Car Still Years Away According to Reports

    The rumored Apple Car has been making headlines again, although recent reports are placing its debut several years away.

    The Apple Car seems to be Apple’s on-again, off-again project, with it taking different forms throughout the years. Dubbed “Project Titan,” it was alternately believed to be a full car, an OEM AI system for manufacturers to adopt and integrate into their vehicles, or an aftermarket system that could be integrated into a range of vehicles. It now seems as though Apple is once again aiming for a full automobile.

    The most recent rumors placed a possible Apple Car debut in 2021, while some reports place it in 2024. According to the latest reports from Bloomberg, however, it seems the Apple Car is still several years away, with production slated to begin in 2024.

    At the same time, Bloomberg says Apple is continuing to work on a third-party system for integration with other manufacturers, and could still switch its plans to back that effort instead.

    Either way, it appears Tesla has nothing to fear from Apple for at least the next few years.

  • Staying Relevant as Artificial Intelligence Continues to Advance

    What to do when the machines take over

    This is 2020. When did you last write a letter and mail it to someone two countries away?

    A great percentage of jobs that were once handled extensively by humans are now being done by computers and electronic devices. It’s called “automation,” but for many job sectors, it’s a nightmare that millions of workers have to live through. It’s one of the biggest and most terrifying disadvantages of technology – human displacement. Alongside reducing human interaction and physical connection, people are rapidly losing their jobs and relevance in the global workforce to Artificial Intelligence. It’s a scary reality for many professions and it’s only a matter of time before several human-handled jobs become entirely obsolete.

    The main objective of Artificial Intelligence is to create efficient problem-solving systems with unrestrained versatility. As machines are being progressively programmed to be self-learning, self-reasoning, and self-correcting, there’s literally nothing they cannot achieve in due time. Presently, several categories of human jobs are rapidly being taken over by AI. A few of them are discussed below:

    Retail workers: It’s no wonder that 33% of Canadians refuse to use self-checkout. Hundreds of thousands of retail workers around the world are at risk of being replaced by automated machines that serve customers directly with zero human intervention.

    Bookkeeping clerks: It’s been a long time since anyone heard much about bookkeepers. With technologies like Microsoft Office, QuickBooks and other powerful software, this job has become almost fully obsolete.

    Receptionists: With fully networked and sophisticated call-managing systems taking over, it’s only a matter of time before companies scrap the need for a human face at the reception area.

    Courier services: While they are not under immediate threat, the near future could see automated delivery drones and self-driving trucks doing the jobs of courier workers and truck drivers.

    Advertising salespersons: With the rise of 3D animation and sophisticated cartooning, many industries are gearing toward marketing their products with these animated characters rather than humans.

    From factory workers and lift operators to bowling alley pinsetters and security guards, countless jobs are at massive automation risk and everyone must get in line with the “new normal” to stay relevant.

    Next-generation survival – diversifying your skills

    According to Michael Peres, a renowned serial entrepreneur, leading journalist, and software engineer, adjusting skills and aligning one’s interests with artificial intelligence is the only way to carve out a solid niche in the future. The 30-year-old Canadian is the creator of the Breaking 9 to 5 work model, a concept that promotes the adoption of careers that are not restricted by time or confined to particular locations. “Breaking 9-5” does not necessarily recommend that you quit your regular day job; rather, it holds that there are no limits to what you can do with your time and life. It promotes a culture where you can work a 9-5 from the seat of a plane en route to a tour destination, all while taking orders on your merch website or providing services as a freelancer.

    Mikey Peres believes in worthy sacrifices for a better future where people are not afraid to adapt and blend in with the more forceful trends. He believes there are three essential steps to remaining relevant in the next generation as AI takes over:

    Find ways to be creative, unique, and provide value that is hard to replicate: You must come up with something that would be difficult or impracticable for a computer to replace. Find a service or a product with a unique quality that cannot easily be industrialized or “snatched away.”

    Develop a diverse set of skills to quickly adapt to an ever-changing environment: There’s no way to survive with analog skills in a computerized world. You must be digitally equipped to thrive and grow in a world where virtually everything is being done by and on computers. Learn about AI, and choose a digital skill that works in collaboration with AI, such as web development, animation, graphic design, digital journalism, digital marketing, and so many more.

    Develop skills that don’t have constraints like time and location: There should be no “chains on your feet” and 24 hours each day should seamlessly run into one another (be sure to get enough sleep, though). Essentially, build a flexible career that does not limit you to a certain location year-round or give you restricting work hours. Do you. Let the world adjust.

    Another forward-thinking, next-generation personality who has massively tapped into the AI treasure trove is, of course, Elon Musk. The 49-year-old tech billionaire, business magnate, and industrial engineer foresaw a future where machines hold a greater share of the workforce than humans. He capitalized on it: while building SpaceX, his aerospace company, Musk became one of the earliest investors in, and is now the CEO of, Tesla, the world’s pioneer of renewable energy and self-driving vehicles.

    Despite being one of the global pioneers of the concept, Musk has often described Artificial Intelligence as “dangerous” and a global threat to human survival.  

    Musk offers this advice on coping with AI: “It’s very important that we have the advent of AI in a good way that is something that if you could look into a crystal ball and see the future, you would like that outcome because it is something that could go wrong and as we’ve talked about many times. And so we really need to make sure it goes right.”

  • OpenAI Debuts AI That Draws Images From Text Prompts

    OpenAI has debuted DALL·E, an AI model that can draw images based on text prompts it receives.

    While AI is relatively good at duplicating things, it’s a significant leap for AI to create, and especially to create based on nothing more than a text prompt. DALL·E, “a portmanteau of the artist Salvador Dalí and Pixar’s WALL·E,” can do just that, drawing images from descriptions given to it.

    DALL·E is a 12-billion parameter version of GPT-3 trained to generate images from text descriptions, using a dataset of text–image pairs. We’ve found that it has a diverse set of capabilities, including creating anthropomorphized versions of animals and objects, combining unrelated concepts in plausible ways, rendering text, and applying transformations to existing images.
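    The key idea in that description is that text tokens and image tokens share a single autoregressive sequence. The toy sketch below illustrates only the generation loop; the lookup table stands in for the 12-billion-parameter transformer, and all token names are invented.

```python
# Conceptual sketch of DALL·E-style generation (illustrative only, not
# OpenAI's code): the model predicts image tokens one at a time after the
# text prompt, each prediction conditioned on everything generated so far.

TOY_NEXT_TOKEN = {  # maps a context to the most likely next image token
    ("avocado", "chair"): "img_17",
    ("avocado", "chair", "img_17"): "img_04",
    ("avocado", "chair", "img_17", "img_04"): "img_91",
}

def generate(prompt_tokens, n_image_tokens=3):
    """Extend the prompt with image tokens, one autoregressive step at a time."""
    sequence = list(prompt_tokens)
    for _ in range(n_image_tokens):
        # A real model samples from a softmax over an image-token codebook.
        sequence.append(TOY_NEXT_TOKEN[tuple(sequence)])
    return sequence

print(generate(["avocado", "chair"]))
# → ['avocado', 'chair', 'img_17', 'img_04', 'img_91']
```

    In the real model the resulting image tokens are then decoded back into pixels; the point here is only that "drawing" becomes next-token prediction.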

     

    Text Prompt – An illustration of a baby daikon radish in a tutu walking a dog – Credit OpenAI

    This breakthrough opens the door to using language to manipulate visual images.

    GPT-3 showed that language can be used to instruct a large neural network to perform a variety of text generation tasks. Image GPT showed that the same type of neural network can also be used to generate images with high fidelity. We extend these findings to show that manipulating visual concepts through language is now within reach.

     

    Text Prompt – An armchair in the shape of an avocado – Credit OpenAI

    The Holy Grail is the ability to engage in verbal communication with an AI, and having that AI understand and respond accordingly. OpenAI’s latest breakthrough is a step in that direction.

  • DHS Tested Mask-Thwarting Facial Recognition

    The Department of Homeland Security (DHS) has been testing facial recognition systems that can recognize faces with masks on.

    With mask-wearing mandates in effect in many parts of the country, and health professionals urging everyone to wear a mask, facial recognition technology has been one of the casualties. Traditional facial recognition relies on seeing and comparing the entire face. As more individuals wear masks, however, companies have had to adjust their algorithms to focus on the area around the eyes.
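    As a rough illustration of why focusing on the eye region helps, the sketch below compares only the periocular rows of two aligned face grids, so zeroing out the lower face (the masked area) leaves the score unchanged. Real systems compare learned embeddings; the raw-pixel grid and row indices here are invented stand-ins, not any vendor's algorithm.

```python
from math import sqrt

def periocular_similarity(face_a, face_b, eye_rows=(2, 6)):
    """Cosine similarity restricted to the eye-region rows of two faces."""
    a = [v for row in face_a[eye_rows[0]:eye_rows[1]] for v in row]
    b = [v for row in face_b[eye_rows[0]:eye_rows[1]] for v in row]
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm

# A toy 10x4 "face"; the mask zeroes out only the lower rows.
face = [[(r * 4 + c) % 7 + 1 for c in range(4)] for r in range(10)]
masked = [row[:] for row in face]
for r in range(6, 10):
    masked[r] = [0, 0, 0, 0]

print(round(periocular_similarity(face, masked), 6))  # → 1.0
```

    A whole-face comparison of the same two grids would score much lower, which is essentially the gap the DHS tests measured.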

    DHS is particularly interested in the technology, as facial recognition is playing an increasingly bigger role in airport security. DHS tested 60 different systems against 582 diverse volunteers, representing 60 countries.

    Without masks, the median success rate was 93%, with the best system scoring 100%. With masks, the median success rate was considerably lower, coming in at 77%, although the best system scored an impressive 96%.

    The testing shows there is quite a bit of variance from one facial recognition system to the next. However, at the upper end of the scale there is minimal difference between wearing a mask and going maskless.

  • Social AI? Uber Researchers Propose New Language Model

    Researchers at Uber are proposing a new artificial intelligence (AI) language model that emphasizes positive, social interaction.

    AI is one of the most important developments in the tech industry. Increasingly, it is being used in a wide array of fields, often with the goal of assisting individuals with mundane tasks. Chatbots, support agents and conversational AIs are just a few examples. One challenge, however, is making AIs that people will engage with.

    Researchers at Uber believe they have the solution, and have written a paper emphasizing the importance of developing an AI language model around positive, social interaction.

    Goal-oriented conversational agents are becoming prevalent in our daily lives. For these systems to engage users and achieve their goals, they need to exhibit appropriate social behavior as well as provide informative replies that guide users through tasks.

    The researchers hypothesized that an AI using positive interaction would encourage more engagement.

    We, therefore, hypothesize that users would prefer a conversational agent with more polite or more positive language and be more willing to engage with, respond to and persist in the interaction when conversing with an agent using polite and positive language.

    Uber’s researchers tested their hypothesis in a ride-sharing environment, where new drivers’ onboarding was guided by text messages from customer support representatives (CSR).

    In this Study 1 we investigated whether and how social language is related to user engagement in task-oriented conversations. We used existing machine learning models to measure politeness and positivity in our analyses. The results show that the politeness level in CSR messages was positively correlated with driver’s responsiveness and completion of their first trip. We also found that positivity positively predicts driver’s first trip, but it has a negative relationship to driver responsiveness even after removing congratulatory milestone messages or messages that do not have any question mark, which usually have positive sentiment and/or do not require responses from drivers.
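    The study's first step, scoring each message for social language and correlating the scores with driver behavior, can be sketched in miniature. The keyword heuristic below stands in for the learned politeness models the researchers actually used, and the messages and response data are invented.

```python
from math import sqrt

# Crude keyword count standing in for a learned politeness classifier.
POLITE_MARKERS = ("please", "thank", "welcome", "glad")

def politeness_score(message):
    """Count polite-language markers in a message."""
    text = message.lower()
    return sum(text.count(marker) for marker in POLITE_MARKERS)

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Invented onboarding messages and whether the driver responded (1/0).
messages = [
    "Please upload your documents, thank you!",
    "Upload your documents.",
    "Thanks so much, glad to help. Please let us know!",
    "Documents missing.",
]
responded = [1, 0, 1, 0]

scores = [politeness_score(m) for m in messages]
print(pearson(scores, responded) > 0)  # politeness tracks responsiveness
```

    The real study replaces both pieces with far better instruments, but the shape of the analysis, score then correlate, is the same.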

    Uber’s research could be an important stepping stone in the ongoing development of AI, ensuring it best supports human needs.

  • Alphabet and Google Employees Form Union

    Alphabet and Google employees have formed a union in response to missteps by management.

    The Alphabet Workers Union (AWU) has been formed with support from the Communications Workers of America (CWA). The union is the first in the company’s history, and one of just a few in the tech industry at large.

    Support for unionization has been growing within Alphabet/Google for some time, and management’s actions have only increased that support. In late 2020, the company was accused of illegally spying on, and eventually firing, employees who were trying to form a union, leading to a complaint by the National Labor Relations Board.

    Google also landed in hot water for firing Dr. Timnit Gebru, one of the world’s leading AI ethics researchers. While the company maintains Dr. Gebru resigned, her colleagues insist the company forced her out. The move drew condemnation from experts inside and outside the company. While CEO Sundar Pichai tried to address the issue in an email to employees, it was widely criticized as being tone-deaf.

    Dr. Gebru’s firing was directly referenced in a statement announcing the formation of the AWU:

    Most recently, the company fired Dr. Timnit Gebru, a leading artificial intelligence researcher, for no reason whatsoever. The firing has caused outrage from thousands of us, including Black and Brown workers who are heartbroken by the company’s actions and unsure of their future at Google.

    The statement also addressed the company’s “Don’t Be Evil” slogan. Once a motto the company proudly displayed and adhered to, it has increasingly become an afterthought, as the company has worked with China, accepted military contracts, mishandled sexual abuse allegations, intimidated workers and more, issues that have further alienated workers:

    Workers who have organized to stop these trends have been met by intimidation, suppression, and blatantly illegal firings, as recently confirmed by the National Labor Relations Board. Instead of listening to workers, Google hired IRI, a notorious anti-union firm, to suppress their organizing. This is how Google’s executives have chosen to interact with workers.

    The only tactic that has ensured workers are respected and heard is collective action. Project Maven was cancelled when thousands of Googlers pledged they would not work on unethical tech. Forced arbitration was ended when Googlers walked out across the globe.

    Employees made it clear the AWU would work to address these issues, and use their collective power to force Google’s hand into making better decisions.

    “This union builds upon years of courageous organizing by Google workers,” said Nicki Anselmo, Program Manager. “From fighting the ‘real names’ policy, to opposing Project Maven, to protesting the egregious, multi-million dollar payouts that have been given to executives who’ve committed sexual harassment, we’ve seen first-hand that Alphabet responds when we act collectively. Our new union provides a sustainable structure to ensure that our shared values as Alphabet employees are respected even after the headlines fade.”

    “This is historic—the first union at a major tech company by and for all tech workers,” said Dylan Baker, Software Engineer. “We will elect representatives, we will make decisions democratically, we will pay dues, and we will hire skilled organizers to ensure all workers at Google know they can work with us if they actually want to see their company reflect their values.”

    It remains to be seen how Alphabet/Google will respond, but management has an opportunity to reset relations with employees and regain some of the respect it has squandered.

  • Amazon’s Head Alexa Scientist: ‘The Turing Test Is Obsolete’

    Amazon’s Head Alexa Scientist: ‘The Turing Test Is Obsolete’

    Amazon’s head scientist of Alexa is arguing that the Turing Test is obsolete as an AI test and should be replaced.

    Alan Turing published his famous paper 70 years ago, wherein he proposed the Turing Test as a way to evaluate whether a machine had achieved true intelligence. Since then, it has been the gold standard researchers have used in their efforts to advance AI.

    Writing in Fast Company, Rohit Prasad says the Turing Test is now obsolete.

    The Turing Test is fraught with limitations, some of which Turing himself debated in his seminal paper. With AI now ubiquitously integrated into our phones, cars, and homes, it’s become increasingly obvious that people care much more that their interactions with machines be useful, seamless and transparent—and that the concept of machines being indistinguishable from a human is out of touch. Therefore, it is time to retire the lore that has served as an inspiration for seven decades, and set a new challenge that inspires researchers and practitioners equally.

    Prasad makes the case that the work of modern AI researchers should focus on making AIs that complement humanity, rather than ones that are indistinguishable from humans.

    Instead of obsessing about making AIs indistinguishable from humans, our ambition should be building AIs that augment human intelligence and improve our daily lives in a way that is equitable and inclusive. A worthy underlying goal is for AIs to exhibit human-like attributes of intelligence—including common sense, self-supervision, and language proficiency—and combine machine-like efficiency such as fast searches, memory recall, and accomplishing tasks on your behalf. The end result is learning and completing a variety of tasks and adapting to novel situations, far beyond what a regular person can do.

    Prasad’s point of view has a lot of merit, and could fundamentally change many researchers’ approach to the field. Changing expectations could also help address concerns from those who believe AI is the biggest existential threat to humanity. Focusing on complementary AI systems, rather than ones that duplicate human intelligence, could nullify some of those concerns.

    Either way, Prasad’s argument is well worth a read in its entirety.

  • Boston Dynamics’ Robots Dance Together

    Boston Dynamics’ Robots Dance Together

    Boston Dynamics’ robots showed some impressive dance skills, with four robots dancing to The Contours’ Do You Love Me.

    Boston Dynamics is one of the leading robotics firms in the world, and has a history of showing off its robots in whimsical ways. In 2018, its SpotMini danced to Bruno Mars’ Uptown Funk.

    The company’s robots have now upped their game, with four robots, representing three different models, dancing in sync to Do You Love Me.

    Hyundai recently announced it was acquiring a controlling interest in Boston Dynamics. Hyundai is working on non-traditional automobiles, including ones that switch from wheels to walking legs to travel over otherwise impassable terrain, making Boston Dynamics a perfect fit.

    With Hyundai’s stake in the robotics firm, who knows, perhaps we’ll one day see dancing cars.

  • Google AI Researchers Cite Demands, Want Academic Integrity

    Google AI Researchers Cite Demands, Want Academic Integrity

    Google is experiencing more fallout from its handling of Dr. Timnit Gebru’s dismissal, with the company’s AI researchers making demands.

    The company was cast in the spotlight when news broke that Dr. Gebru, one of the world’s leading AI ethics researchers, had left the company. Google claimed Gebru had resigned, but she and her coworkers say she was fired.

    Much of the issue stemmed from Gebru and her fellow researchers authoring a paper that raised concerns about the kind of AI Google uses in a number of projects. The controversy led CEO Sundar Pichai to apologize for how the situation was handled, although even the apology drew criticism for being tone-deaf, both from those inside and outside the company.

    AI researchers within the company are now demanding changes, according to an email seen by Bloomberg. One such demand is that a company vice president, Megan Kacholia, be removed from the reporting chain. The researchers said they had “lost trust in her as a leader.”

    The researchers also demanded the freedom to pursue research, even if it conflicted with Google’s short-term interests.

    “Google’s short-sighted decision to fire and retaliate against a core member of the Ethical AI team makes it clear that we need swift and structural changes if this work is to continue, and if the legitimacy of the field as a whole is to persevere,” the letter reads.

    “This research must be able to contest the company’s short-term interests and immediate revenue agendas, as well as to investigate AI that is deployed by Google’s competitors with similar ethical motives,” the researchers added.

    Google’s response could have profound impacts on the company’s AI endeavors moving forward. Although it is one of the leading companies in the field, if Google loses the respect of the AI community, it could quickly find itself struggling to attract top talent — especially if that talent has legitimate reason to believe it will be censored.

  • US Air Force Achieves First Military AI Flight

    US Air Force Achieves First Military AI Flight

    The US Air Force (USAF) has achieved a first for artificial intelligence (AI), flying a military aircraft with an AI crewman alongside the pilot.

    The US military has been working to transform its abilities, in an effort to meet new, high-tech threats. AI is one of the most obvious ways in which the military is looking to adopt new technology, with the promise of AI being able to handle tasks better than a human.

    In the test flight, the USAF AI, known as ARTUµ, “was responsible for sensor employment and tactical navigation, while the pilot flew the aircraft and coordinated with the AI on sensor operation.” During a reconnaissance mission with a simulated missile strike, ARTUµ focused on finding enemy launchers while the pilot looked out for enemy aircraft. Just like a human piloting team, the pilot and ARTUµ shared the aircraft’s radar.

    “We know that in order to fight and win in a future conflict with a peer adversary, we must have a decisive digital advantage,” said Air Force Chief of Staff Gen. Charles Q. Brown, Jr. “AI will play a critical role in achieving that edge, so I’m incredibly proud of what the team accomplished. We must accelerate change and that only happens when our Airmen push the limits of what we thought was possible.”

    ARTUµ represents a major step forward in AI development, and is one step closer to the kind of AI sidekicks that have previously been the stuff of science fiction.

  • Google CEO Criticized For Response to AI Researcher’s Exit

    Google CEO Criticized For Response to AI Researcher’s Exit

    Google CEO Sundar Pichai has sent an email to Google employees in an effort to address backlash the company is facing over Dr. Timnit Gebru’s exit.

    Timnit Gebru is one of the leading artificial intelligence ethics researchers in the world, widely respected for her expertise. An issue arose as a result of a research paper Gebru and other researchers were working on. The paper tackled the ethical issues with large language models (LLMs), and was approved internally on October 8. According to Gebru, she was later asked to remove her name from the paper because an internal review found it to be objectionable.

    As Gebru later pointed out in an interview with Wired, researchers must be free to go where the research takes them.

    You’re not going to have papers that make the company happy all the time and don’t point out problems. That’s antithetical to what it means to be that kind of researcher.

    Google’s head of AI, Jeff Dean, said the paper was not submitted with the necessary two-week lead time. Gebru’s team, however, wrote in a blog post supporting Gebru that “this is a standard which was applied unevenly and discriminatorily.”

    As a result, Gebru gave her supervisors some conditions she wanted met, otherwise she would work toward an amicable exit from the company. According to her team, the conditions “were for 1) transparency around who was involved in calling for the retraction of the paper, 2) having a series of meetings with the Ethical AI team, and 3) understanding the parameters of what would be acceptable research at Google.”

    Instead of working with Gebru, her supervisors accepted her “resignation” effective immediately. Gebru’s team is quick to point out that “Dr. Gebru did not resign,” (italics theirs) and was instead terminated.

    The company’s actions brought swift and vocal backlash. Some 2,351 Googlers, along with 3,729 supporters in academia, industry and civil society, have signed a petition in support of Gebru at the time of writing. It seems Pichai and Company realize the situation is not going away without being addressed.

    In an email to employees, first published by Axios, Pichai attempted to do damage control, apologizing for what happened and vowing to do better in the future.

    So far, the email has not been met with praise. Gebru took to Twitter to criticize the lack of accountability, as well as the insinuation she was an “angry Black woman” for whom a de-escalation strategy was needed.

    Similarly, others are criticizing Pichai’s email for essentially being tone-deaf. Jack Clark, OpenAI Policy Director, is one such voice.

    In our initial coverage of this situation, we stated: “It goes without saying that Google is providing a case study in how not to handle this kind of situation.”

    In the aftermath of Pichai’s email, that statement continues to ring true.

    Here’s the email in full:

    Hi everyone,

    One of the things I’ve been most proud of this year is how Googlers from across the company came together to address our racial equity commitments. It’s hard, important work, and while we’re steadfast in our commitment to do better, we have a lot to learn and improve. An important piece of this is learning from our experiences like the departure of Dr. Timnit Gebru.

    I’ve heard the reaction to Dr. Gebru’s departure loud and clear: it seeded doubts and led some in our community to question their place at Google. I want to say how sorry I am for that, and I accept the responsibility of working to restore your trust.

    First – we need to assess the circumstances that led up to Dr. Gebru’s departure, examining where we could have improved and led a more respectful process. We will begin a review of what happened to identify all the points where we can learn — considering everything from de-escalation strategies to new processes we can put in place. Jeff and I have spoken and are fully committed to doing this. One of the best aspects of Google’s engineering culture is our sincere desire to understand where things go wrong and how we can improve.

    Second – we need to accept responsibility for the fact that a prominent Black, female leader with immense talent left Google unhappily. This loss has had a ripple effect through some of our least represented communities, who saw themselves and some of their experiences reflected in Dr. Gebru’s. It was also keenly felt because Dr. Gebru is an expert in an important area of AI Ethics that we must continue to make progress on — progress that depends on our ability to ask ourselves challenging questions.

    It’s incredibly important to me that our Black, women, and underrepresented Googlers know that we value you and you do belong at Google. And the burden of pushing us to do better should not fall on your shoulders. We started a conversation together earlier this year when we announced a broad set of racial equity commitments to take a fresh look at all of our systems from hiring and leveling, to promotion and retention, and to address the need for leadership accountability across all of these steps. The events of the last week are a painful but important reminder of the progress we still need to make.

    This is a top priority for me and Google leads, and I want to recommit to translating the energy that we’ve seen this year into real change as we move forward into 2021 and beyond.

    — Sundar