WebProNews

Tag: Machine Learning

  • Battle of the AIs: Walmart Takes on Amazon

    Walmart is looking to challenge Amazon using a tool the latter already relies on: artificial intelligence (AI).

    Amazon is the world’s leading e-commerce platform and has been challenging Walmart, Target, and other traditional brands in the broader retail market. A key to Amazon’s success has been its use of AI and machine learning (ML) for more than two decades. According to TheStreet, Walmart is getting in on the action, testing its own AI for the last few years.

    While Amazon made headlines for its new Proteus and Cardinal warehouse robots, its use of AI goes far beyond robots. The company uses AI and ML to handle multiple aspects of customer service and delivery, including product suggestions, re-order reminders, and more.

    Walmart is looking to roll out similar solutions in an effort to better compete with the e-commerce giant. As TheStreet points out, the pandemic put Walmart’s plans into overdrive. Between labor shortages and wage increases, AI is suddenly a critical component, rather than something that might merely be useful in the future.

    As part of its initiative, Walmart purchased just over 10% of AI firm Symbotic Inc. The company plans to use Symbotic to help run its distribution centers and relieve its employees of some of the manual, labor-intensive tasks.

    Once the realm of science fiction, AI has become an everyday reality over the last few years, one that companies of all sizes depend on. Just ask Walmart.

  • Linux Foundation Tackles Data Collaboration With Permissive License

    The Linux Foundation has announced the CDLA-Permissive-2.0 license agreement to make it easier to share AI and ML data.

    The rise of artificial intelligence and machine learning has created a need for a new type of license that allows data sets and learning models to be shared, as well as incorporated into AI and ML applications.

    The Linux Foundation described the challenges in a blog post:

    Open data is different. Various laws and regulations treat data differently from software or other creative content. Depending on what the data is and which country’s laws you’re looking at, the data often may not be subject to copyright protection, or it might be subject to different laws specific to databases, i.e., sui generis database rights in the European Union. 

    Additionally, data may be consumed, transformed, and incorporated into Artificial Intelligence (AI) and Machine Learning (ML) models in ways that are different from how software and other creative content are used. Because of all of this, assumptions made in commonly-used licenses for software and creative content might not apply in expected ways to open data.

    While the Linux Foundation previously offered the CDLA-Permissive-1.0 license, that agreement was often criticized for being too long and complex. In contrast, version 2.0 is less than a page long and greatly simplified over its predecessor.

    In response to perceptions of CDLA-Permissive-1.0 as overly complex, CDLA-Permissive-2.0 is short and uses plain language to express the grant of permissions and requirements. Like version 1.0, the version 2.0 agreement maintains the clear rights to use, share and modify the data, as well as to use without restriction any “Results” generated through computational analysis of the data.

    A key element of the new license is the ability to collaborate and maintain compatibility with other licenses, such as Creative Commons licenses. The addition of CDLA-Permissive-2.0 is already being met with acclaim from the industry, with both IBM and Microsoft making data sets available using the language.

    “IBM has been at the forefront of innovation in open data sets for some time and as a founding member of the Community Data License Agreement. We have created a rich collection of open data sets on our Data Asset eXchange that will now utilize the new CDLAv2, including the recent addition of CodeNet – a 14-million-sample dataset to develop machine learning models that can help in programming tasks,” said Ruchir Puri, IBM Fellow and Chief Scientist, IBM Research.

  • Microsoft Releases OpenAI-Powered Code Completion Tool

    Microsoft is leveraging its agreement with OpenAI to radically change the nature of low-code development with its first AI-powered code completion tool.

    OpenAI is an AI research organization, founded on the principle of researching AI in a safe, responsible way. OpenAI’s GPT-3 is one of the leading natural language models, and it runs exclusively on Microsoft Azure. Microsoft also has an exclusive license to the GPT-3 code, giving it wide latitude to incorporate the model in its own products.

    The partnership is bearing fruit, with Microsoft incorporating GPT-3 in its Power Apps low-code development platform, adding natural, conversational language to the programming process.

    The new AI-powered features will allow an employee building an e-commerce app to describe a programming goal using conversational language like “find products where the name starts with ‘kids.’” A fine-tuned GPT-3 model then offers choices for transforming the command into a Microsoft Power Fx formula, the open source programming language of the Power Platform, such as Filter('BC Orders', Left('Product Name', 4) = "Kids").
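
    To make the idea concrete, here is a minimal sketch of how a natural-language request could be turned into a Power Fx formula by a GPT-3-style completion model. It is an illustration only, not Microsoft's Power Apps pipeline: the few-shot prompt, the engine name, and the extra example table and column names are assumptions, and it relies on the legacy openai Python client.

    ```python
    # Illustrative sketch only (not Microsoft's Power Apps implementation):
    # prompt a GPT-3-style completion model to translate a natural-language
    # request into a Power Fx formula. Assumes the legacy `openai` client
    # (pre-1.0) and an OPENAI_API_KEY environment variable.
    import os
    import openai

    openai.api_key = os.environ["OPENAI_API_KEY"]

    # Few-shot prompt; the "Shipped" example row is made up for illustration.
    FEW_SHOT_PROMPT = (
        "Translate the request into a Power Fx formula.\n\n"
        "Request: show orders where the status is \"Shipped\"\n"
        "Formula: Filter('BC Orders', Status = \"Shipped\")\n\n"
        "Request: find products where the name starts with 'kids'\n"
        "Formula:"
    )

    response = openai.Completion.create(
        engine="davinci",   # engine name assumed for illustration
        prompt=FEW_SHOT_PROMPT,
        max_tokens=64,
        temperature=0,      # deterministic output suits formula generation
        stop=["\n"],        # stop at the end of the generated formula
    )

    print(response.choices[0].text.strip())
    # Expected output along the lines of:
    # Filter('BC Orders', Left('Product Name', 4) = "Kids")
    ```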

    For the time being, GPT-3’s features are limited to use with Microsoft Power Fx, but the future possibilities are virtually endless.

    “Using an advanced AI model like this can help our low-code tools become even more widely available to an even bigger audience by truly becoming what we call no code,” said Charles Lamanna, corporate vice president for Microsoft’s low code application platform.

  • IBM Brings Its Quantum System One to Germany

    IBM has unveiled its first quantum computer outside the US, bringing the Quantum System One to Germany.

    Quantum computing is considered the next big evolution of computing, capable of achieving things modern computers can’t. Everything from artificial intelligence to financial markets to encryption algorithms will be impacted by quantum computing. As a result, countries around the world are racing to advance the technology.

    IBM unveiled the new computer in partnership with Fraunhofer-Gesellschaft, Europe’s largest application-oriented research organization.

    “Quantum computing opens up new possibilities for industry and society,” says Hannah Venzl, the coordinator of Fraunhofer Competence Network Quantum Computing. “Drugs and vaccines could be developed more quickly, climate models improved, logistics and transport systems optimized, or new materials better simulated. To make it all happen, to actively shape the rapid development in quantum computing, we need to build up expertise in Europe.”

    The new computer is already hard at work, testing simulations of new materials for energy storage systems, analyzing energy supply infrastructures and financial asset portfolios, and improving deep learning for machine learning applications.

    “I am very pleased about the launch of the IBM Quantum System One in Germany, the most powerful quantum computer in Europe,” said Arvind Krishna, IBM CEO (translated from German). “This is a turning point from which the German economy, industry and society will benefit greatly. Quantum computers promise to solve completely new categories of problems that are unattainable even for today’s most powerful conventional computers.”

  • Rise of Skynet: AI Drones Attack Humans Without Authorization

    AI-driven drones appear to have attacked humans without authorization, according to a new report by the U.N.

    Many critics view AI technology as an existential threat to humanity, seeing some variation of the Terminator franchise’s Skynet wiping humanity out. Those critics may have just been given the strongest support yet for their fears, with AI drones attacking retreating soldiers without being instructed to.

    According to the U.N. report, via The Independent, Libyan government forces were fighting Haftar Affiliated Forces (HAF).

    “Logistics convoys and retreating HAF were subsequently hunted down and remotely engaged by the unmanned combat aerial vehicles or the lethal autonomous weapons systems such as the STM Kargu-2,” read the UN report.

    What makes the Kargu so dangerous is that it’s a “loitering” drone, designed to autonomously pick its own targets based on machine learning. If one such drone isn’t dangerous enough, the Kargu has swarming abilities, enabling 20 such drones to work together in a coordinated swarm.

    “The lethal autonomous weapons systems were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true ‘fire, forget and find’ capability,” wrote the report’s experts.

    The incident is sure to raise questions about the ongoing safety issues surrounding AI drone use, especially in the context of military applications.

  • Apple Snaps Up Google AI Scientist Who Resigned Over Handling of AI Team

    Apple has scored a big win, hiring Samy Bengio after he resigned from Google following the firing of Google’s AI ethics team leaders.

    Google landed in hot water after the controversial firings of Timnit Gebru and Margaret Mitchell. Google was accused of interfering with academic integrity and criticized for its treatment of women and Black employees.

    In the wake of the incidents, some engineers departed the company, citing its handling of the entire situation. Samy Bengio, however, was the most high-profile departure. Because he was a 14-year veteran of the company, and one of the earliest involved in Google Brain, his departure was seen as a real blow to the company, according to Reuters.

    Google’s loss is Apple’s gain, as the Cupertino company has hired Bengio. Reuters reports Bengio will be leading a new AI research unit under John Giannandrea, senior vice president of machine learning and AI strategy.

    Despite being first to the market with its Siri virtual assistant, Apple has fallen behind Google and Amazon. It’s a safe bet Bengio’s new role will lead to significant, and much-needed, improvements for Siri. His work may also contribute to Apple’s other projects, including the AI component in the upcoming Apple Car.

  • FTC: Make Sure Your AI Algorithms Are Unbiased…Or Else

    The Federal Trade Commission (FTC) has issued a stark warning to companies to ensure their AI algorithms are unbiased…or else.

    AI is being adopted across a wide spectrum of industries. Unfortunately, studies repeatedly demonstrate the propensity for AI algorithms to be biased. In many cases, this is the result of the datasets used to train AIs not reflecting the necessary diversity.

    In a blog post, the FTC addresses this issue:

    Watch out for discriminatory outcomes. Every year, the FTC holds PrivacyCon, a showcase for cutting-edge developments in privacy, data security, and artificial intelligence. During PrivacyCon 2020, researchers presented work showing that algorithms developed for benign purposes like healthcare resource allocation and advertising actually resulted in racial bias. How can you reduce the risk of your company becoming the example of a business whose well-intentioned algorithm perpetuates racial inequity? It’s essential to test your algorithm – both before you use it and periodically after that – to make sure that it doesn’t discriminate on the basis of race, gender, or other protected class.
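
    As a concrete illustration of the kind of test the FTC describes, the hedged sketch below compares a model's approval rates across two groups on a made-up set of decisions. The records, the 0.2 threshold, and the demographic-parity metric are assumptions for illustration, not an FTC-prescribed procedure.

    ```python
    # Hypothetical sketch: checking a model's decisions for group-level disparities.
    # The records, threshold, and demographic-parity metric are illustrative
    # assumptions, not an FTC-mandated test.
    from collections import defaultdict

    # (group, model_approved) pairs from a hypothetical hiring or credit model
    decisions = [
        ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
        ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
    ]

    approved = defaultdict(int)
    total = defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        approved[group] += int(ok)

    rates = {g: approved[g] / total[g] for g in total}
    gap = max(rates.values()) - min(rates.values())

    print("approval rates:", rates)
    print("demographic parity gap:", round(gap, 2))

    # A large gap (here 0.50) flags the model for closer review; re-running the
    # check periodically matches the FTC's advice to test before and after use.
    if gap > 0.2:  # threshold chosen arbitrarily for the example
        print("Warning: potential disparate impact; investigate before deployment.")
    ```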

    The FTC also warns companies to be careful not to overpromise what their AI can do, such as advertising a product that delivers “100% unbiased hiring decisions,” yet was created with data that wasn’t truly diverse. The FTC advises companies to be transparent, use independent standards and be truthful about how they will use customer data.

    The FTC warns that companies failing to follow its advice will deal with the consequences:

    Hold yourself accountable – or be ready for the FTC to do it for you. As we’ve noted, it’s important to hold yourself accountable for your algorithm’s performance. Our recommendations for transparency and independence can help you do just that. But keep in mind that if you don’t hold yourself accountable, the FTC may do it for you. For example, if your algorithm results in credit discrimination against a protected class, you could find yourself facing a complaint alleging violations of the FTC Act and ECOA. Whether caused by a biased algorithm or by human misconduct of the more prosaic variety, the FTC takes allegations of credit discrimination very seriously, as its recent action against Bronx Honda demonstrates.

  • Google Open Sources Lyra Audio Codec

    Google has announced it is open sourcing its Lyra audio codec, which uses machine learning to compress audio while preserving quality.

    As voice and videoconferencing have become more ubiquitous, audio codecs haven’t done a very good job of keeping up. As Google points out in a blog post, many modern video codecs offer better compression than audio ones.

    To solve this problem, we have created Lyra, a high-quality, very low-bitrate speech codec that makes voice communication available even on the slowest networks. To do this, we’ve applied traditional codec techniques while leveraging advances in machine learning (ML) with models trained on thousands of hours of data to create a novel method for compressing and transmitting voice signals.

    Google is now open sourcing Lyra in an effort to help it gain widespread acceptance.

    As part of our efforts to make the best codecs universally available, we are open sourcing Lyra, allowing other developers to power their communications apps and take Lyra in powerful new directions. This release provides the tools needed for developers to encode and decode audio with Lyra, optimized for the 64-bit ARM android platform, with development on Linux. We hope to expand this codebase and develop improvements and support for additional platforms in tandem with the community.

    Lyra is currently in beta, with Google wanting feedback from developers as soon as possible.

    We are releasing Lyra as a beta version today because we wanted to enable developers and get feedback as soon as possible. As a result, we expect the API and bitstream to change as it is developed. All of the code for running Lyra is open sourced under the Apache license, except for a math kernel, for which a shared library is provided until we can implement a fully open solution over more platforms. We look forward to seeing what people do with Lyra now that it is open sourced. Check out the code and demo on GitHub, let us know what you think, and how you plan to use it!

  • Amazon Lookout for Metrics Now Available

    Amazon has made Lookout for Metrics available to all its customers, providing a way to monitor and diagnose business anomalies.

    A preview version of Lookout for Metrics was first launched at re:Invent 2020. The service uses machine learning to analyze a business’ operations and automatically detect and diagnose anomalies. An anomaly could signal a potential business opportunity, a technical issue, or any one of the myriad challenges a data-driven business faces.

    “We’re excited to announce the general availability of Amazon Lookout for Metrics, a new service that uses machine learning (ML) to automatically monitor the metrics that are most important to businesses with greater speed and accuracy,” write Ankita Verma and Chris King for AWS. “The service also makes it easier to diagnose the root cause of anomalies like unexpected dips in revenue, high rates of abandoned shopping carts, spikes in payment transaction failures, increases in new user sign-ups, and many more. Lookout for Metrics goes beyond simple anomaly detection. It allows developers to set up autonomous monitoring for important metrics to detect anomalies and identify their root cause in a matter of few clicks, using the same technology used by Amazon internally to detect anomalies in its metrics—all with no ML experience required.”
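
    The service requires no ML experience, but the underlying idea (flagging metric values that deviate sharply from recent history) can be sketched in a few lines. The rolling z-score check below, run on made-up revenue numbers, is a simplified stand-in, not Amazon's algorithm or the Lookout for Metrics API.

    ```python
    # Simplified stand-in for metric anomaly detection (not Amazon's algorithm or
    # the Lookout for Metrics API): flag points that deviate sharply from the
    # statistics of the preceding observations.
    from statistics import mean, stdev

    def find_anomalies(values, window=7, threshold=3.0):
        """Return indices whose value is more than `threshold` standard
        deviations away from the mean of the preceding `window` points."""
        anomalies = []
        for i in range(window, len(values)):
            history = values[i - window:i]
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(values[i] - mu) / sigma > threshold:
                anomalies.append(i)
        return anomalies

    # Hypothetical daily revenue with an unexpected dip on the last day
    daily_revenue = [100, 102, 98, 101, 99, 103, 100, 97, 102, 40]
    print(find_anomalies(daily_revenue))  # -> [9]
    ```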

    Lookout for Metrics connects to 19 of the most popular data sources, including Amazon Simple Storage Service (Amazon S3), Amazon Relational Database Service (Amazon RDS), Amazon Redshift, Amazon CloudWatch, Salesforce, Marketo and Zendesk.

  • Adobe Photoshop Uses AI to Increase Image Pixels by Four Times

    Adobe is bringing Super Resolution to Photoshop, using artificial intelligence to increase an image’s pixels by four times.

    Everyone has seen a TV show where a character says those two infamous words: “Zoom in.” As anyone who’s actually worked with digital photos can attest, photo magnification is limited by the size and quality of the image. If a picture doesn’t have the necessary pixel density, it can only be enlarged so much before it becomes pixelated and loses its clarity.

    Beyond serving as an inaccurate staple of virtually every police procedural, this limitation comes up in a number of practical situations. Printing a photo taken with a low-resolution camera is a perfect example, as it takes a higher-resolution image to look good in print.

    “Super Resolution is also a pixels project, but of a different kind,” writes Adobe’s Eric Chan. “Imagine turning a 10-megapixel photo into a 40-megapixel photo. Imagine upsizing an old photo taken with a low-res camera for a large print. Imagine having an advanced ‘digital zoom’ feature to enlarge your subject. There’s more goodness to imagine, but we’re getting ahead of ourselves. To understand Super Resolution properly, we must first talk about Enhance Details.”

    Super Resolution uses AI to intelligently expand a photo, keeping it crisp with minimal artifacts.

    “The term ‘Super Resolution’ refers to the process of improving the quality of a photo by boosting its apparent resolution,” continues Chan. “Enlarging a photo often produces blurry details, but Super Resolution has an ace up its sleeve — an advanced machine learning model trained on millions of photos. Backed by this vast training set, Super Resolution can intelligently enlarge photos while maintaining clean edges and preserving important details.”
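
    The "four times the pixels" figure comes from doubling both width and height. The sketch below shows that arithmetic using a naive Pillow bicubic resize, which matches the pixel count but not the learned detail recovery Chan describes; the file name is hypothetical and the code is not Adobe's model.

    ```python
    # Illustration of the "4x pixels" arithmetic: doubling width and height
    # quadruples the pixel count. Pillow's bicubic resize is a naive baseline,
    # not Adobe's machine-learning Super Resolution model.
    from PIL import Image

    src = Image.open("photo.jpg")            # hypothetical low-resolution input
    w, h = src.size

    upscaled = src.resize((w * 2, h * 2), resample=Image.BICUBIC)

    print(f"input:  {w * h / 1e6:.1f} MP")
    print(f"output: {upscaled.width * upscaled.height / 1e6:.1f} MP")  # ~4x the input
    upscaled.save("photo_2x.jpg")
    ```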

    Super Resolution is now available in Camera Raw 13.2 and will soon be included in Lightroom and Lightroom Classic.

  • Microsoft and HPE Partner to Deliver AI and Edge Computing to Space

    Microsoft and Hewlett Packard Enterprise (HPE) have partnered to bring AI and edge computing to the International Space Station (ISS).

    HPE has been working with NASA to create a commercial, off-the-shelf supercomputer for use on the ISS. The Spaceborne Computer-2 (SBC-2) is built on the HPE Edgeline Converged Edge system, which is designed for the harshest edge environments — a description space certainly fits.

    Microsoft and HPE are working to connect the SBC-2 to Azure, to enable cloud computing, along with AI and machine learning development in the ultimate edge environment.

    “HPE and Microsoft are collaborating to further accelerate space exploration by delivering state-of-the art technologies to tackle a range of data processing needs while in orbit. By bringing together HPE’s Spaceborne Computer-2, which is based on the HPE Edgeline Converged Edge system for advanced edge computing and AI capabilities, with Microsoft Azure to connect to the cloud, we are enabling space explorers to seamlessly transmit large data sets to and from Earth and benefit from an edge-to-cloud experience. We look forward to collaborating with Microsoft on their Azure Space efforts, which share our vision to accelerate discovery and help make breakthroughs to support life and sustainability in future, extended human missions to space.” —Dr. Mark Fernandez, Solutions Architect of Converged Edge Systems at HPE and Principal Investigator for Spaceborne Computer-2

    Microsoft first announced its Azure Space program in October, as a concerted effort to bring cloud computing to space.

    “Today’s announcement advances Azure Space in bringing Azure AI and machine learning to new space missions and emphasizes the true power of hyperscale computing in support of edge scenarios—connecting anyone, anywhere to the cloud,” writes Tom Keane, Corporate Vice President, Azure Global, Microsoft Azure. “Our collaboration with HPE is just the first step in an incredible journey and will provide researchers and students access to these insights and technologies, inspiring the next generation of those who wish to invent with purpose, on and off the planet.”

  • AI May Improve Smart Speakers by Detecting Voice Direction

    Researchers at Carnegie Mellon University have created a machine learning model to detect the direction of an incoming voice.

    Current smart speakers and voice-activated devices rely on activation keywords to listen and then respond to commands. While largely effective, this approach can create problems when multiple devices use the same keyword, or when someone uses that keyword in normal conversation.

    The researchers at Carnegie Mellon University set out to solve this by using machine learning to tackle the problem of addressability. In other words, they want to help devices know whether a command was directed at them specifically.

    The research aimed to recreate elements of human-to-human communication, specifically how people can address a specific person in a crowded room. If computers can learn directional conversation, it will become much easier to control devices and interact with them as naturally as with another person.

    “In this research, we explored the use of speech as a directional communication channel,” write (PDF) researchers Karan Ahuja, Andy Kong, Mayank Goel and Chris Harrison. “In addition to receiving and processing spoken content, we propose that devices also infer the Direction of Voice (DoV). Note this is different from Direction of Arrival (DoA) algorithms, which calculate from where a voice originated. In contrast, DoV calculates the direction along which a voice was projected.

    “Such DoV estimation innately enables voice commands with addressability, in a similar way to gaze, but without the need for cameras. This allows users to easily and naturally interact with diverse ecosystems of voice-enabled devices, whereas today’s voice interactions suffer from multi-device confusion.”
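
    In practice, DoV estimation comes down to a classifier that decides, from acoustic features of the received audio, whether the speech was projected toward the device. The sketch below is a generic stand-in using scikit-learn on synthetic features, not the authors' model or feature set.

    ```python
    # Generic stand-in for Direction of Voice (DoV) classification: train a
    # classifier to decide whether speech was projected toward the device.
    # The features here are random placeholders, not the authors' acoustic
    # features, and scikit-learn is an assumed dependency.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Synthetic dataset: 200 utterances x 16 acoustic features (e.g., band energies).
    # Speech aimed at the device is simulated with slightly higher feature values.
    X_toward = rng.normal(loc=1.0, scale=1.0, size=(100, 16))
    X_away = rng.normal(loc=0.0, scale=1.0, size=(100, 16))
    X = np.vstack([X_toward, X_away])
    y = np.array([1] * 100 + [0] * 100)  # 1 = spoken toward the device

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

    print("held-out accuracy:", clf.score(X_test, y_test))
    # Accuracy well above chance means a device could ignore commands that were
    # not projected toward it, which is the addressability problem described above.
    ```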

    This research is an important development and could have a profound impact on how humans interact with everything from smart speakers to more advanced AIs.

  • More Evidence Apple Is Working On Its Own Search Engine

    More evidence suggests that Apple is working on its own search engine to challenge Google’s dominance.

    In many ways, the new report doesn’t add much to previous reports from August. When the news first broke, Coywolf founder Jon Henshaw noticed a web crawler called Applebot crawling his website. At the same time, AppleInsider noticed changes in how iOS 14 handled search compared with iOS 13.

    Now the Financial Times says multiple search experts have noticed a steep increase in Applebot activity. The publication also points to Apple’s poaching of John Giannandrea, Google’s head of search, two and a half years ago. He currently serves as Apple’s senior vice president of Machine Learning and AI Strategy, putting him in a strategic position to have a significant impact on the company’s efforts.

    Other experts believe Apple has the technical expertise to build a successful search engine.

    “They [Apple] have a credible team that I think has the experience and the depth, if they wanted to, to build a more general search engine,” said Bill Coughran, Google’s former engineering chief, according to FT.

    The timing may ultimately work in Apple’s favor as the company’s deal with Google, to make its search engine the default on iOS, is one of the factors in the government’s antitrust lawsuit against Google.

  • Microsoft Unlocks Power Of 5G For Telecommunications

    Jason Zander, Executive Vice President, Microsoft Azure, announces new collaborations with the telecommunications industry that will unlock the power of 5G and bring cloud and edge closer than ever:

    The increasing demand for always-on connectivity, immersive experiences, secure collaboration, and remote human relationships is pushing networks to their limits, while the market is driving down price. The network infrastructure must ensure operators are able to optimize costs and gain efficiencies, while enabling the development of personalized and differentiated services. To address the requirements of rolling out 5G, operators will face strong challenges, including high capital expenditure (CapEx) investments, an increased need for scale, automation, and secure management of the massive volume of data it will generate.

    Today starts a new chapter in our close collaboration with the telecommunications industry to unlock the power of 5G and bring cloud and edge closer than ever. We’re building a carrier-grade cloud and bringing more Microsoft technology to the operator’s edge. This, in combination with our developer ecosystem, will help operators to future proof their networks, drive down costs, and create new services and business models.

    In Microsoft, operators get a trusted partner who will empower them to unlock the potential of 5G. Enabling them to offer a range of new services such as ultra-reliable low-latency connectivity, mixed reality communications services, network slicing, and highly scalable IoT applications to transform entire industries and communities.

    By harnessing the power of Microsoft Azure, on their edge, or in the cloud, operators can transition to a more flexible and scalable model, drive down infrastructure cost, use AI and machine learning (ML) to automate operations and create service differentiation. Furthermore, a hybrid and hyper-scale infrastructure will provide operators with the agility they need to rapidly innovate and experiment with new 5G services on a programmable network.

    More specifically, we will further support operators as they evolve their infrastructure and operations using technologies such as software-defined networking, network function virtualization, and service-based architectures. We are bringing to market a carrier-grade platform for edge and cloud to support the operator’s goals to future proof their infrastructure with disaggregated, and containerized network architectures. Recognizing that not everything will move to the public cloud, we will meet operators where they are—whether at the enterprise edge, the network edge, or in the cloud.

    Our approach is built on the acquisitions of industry leaders in cloud-native network functions—Affirmed Networks and Metaswitch and on the development of Azure Edge Zones. By bringing together hundreds of engineers with deep experience in the telecommunications space, we are ensuring that our product development process is catering to the most relevant networking needs of the operators. We will leverage the strengths of Microsoft to extend and enhance the current capabilities of industry-leading products such as Affirmed’s 5G core and Metaswitch’s UC portfolio. These capabilities, combined with Microsoft’s broad developer ecosystem and deep business to business partnership programs, provide Microsoft with a unique ability to support the operators as they seek to monetize the capabilities of their networks.

    Your customer, your service, powered by our technology

    As we build out our partnerships with different operators, it is clear to us that there will be different approaches to technology adoption based on business needs. Some operators may choose to adopt the Azure platform and select a varied mix of virtualized or containerized network function providers. We also have operators that have requested complete end-to-end services as components for their offers. As a part of these discussions, many operators have identified points of control that are important to them, for example:

    • Control over where a slice, network API, or function is presented to the customer.
    • Definition of where and how traffic enters and exits their network.
    • Visibility and control over where key functions are executed for a given customer scenario.
    • Configuration and performance parameters of core network functions.

    As we build out Azure for Operators, we recognize the importance of ensuring operators have the control and visibility they require to manage their unique industry requirements. To that end, here is how our assets come together to provide operators with the platform they need.

    Communication Service Providers

    Interconnect

    It starts with the ability to interconnect deeply with the operator’s network around the globe. We have one of the largest networks that connect with operators at more than 170 points of presence and over 20,000 peering connections around the globe, putting direct connectivity within 25 miles of 85 percent of the world’s GDP. More than 200 operators have already chosen to integrate with the Azure network through our ExpressRoute service, enabling enterprises and partners to link their corporate networks privately and securely to Azure services. We also provide additional routes to connect to the service through options as varied as satellite connectivity and TV White Space spectrum.

    Edge platform

    This reach helps us to supply operators with cloud computing options that meet the customer wherever those capabilities are needed: at the enterprise edge, the network edge, the network core, or in the cloud. The various form factors, optimized to support the location in which they are deployed, are supported by the Azure platform—providing virtual machine and container services with a common management framework, DevOps support, and security control.

    Network functions

    We believe in an open platform that leverages the strengths of our partners. Our solutions are a combination of virtualized and containerized services as composable functions, developed by us and by our Network Equipment Provider partners, to support operators’ services such as the Radio Access Network, Mobile Packet Core, Voice and Interconnect services, and other network functions.

    Technology from Affirmed and Metaswitch Networks will provide services for Mobile Packet Core, Voice, and Interconnect services.

    Cloud solutions and Azure IoT for operators

    By exposing these services through the Azure platform, we can combine them with other Azure capabilities such as Azure Cognitive Services (used by more than 1 million developers processing more than 10 billion transactions per day), Azure Machine Learning, and Azure IoT, to bring the power of AI and automation to the delivery of network services. These capabilities, in concert with our partnerships with OSS and BSS providers, enable us to help operators streamline and simplify operations, create new services to monetize the network, and gain greater insights into customer behavior.

    In IoT our primary focus is simplifying our solutions to accelerate what we can do together from the edge to the cloud. We’ve done so by creating a platform that provides simple and secure provisioning of applications and devices to Azure cloud solutions through Azure IoT Central, which is the fastest and easiest way to build IoT solutions at scale. IoT Central enables customers to provision an IoT app in seconds, customize it in hours, and go to production the same day. IoT Plug and Play dramatically simplifies all aspects of IoT device support and provides devices that “just work” with any solution and is the perfect complement to achieve speed and simplicity through IoT Central. Azure IoT Central also gives the Mobile Operator the opportunity to monetize more of the IoT solution and puts them in a position to be a re-seller of the IoT Central application platform through their own solutions. Learn more about using Azure IoT for operators here.

    Cellular connectivity is increasingly important for IoT solutions and represents a vast and generational shift for mobile operators as the share of devices in market shifts towards the enterprise. We will continue our deep partnership with operators to enable fast and efficient app development and deployment, which is critical to success at the edge. This will help support scenarios such as asset tracking across industries, manufacturing and distribution of smart products, and responsive supply chains. It will also help support scenarios where things are geographically dispersed, such as smart city automation, utility monitoring, and precision agriculture.

    Where we go next

    Our early engagement with partners such as Telstra and Etisalat helped us shape this path. We joined the 5G Open Innovation Lab as the founding public cloud partner to accelerate enterprise startups and launch new innovations to foster new 5G use cases with even greater access to leading-edge networks. The Lab will create long-term, sustainable developer and commercial ecosystems that will accelerate the delivery of exciting new capabilities at the edge, including pervasive IoT intelligence and immersive mixed reality. And this is just the beginning. I invite you to learn more about our solutions and watch the series of videos we have curated for you.

  • Xanadu Releases Photonic Quantum Cloud

    Xanadu has released its photonic quantum computing platform and plans to double its power every six months.

    Quantum computing is ‘the next big thing’ in computing, promising to usher in an all-new era. Quantum computing will fundamentally change multiple industries, including artificial intelligence, machine learning, cryptography and more.

    Multiple companies are now making quantum computing available to customers, but Xanadu’s approach differs from that of some competitors. Instead of relying on quantum computers that must be cooled below the temperature of deep space, Xanadu’s photonic quantum processors can run at room temperature.

    “We believe that photonics offers the most viable approach towards universal fault-tolerant quantum computing with Xanadu’s ability to network a large number of quantum processors together. We are excited to provide this ecosystem, a world-first for both quantum and classical photonics,” said Christian Weedbrook, Xanadu Founder and CEO. “Our architecture is new, designed to scale-up like the Internet versus traditional mainframe-like approaches to quantum computing.”

    Unlike traditional computing, which revolves around binary bits with a value of either 0 or 1, quantum computing revolves around qubits. Rather than being strictly binary, qubits can exist in both states simultaneously. The more qubits a quantum computer has, the more powerful it is. Xanadu believes it can double the power of its processors every six months.

    “We believe we can roughly double the number of qubits in our cloud systems every six months,” said Weedbrook. “Future machines will also offer improved performance and new features like increased qubit connectivity, unlocking more applications for customers.”
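
    Doubling every six months compounds quickly, as the small worked example below shows; the 8-qubit starting point is assumed for illustration and is not an actual Xanadu roadmap.

    ```python
    # Worked example of "doubling every six months" (starting size assumed for
    # illustration, not a Xanadu roadmap): qubit count after n six-month periods
    # is initial_qubits * 2**n, i.e. roughly 4x per year.
    initial_qubits = 8  # assumed starting size

    for years in range(0, 4):
        periods = years * 2                  # two doublings per year
        qubits = initial_qubits * 2 ** periods
        print(f"after {years} year(s): {qubits} qubits")

    # after 0 year(s): 8 qubits
    # after 1 year(s): 32 qubits
    # after 2 year(s): 128 qubits
    # after 3 year(s): 512 qubits
    ```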

  • AI Company Leaks 2.5 Million Medical Records

    Cense AI has inadvertently leaked 2.5 million detailed medical records of auto accident victims.

    Cense AI is an “SaaS platform that helps business in implementing Intelligent Process Automation, intelligent bots to automate tasks without disrupting current system.” The company specializes in “simplifying implementation of business process automation using Machine Learning and Natural Language Processing.”

    According to security researcher Jeremiah Fowler, working in collaboration with Secure Thoughts, Cense AI left two folders with medical data exposed on the same IP address as the company’s website. The two folders contained a combined “2.5 million records that appeared to contain sensitive medical data and PII (Personally Identifiable Information). The records included names, insurance records, medical diagnosis notes, and much more.” In addition, the data included information on clinics, insurance providers and accounts.

    This is a massive breach on the part of a company trusted with the most sensitive type of customer information, and serves as a cautionary example of what can happen when outside companies are given access to medical data.

    What’s more, to date, there has not been any public statement, blog post or explanation on Cense’s part. In other words, this appears to be another case study in how not to handle a data breach.

  • Google Uses Machine Learning to Decipher Hieroglyphs

    Google has unveiled Fabricius, a tool that uses machine learning to decipher and translate ancient Egyptian hieroglyphs.

    Ancient Egypt has captured the imagination of people the world over for centuries. Hieroglyphs offer a glimpse into that world, but they are notoriously difficult to decipher. It has traditionally involved using volumes of books to check and cross-check symbols. Google’s new tool is designed to make the process easier, and opens hieroglyphs to the public at large.

    “Fabricius includes the first digital tool – that is also being released as open source to support further developments in the study of ancient languages – that decodes Egyptian hieroglyphs built on machine learning,” writes Chance Coughenour, Google Arts & Culture Program Manager. “Specifically, Google Cloud’s AutoML technology, AutoML Vision, was used to create a machine learning model that is able to make sense of what a hieroglyph is. In the past you would need a team of Data Scientists, a lot of code, and plenty of time, now AutoML Vision allows developers to easily train a machine to recognize all kinds of objects.”

    Fabricius is available in English and Arabic and stands to revolutionize the study of Egyptian hieroglyphs and history. This represents another arena where machine learning is making a valuable impact.

  • Verizon Chooses Google Cloud Contact Center AI

    Google Cloud has scored a major win as Verizon has chosen its Contact Center AI to help power its customer service experience.

    Google has developed a reputation for offering one of the most AI- and machine learning-friendly cloud platforms. This latest deal lends credence to that reputation, as Verizon looks to use Google’s conversational AI to shorten wait times and improve customer service.

    Verizon plans to deploy the technology to assist both customers and live agents. For customers, the conversational AI will help them get to the right agent faster, without having to go through menu prompts. They’ll be able to simply speak or type their request and the AI will route them to the agent or department that can best assist. For the live agents, the AI will contribute by retrieving documentation and other materials that can help the agent better assist the customer.
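
    Under the hood, this kind of routing boils down to classifying the customer's request into an intent and mapping that intent to a destination. The sketch below is a deliberately simple keyword-based stand-in, not Google's Contact Center AI or Verizon's deployment.

    ```python
    # Deliberately simple stand-in for conversational routing (not Google's
    # Contact Center AI): classify a typed or transcribed request into an intent,
    # then map the intent to a queue or department.
    INTENT_KEYWORDS = {
        "billing": ["bill", "charge", "payment", "invoice"],
        "technical_support": ["outage", "signal", "slow", "not working"],
        "upgrades": ["upgrade", "new phone", "plan change"],
    }

    ROUTES = {
        "billing": "Billing department",
        "technical_support": "Tier 1 technical support",
        "upgrades": "Sales and upgrades team",
        "unknown": "General agent queue",
    }

    def route_request(utterance: str) -> str:
        """Return the destination queue for a customer utterance."""
        text = utterance.lower()
        for intent, keywords in INTENT_KEYWORDS.items():
            if any(keyword in text for keyword in keywords):
                return ROUTES[intent]
        return ROUTES["unknown"]

    print(route_request("My internet has been really slow since yesterday"))
    # -> Tier 1 technical support
    ```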

    “Verizon’s commitment to innovation extends to all aspects of the customer experience,” said Shankar Arumugavelu, global CIO & SVP, Verizon. “These customer service enhancements, powered by the Verizon collaboration with Google Cloud, offer a faster and more personalized digital experience for our customers while empowering our customer support agents to provide a higher level of service.”

    “We’re proud to work with Verizon to help enable its digital transformation strategy,” said Thomas Kurian, CEO of Google Cloud. “By helping Verizon reimagine the customer experience through our AI and ML expertise, we can create an experience that not only delights consumers, but also helps differentiate Verizon in the market.”

    This is a big win for Verizon’s customers and Google Cloud, and will help Google further its reputation in the AI field.

  • Google and NVIDIA Partner to Bring A100 to the Cloud

    Google Cloud has become the first cloud provider to offer NVIDIA’s new A100 Tensor Core GPU.

    NVIDIA made a name for itself making high-powered graphics processing units (GPUs). While many people associate GPUs with gaming and video, ever since NVIDIA’s GeForce 8 series, released in 2006, GPUs have been making inroads into areas traditionally ruled by the central processing unit (CPU). Because GPUs can process large quantities of data in parallel, they are ideal for offloading intensive operations, including machine learning and artificial intelligence workloads.

    The new A100 is designed with this in mind. Built on the NVIDIA Ampere architecture, the A100 boasts a 20x performance improvement for machine learning and inference computing. This represents the single biggest generational leap ever for NVIDIA.

    “Google Cloud customers often look to us to provide the latest hardware and software services to help them drive innovation on AI and scientific computing workloads,” said Manish Sainani, director of Product Management at Google Cloud. “With our new A2 VM family, we are proud to be the first major cloud provider to market NVIDIA A100 GPUs, just as we were with NVIDIA T4 GPUs. We are excited to see what our customers will do with these new capabilities.”

    This will likely be a big hit with Google’s customer base, especially since machine learning support is an area where Google Cloud is particularly strong.

  • MIT Removes AI Training Dataset Over Racist Concerns

    MIT has removed a massive dataset after finding it contained racist and misogynistic terms, as well as offensive images.

    Artificial intelligence (AI) and machine learning (ML) systems use datasets as training data. MIT created the Tiny Images dataset, which contained some 80 million images.

    In an open letter, Bill Freeman and Antonio Torralba, both professors at MIT, as well as NYU professor Rob Fergus, outlined issues they became aware of, and the steps they took to resolve them.

    “It has been brought to our attention that the Tiny Images dataset contains some derogatory terms as categories and offensive images,” write the professors. “This was a consequence of the automated data collection procedure that relied on nouns from WordNet. We are greatly concerned by this and apologize to those who may have been affected.

    “The dataset is too large (80 million images) and the images are so small (32 x 32 pixels) that it can be difficult for people to visually recognize its content. Therefore, manual inspection, even if feasible, will not guarantee that offensive images can be completely removed.

    “We therefore have decided to formally withdraw the dataset. It has been taken offline and it will not be put back online. We ask the community to refrain from using it in future and also delete any existing copies of the dataset that may have been downloaded.”
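
    Since the professors trace the problem to automated collection keyed on WordNet nouns, one possible mitigation is sketched below: screening candidate labels against a blocklist before any images are scraped. It uses NLTK's WordNet interface with a placeholder blocklist and is not MIT's pipeline.

    ```python
    # Hedged sketch of screening candidate labels before automated image
    # collection (not MIT's pipeline): enumerate noun synsets from WordNet and
    # drop any lemma that appears on a curated blocklist. Requires NLTK and the
    # WordNet corpus.
    import nltk
    from nltk.corpus import wordnet as wn

    nltk.download("wordnet", quiet=True)

    # Placeholder blocklist; a real one would be curated with domain experts.
    BLOCKLIST = {"slur_example", "offensive_example"}

    def safe_noun_labels(limit=1000):
        """Yield lowercase noun lemmas that are not on the blocklist."""
        count = 0
        for synset in wn.all_synsets(pos="n"):
            for lemma in synset.lemma_names():
                label = lemma.lower().replace("_", " ")
                if label not in BLOCKLIST:
                    yield label
                    count += 1
                    if count >= limit:
                        return

    labels = list(safe_noun_labels(limit=10))
    print(labels)  # first few screened labels that could seed image queries
    ```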

    This has been an ongoing issue with AI and ML training data, with some experts warning that it is far too easy for these systems to inadvertently develop biases based on the data. With this announcement, MIT appears to be doing its part to rectify that issue.

  • Facebook Beefs Up Messenger Security

    Facebook has announced significant new measures to increase the security of Messenger, as well as combat predators and scammers.

    Tech giants have increasingly been under pressure to do more to protect their users, especially minors. Social media and online platforms have become the tool of choice for many individuals looking to prey on children. Even adults are often faced with a plethora of security risks and potential scams.

    In a blog post, Jay Sullivan, Director of Product Management, Messenger Privacy and Safety, outlines a number of new features the company is implementing.

    Facebook is moving its messaging service to end-to-end encryption, which will provide a far greater degree of privacy. At the same time, the move has required the company to come up with new ways to protect its users, since end-to-end encryption prevents it from reading or monitoring messages. Instead, Facebook has turned to machine learning to analyze patterns of behavior that could indicate something is amiss.

    “Keeping minors safe on our platforms is one of our greatest responsibilities,” writes Sullivan. “Messenger already has special protections in place for minors that limit contact from adults they aren’t connected to, and we use machine learning to detect and disable the accounts of adults who are engaging in inappropriate interactions with children. Our new feature educates people under the age of 18 to be cautious when interacting with an adult they may not know and empowers them to take action before responding to a message.”
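
    Because end-to-end encryption hides message content, detection like this has to lean on behavioral signals rather than the messages themselves. The sketch below scores an account from made-up metadata features with arbitrary thresholds; it illustrates the general approach, not Facebook's models.

    ```python
    # Illustrative sketch (not Facebook's models): flag suspicious accounts from
    # behavioral metadata alone, since end-to-end encryption hides message content.
    # The features and thresholds are made up for the example.
    from dataclasses import dataclass

    @dataclass
    class AccountActivity:
        account_age_days: int
        minors_contacted_last_week: int      # distinct minors messaged
        friend_requests_to_minors: int
        mutual_friends_with_contacts: float  # average mutual-friend count

    def risk_score(a: AccountActivity) -> float:
        """Return a heuristic risk score in [0, 1] from metadata signals."""
        score = 0.0
        if a.account_age_days < 30:
            score += 0.2                      # very new accounts are riskier
        score += min(a.minors_contacted_last_week / 20, 0.4)
        score += min(a.friend_requests_to_minors / 10, 0.3)
        if a.mutual_friends_with_contacts < 1:
            score += 0.1                      # no shared social context
        return min(score, 1.0)

    suspicious = AccountActivity(
        account_age_days=5,
        minors_contacted_last_week=18,
        friend_requests_to_minors=9,
        mutual_friends_with_contacts=0.2,
    )
    print(round(risk_score(suspicious), 2))  # high score -> show a safety notice
    ```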

    Facebook is also using new safety notices as a way to better educate people and help them spot scams sooner. Overall, these features are welcome news from Facebook and should go a long way toward protecting its users.