As the TikTok negotiations run into trouble, President Trump has indicated he has no intention of extending the September 15 ban deadline.
TikTok has drawn the ire of US officials amid repeated accusations of privacy and security violations. Officials have labeled the app a threat to national security, and Trump announced a ban scheduled to go into effect September 15.
Microsoft and Walmart joined forces and emerged as early frontrunners to buy the social media platform’s US operations. Oracle also expressed interest, with Trump speaking favorably about a possible deal. Complicating matters, the Chinese government altered its export rules for the first time in years to restrict the sale of technologies that include AI, a change that directly impacts TikTok’s recommendation algorithm.
It’s unclear whether a deal can be reached, although Trump has made it clear no extension will be granted.
Xanadu has released its photonic quantum computing platform and plans to double its power every six months.
Quantum computing is widely billed as ‘the next big thing,’ promising to usher in an all-new era of computing and to fundamentally change industries including artificial intelligence, machine learning and cryptography.
Multiple companies are now making quantum computing available to customers, but Xanadu’s approach differs from that of some competitors. Instead of building quantum computers that must be cooled below the temperature of deep space, Xanadu’s photonic quantum processors can run at room temperature.
“We believe that photonics offers the most viable approach towards universal fault-tolerant quantum computing with Xanadu’s ability to network a large number of quantum processors together. We are excited to provide this ecosystem, a world-first for both quantum and classical photonics,” said Christian Weedbrook, Xanadu Founder and CEO. “Our architecture is new, designed to scale-up like the Internet versus traditional mainframe-like approaches to quantum computing.”
Unlike traditional computing, which revolves around binary bits with a value of either 0 or 1, quantum computing revolves around qubits. Rather than being confined to a single state, a qubit can exist in a superposition of both states simultaneously. The more qubits a quantum computer has, the more powerful it is. Xanadu believes it can double the power of its processors every six months.
“We believe we can roughly double the number of qubits in our cloud systems every six months,” said Weedbrook. “Future machines will also offer improved performance and new features like increased qubit connectivity, unlocking more applications for customers.”
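As a rough illustration of the superposition idea described above, here is a minimal Python sketch using NumPy rather than Xanadu’s actual photonic tooling. It represents a single qubit as a two-element state vector and shows why adding qubits grows a machine’s computational state space so quickly.

```python
import numpy as np

# A qubit state is a normalized vector of two complex amplitudes:
# |psi> = a|0> + b|1>, with |a|^2 + |b|^2 = 1.
# This equal superposition is "both 0 and 1 at once" until measured.
psi = np.array([1, 1], dtype=complex) / np.sqrt(2)

# Measurement probabilities follow the Born rule: P(x) = |amplitude_x|^2.
probabilities = np.abs(psi) ** 2
print(f"P(0) = {probabilities[0]:.2f}, P(1) = {probabilities[1]:.2f}")

# An n-qubit register needs 2**n amplitudes, which is why each added qubit
# roughly doubles the size of the state a quantum machine can work with.
n = 8
print(f"A {n}-qubit register tracks {2**n} amplitudes")
```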
Microsoft has unveiled a new tool designed to help fight deepfakes in real time.
‘Deepfake’ is a term used to describe photos or videos that have been altered by artificial intelligence (AI), in contrast to so-called ‘shallowfakes,’ media altered using traditional methods and software. As the technology continues to evolve and improve, identifying a deepfake will become increasingly difficult, far harder than spotting a shallowfake.
Security experts have been warning that deepfakes will have major ramifications across business, politics and life in general. A deepfake released at the right moment could have profound implications for an election, ruin a person’s career or leave someone vulnerable to blackmail. As a result, companies have been scrambling to come up with ways to reliably identify altered photos and videos.
Microsoft’s latest effort, Microsoft Video Authenticator, is a major step in that direction. Video Authenticator is designed to “analyze a still photo or video to provide a percentage chance, or confidence score, that the media is artificially manipulated,” write Tom Burt, Corporate Vice President of Customer Security & Trust and Eric Horvitz, Chief Scientific Officer.
When analyzing videos, the software can score each frame in real time by looking at elements the naked eye cannot see, such as blending boundaries, fading and greyscale elements.
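To illustrate the general idea of per-frame scoring (not Microsoft’s actual Video Authenticator, whose internals are not publicly documented at this level), here is a hedged Python sketch that walks a video frame by frame with OpenCV and applies a placeholder manipulation detector; the file name and scoring function are illustrative stand-ins.

```python
import cv2  # pip install opencv-python

def manipulation_score(frame) -> float:
    """Placeholder for a trained detector.

    A real detector would be a model trained to spot blending boundaries,
    fading and greyscale artifacts; here we simply return a dummy score.
    """
    return 0.0  # hypothetical: replace with model inference

def score_video(path: str):
    """Yield a (frame_index, confidence) pair for every frame in the video."""
    capture = cv2.VideoCapture(path)
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        yield index, manipulation_score(frame)
        index += 1
    capture.release()

if __name__ == "__main__":
    for idx, score in score_video("example.mp4"):
        print(f"frame {idx}: manipulation confidence {score:.2f}")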
It’s safe to say that deepfakes will likely be the next digital arms race between those trying to promote them and those working to identify and fight against them. Fortunately, companies like Microsoft are pulling out all the stops to stay ahead of the threat.
IBM is turning to artificial intelligence (AI) to simulate crowds in the first-ever, spectator-less US Open.
Because of the COVID-19 pandemic, the United States Tennis Association (USTA), like many sports organizations, has had to adapt to the safety measures required to contain the virus. As the USTA’s digital partner for 29 years, IBM set to work recreating the stadium experience using AI.
“Among the challenges the USTA faced this year was how to recreate the sound of fans inside the stadium,” reads IBM’s blog post. “IBM leveraged its AI Highlights technology to recreate crowd sounds gleaned from hundreds of hours of video footage captured during last year’s tournament. In past years, AI Highlights used Watson to digest video footage and rank the excitement level of each clip to compile a highlight reel in near-real time and classify specific crowd reactions, including the crowd roar, to give each clip a crowd reaction score. This insight will be used this year to dynamically serve up those sounds based on similar play from last year. The AI Sounds tools will be available to the production teams in-stadium and at ESPN.”
In addition to recreating the stadium atmosphere, IBM is using AI to help fans connect more deeply with the action. For example, Match Insights with Watson Discovery uses natural language processing (NLP) and natural language generation to take statistics and other structured data and translate it into narrative form, helping fans get up to speed and become “experts” on the players and matchups.
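As a rough sketch of the template-based end of natural language generation (far simpler than Watson Discovery’s pipeline, and using made-up player statistics and field names), structured match data can be turned into readable narrative like this:

```python
# Hypothetical structured stats for a matchup; names and numbers are illustrative.
matchup = {
    "player_a": {"name": "Player A", "aces_per_match": 11.2, "first_serve_pct": 64},
    "player_b": {"name": "Player B", "aces_per_match": 6.8, "first_serve_pct": 71},
}

def insight(stats: dict) -> str:
    """Translate structured numbers into a short narrative sentence."""
    a, b = stats["player_a"], stats["player_b"]
    bigger_server = a if a["aces_per_match"] > b["aces_per_match"] else b
    steadier = a if a["first_serve_pct"] > b["first_serve_pct"] else b
    return (
        f"{bigger_server['name']} brings the bigger serve, averaging "
        f"{bigger_server['aces_per_match']} aces per match, while "
        f"{steadier['name']} has been steadier, landing {steadier['first_serve_pct']}% "
        "of first serves."
    )

print(insight(matchup))
```

A production system would generate these sentences from far richer data and vary the language, but the core move, mapping structured fields into narrative templates, is the same.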
“COVID-19 brought disruption to sports as a whole, and the ability of fans to experience live sporting events has been heavily impacted in 2020. At the same time, the pandemic accelerated the need for engaging technologies using AI and underpinned by a scalable hybrid cloud,” said Noah Syken, Vice President of Sports & Entertainment Partnerships, IBM. “As the technology partner to the USTA, we transformed our offerings to meet tennis fans where they are this year – experiencing the sport through the US Open digital properties everywhere.”
Reports indicate Apple may be working on its own search engine, a move that would have far-reaching repercussions.
Apple and Google have a long-running deal whereby Google pays Apple billions to be the default search engine on iOS devices. Apple has alternately used Bing and Google to power Siri’s search features over the years. With iOS 14 and iPadOS 14, however, Siri will bypass the Google search results page and take the user directly to the site, which would seem to indicate Apple is beginning to distance itself from third-party search engines.
In addition, there has been a noticeable uptick in Apple job postings calling for search engineers. Coywolf founder Jon Henshaw has noticed Apple’s web crawler, Applebot, has been crawling his website daily. Apple has also updated its information on Applebot.
There are a number of things Apple could gain by unveiling its own search engine. First and foremost, it would give Apple the ability to deliver on its promise to protect user privacy. No matter how much Apple works to protect privacy on users’ devices and within its own services, when users turn to Google or Bing they give up much of that privacy to those companies and their partners. Apple could build a search engine that features the same industry-leading privacy as its other products.
In addition, as Henshaw points out, Apple could customize the experience in a typical Apple way, providing something unique that offers an entirely new take on search. Whatever Apple is working on, it may well upend the search industry as we know it.
Ford, Bedrock and Bosch are set to address one of the more irritating elements of a night out—valet parking.
The three companies are using Ford Escape test vehicles in combination with Bedrock’s Assembly Garage and Bosch smart infrastructure. The goal is to create an automated valet system that will make it possible for drivers to simply walk away while an artificial intelligence (AI) handles parking the car.
Once perfected, the system will let a driver use a smartphone app to send the vehicle to park itself, or to other vehicle services such as a charging station or car wash. In addition to labor cost savings, it’s estimated an automated system will allow garage operators to accommodate as many as 20 percent more vehicles. The system is designed to be retrofitted to existing garage structures or integrated into new construction.
“We are continually searching for opportunities to expand our leading suite of Ford Co-Pilot360 driver-assist technologies that help people drive more confidently and we believe automated valet parking technology holds great promise,” said Ken Washington, chief technology officer at Ford Motor Company. “Our work with Bosch and Bedrock also aligns with our vision for the future, which includes increasingly automated vehicles that are more aware of their surroundings while requiring less on-board computing to help improve design, packaging and affordability.”
“We strive to be at the forefront of parking and mobility initiatives in Detroit because we recognize the importance of interconnectivity between real estate and mobility,” said Heather Wilberger, chief information officer at Bedrock. “In addition to drastically reducing park time, we see this solution as the first step to bringing automated parking to our city, providing the ultimate convenience for our tenants, visitors, neighborhoods and residents.”
“For Bosch, automated valet parking brings together our deep cross-domain experience in mobility and building technologies to deliver a smart infrastructure solution that improves everyday life,” said Mike Mansuetti, president of Bosch in North America. “This technology enables consumers to see the benefit of highly automated technology as the vehicle handles a task such as parking in a garage.”
This is another example of the many ways AI promises to make everyday tasks easier.
“Just 20 years ago the one really big player was Microsoft,” says former Google CEO Eric Schmidt. “Microsoft has now been joined by four other very large companies each of which is run cleverly but in a different way. We benefit from that brutal competition. The reason it’ll be different in 20 years is because artificial intelligence will create a whole bunch of new platform winners.”
Eric Schmidt, former CEO of Google, who is launching a new podcast today, discusses how AI will spawn a whole new batch of tech platform winners:
AI Will Create a Bunch Of New Platform Winners
Just 20 years ago the one really big player was Microsoft. Microsoft has now been joined by four other very large companies each of which is run cleverly but in a different way. They have different ways in which they win and they lose. We benefit from that brutal competition. Look at what you have in a mobile phone. The competition between Android and iOS and Apple phones and the Android ecosystem has brought a supercomputer into your pocket. That’s going to continue.
The reason it’ll be different in 20 years is because artificial intelligence will create a whole bunch of new platform winners. Remember that the way this works is the US establishes global platforms that everyone else uses. We are forgetting that it is US leadership at the platform level, whether it’s Google or Apple or what have you, that has brought us to this point where we have multi-trillion-dollar corporations that are leading the market.
Be Careful About Breaking Up The App Store Model
I don’t know enough about the Epic Games/Apple dispute because I left the board a decade ago. However, the important thing about the app stores is that they provide some level of security, branding, and protection for the user. In China, for example, Google does not have a single app store because of regulatory issues. So there have always been questions about whether the app you’re using is certified, and so forth.
I would be careful about breaking up the app store model, as it does provide some security and protection. We can quibble about how they’re managed, but the important thing is that when you use an app store you can rely on what’s on it being what it’s represented to be. Just think of all those viruses that you are not getting as a result of the app store.
IBM and AT&T are deepening their 5G and edge computing partnership, with the goal of accelerating the business world’s digital transformation.
The two companies are working at IBM’s Thomas J. Watson Research Center, where they are “deploying AT&T’s 5G and multi-access edge computing (MEC)—a private cellular, low latency solution that can process data on a business site’s premise, instead of routing traffic over public networks.”
The two companies have a years-long history of working together, a partnership that allowed them both to respond to the coronavirus pandemic by supporting customers’ work-from-home needs. Now they are building on that track record, working on new ways to enable remote work, especially in industries where it has not yet been possible.
“Combining 5G with edge computing, for example, could open the door to breakthroughs in robotics and the ability to perform intricate machine work from remote locations,” write Mo Katibeh, AT&T Chief Product and Platform Officer and Steve Canepa, IBM General Manager of the Global Communications Sector. “One of the tasks we are exploring at Yorktown Heights envisions enabling a researcher to remotely adjust locations of IoT network devices in a laboratory. Another envisions allowing a systems administrator to remotely rewire machines in a data center to provide a more agile environment.”
At the same time, the two companies are working to help employees safely return to work.
“We are addressing workplace safety in a system driven by IBM AI and made feasible at scale by AT&T LTE and 5G mobile network technologies,” continue Katibeh and Canepa. “That includes AT&T MEC. This solution from AT&T enables the development and deployment of new capabilities that rely on ultra-low latency, higher security and privacy, improved bandwidth conservation and greater control of data.
“The low latency of 5G allows for remote operations in industrial settings, helping to keep workers from harmful situations. And if any dangerous situations do arise, edge computing is designed to let businesses capture and analyze data quickly without extra storage or processing on a central cloud.
“That same processing ability can help employees look after their health with devices to monitor their temperature, oxygen levels and blood pressure with instantaneous feedback. Hospitals can even take advantage of similar advances to make their current infrastructures more reliable, while implementing advances like wireless surgery, robotics, virtual reality simulations.”
IBM and AT&T’s partnership is poised to leverage emerging technologies to help companies now and in the future.
An AI-powered “pilot” went undefeated in five rounds of simulated dogfighting with a top Air Force pilot.
Fans of the Terminator franchise are familiar with Skynet, the artificial intelligence that turned on humanity, nearly wiping it out and sending the Terminators to eliminate human targets. Critics of AI have long claimed that it represents one of the greatest existential threats humanity has ever faced.
The latest development is not likely to assuage any concerns, as an AI performed flawlessly in simulated combat against a top Air Force pilot.
“The event was the culmination of an effort that the Defense Advanced Research Projects Agency (DARPA) began last year as an adjacent project to the larger Air Combat Evolution (ACE) program, which is focused on exploring how artificial intelligence and machine learning may help automate various aspects of air-to-air combat,” writes The Drive’s Joseph Trevithick.
Eight companies originally took part in the competition. After their AIs flew against each other, Heron Systems came out on top and advanced to the final round, where its AI beat the human pilot.
Should AI ever become a Skynet-like threat, it appears it won’t have any problem controlling the sky.
At Tech Day, China’s version of GM’s EV Day, GM has announced major advancements coming to its vehicles in China.
China is GM’s largest market. As a result, the company took the opportunity to outline major initiatives that it plans to bring to fruition in the Chinese market.
Among those advancements is 5G connectivity, which the company plans to implement in 2022. GM plans to have all Cadillac and most Chevrolet and Buick models connected by then, with connected services provided via over-the-air updates.
GM also plans for 40% of its new vehicle launches in China to be electrified models within the next five years. The company has already vowed to invest more than $20 billion in electric and automated vehicles by 2025. GM plans on bringing together 5G, AI, big data and smart cities to help make its plans a reality.
“As GM’s largest market and a global center of innovation, China will play a crucial role in making our vision a reality,” said Mary Barra, chairman and CEO of GM. “With our joint venture partner SAIC, we are blending global insights and scale with local market expertise to redefine what is possible for our customers and for society.”
Cense AI has inadvertently leaked 2.5 million detailed medical records of auto accident victims.
Cense AI is a “SaaS platform that helps business in implementing Intelligent Process Automation, intelligent bots to automate tasks without disrupting current system.” The company specializes in “simplifying implementation of business process automation using Machine Learning and Natural Language Processing.”
According to security researcher Jeremiah Fowler, working in collaboration with Secure Thoughts, Cense AI left two folders of medical data exposed on the same IP address as the company’s website. The two folders contained a combined “2.5 million records that appeared to contain sensitive medical data and PII (Personally Identifiable Information). The records included names, insurance records, medical diagnosis notes, and much more.” The data also included information on clinics, insurance providers and accounts.
This is a massive breach on the part of a company trusted with the most sensitive type of customer information, and serves as a cautionary example of what can happen when outside companies are given access to medical data.
What’s more, to date, there has not been any public statement, blog post or explanation on Cense’s part. In other words, this appears to be another case study in how not to handle a data breach.
Google has announced it is investing $450 million in security company ADT, in a multi-year partnership that will give Google a 6.6% stake.
The deal is a win for both companies. Google benefits from ADT’s security expertise, not to mention its 20,000 professionals, who will soon be selling and installing Nest devices and services. ADT, on the other hand, benefits from Google’s AI-driven smart home developments.
“Over time, Nest’s devices, powered by Google’s machine learning capabilities will enhance ADT’s security monitoring and become the cornerstone of ADT’s smart home offering,” writes Rishi Chandra, Vice President and GM, Nest. “The goal is to give customers fewer false alarms, more ways to receive alarm events, and better detection of potential incidents inside and around the home. It will also provide people with more helpful notifications that make everyday life more convenient, like package detection. ADT customers will also have access to Nest Aware, a service that keeps people informed about important events at home, including intelligent alerts and event history recording for up to 30 days.”
Google has repeatedly been in the news lately, with its recent Fitbit deal under intense scrutiny in the US and the EU. Regulators are concerned with how Google will use the data it acquires from the wearables maker. It’s possible this scrutiny was a motivating factor in Google investing in ADT, rather than attempting to buy it or a competing firm outright. Whatever the motivation, it’s evident Google has high hopes for what the partnership will bring.
“Together, we aim to create the next generation of the helpful home—based on new security solutions that will better protect and connect people to their homes and families,” writes Chandra.
The pandemic has highlighted the business opportunities for new and emerging technologies, showing that even technologies typically thought of as gaming tech can have legitimate business uses. Virtual and augmented reality technologies have given people the ability to travel, learn, and do business in a unique way throughout the pandemic, and now these technologies are converging in a new way to form extended reality, or XR technology.
Extended reality is a way to describe the mixed reality platforms that are gaining popularity. These platforms can be used for work, travel, and exercise. For work, Frame allows users to host meetings in a virtual space with as many as 20 participants for a more realistic-feeling meeting experience. Oculus Quest allows users to travel virtually, visiting such landmarks as Chernobyl, Machu Picchu, Antarctica, and even ancient cities as they once were.
When it comes to sports, extended reality is generating even more realistic experiences for users. This is important especially now during the pandemic when people are stuck at home and unable to play their typical competitive sports. The WHO has urged people to get physical activity on a daily basis, which seems to grow more difficult as the pandemic wears on.
Extended reality sports include things like mountain climbing, golf, tennis, and more. To make the play seem more realistic, real sporting equipment is used and is outfitted with sensors that allow the player to experience the game as it is intended to be experienced.
This technology is presenting brand new business opportunities, as well. The popularity of such gaming platforms is growing, and by 2023 the market for extended reality is expected to reach $18 billion.
Americans are increasingly being forced to take part in activities at home, but even before the pandemic they were choosing to spend more time at home than previous generations. This has boosted demand for XR sports, which give people an opportunity to take part in communal physical activities from the safety of their own homes.
This technology uses motion tracking, artificial intelligence, and biomechanical modeling to achieve realistic gameplay. Sensors on the sporting equipment, coupled with sensors on or watching the user, track movements to simulate the player’s part in the gameplay, and machine learning adapts to each player to present more realistic competition.
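As a hedged sketch of how equipment-mounted sensor readings might be turned into gameplay (the field names, sample values and physics constants here are illustrative, not taken from any specific XR product), a virtual shot could be derived from a swing like this:

```python
from dataclasses import dataclass

@dataclass
class SwingSample:
    """One reading from a hypothetical sensor mounted on a racket or club."""
    timestamp_s: float
    speed_mps: float       # head speed in meters per second
    face_angle_deg: float  # how open/closed the face is at this instant

def simulate_shot(samples: list[SwingSample]) -> dict:
    """Estimate a virtual ball flight from the sample closest to impact.

    A real engine would run a biomechanical model; this just maps the
    peak-speed sample to a carry distance and a left/right offset.
    """
    impact = max(samples, key=lambda s: s.speed_mps)
    carry_m = impact.speed_mps * 2.5          # toy scaling factor
    offline_m = impact.face_angle_deg * 0.6   # toy dispersion factor
    return {"carry_m": round(carry_m, 1), "offline_m": round(offline_m, 1)}

swing = [
    SwingSample(0.00, 10.2, 1.5),
    SwingSample(0.05, 38.7, 2.1),  # near impact
    SwingSample(0.10, 22.4, 1.8),
]
print(simulate_shot(swing))
```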
The possibilities for this technology extend well beyond the home. Gaming centers using it are already starting to pop up in the real world. Golf and tennis simulators are some of the most popular and prevalent, and eventually there will be several different kinds of virtual sports offerings.
Players will be able to enter a socially distanced pod and play a realistic version of their favorite competitive sport using real sports equipment. Competition happens virtually online and leaderboards keep track of who is performing the best and where they are located.
Gameplay is realistic, as is the feel of competition, something currently missing in many home-based virtual reality games. As the technology progresses, the possibilities are endless. Learn more about the future of XR sports in the infographic below.
Google has unveiled Fabricius, a tool that uses machine learning to decipher and translate ancient Egyptian hieroglyphs.
Ancient Egypt has captured the imagination of people the world over for centuries. Hieroglyphs offer a glimpse into that world, but they are notoriously difficult to decipher; the work has traditionally involved using volumes of books to check and cross-check symbols. Google’s new tool is designed to make the process easier and to open hieroglyphs to the public at large.
“Fabricius includes the first digital tool – that is also being released as open source to support further developments in the study of ancient languages – that decodes Egyptian hieroglyphs built on machine learning,” writes Chance Coughenour, Google Arts & Culture Program Manager. “Specifically, Google Cloud’s AutoML technology, AutoML Vision, was used to create a machine learning model that is able to make sense of what a hieroglyph is. In the past you would need a team of Data Scientists, a lot of code, and plenty of time, now AutoML Vision allows developers to easily train a machine to recognize all kinds of objects.”
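Fabricius itself was built with Google Cloud’s AutoML Vision, but the underlying idea, training an image classifier on labeled glyph images, can be sketched in a few lines of generic Keras. The dataset path, image size and class count below are placeholders, and this small CNN is nothing like AutoML’s searched architecture; it simply shows the concept.

```python
import tensorflow as tf

NUM_GLYPH_CLASSES = 100  # placeholder: one class per hieroglyph sign

# Placeholder directory of labeled glyph crops, one subfolder per sign.
train_data = tf.keras.utils.image_dataset_from_directory(
    "glyphs/train", image_size=(64, 64), batch_size=32
)

# A small convolutional network mapping glyph images to sign classes.
model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=(64, 64, 3)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(NUM_GLYPH_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_data, epochs=10)
```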
Fabricius is available in English and Arabic and stands to revolutionize the study of Egyptian hieroglyphs and history. This represents another arena where machine learning is making a valuable impact.
In further evidence robot domination may yet be in our future, a study has shown robot stock analysts outperform their human counterparts, leading to better investments.
There has been a fair amount of hand-wringing about what role robots will play in the future, and whether mankind will be able to control a true artificial intelligence. World domination aside, the economic possibilities and threats robots pose are just beginning to be understood. While many have believed it would largely be physical jobs, such as manufacturing, that would be taken over by robots, recent studies have shown that high-paying, white collar jobs are also at risk.
Now a study by Indiana University professors Braiden Coleman, Kenneth J. Merkley and Joseph Pacelli has demonstrated that robots even make better stock analysts than humans.
“First, Robo-Analysts collectively produce a more balanced distribution of buy, hold, and sell recommendations than do human analysts, which suggests that they are less subject to behavioral biases and conflicts of interest,” reads the study abstract. “Second, consistent with automation facilitating a greater scale of research production, Robo-Analysts revise their reports more frequently than human analysts and also adopt different production processes. Their revisions rely less on earnings announcements, and more on the large volumes of data released in firms’ annual reports. Third, Robo-Analysts’ reports exhibit weaker short-window return reactions, suggesting that investors do not trade on their signals. Importantly, portfolios formed based on the buy recommendations of Robo-Analysts appear to outperform those of human analysts, suggesting that their buy calls are more profitable. Overall, our results suggest that Robo-Analysts are a valuable, alternative information intermediary to traditional sell-side analysts.”
The study is a fascinating read on the potential of robots to help revolutionize another industry, and especially one that many may not think of as a candidate for robot takeover.
Microsoft is continuing its shift to AI for its MSN editorial team, laying off additional personnel months after its initial layoffs.
Several weeks ago, Microsoft laid off roughly 50 contractors working for the company’s MSN property. The contractors were involved in producing news, identifying trending stories and optimizing content. Microsoft decided to use AI instead. At the time, full-time, in-house employees were thought to be safe.
According to the latest report, however, it appears that safety was short-lived. GeekWire is reporting that “the company is now laying off an unspecified number of direct employees from MSN, including some senior leaders on the Microsoft News editorial team, according to people familiar with the situation.”
While the news does not bode well for MSN’s news staff, at least some believe it’s the beginning of the end for MSN as well. One editor who was previously replaced told GeekWire that relying on an algorithm for the news would likely be the undoing of MSN.
Microsoft has rolled out Together mode to Teams in an effort to significantly improve video conferencing.
As social distancing and remote work have become standard, video conferencing and communication tools have become critical for individuals and businesses alike. Whether it’s corporate teams keeping in touch, churches conducting services or individuals keeping up with family and friends, Teams, Zoom, FaceTime, Skype and others have become lifelines.
At the same time, video fatigue has taken its toll, with widespread reports of video conferencing being exhausting and draining on its users. Microsoft has set out to address that with its new Together mode.
Together mode uses AI segmentation tech to create the illusion that everyone is together in the same place, such as a meeting room, auditorium or coffee shop. This creates a much more familiar and comfortable experience, as opposed to the traditional grid placement.
“We’re social creatures, and the social and spatial awareness systems in the brain can finally function more naturally” within Together mode, says Microsoft’s Jaron Lanier.
The end result is a more engaging experience “by helping you focus on other people’s faces and body language and making it easier to pick up on the non-verbal cues that are so important to human interaction. It’s great for meetings in which multiple people will speak, such as brainstorms or roundtable discussions, because it makes it easier for participants to understand who is talking.”
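A heavily simplified sketch of the segment-and-composite idea behind this kind of feature (assuming you already have a person cutout and a binary mask from some segmentation model; nothing here reflects Teams’ actual pipeline) looks like this:

```python
import numpy as np

def composite(person: np.ndarray, mask: np.ndarray,
              background: np.ndarray, x: int, y: int) -> np.ndarray:
    """Paste a segmented person onto a shared background at (x, y).

    person:     HxWx3 image of one participant
    mask:       HxW array, 1.0 where the person is, 0.0 elsewhere
    background: the shared virtual room all participants are placed into
    """
    h, w = mask.shape
    region = background[y:y + h, x:x + w].astype(float)
    cutout = person.astype(float)
    alpha = mask[..., None]  # broadcast mask across color channels
    background[y:y + h, x:x + w] = (alpha * cutout + (1 - alpha) * region).astype(np.uint8)
    return background

# Toy data: a gray "auditorium" and a solid-color stand-in for a participant.
room = np.full((480, 640, 3), 90, dtype=np.uint8)
person = np.full((120, 80, 3), 200, dtype=np.uint8)
mask = np.ones((120, 80))
frame = composite(person, mask, room, x=280, y=300)
print(frame.shape)
```

Repeating this for every participant, with seats assigned in a shared scene, is what produces the illusion of everyone sitting in the same room.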
With no immediate end in sight to the pandemic, improvements like this will go a long way toward helping people stay productive and connected.
Google Cloud has become the first cloud provider to offer NVIDIA’s new A100 Tensor Core GPU.
NVIDIA made a name for itself making high-powered graphics processing units (GPUs). While many people associate GPUs with gaming and video, since NVIDIA’s GeForce 8 series, released in 2006, GPUs have been making inroads into areas traditionally ruled by the central processing unit (CPU). Because of their ability to handle large quantities of data in parallel, GPUs are ideal for offloading intensive operations, including machine learning and artificial intelligence workloads.
The new A100 is designed with this in mind. Built on the NVIDIA Ampere architecture, the A100 boasts a 20x performance improvement for machine learning and inference computing. This represents the single biggest generational leap ever for NVIDIA.
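As a minimal sketch of what “offloading parallel math to the GPU” looks like in practice, assuming a CUDA-capable machine with CuPy installed (this is generic GPU usage, not anything specific to the A100 or to Google Cloud’s A2 VMs):

```python
import numpy as np
import cupy as cp  # pip install cupy; requires an NVIDIA GPU and CUDA

# Build a large matrix multiply: the kind of dense, highly parallel math
# that dominates machine-learning training and inference.
a_cpu = np.random.rand(4096, 4096).astype(np.float32)
b_cpu = np.random.rand(4096, 4096).astype(np.float32)

# Move the data to GPU memory and run the same operation there.
a_gpu = cp.asarray(a_cpu)
b_gpu = cp.asarray(b_cpu)
c_gpu = a_gpu @ b_gpu
cp.cuda.Stream.null.synchronize()  # wait for the GPU to finish

# Copy the result back to the host when the CPU needs it.
c_cpu = cp.asnumpy(c_gpu)
print(c_cpu.shape)
```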
“Google Cloud customers often look to us to provide the latest hardware and software services to help them drive innovation on AI and scientific computing workloads,” said Manish Sainani, director of Product Management at Google Cloud. “With our new A2 VM family, we are proud to be the first major cloud provider to market NVIDIA A100 GPUs, just as we were with NVIDIA T4 GPUs. We are excited to see what our customers will do with these new capabilities.”
This will likely be a big hit with Google’s customer base, especially since machine learning support is an area where Google Cloud is particularly strong.
MIT has removed a massive dataset after finding it contained racist, misogynistic terms and offensive images.
Artificial intelligence (AI) and machine learning (ML) systems use datasets as training data. MIT created the Tiny Images dataset, which contained some 80 million images.
In an open letter, Bill Freeman and Antonio Torralba, both professors at MIT, as well as NYU professor Rob Fergus, outlined issues they became aware of, and the steps they took to resolve them.
“It has been brought to our attention that the Tiny Images dataset contains some derogatory terms as categories and offensive images,” write the professors. “This was a consequence of the automated data collection procedure that relied on nouns from WordNet. We are greatly concerned by this and apologize to those who may have been affected.
“The dataset is too large (80 million images) and the images are so small (32 x 32 pixels) that it can be difficult for people to visually recognize its content. Therefore, manual inspection, even if feasible, will not guarantee that offensive images can be completely removed.
“We therefore have decided to formally withdraw the dataset. It has been taken offline and it will not be put back online. We ask the community to refrain from using it in future and also delete any existing copies of the dataset that may have been downloaded.”
This has been an ongoing issue with AI and ML training data, with some experts warning that it is far too easy for these systems to inadvertently develop biases based on the data. With this announcement, MIT is clearly doing its share to try to rectify the issue.
Lyft has begun testing its autonomous vehicles on California roads again.
When the pandemic caused people to stay at home, Lyft’s public testing program was suspended. As California has eased restrictions, however, the company has resumed its testing program.
“We’re excited to announce that our autonomous vehicles (AVs) are back on the road — and that during the shelter in place we continued to make progress by doubling down on simulation,” reads the company’s blog post. “Simulation is an important part of our testing program, enabling us to test beyond road miles.”
At the same time, the company downplayed any impact the temporary hiatus had.
“While road testing remains a critical aspect of our program, simulation allows us to leverage existing on-road data in many more ways, and multiple times over, to help improve and validate our software,” continues the blog. “With Lyft’s unique data and Level 5’s advancements in simulation, we believe we’re reducing the road miles needed by several orders of magnitude. Our focus on simulation over the last few months allowed us to maintain Level 5’s momentum toward our goal to improve access to safe and reliable transportation for millions of Lyft riders everywhere.”
This is good news for Lyft and the autonomous vehicle industry in general. Especially in view of the pandemic and social distancing, the demand for autonomous, driverless vehicles may see an increase for reasons few would ever have expected.
IBM CEO Arvind Krishna has written Congress to inform them the company no longer offers general purpose facial recognition and analysis software.
Krishna wrote the letter in the context of responsible use of technology, such as facial recognition, on the part of law enforcement. The letter is a direct response to the death of George Floyd and others, as well as the accusations of police brutality that have led to mass protests. Channeling IBM’s long history of support for civil rights, Krishna cited the letter Thomas J. Watson, Jr., then president of IBM, sent to employees in 1953, vowing to hire individuals who were qualified, “regardless of race, color or creed.”
Krishna encourages Congress to enact laws to tackle police misconduct, including a federal registry that would track instances of misconduct. The letter also asks Congress to review and revisit use-of-force policies, as well as the qualified immunity police officers enjoy.
The letter also makes clear that IBM will do its part to prevent its technology from being used in a way that is inconsistent with the company’s values. Krishna writes:
IBM no longer offers general purpose IBM facial recognition or analysis software. IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency. We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies.
Artificial Intelligence is a powerful tool that can help law enforcement keep citizens safe. But vendors and users of AI systems have a shared responsibility to ensure that AI is tested for bias, particularly when used in law enforcement, and that such bias testing is audited and reported.
Finally, national policy also should encourage and advance uses of technology that bring greater transparency and accountability to policing, such as body cameras and modern data analytics techniques.
The move is a bold one for IBM, as facial recognition is already proving to be a valuable technology. In the wake of recent events, however, it’s likely IBM won’t be the only company to take such a stand.