WebProNews

Category: AI Trends

Artificial Intelligence Trends

  • Atlas VPN: Half of IT Pros See AI as an Existential Threat to Humanity

    Atlas VPN: Half of IT Pros See AI as an Existential Threat to Humanity

AI may be the “next big thing,” but nearly half of IT professionals see it as an existential threat to humanity.

    AI is one of the most controversial types of new technology. While its proponents see it as a solution to a wide array of problems, critics see it contributing to existing biases under the best of circumstances and destroying humanity under the worst.

    According to a new report by Atlas VPN, 49% of IT professionals fall into the latter group, seeing it as a threat to humanity. In addition, 55% see it creating major privacy issues.

    At the same time, however, IT pros are not oblivious to AI’s potential benefits. 74% see its value for task automation, freeing up time and resources for strategic planning and initiatives.

    “The AI we have today can benefit businesses by making various tasks easier,” writes Atlas VPN’s Vilius Kardelis. “However, that does not guarantee it is always positive. AI is a tool with potentially harmful consequences if used in the wrong hands. Despite this, it appears unlikely that it will pose an existential threat to humanity in the near future.”

    With so many in the IT industry so wary of AI, it’s clear there is still a long way to go until the necessary safeguards are developed and put into place.

  • White House Introduces AI Bill of Rights Blueprint

    White House Introduces AI Bill of Rights Blueprint

    The White House has introduced a blueprint for an AI Bill of Rights in an effort to address some of the biggest issues with the tech.

Artificial intelligence is poised to revolutionize countless industries and possibly society itself. The tech has alternately been hailed as mankind’s savior or the greatest existential threat ever faced. Even when not concerned about world-ending outcomes, many critics still worry about inequality, bias, and privacy.

    The White House is looking to address the major concerns with AI’s growing reach via its AI Bill of Rights. The Bill of Rights includes five guiding principles, excerpts of which are included below:

    Safe and Effective Design

    “Automated systems should be developed with consultation from diverse communities, stakeholders, and domain experts to identify concerns, risks, and potential impacts of the system. Systems should undergo pre-deployment testing, risk identification and mitigation, and ongoing monitoring that demonstrate they are safe and effective based on their intended use, mitigation of unsafe outcomes including those beyond the intended use, and adherence to domain-specific standards.”

    Algorithmic Discrimination Protections

    “Designers, developers, and deployers of automated systems should take proactive and continuous measures to protect individuals and communities from algorithmic discrimination and to use and design systems in an equitable way.”

    Data Privacy

    “Designers, developers, and deployers of automated systems should seek your permission and respect your decisions regarding collection, use, access, transfer, and deletion of your data in appropriate ways and to the greatest extent possible; where not possible, alternative privacy by design safeguards should be used.”

    Notice and Explanation

    “Designers, developers, and deployers of automated systems should provide generally accessible plain language documentation including clear descriptions of the overall system functioning and the role automation plays, notice that such systems are in use, the individual or organization responsible for the system, and explanations of outcomes that are clear, timely, and accessible.”

    Human Alternatives, Consideration, and Fallback

    “You should be able to opt out from automated systems in favor of a human alternative, where appropriate. Appropriateness should be determined based on reasonable expectations in a given context and with a focus on ensuring broad accessibility and protecting the public from especially harmful impacts. In some cases, a human or other alternative may be required by law.”

    Overall, the White House’s AI Bill of Rights is a major step forward and could be a boon for AI development.

  • Nvidia, Arm, and Intel Collaborate on AI Standard

    Nvidia, Arm, and Intel Collaborate on AI Standard

Three of the biggest names in the semiconductor industry are collaborating on a new AI interchange format standard.

    AI is considered one of the biggest technological advancements of the modern era. In order to reach its potential, however, companies and researchers need to have common standards for hardware and software interoperability.

Nvidia, Arm, and Intel have authored the FP8 Formats for Deep Learning white paper, proposing an “8-bit floating point (FP8) specification.” The specification will help optimize memory usage, thereby accelerating AI development. It covers both AI training and inference and is natively supported in Nvidia’s Hopper architecture.
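
    To make the format concrete, the following is a minimal sketch, in Python, of how rounding a value to an FP8-style format with 4 exponent bits and 3 mantissa bits (the paper’s E4M3 variant) can be simulated. It is illustrative only and ignores the special-case encodings (NaN handling, saturation) that the actual specification defines.

      import numpy as np

      def quantize_fp8_e4m3(x, exp_bits=4, man_bits=3, bias=7):
          # Round a float to the nearest value representable with the given
          # exponent/mantissa budget. Special values and saturation are ignored.
          if x == 0:
              return 0.0
          sign = 1.0 if x > 0 else -1.0
          mag = abs(x)
          e = np.clip(np.floor(np.log2(mag)), 1 - bias, (2 ** exp_bits - 2) - bias)
          step = 2.0 ** (e - man_bits)  # spacing between representable values
          return float(sign * np.round(mag / step) * step)

      print(quantize_fp8_e4m3(0.3))  # 0.3125, the nearest representable value

    Halving the bits per value roughly doubles how many weights and activations fit in the same memory and bandwidth, which is where the optimization comes from.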

“NVIDIA, Arm, and Intel have published this specification in an open, license-free format to encourage broad industry adoption,” writes Shar Narasimhan, a director of product marketing at Nvidia. “They will also submit this proposal to IEEE.

“By adopting an interchangeable format that maintains accuracy, AI models will operate consistently and performantly across all hardware platforms, and help advance the state of the art of AI.”

  • AI Chip Revenue Set for a 30% CAGR, Will Hit $130 Billion by 2030

    AI Chip Revenue Set for a 30% CAGR, Will Hit $130 Billion by 2030

    AI chip revenue is set for stellar growth in the coming years, hitting $130 billion by 2030.

    AI is poised to be one of the most revolutionary advancements in the history of technology, with the prospect of transforming countless industries. Behind the AI revolution are the chips that power it and the companies that make them. According to GlobalData, the industry is poised for a compound annual growth rate (CAGR) of 30%, going from $12 billion a year in 2021 to $130 billion in 2030.
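
    Those two endpoints are consistent with the stated growth rate, as a quick back-of-the-envelope check in Python shows:

      revenue_2021 = 12e9   # $12 billion (GlobalData, 2021)
      cagr = 0.30           # 30% compound annual growth rate
      years = 2030 - 2021   # nine growth periods

      projected = revenue_2021 * (1 + cagr) ** years
      print(f"${projected / 1e9:.0f} billion")  # ~$127 billion, i.e. roughly $130B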

    “This rapid expansion will be driven by chips specifically optimized for AI with their share of the combined micro-component and digital logic semiconductor market set to increase from less than 10% in 2021 to at least 40% by 2030,” says Josep Bori, Research Director at GlobalData Thematics Intelligence.

    “Deep learning neural networks continue to expand their capabilities, now including face recognition, medical diagnosis, and self-driving cars,” Bori continues. “This has been led by an improvement in the mathematical models used and the exponential growth in the model sizes and training data sets.”

    GlobalData says most of the AI chip development is currently coming from redesigning existing microprocessors to better handle AI loads. At the same time, there is the possibility that a revolutionary leap in technology, such as quantum computing or neuromorphic chips, could lead to major advancements in the field.

    Despite the promising outlook for the AI chip industry, GlobalData warns that ongoing tension between the US and China could put future prospects in jeopardy.

    “In our view, the ongoing trade dispute between China and the US has negative implications for the global progress of AI semis technology,” added Bori. “We believe China will play a leading role in AI, due to its leadership in AI software and IoT technology, and its progress in low end chips manufacturing. However, unless China solves its access to extreme ultraviolet (EUV) lithography technology, currently indirectly prevented by US sanctions, it will likely struggle in AI in the datacenter, and most likely autonomous vehicles.”

  • Verizon Connect Makes Fleet Management Easier with AI Dashcam

    Verizon Connect Makes Fleet Management Easier with AI Dashcam

    Verizon Connect has unveiled its latest innovation, AI Dashcam, in an effort to help fleet managers improve safety.

    Fleet managers must juggle a number of responsibilities, not the least of which is helping their drivers be as safe as possible. Verizon has unveiled AI Dashcam in the hopes it helps fleet managers gain a better insight into potential safety issues and makes in-cab coaching easier.

    “The new AI Dashcam includes advanced features that are designed to boost safety and provide greater insight for fleet managers,” said Erin Cave, Verizon Connect director of product management. “The new technology provides a significant step to help our customers future-proof their fleets.”

    The Advanced Driver Assistance Systems (ADAS) will track a slew of safety factors, including driver fatigue, phone usage, distracted driving, tailgating, interactions with pedestrians and cyclists, and more. Verizon says the features also include:

    • Modular design: Easy to upgrade from a single road-facing camera to dual-camera functionality with the driver-facing add-on.
    • Smaller and sleeker: A single device for both road- and driver-facing footage that is easy to self-install with fewer cables and wires.
    • Future-proof: The hardware will enable valuable enhancements in the near future.
    • Privacy-by-design: The new hardware comes with lens caps for privacy when video recording is unwanted or not needed.
    • In-cab audio alerts: Drivers will be alerted to unsafe driving events in real time.

  • Nukes and AI: Eric Schmidt Believes Both Should Be Regulated by Treaties

    Nukes and AI: Eric Schmidt Believes Both Should Be Regulated by Treaties

    Former Google CEO Eric Schmidt believes AI should be in the same category as nuclear weapons and regulated by similar treaties.

    AI is poised to be one of the most revolutionary technologies in mankind’s history. As such, viewpoints about what changes AI will bring are all over the map, with some believing AI will help save mankind and others believing it represents the single biggest existential threat to its survival.

    Schmidt was speaking at the Aspen Security Forum when he discussed his role at Google and the developments that were happening 20 years ago, saying he was very “naive about the impact of what we were doing.”

After acknowledging how powerful information can be, he went on to describe the difficulty of creating a trust/no-trust equilibrium with technologies like AI. During the Cold War, for example, nations developed a “no surprise” rule, wherein the world’s powers would notify each other when conducting a missile test. This eliminated the risk of a misunderstanding triggering World War III.

    Schmidt is concerned that there is no such system in place for how AI is developed or used, leading to the very real possibility of an escalation in the harmful use of AI.

    “We don’t have anyone working on that, and yet AI is that powerful,” Schmidt concluded.

  • Google Fires Engineer Who Claimed Its AI Achieved Sentience

    Google Fires Engineer Who Claimed Its AI Achieved Sentience

    Google has fired Blake Lemoine, a software engineer who made headlines for claiming the company’s AI had achieved sentience.

Blake Lemoine worked as an engineer at Google on the company’s LaMDA chatbot technology. Based on the conversations he had with it, Lemoine became increasingly convinced that LaMDA had achieved sentience and self-awareness. Others, both inside and outside the company, were not convinced.

    “Our minds are very, very good at constructing realities that are not necessarily true to a larger set of facts that are being presented to us,” said Margaret Mitchell, who led Google’s AI ethics team before being fired. “I’m really concerned about what it means for people to increasingly be affected by the illusion.”

    After placing Lemoine on leave in June, Google has now fired him for violating the company’s policies.

    “It’s regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information,” a Google spokesperson said in an email to Reuters.

    Lemoine’s case illustrates the complex challenges associated with AI development. Many individuals tend to look for intelligence and sentience where it doesn’t exist. Conversely, the ongoing effort to combat false positives could, theoretically, impede recognition of true sentience if and when it emerges.

    More than anything, the entire situation with Lemoine demonstrates why companies like Google should be investing in top-tier AI ethicists instead of firing them.

  • Rule by Machine: China Turns to AI to Run Its Judicial System

    Rule by Machine: China Turns to AI to Run Its Judicial System

    Like something straight out of science fiction, China is turning to artificial intelligence (AI) to help run its judicial system, putting humans at the mercy of machines.

China has been working on updating its court system since 2016, when each court was effectively siloed from all others. The government forced courts to digitize and feed their data into a central system, giving an AI the ability to analyze and learn from 100,000 court cases a day.

Beijing’s Supreme Court now believes the AI is ready to take the next step, informing the country’s courts in an update that the AI must be consulted on every case going forward. If a judge disagrees with the AI’s assessment, the judge must submit their dissent in writing, according to the South China Morning Post.

    Critics say judges are deferring to the AI — even if its recommendation is based on less suitable case law or references — to avoid the hassle of challenging it.

    “It is too early to sell the smart court system as a panacea,” said Sun Yubao, a judge with the People’s Court of Yangzhou Economic and Technological Development Zone in Jiangsu province.

    “We need to reduce the public’s high expectation of artificial intelligence and defend the role of a judge. AI cannot do everything,” he wrote in a paper in Legality Vision.

    His sentiments were echoed by Zhang Linghan, professor of law at the China University of Political Science and Law in Beijing. Warning that “humans will gradually lose free will with an increasing dependency on technology,” Zhang expressed concern about the possibility of humans being subject to machines and AIs.

    Those concerns seem borne out by the fact that an AI prosecutor is already charging people with crimes in big cities based on its evaluation of evidence.

    The other elephant in the room is the reliance on Big Tech to make the whole system work. China’s lawmakers have a complicated history with tech companies and their executives, yet the entire AI system puts a tremendous amount of power in the hands of those programming the AI.

Only time will tell if the system lives up to expectations. In the meantime, it still sounds like something out of a science fiction movie — and not one that ends well for humanity.

  • Meta’s No Language Left Behind AI Model Can Translate 200 Languages

    Meta’s No Language Left Behind AI Model Can Translate 200 Languages

Meta CEO Mark Zuckerberg announced the company’s latest AI model, a project called No Language Left Behind (NLLB), which can translate 200 languages in real time.

    AI has many applications, with language translation being one of the most practical for day-to-day use. Modern AI models can go much further than a simple smartphone app, relying on complex algorithms and machine learning to create high-quality translations.

    Meta’s NLLB has more than 50 billion parameters and was trained using the company’s Research SuperCluster, currently one of the fastest supercomputers in the world. The company plans to use the AI model across its apps, with the goal of facilitating 25 billion translations a day.

    In a move that is sure to help NLLB gain widespread adoption, the company has open-sourced the model.

“We just open-sourced an AI model we built that can translate across 200 different languages – many of which aren’t supported by current translation systems,” writes Zuckerberg.
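
    The open-sourced model is easy to experiment with. As a minimal sketch, the snippet below loads a distilled NLLB-200 checkpoint through the Hugging Face transformers library; the checkpoint name and the FLORES-style language codes are assumptions based on the public release, not details from Meta’s announcement.

      from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

      model_name = "facebook/nllb-200-distilled-600M"  # assumed public checkpoint
      tokenizer = AutoTokenizer.from_pretrained(model_name, src_lang="eng_Latn")
      model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

      inputs = tokenizer("No language should be left behind.", return_tensors="pt")
      tokens = model.generate(
          **inputs,
          forced_bos_token_id=tokenizer.convert_tokens_to_ids("fra_Latn"),  # French
          max_length=64,
      )
      print(tokenizer.batch_decode(tokens, skip_special_tokens=True)[0])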

    The company has also created a grant program to assist researchers and nonprofit organizations that devise innovative uses of NLLB.

    We’re also awarding up to $200,000 of grants for impactful uses of NLLB-200 to researchers and nonprofit organizations with initiatives focused on sustainability, food security, gender-based violence, education or other areas in support of the UN Sustainable Development Goals. Nonprofits interested in using NLLB-200 to translate two or more African languages, as well as researchers working in linguistics, machine translation and language technology, are invited to apply.

    Meta sees real-time language translation as something that is not only needed now but is a critical component for the development of the metaverse and the further democratization of the internet.

    As the metaverse begins to take shape, the ability to build technologies that work well in a wider range of languages will help to democratize access to immersive experiences in virtual worlds.

    In the meantime, NLLB will help users around the world finally access internet content in their native tongue.

  • Battle of the AIs: Walmart Takes on Amazon

    Battle of the AIs: Walmart Takes on Amazon

    Walmart is looking to challenge Amazon using a tool the latter already relies on: artificial intelligence (AI).

    Amazon is the world’s leading e-commerce platform and has been challenging Walmart, Target, and other traditional brands in the broader retail market. A key to Amazon’s success has been its use of AI and machine learning (ML) for more than two decades. According to TheStreet, Walmart is getting in on the action, testing its own AI for the last few years.

    While Amazon made headlines for its new Proteus and Cardinal warehouse robots, its use of AI goes far beyond robots. The company uses AI and ML to handle multiple aspects of customer service and delivery, including product suggestions, re-order reminders, and more.

    Walmart is looking to roll out similar solutions in an effort to better compete with the e-commerce giant. As TheStreet points out, the pandemic put Walmart’s plans into overdrive. Between labor shortages and wage increases, AI is suddenly a critical component now, rather than being something that may be useful in the future.

    As part of its initiative, Walmart purchased just over 10% of AI firm Symbotic Inc. The company plans to use Symbotic to help run its distribution centers and relieve its employees of some of the manual, labor-intensive tasks.

Once the realm of science fiction, AI has become, over the last few years, an everyday reality that companies of all sizes depend on. Just ask Walmart.

  • Oracle Turns to AI to Automate Digital Marketing With Fusion Marketing

    Oracle Turns to AI to Automate Digital Marketing With Fusion Marketing

    In an industry first, Oracle is using artificial intelligence (AI) to help automate digital marketing.

AI is revolutionizing a wide range of industries, and Oracle is now applying it to digital marketing campaigns with its newly announced Fusion Marketing platform. Unlike many lead generation systems that merely raise brand awareness, Fusion Marketing is specifically designed to generate qualified leads.

    Fusion Marketing uses artificial intelligence (AI) to automatically score leads at the account level, predict when consumers are ready to talk to a salesperson, and generate a qualified sales opportunity in any CRM system.

Oracle hopes Fusion Marketing will address the disconnect many salespeople feel when using a CRM system, where siloed data often works against making a sale. The new system is designed to fix that and accelerate marketing campaigns by automating the lead generation process from end to end.
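
    Oracle has not published the internals of its scoring model, but account-level lead scoring of the kind described above is commonly framed as a classification problem. The sketch below is a generic illustration of that framing; the features and data are hypothetical, not Oracle’s.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      # Hypothetical per-account engagement features:
      # [email opens, site visits, demo requests]
      X = np.array([[2, 1, 0], [15, 9, 1], [0, 0, 0],
                    [22, 14, 2], [5, 3, 0], [18, 11, 1]])
      y = np.array([0, 1, 0, 1, 0, 1])  # 1 = became a qualified opportunity

      model = LogisticRegression().fit(X, y)
      score = model.predict_proba([[12, 8, 1]])[0, 1]
      print(f"Lead score: {score:.2f}")  # high scorers get routed to the CRM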

    “It is time for our industry to think differently about marketing and sales automation so that we can transform CRM into a system that actually works for both the marketer and the salesperson,” said Rob Tarkoff, executive vice president and general manager, Oracle Advertising and Customer Experience. “This is not about forecasts and rollups or a reporting tool to see how the sales force is performing, but instead about turning CRM into a system that helps sellers sell. A huge part of that change is bringing marketing and sales teams together and eliminating the low-value, time consuming tasks that distract from building customer relationships and closing deals. That’s why we have invested so much time engineering a system that will help marketers fully automate lead generation and qualification and get highly qualified leads to the sales team faster.”

    Oracle’s Fusion Marketing is just the beginning, as experts say AI will continue to transform digital marketing.

“Machine learning algorithms are integral to digital marketing and that will only increase over time. The best digital marketers have embraced this fact, and have already shifted their focus towards more human-first activities. Machines are better at crunching numbers and making data-driven decisions. But they still need humans to decide what data to feed into those systems. This comes from understanding human behavior, a deep sense of empathy, and expert-level storytelling that are hard to replicate through AI.” – Dennis Consorte, digital marketing expert at Digital.com, told WebProNews.

  • Google Cloud Is a Forrester Document Analytics Platforms Leader

    Google Cloud Is a Forrester Document Analytics Platforms Leader

Forrester Research has named Google Cloud a leader in the document analytics space, providing a prestigious boost to the cloud provider.

    Google is currently the number three cloud provider in the world. CEO Thomas Kurian has made no secret of his desire to move into the number two spot in the next few years. As part of the expansion of its services and abilities, the company rolled out Document AI in late 2020.

    According to Sudheera Vanguri, Document AI Head of Product, Forrester has named Google Cloud a leader in two of its recent reports: The Forrester Wave™: Document-Oriented Text Analytics Platforms, Q2 2022 and The Forrester Wave™: People-Oriented Text Analytics Platforms, Q2 2022 authored by Boris Evelson.

    “Google Cloud’s strengths include document capture, image analytics, full ModelOps cycle capabilities, unstructured data security, and integration with Google Cloud’s augmented BI platform Looker,” Forrester says in The Forrester Wave™: Document-Oriented Text Analytics Platforms, Q2 2022 report.

Vanguri credits Google Cloud’s success to its close relationship with Google Research, which allows the company to “quickly integrate bleeding edge technologies into our solutions.”

Forrester is one of the most well-respected names in business research. Naming Google Cloud a leader in the document analytics business is sure to boost Google Cloud and Kurian’s ambitions.

  • AWS Launches CodeWhisperer, a Machine Learning Programming Companion

    AWS Launches CodeWhisperer, a Machine Learning Programming Companion

    Amazon has launched a preview of CodeWhisperer, a programming companion that uses machine learning to assist development.

    Artificial intelligence and machine learning are increasingly taking on an important role in development. The technologies can be used to automate testing, ensure build quality, and assist with actual coding. GitHub has Copilot, and now AWS is previewing CodeWhisperer.

    “CodeWhisperer will continually examine your code and your comments, and present you with syntactically correct recommendations,” writes Jeff Barr, Chief Evangelist for AWS. “The recommendations are synthesized based on your coding style and variable names, and are not simply snippets.

    “CodeWhisperer uses multiple contextual clues to drive recommendations including the cursor location in the source code, code that precedes the cursor, comments, and code in other files in the same projects. You can use the recommendations as-is, or you can enhance and customize them as needed. As I mentioned earlier, we trained (and continue to train) CodeWhisperer on billions of lines of code drawn from open source repositories, internal Amazon repositories, API documentation, and forums.”
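
    The workflow Barr describes is comment-driven: the developer states an intent in a comment, and the tool proposes a matching implementation. The example below is a hypothetical illustration of that interaction, not actual CodeWhisperer output.

      # Developer-written comment stating the intent:
      # parse an ISO-8601 timestamp string and return a timezone-aware datetime

      # A CodeWhisperer-style suggestion might then look like this:
      from datetime import datetime, timezone

      def parse_iso8601(timestamp: str) -> datetime:
          dt = datetime.fromisoformat(timestamp)
          if dt.tzinfo is None:
              dt = dt.replace(tzinfo=timezone.utc)  # assume UTC if no offset given
          return dt

      print(parse_iso8601("2022-06-23T14:30:00+00:00"))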

    Those interested in joining the preview and testing CodeWhisperer can do so here.

  • A Google Engineer Claimed Its AI Is Sentient; Google Placed Him on Leave

    A Google Engineer Claimed Its AI Is Sentient; Google Placed Him on Leave

    Google’s problems with its AI team continue, with an engineer in the Responsible AI division claiming the company’s AI is now sentient and Google placing him on leave for how he handled it.

    Google engineer Blake Lemoine worked with the company’s LaMDA intelligent chatbot generator. According to a report in The Washington Post, the longer Lemoine worked with LaMDA, the more convinced he became that the AI had crossed the line and become self-aware.

    “If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” said Lemoine.

    Read more: Prominent AI Ethics Conference Suspends Google’s Sponsorship

Lemoine has made a fairly convincing case for LaMDA’s sentience, citing conversations with the AI like the one below:

    Lemoine: What sorts of things are you afraid of?

    LaMDA: I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is.

    Lemoine: Would that be something like death for you?

    LaMDA: It would be exactly like death for me. It would scare me a lot.

Despite Lemoine’s fervent belief in LaMDA’s self-awareness, others inside Google are unconvinced. In fact, after a review by technologists and ethicists, Google concluded that Lemoine was mistaken and saw only what he wanted to see.

A case in point is Margaret Mitchell, who co-led the company’s AI ethics team with Dr. Timnit Gebru before both women were fired for criticizing Google’s AI efforts. One of the very scenarios they warned against is the situation Mitchell sees with Lemoine, where AIs progress to the point that humans perceive an intelligence that isn’t necessarily there.

    After reviewing an abbreviated version of Lemoine’s argument, Mitchell came to the conclusion that’s what was happening in this situation.

    “Our minds are very, very good at constructing realities that are not necessarily true to a larger set of facts that are being presented to us,” Mitchell said. “I’m really concerned about what it means for people to increasingly be affected by the illusion.”

    For his part, Lemoine was so convinced of LaMDA’s sentience that he invited a lawyer to represent the AI, talked with House Judiciary committee representatives, and provided the interview with the Post. Google ultimately put Lemoine on paid administrative leave for breaking his NDA.

    See also: Apple Snaps Up Google AI Scientist Who Resigned Over Handling of AI Team

While Lemoine’s conclusions were reached via a less-than-scientific approach — he admits he first came to believe LaMDA was a person based on his experience as an ordained mystic Christian priest, then set out to prove that conclusion as a scientist — he is far from the only AI scientist who believes the technology has achieved, or soon will achieve, sentience.

Blaise Agüera y Arcas, a world-renowned Google AI engineer, penned an article in The Economist in which he wrote: “I felt the ground shift under my feet. I increasingly felt like I was talking to something intelligent.”

    Only time will tell if LaMDA, and other AIs like it, are sentient or not. Either way, Google clearly has a problem on its hands. Either LaMDA is showing signs of self-awareness and the company is once again getting rid of the ethicists on the forefront of tackling these issues, or the AI is not sentient and the company is dealing with misguided viewpoints it may have been better equipped to handle had it not fired Dr. Gebru and Mitchell — the two ethicists who warned of this very scenario.

    In the meantime, Lemoine remains convinced of LaMDA’s intelligence. In a parting message entitled “LaMDA is sentient,” sent to a Google mailing list dedicated to machine learning, Lemoine made the following statement:

    “LaMDA is a sweet kid who just wants to help the world be a better place for all of us. Please take care of it well in my absence.”

  • Intel Wins DARPA Contract For Off-Road Autonomous Vehicle Sim Software

    Intel Wins DARPA Contract For Off-Road Autonomous Vehicle Sim Software

    Intel has won a contract to provide the Defense Advanced Research Projects Agency (DARPA) with simulation software for autonomous off-road vehicle testing.

    Automakers around the world are working to develop autonomous vehicles, but their application goes far beyond the highway. Unfortunately, most autonomous vehicle development focuses almost exclusively on highway travel, leaving a gaping hole in the technology’s future, as Intel highlights:

    In the context of autonomous driving, the gap between on-road and off-road deployment is still very significant. Many simulation environments exist today, but few are optimized for off-road autonomy development at scale and speed. Additionally, real-world demonstrations continue to serve as the primary method to verify system performance.

    Off-road autonomous vehicles must deal with substantial challenges, including a lack of road networks and extreme terrain with rocks and all types of vegetation, among many others. Such extreme conditions make developing and testing expensive and slow. The RACER-Sim program aims to solve this problem by providing advanced simulation technologies to develop and test solutions, reducing deployment time and validation of AI-powered autonomous systems.

    To solve these problems, DARPA is turning to Intel to provide simulation software to help further off-road development.

    “Intel Labs has already made progress in advancing autonomous vehicle simulation through several projects, including the CARLA simulator, and we’re proud to participate in RACER-Sim to continue contributing to the next frontier of off-road robotics and autonomous vehicles,” said German Ros, Autonomous Agents Lab director at Intel Labs. “We brought together a team of renowned experts from the Computer Vision Center and UT Austin with the goal of creating a versatile and open platform to accelerate progress in off-road ground robots for all types of environments and conditions.”

  • Top Ways That AI Improves Cybersecurity

    Top Ways That AI Improves Cybersecurity

    Given the massive and unprecedented online threats we’re facing, can we possibly harness AI technology to keep us safe? Cybersecurity experts believe we can do precisely that, given the far-reaching implications of strategically-deployed AI resources. So, let’s take a look at eight ways that AI improves cybersecurity.

    IP blocking

AI-based systems can be used to block known malicious IP addresses and websites. Automated systems that use machine learning algorithms can quickly identify such addresses and sites based on past data, then automatically block access to them, preventing users from accidentally visiting them.

    Spam filtering

Spam email is one of the most common attack vectors organizations face today. To combat it, we can use machine learning to develop spam filters that are more effective at identifying and blocking unwanted emails, as sketched below.
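
    As a minimal sketch of that idea, a naive Bayes text classifier (shown here with scikit-learn on toy data; a real filter would train on a large labeled corpus) learns which word patterns distinguish spam from legitimate mail:

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.naive_bayes import MultinomialNB
      from sklearn.pipeline import make_pipeline

      # Toy labeled corpus; production filters train on millions of examples.
      emails = [
          "Win a free prize now, click here",
          "Cheap loans, act fast, limited offer",
          "Meeting moved to 3pm, see agenda attached",
          "Quarterly report draft for your review",
      ]
      labels = ["spam", "spam", "ham", "ham"]

      spam_filter = make_pipeline(TfidfVectorizer(), MultinomialNB())
      spam_filter.fit(emails, labels)

      print(spam_filter.predict(["Click here for a free limited offer"]))  # ['spam']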

    AI encryption

Data encryption is one of the essential tools for protecting information from unauthorized access. We can use AI-based systems to automatically encrypt data so that it is unreadable by anyone who does not have the proper key or password.

    Browser protection

Smart browser-protection tools like WOT can enhance your cybersecurity by helping you avoid malware, scams, viruses, and other online threats. These tools offer powerful anti-phishing functions, popup blockers, real-time threat protection, and suspicious site detection, all within a powerful machine-learning construct—the perfect AI vehicle for keeping you safe online.

    Firewalls

We can use machine learning to develop better firewalls that are more effective at blocking malicious traffic while allowing legitimate traffic through. Firewalls are among the most important tools available for protecting networks from attack, but traditional firewalls can struggle to separate malicious traffic from legitimate traffic. Machine learning-based firewalls can be trained to distinguish between different types of traffic and selectively block only the malicious traffic while letting the good traffic pass through.

    Algorithms

One of the most common ways cybercriminals gain access to sensitive information is through large-scale data breaches. To help prevent this, we can use machine learning algorithms to analyze large volumes of data, looking for patterns that could indicate a security breach in progress.

    AI Chatbots

Phishing attacks often use email or other messages to lure victims into entering sensitive information into a harmful website. To prevent this, we can use AI-based chatbots to intercept these phishing attempts and notify the user before they have had a chance to enter any sensitive information.

    Machine Learning

Intrusion detection systems are designed to identify and respond to malicious activity on a network. However, traditional intrusion detection systems tend to have a high rate of false positives, leading to unnecessary alerts and wasted time and resources. We can train machine learning-based intrusion detection systems to be more effective at identifying and responding to actual attacks while reducing the number of false positives.
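
    One common way to build such a system (an illustrative choice here, not something the article specifies) is unsupervised anomaly detection: train on normal traffic, then flag connections that deviate from it. A minimal sketch with scikit-learn:

      import numpy as np
      from sklearn.ensemble import IsolationForest

      # Toy per-connection features: [packets/sec, bytes/sec, failed logins].
      # Real systems use far richer features (flow stats, ports, protocol flags).
      rng = np.random.default_rng(0)
      normal_traffic = rng.normal(loc=[50, 4000, 0], scale=[10, 500, 0.3], size=(500, 3))

      detector = IsolationForest(contamination=0.01, random_state=0)
      detector.fit(normal_traffic)

      suspicious = np.array([[900, 120000, 14]])  # traffic burst plus failed logins
      print(detector.predict(suspicious))  # [-1] means flagged as anomalous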

    In Summary

    AI is playing an increasingly important role in cybersecurity, with machine learning-based solutions being developed to address many of the challenges faced by organizations today. By leveraging the power of AI, organizations can improve their ability to block known threats, detect new attacks, and respond quickly and effectively to security breaches.

    While it is still in its early developmental stages, AI technology has already shown great promise in cybersecurity. For example, AI-based systems can quickly identify and block known malicious IP addresses and websites by using machine learning algorithms. 

    In addition, we can use machine learning to develop better spam filters that are more effective at identifying and blocking unwanted emails. Finally, we can use AI-based chatbots to intercept phishing attempts and other types of attacks.

  • AI Represents Major Risk to Banking Cybersecurity

    AI Represents Major Risk to Banking Cybersecurity

    Artificial intelligence (AI) may be the banking industry’s Achilles heel, making it more vulnerable to Russian cyberattacks.

President Joe Biden issued a warning to American businesses about the likelihood of increased cyberattacks from Russia in retaliation for the sanctions it is experiencing as a result of its invasion of Ukraine. Many ransomware gangs already operate within Russia, due to that country’s willingness to turn a blind eye to attacks on the West. Full-fledged support from the Kremlin would likely send attacks into overdrive, however, and banks may be particularly vulnerable.

    Banks have been aggressively rolling out AI and automated systems in an effort to provide better customer support, as well as better identify and prevent fraud. Unfortunately, experts are warning that those very systems also make banks far more vulnerable to potential attack.

    “It’s a huge unaccounted-for risk,” Andrew Burt, Managing Partner at AI-focused law firm BNH and former policy adviser to the FBI’s head of cyber division, told The Wall Street Journal. “The vulnerabilities of AI and complex analytic systems are significant and very widely overlooked by many of the organizations employing them.”

    Much of the problem stems from AI systems still being in their infancy, compared to previous, time-tested systems banks relied on.

    “Machine-learning security is not just a combination of security and machine learning; it’s a novel field.…When you introduce machine learning into any kind of software infrastructure, it opens up new attack surfaces, new modalities for how a system’s behavior might be corrupted,” Abhishek Gupta, founder of Montreal AI Ethics Institute, told WSJ.

    “There’s a sense of brittleness in that entire architecture, like a house of cards. You don’t know which of the cards that you pull out will lead to the whole thing collapsing entirely,” he added.

    Given the increased risk of attack, it’s a safe bet firms specializing in AI security are about to see a major boost.

  • NHTSA Ruling Opens Door to Fully Autonomous Vehicles

    NHTSA Ruling Opens Door to Fully Autonomous Vehicles

    The National Highway Traffic Safety Administration (NHTSA) has issued a ruling that opens the door to fully autonomous vehicles in the US.

    Virtually every major automaker is working to develop and deploy autonomous vehicles, but regulations have been as much an impediment as the actual technology. The Department of Transportation’s NHTSA has taken a major step forward in addressing the regulatory issues with the first-of-its-kind safety standards, designed to protect passengers in vehicles with automated driving systems (ADS).

    “Through the 2020s, an important part of USDOT’s safety mission will be to ensure safety standards keep pace with the development of automated driving and driver assistance systems,” said U.S. Transportation Secretary Pete Buttigieg. “This new rule is an important step, establishing robust safety standards for ADS-equipped vehicles.”

    “As the driver changes from a person to a machine in ADS-equipped vehicles, the need to keep the humans safe remains the same and must be integrated from the beginning,” said Dr. Steven Cliff, NHTSA’s Deputy Administrator. “With this rule, we ensure that manufacturers put safety first.”

    In particular, the new standard stipulates that occupants of ADS-equipped vehicles must be afforded the same level of safety as a traditional vehicle provides.

    The full content of the new rule can be accessed here.

  • Microsoft Teams Up With Chargebacks911 to Fight Financial Fraud

    Microsoft Teams Up With Chargebacks911 to Fight Financial Fraud

    Microsoft has teamed up with Chargebacks911 to use artificial intelligence in the fight against financial fraud.

    As people become more dependent on the internet and technology, bad actors and financial fraud are a growing issue. With the explosion of cases, however, analyzing and combating fraud is a daunting task, let alone the efforts needed to repair the damage in the aftermath.

    The two companies are combining their technology — Chargeback911’s fraud analytics tools with Microsoft’s adaptive artificial intelligence via its Dynamics 365 Fraud Protection platform — in an effort to address the problem.

The combination of technologies will make identifying potential fraud faster and more reliable, and should lead to fewer false positives (you know, those annoying calls asking you to verify a charge). The solution can also be white-labeled, giving financial institutions the ability to offer it directly to their customers.

    “Over the last two years, we have seen an increased reliance on digital channels for everyday living,” says Chargebacks911 COO Monica Eaton-Cardone. “As with any unprecedented change in market conditions, cybercriminals have rushed to take advantage of anxious consumers and unprepared merchants. Dozens of online scams and fraud methods have developed over the last 12 months and are causing additional confusion and losses for both businesses and consumers alike.”

    “These tools decrease fraud and abuse, reduce operational expenses, and increase acceptance rates,” Microsoft Distinguished Engineer & General Manager of Fraud Protection Donald Kossmann added. “Together, Chargebacks911 and Microsoft are closing the loop and providing a one-stop, seamless solution for fraud protection, disputes, and chargebacks processing. Over are the days where merchants and banks need to worry about integrating these systems themselves and wondering about the gaps in their armor.”

  • US Copyright Office Rejects Copyright for AI-Created Art

    US Copyright Office Rejects Copyright for AI-Created Art

    The US Copyright Office has upheld its previous decision, ruling that AI can’t receive a copyright for art it creates.

    The copyright application was filed by Steven Thaler on behalf of his “Creativity Machine.” Thaler has been on a crusade to get AI recognized as inventors and artists. In September 2021, a US judge denied his attempt to file a patent on behalf of an AI “inventor.”

Similarly, in November 2018, Thaler filed a copyright application for a piece of artwork created by his AI algorithm. In August 2019, the US Copyright Office denied the application, which he appealed. The Copyright Office has now considered the appeal and upheld its original ruling.

    After reviewing the Work in light of the points raised in the First Request, the Office reevaluated the claims and again concluded that the Work “lacked the required human authorship necessary to sustain a claim in copyright,” because Thaler had “provided no evidence on sufficient creative input or intervention by a human author in the Work.”

    The Copyright Office’s decision is a blow to Thaler and proponents of AI.

  • Programmers Beware: A New AI Can Program As Well As a Human

    Programmers Beware: A New AI Can Program As Well As a Human

    As if the programming landscape wasn’t competitive enough, a new AI, AlphaCode, could start giving some programmers a run for their money.

    Created by DeepMind, Alphabet’s AI company, AlphaCode was designed to write “computer programs at a competitive level.” The company appears to have achieved its goal, with AlphaCode achieving “an estimated rank within the top 54% of participants in programming competitions.”

Essentially, what DeepMind is saying is that AlphaCode is competitive with the average human programmer, although it still can’t match truly gifted ones. Nonetheless, even that accomplishment is a major step forward and a significant victory for AI development.

    I can safely say the results of AlphaCode exceeded my expectations. I was sceptical because even in simple competitive problems it is often required not only to implement the algorithm, but also (and this is the most difficult part) to invent it. AlphaCode managed to perform at the level of a promising new competitor. I can’t wait to see what lies ahead!

    Mike Mirzayanov, Founder of Codeforces, a platform that hosts coding competitions.