WebProNews

Tag: neural network

  • OpenAI Debuts AI That Draws Images From Text Prompts


    OpenAI has debuted DALL·E, an AI model that can draw images based on text prompts it receives.

    While AI is relatively good at duplicating existing material, it’s a significant leap for AI to create, especially based on nothing more than a text prompt. DALL·E, “a portmanteau of the artist Salvador Dalí and Pixar’s WALL·E,” can do just that, drawing images from the descriptions given to it.

    DALL·E is a 12-billion parameter version of GPT-3 trained to generate images from text descriptions, using a dataset of text–image pairs. We’ve found that it has a diverse set of capabilities, including creating anthropomorphized versions of animals and objects, combining unrelated concepts in plausible ways, rendering text, and applying transformations to existing images.


    Text Prompt – An illustration of a baby daikon radish in a tutu walking a dog – Credit OpenAI

    This breakthrough opens the door to using language to manipulate visual images.

    GPT-3 showed that language can be used to instruct a large neural network to perform a variety of text generation tasks. Image GPT showed that the same type of neural network can also be used to generate images with high fidelity. We extend these findings to show that manipulating visual concepts through language is now within reach.
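    The core idea behind this family of models can be sketched in a few lines: both the text prompt and the image are reduced to one stream of discrete tokens, and a single autoregressive model predicts each next token from everything before it. In the toy sketch below, the “model” is a hand-written lookup table standing in for a trained transformer, and the token vocabulary and transitions are invented purely for illustration.

    ```python
    # Toy illustration of autoregressive generation over a joint
    # text + image token stream, the scheme DALL-E-style models use.
    # The "model" here is a hand-written next-token table, NOT a
    # trained network; all tokens and transitions are invented.

    # Stand-in next-token table: maps the previous token to the next.
    # Text tokens come first in the sequence; image tokens follow.
    NEXT = {
        "<start>": "an",
        "an": "avocado",
        "avocado": "armchair",
        "armchair": "<end-of-text>",
        "<end-of-text>": "img_17",   # after the text, emit image codes
        "img_17": "img_202",
        "img_202": "img_9",
        "img_9": "<end-of-image>",
    }

    def generate(max_tokens=16):
        """Autoregressively emit tokens until the image stream ends."""
        seq = ["<start>"]
        for _ in range(max_tokens):
            nxt = NEXT[seq[-1]]
            seq.append(nxt)
            if nxt == "<end-of-image>":
                break
        return seq[1:]  # drop the <start> marker

    tokens = generate()
    text_part = tokens[:tokens.index("<end-of-text>")]
    image_part = [t for t in tokens if t.startswith("img_")]
    print(text_part)   # the caption tokens
    print(image_part)  # discrete codes a decoder would turn into pixels
    ```

    In the real models, the image tokens are codes from a learned discrete encoder, and a separate decoder turns them back into pixels; the point here is only that one next-token predictor handles both modalities.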


    Text Prompt – An armchair in the shape of an avocado – Credit OpenAI

    The Holy Grail is the ability to engage in verbal communication with an AI and have it understand and respond accordingly. OpenAI’s latest breakthrough is a step in that direction.

  • Cards Against Humanity Writers Take on AI Challenge


    As artificial intelligence (AI) has revolutionized industries and taken over jobs, writers the world over (including yours truly) have taken solace in the fact that writing is about more than raw science. Writing is a skill that takes a more nuanced understanding of communication and human thought. As a result, many writers have felt their jobs were relatively safe.

    It appears that the writers at Cards Against Humanity (CAH) are putting that theory to the test. CAH is a tongue-in-cheek card game where players fill in the blank with politically incorrect, risqué, or offensive phrases. As its latest Black Friday stunt, CAH has pitted its writers against a card-writing neural network borrowed from OpenAI.

    According to the CAH website, “over the next 16 hours, our writers will battle this powerful card-writing algorithm to see who can write the most popular new pack of cards. If the writers win, they’ll get a $5,000 holiday bonus. If the A.I. wins, we’ll fire the writers.”

    No one really believes CAH will fire its writers if they lose. Nonetheless, it’s probably a good thing that, at the time of writing, the human writers were winning by roughly $1,100, suggesting that AI taking over writing jobs is still a long way off. Although, to be fair, CAH believes the shift is inevitable.

    One of the questions in their FAQ is: “Do I need to worry about AI taking my job?”

    Their answer: “Basically if you do anything other than hoard capital, you’re going to end up plugged into the Matrix and the robots will harvest your body’s electricity.”

    Guess it’s time for this writer to start looking for cool Neo-inspired sunglasses and trench coat.

  • Google Updates Its Search Algorithm: Brings Neural Network Techniques to Search


    Whenever Google updates, tweaks, replaces or improves its search algorithms, webmasters the world over anxiously wait to see how it will impact their rankings.

    Google’s latest update, Bidirectional Encoder Representations from Transformers (BERT), is one of the company’s most interesting to date. Last year Google “introduced and open-sourced a neural network-based technique for natural language processing (NLP) pre-training,” or BERT.

    The company is using BERT to better understand complex, natural language queries and return more relevant results.

    “By applying BERT models to both ranking and featured snippets in Search, we’re able to do a much better job helping you find useful information,” wrote Pandu Nayak, Google Fellow and Vice President, Search, in a company blog post. “In fact, when it comes to ranking results, BERT will help Search better understand one in 10 searches in the U.S. in English, and we’ll bring this to more languages and locales over time.

    “Particularly for longer, more conversational queries, or searches where prepositions like ‘for’ and ‘to’ matter a lot to the meaning, Search will be able to understand the context of the words in your query. You can search in a way that feels natural for you.

    “To launch these improvements, we did a lot of testing to ensure that the changes actually are more helpful. Here are some of the examples that showed up in our evaluation process that demonstrate BERT’s ability to understand the intent behind your search.

    “Here’s a search for ‘2019 brazil traveler to usa need a visa.’ The word ‘to’ and its relationship to the other words in the query are particularly important to understanding the meaning. It’s about a Brazilian traveling to the U.S., and not the other way around. Previously, our algorithms wouldn’t understand the importance of this connection, and we returned results about U.S. citizens traveling to Brazil. With BERT, Search is able to grasp this nuance and know that the very common word ‘to’ actually matters a lot here, and we can provide a much more relevant result for this query.”
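    The failure mode Nayak describes can be illustrated with a toy comparison. The two representations below are simplifications invented for illustration, not how Google Search or BERT actually work: an order-blind bag-of-words view cannot tell the two directions of travel apart, while any representation that keeps word positions, as attention over the full query does, can.

    ```python
    from collections import Counter

    def bag_of_words(query):
        """Order-blind view: count words, discarding their positions."""
        return Counter(query.lower().split())

    def positional(query):
        """Order-aware view: keep each word paired with its position."""
        return list(enumerate(query.lower().split()))

    a = "2019 brazil traveler to usa need a visa"
    b = "2019 usa traveler to brazil need a visa"

    # A bag-of-words model sees the two queries as identical,
    # so 'to' carries no directional meaning at all.
    print(bag_of_words(a) == bag_of_words(b))  # True

    # Keeping positions distinguishes who is traveling where.
    print(positional(a) == positional(b))      # False
    ```

    BERT goes well beyond raw positions, reading every word in the context of all the others at once, but the sketch shows why a model that discards word order was returning results about the wrong direction of travel.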