Microsoft has scored a big win in the field of artificial intelligence (AI), gaining an exclusive license for OpenAI’s GPT-3 language model.
OpenAI is one of the leading AI research labs. Elon Musk has been one of AI’s biggest critics, believing it poses an existential threat to humanity. Musk was nonetheless one of the original founders of OpenAI, in the hope that responsible AI development could help avert disaster.
GPT-3 is “an autoregressive language model with 175 billion parameters, 10x more than any previous non-sparse language model.” The model uses deep learning to better emulate human language patterns.
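“Autoregressive” here means the model predicts each token based on all the tokens that came before it, feeding every new prediction back in as context for the next. As a minimal, purely illustrative sketch of that idea (toy code, not GPT-3 itself; the tiny probability table below is a hypothetical stand-in for 175 billion learned parameters):

```python
import random

# Toy autoregressive "language model": predicts the next word from context.
# A real model like GPT-3 learns these probabilities with a deep neural network.
def next_word_distribution(tokens):
    # Hypothetical fixed table standing in for the model's learned parameters.
    table = {
        "the": {"cat": 0.6, "dog": 0.4},
        "cat": {"sat": 0.7, "ran": 0.3},
        "dog": {"ran": 1.0},
        "sat": {"down": 1.0},
        "ran": {"away": 1.0},
    }
    return table.get(tokens[-1], {"<end>": 1.0})

def generate(prompt, max_tokens=5):
    tokens = prompt.split()
    for _ in range(max_tokens):
        dist = next_word_distribution(tokens)
        words, probs = zip(*dist.items())
        token = random.choices(words, weights=probs)[0]
        if token == "<end>":
            break
        tokens.append(token)  # each new token conditions the next prediction
    return " ".join(tokens)

print(generate("the"))
```

The loop is the whole trick: sample a token, append it, and condition the next prediction on the longer sequence. GPT-3 applies the same scheme at vastly greater scale, with a neural network in place of the lookup table.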
Microsoft’s latest announcement builds on their existing partnership with OpenAI, which was expanded in May.
“Today, I’m very excited to announce that Microsoft is teaming up with OpenAI to exclusively license GPT-3, allowing us to leverage its technical innovations to develop and deliver advanced AI solutions for our customers, as well as create new solutions that harness the amazing power of advanced natural language generation,” writes Kevin Scott, Microsoft’s Executive Vice President and Chief Technology Officer.
“We see this as an incredible opportunity to expand our Azure-powered AI platform in a way that democratizes AI technology, enables new products, services and experiences, and increases the positive impact of AI at Scale. Our mission at Microsoft is to empower every person and every organization on the planet to achieve more, so we want to make sure that this AI platform is available to everyone – researchers, entrepreneurs, hobbyists, businesses – to empower their ambitions to create something new and interesting.”
Scott acknowledges that, at this point, it’s hard to imagine the many ways GPT-3 will impact the industry. The partnership, however, ensures Microsoft will remain at the forefront of AI development.
And the award goes to…Rock and Roll Bot 101111. While that may not be something we are accustomed to hearing, it soon may be, thanks to OpenAI.
OpenAI is a research organization focused on artificial intelligence (AI), founded in 2015 by Sam Altman and Elon Musk. Musk, in particular, has long been critical of AI, warning that it could bring about the downfall of the human race. OpenAI was founded to promote safe, responsible AI research and development.
OpenAI’s latest announcement is “Jukebox, a neural net that generates music, including rudimentary singing, as raw audio in a variety of genres and artist styles.” Providing Jukebox with an artist, style and lyrics gives the AI the raw material it needs to produce a unique music sample from scratch.
“We chose to work on music because we want to continue to push the boundaries of generative models,” reads OpenAI’s announcement. “Our previous work on MuseNet explored synthesizing music based on large amounts of MIDI data. Now in raw audio, our models must learn to tackle high diversity as well as very long range structure, and the raw audio domain is particularly unforgiving of errors in short, medium, or long term timing.”
The accompanying paper goes further, saying “we show that our models can produce songs from highly diverse genres of music like rock, hip-hop, and jazz. They can capture melody, rhythm, long-range composition, and timbres for a wide variety of instruments, as well as the styles and voices of singers to be produced with the music. We can also generate novel completions of existing songs. Our approach allows the option to influence the generation process: by swapping the top prior with a conditional prior, we can condition on lyrics to tell the singer what to sing, or on midi to control the composition. We release our model weights and training and sampling code at https://github.com/openai/jukebox.”
OpenAI is hiring and invites anyone interested in contributing to apply. In the meantime, Jukebox is another significant step forward for AI.
Elon Musk, a long-time critic of AI, has come out in favor of government regulation of AI development, including at his own company.
While many working on AI believe it is the key to solving countless world problems, there are just as many who are convinced the technology will create far more problems than it solves, perhaps even bringing about the downfall of humanity. Musk has tended to be in the latter camp, even being quoted as saying “I have exposure to the most cutting-edge AI and I think people should be really concerned about it. I keep sounding the alarm bell but until people see robots going down the street killing people, they don’t know how to react because it seems so ethereal.”
That concern didn’t stop Musk from co-founding OpenAI, an organization dedicated to the ongoing development of the technology. In fact, Musk’s concerns were one of the driving motivations: he believed the technology needed responsible development, rather than being left in the hands of just a few companies, such as Google and Facebook, which have poor track records protecting user privacy.
Now, in response to a piece by Karen Hao in the MIT Technology Review that covers “OpenAI’s bid to save the world,” Elon Musk has tweeted his support of AI regulation.
“All orgs developing advanced AI should be regulated, including Tesla.”
As artificial intelligence (AI) has revolutionized industries and taken over jobs, writers the world over (including yours truly) have taken solace in the fact that writing is about more than raw science. Writing is a skill that takes a more nuanced understanding of communication and human thought. As a result, many writers have felt their jobs were relatively safe.
It appears that the writers at Cards Against Humanity (CAH) are putting that theory to the test. CAH is a tongue-in-cheek card game where players fill in the blank with politically incorrect, risqué, or offensive statements to complete a sentence. As its latest Black Friday stunt, CAH has pitted its writers against a true neural network AI borrowed from OpenAI.
According to the CAH website, “over the next 16 hours, our writers will battle this powerful card-writing algorithm to see who can write the most popular new pack of cards. If the writers win, they’ll get a $5,000 holiday bonus. If the A.I. wins, we’ll fire the writers.”
No one really believes CAH will fire its writers if they lose. Nonetheless, it’s probably a good thing that, at the time of writing, the human writers were winning by roughly $1,100, indicating that, perhaps, AI taking over writing jobs is still a long way off. Although, to be fair, CAH believes the shift is inevitable.
One of the questions in their FAQ is: “Do I need to worry about AI taking my job?”
Their answer: “Basically if you do anything other than hoard capital, you’re going to end up plugged into the Matrix and the robots will harvest your body’s electricity.”
Guess it’s time for this writer to start looking for cool Neo-inspired sunglasses and trench coat.
“We’re working together with Microsoft to build next-generation supercomputers,” says OpenAI co-founder Greg Brockman. “The real goal of OpenAI and what we’re trying to accomplish is to build what we call artificial general intelligence. They’re trying to build a computer system that is as capable as a human at being able to master a domain of study and being able to master more fields of study than any one human can. We think whoever builds artificial general intelligence will be the number one most valuable company in the world by a huge margin.”
OpenAI Working With Microsoft To Build AI That Will Change The World
We’re working together with Microsoft to build next-generation supercomputers. The real goal of OpenAI and what we’re trying to accomplish is to build what we call artificial general intelligence. They’re trying to build a computer system that is as capable as a human at being able to master a domain of study and being able to master more fields of study than any one human can. If we succeed, the kind of thing that we want to be able to do is, for example, build a computer system that can solve medicine better than humans can. If you think about how humans solve medicine today, we do it by increased specialization.
I have a friend who’s going through medical procedures right now. He talks to a first doctor, who does an ultrasound but can’t read it, so he has to go to a different doctor who doesn’t really have context as to what’s going on. This is not a problem that we can solve by increasing the amount of knowledge that humans have to learn. There’s only so much we can fit in our minds. What we really need are tools that are capable of helping us solve these problems. That’s the kind of thing that we want to apply general intelligence to.
Our goal is to distribute the economic benefits of artificial general intelligence. You can imagine a general intelligence system that can generate huge amounts of value. If you look at the top ten most valuable companies in the world, seven of them are technology companies. We think whoever builds artificial general intelligence will be the number one by a huge margin. It’s really important that those benefits go to everyone rather than being locked up in one place.
Building Powerful, Safe, and Secure AI Technology
There’s a second part, which is that it’s really important to keep these systems safe and secure and to build them with ethics at the forefront. That’s something that both we and Microsoft are very aligned on doing from the beginning. What it really boils down to is that AI technology is becoming very powerful. That means there are both these amazing benefits and these amazing applications. Imagine a personalized tutor that can really understand you, available for free to every person on the planet. That’s the kind of thing we should be able to build with the kind of systems that we want to create.
You also have to ask: what are the risks? How can the technology be misused? Today, we already see AI technology, deepfakes for example, that has bad implications in the world. How do we maximize those benefits and mitigate the downsides? That’s our goal. Our goal is to push the technology forward and make sure that we’re capturing those benefits while making sure everyone benefits from them. But we also want to make sure that we keep it safe and secure to mitigate the downsides.
AI Computational Power Growing 5 Times Faster Than Moore’s Law
The timelines (of where AI will take us) are always really hard to predict. One story I really like thinking about is just looking, for example, at previous technological innovations. In 1878, Thomas Edison announced that he was going to create the incandescent lamp, and gas securities in England fell. So the British Parliament put together a commission of distinguished experts, who came out to Menlo Park and checked everything out. They said this wasn’t going to work, and one year later he shipped. I think that we’re in a similar sort of place here, where it’s hard to predict what the future will be like.
We’re in this exponential right now where the computational power that we’re using is growing five times faster than Moore’s Law. What we do know is every year we’re going to have unprecedented AI technologies. We’ve been doing that for seven years and OpenAI has been doing it for three. This year we have systems that can understand and generate text. I think five years from now we should expect that we will have systems that you can really have meaningful conversations with. I think that we should see within a bunch of different domains, a lot of systems that can work with humans to augment what they can do much further than anything we can imagine today.
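To put “five times faster than Moore’s Law” in rough perspective, here is a quick back-of-the-envelope calculation. The talk doesn’t specify a Moore’s Law doubling period, so the 2-year figure below is an assumption, and the numbers are purely illustrative:

```python
# Assumption: classic Moore's Law doubling period of ~2 years (not from the talk).
moore_doubling_years = 2.0
# "Five times faster" taken as five times the exponential growth rate,
# i.e. one fifth the doubling time.
ai_doubling_years = moore_doubling_years / 5

years = 5  # compare growth factors over a 5-year span
moore_growth = 2 ** (years / moore_doubling_years)
ai_growth = 2 ** (years / ai_doubling_years)

print(f"Moore's Law over {years} years: ~{moore_growth:.1f}x")
print(f"AI compute over {years} years:  ~{ai_growth:.0f}x")
```

Under these assumptions, five years of Moore’s Law yields roughly a 6x increase in computing power, while a trend five times faster compounds to several thousand times, which is why year-over-year capabilities keep arriving at an unprecedented pace.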