Here’s How the Coders at OpenAI Make the World’s Most Powerful AI Even Better

Over the past few months, OpenAI’s researchers have been working to improve their artificial intelligence system, called GPT-2. The effort has resulted in a new version of the software, called GPT-2 1.5B (the original was 1.2B), that’s bigger and more powerful than before.

The latest version is about 25 percent larger than the original and excels at generating text that stays coherent over longer stretches than its predecessor could manage, says Jeff Wu, a research engineer at OpenAI who helped lead the effort. It’s also better at multi-sentence text completion, which lets it track context across multiple sentences rather than just a single sentence or phrase.

The organization plans to release a number of other technical details about GPT-2 1.5B on Monday, but it wanted to give WIRED an early peek at what this work involves and why it matters. It’s also worth noting that OpenAI isn’t releasing this second version of its GPT-2 model out of any immediate concern about misuse (though Wu says the team is thinking hard about that issue as it considers future developments with the software); it’s releasing it because it wants to encourage others to use it.

Codex is a system for storing and retrieving code. It’s designed to support the following workflow:

1. You write or find some code you want to save.

2. You upload it to Codex, which gives you back a link.

3. You can then use that link to retrieve the code later, on any machine, even one that has never run Codex before. Codex also makes it easy to upload and download code in bulk.
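
To make that workflow concrete, here is a minimal sketch of what steps 1 through 3 might look like in Python. The post doesn’t show the actual Codex API, so the function names, the local directory standing in for the backend, and the hash-based link format below are all hypothetical.

```python
# Hypothetical sketch of the save-link-retrieve workflow described above.
# The local directory stands in for the real (undocumented) Codex backend.
import hashlib
from pathlib import Path

STORE = Path.home() / ".codex_store"   # stand-in for the shared store
STORE.mkdir(exist_ok=True)

def upload(code: str) -> str:
    """Save a snippet and return a link (here, just a short content hash)."""
    link = hashlib.sha256(code.encode("utf-8")).hexdigest()[:16]
    (STORE / link).write_text(code, encoding="utf-8")
    return link

def retrieve(link: str) -> str:
    """Fetch a snippet by its link, with no other context needed."""
    return (STORE / link).read_text(encoding="utf-8")

if __name__ == "__main__":
    link = upload("print('hello from the code store')")
    print("link:", link)
    exec(retrieve(link))   # step 3: run the retrieved code later
```

The key property is the one the workflow describes: the link alone is enough to fetch the code later, from any machine that can reach the store.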

We built Codex because we wanted a way to share code with each other that was faster and easier than emailing files around, but which had less friction than setting up a full-fledged Git repository for each project. We also wanted a system that made it easy to share small snippets of code (e.g., one-liners) as well as large projects (e.g., a full library).

We’ve found Codex useful for many tasks beyond sharing code between ourselves:

* We use Codex as our primary mechanism for installing OpenAI code on new machines (we run Python scripts from Codex rather than installing packages). This means we don’t need to do complicated package setup or figure out which versions of packages are compatible with each other; we just run whatever is specified in Codex when we need it.
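
As a hypothetical sketch of what “run whatever is specified in Codex” could look like on a fresh machine, the snippet below fetches a script by its link and executes it with the local Python interpreter, using only the standard library. The endpoint URL and the example link are invented for illustration.

```python
# Illustrative only: fetch a script by link and run it, no package setup.
import subprocess
import sys
import tempfile
import urllib.request

CODEX_BASE = "https://codex.example.internal/raw/"   # placeholder endpoint

def run_from_codex(link: str, *args: str) -> int:
    """Download the script behind `link` and run it with the local Python."""
    source = urllib.request.urlopen(CODEX_BASE + link).read()
    with tempfile.NamedTemporaryFile(suffix=".py", delete=False) as f:
        f.write(source)
        script_path = f.name
    return subprocess.call([sys.executable, script_path, *args])

# Example with a made-up link:
# run_from_codex("3f9a1c20d4b7e6aa", "--epochs", "1")
```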

The new neural network created by OpenAI and trained with the help of the company’s Codex system is unique in that it’s able to generate text. It was built using a technique called unsupervised learning, which means it learns directly from raw text rather than from hand-labeled examples. The technique itself has been used before, but what makes OpenAI’s approach different is that it doesn’t rely on any single curated data set. Instead, it uses the entire Internet as its training ground.

The most impressive part of OpenAI’s neural network is its ability to write new text rather than simply repeating what it has seen or been given. The Codex system scans millions of web pages every day and learns from them all. The network can then use this information to generate text that is similar in style and meaning to what it learned from its online training.

Codex is not yet ready for prime time, though; the system is still in beta testing, and some bugs are still being worked out. But once it gets out into the wild, we can expect some interesting results.
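
As a rough illustration of what unsupervised training on raw text means, the toy sketch below counts which word follows which in a corpus and then samples continuations from those counts. GPT-2 is a large transformer, not a bigram counter, so this is only meant to show the core idea: the model learns from the text itself, with no labels attached.

```python
# Toy illustration of unsupervised learning on raw text (not OpenAI's model):
# count next-word frequencies, then sample continuations from them.
import random
from collections import Counter, defaultdict

def train(corpus: str) -> dict:
    """Learn next-word counts from raw text: no labels, no annotations."""
    model = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def generate(model: dict, start: str, length: int = 20) -> str:
    """Sample a continuation one word at a time from the learned counts."""
    out = [start]
    for _ in range(length):
        options = model.get(out[-1])
        if not options:
            break
        words, counts = zip(*options.items())
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

model = train("the cat sat on the mat and the dog sat on the rug")
print(generate(model, "the"))
```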

Imagine you’re writing code, and something goes wrong. You go to the command line, type in a command, and see the output of your program. You tweak the code and try again. The program doesn’t crash or return an error. Instead, it prints out some numbers that seem like they should be higher—but there’s no way to tell because it’s not clear what those numbers mean.

Welcome to the world of AI research. The goal is for machines to learn like humans do: through trial and error, with minimal guidance from humans. But when machines do something well—or badly—it can be hard for researchers to figure out why.

“You have these massive models that are doing all this stuff, but it’s all opaque,” says Jonathan Zung, who leads software engineering at OpenAI, the AI research company cofounded by Elon Musk. “There are just a lot of questions that we don’t know how to answer right now. We want to make it so you can look inside of these models and get better understanding of what they do.”
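
The article doesn’t describe OpenAI’s interpretability tooling, but the kind of “looking inside” Zung describes can be illustrated with a toy example: run a tiny feed-forward network and record every intermediate activation so a researcher can inspect what each layer is doing. Everything here, including the random weights, is purely illustrative.

```python
# Minimal sketch of inspecting a model's internals: trace each layer's
# activations during a forward pass and print simple summary statistics.
import numpy as np

rng = np.random.default_rng(0)
weights = [rng.standard_normal((4, 8)), rng.standard_normal((8, 3))]

def forward_with_trace(x):
    """Return the output plus every intermediate activation for inspection."""
    trace = [x]
    for w in weights:
        x = np.maximum(x @ w, 0.0)   # ReLU layer
        trace.append(x)
    return x, trace

out, trace = forward_with_trace(rng.standard_normal(4))
for i, activation in enumerate(trace):
    print(f"layer {i}: mean={activation.mean():+.3f}, "
          f"inactive units={(activation == 0).sum()}")
```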

OpenAI is an artificial intelligence research company, funded in part by Elon Musk. The nonprofit organization made news last month when it announced that its text generation AI, GPT-2, was so good at imitating human writing that it would not be releasing the full system. The company said it was concerned about the misuse of such powerful AI systems by malign actors.

The announcement led to understandable skepticism among some machine learning researchers who were unable to replicate the results. They were also confused as to why OpenAI—which is supposed to promote open research and sharing of results—would be publishing an abstract only, rather than a full paper detailing their research.

OpenAI’s blog post provides more details on how GPT-2 works and how it can be used. It also explains why the company chose not to release the system into the wild, letting readers judge for themselves whether its concerns are justified.

At OpenAI, we’ve used the Internet as a tool for collaborative research on AI safety. In this post, we’ll describe some of the tools we use internally to support our work.

We started by trying to use existing software, but quickly realized that “off-the-shelf” productivity software wasn’t designed to support our research workflow. Our team is spread across many time zones, we need to rapidly iterate on papers and code, and we want to be able to easily share ideas with the external academic community. Scheduling meetings is hard when people are in so many different time zones; most existing project management software doesn’t let you easily track discussion threads or assign tasks; and few tools let multiple people edit a document simultaneously while seeing what the others are typing.

As a result, we’ve built internal tools that are designed specifically for our workflow. These tools help us run meetings more smoothly, create documents collaboratively, and even write code together. They also help us share ideas with the broader AI safety community: at the end of this post, we’ll talk about how anyone can suggest or vote on topics for future AI safety research.

Sharing documents

One of our main tools is an internal website for sharing documents.

“In the past, we’ve used this internal system to ship everything from a popular video summarization model on Reddit to an interactive fiction game. We’re now re-architecting this infrastructure to be even faster and more flexible. In particular, we want it to be easy for teams outside of research to use the latest advances from our labs, and for researchers to easily combine components developed by different teams. This will help us move away from a world in which each AI project is built from scratch, towards one where AI systems are composed of reusable components.”
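
The quoted passage doesn’t describe the infrastructure itself, but the “reusable components” idea it points at can be sketched generically: register independently developed pieces under names, then compose them into a pipeline instead of building each project from scratch. The registry, decorator, and toy components below are purely illustrative.

```python
# Generic sketch of composing AI systems from reusable, registered components.
from typing import Callable, Dict, List

REGISTRY: Dict[str, Callable[[str], str]] = {}

def component(name: str):
    """Register a reusable processing step under a name."""
    def register(fn: Callable[[str], str]) -> Callable[[str], str]:
        REGISTRY[name] = fn
        return fn
    return register

@component("summarize")
def summarize(text: str) -> str:
    return text[:60] + "..."       # placeholder for a real summarization model

@component("uppercase")
def uppercase(text: str) -> str:
    return text.upper()

def compose(names: List[str]) -> Callable[[str], str]:
    """Chain registered components into a single pipeline."""
    def pipeline(text: str) -> str:
        for name in names:
            text = REGISTRY[name](text)
        return text
    return pipeline

run = compose(["summarize", "uppercase"])
print(run("A long document produced by one team and reused, unchanged, by another."))
```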
