
Runway ML puts AI tools in the hands of creators everywhere

A new model for easy access to machine learning

Machine learning can be a fantastic tool for creators, but integrating AI into your workflow is a challenge for those who can’t code. A new program called Runway ML aims to make this process easier by providing artists, designers, filmmakers, and others with an “app store” of machine learning applications that can be activated with a few clicks.

Say you’re an animator on a budget who wants to turn a video of a human actor into a 3D model. Instead of hiring expensive motion capture equipment, you could use Runway to apply a neural network called “PoseNet” to your footage, creating wireframe models of your actor that can then be exported for animation.

An easy way to experiment with machine learning

Or say you need to remove a coffee cup that was accidentally left in a shot on your high-budget fantasy TV drama. You could edit it out the traditional way, painting over the cup by hand, or you could run your footage through a machine learning segmentation model, which would automatically highlight different objects in each frame to make your job easier.

Examples like these are just the tip of the iceberg for Runway, which co-founder Cristóbal Valenzuela describes as a radically egalitarian tool. “Machine learning is a very exclusive technology,” Valenzuela tells The Verge. “But I want to make things more inclusive; to get people from different backgrounds sitting around the table and using these models.”

Runway began as Valenzuela’s thesis project at the Tisch School of the Arts at New York University. After getting enthusiastic feedback from the AI art community, he decided to take the program mainstream, asking two school friends to come on board as co-founders and gathering seed money from NYC and Silicon Valley backers. The company was incorporated last December with a beta launch following this January.

Valenzuela straddles the fields of art and code and says he wants to bridge these two worlds, empowering non-coders to use machine learning models and, in turn, connecting researchers to the people who will benefit directly from their work.

In a blog post Valenzuela wrote last May, he compares the current AI art scene to painting in the 16th and 17th centuries. At that time, the act of simply storing and using paint was something of a craft secret, with painters relying on esoteric techniques involving pigs’ bladders and string. But with the invention of the paint tube in 1841, the craft became more accessible. Painting also became easier to do outdoors, leading to new styles and movements.

Doing for AI what paint tubes did for portraiture

As 19th century painter Pierre-Auguste Renoir told his son: “Without colors in tubes, there would have been no Cézanne, no Monet, no Sisley or Pissarro, nothing of what the journalists were to call Impressionism.” In other words: accessibility begets creativity.

But what is the “paint tube” for modern artists? Valenzuela makes a convincing argument that it might be Runway — or, at least, a program that looks a lot like it.

The pigs’ bladders holding back progress in this case are the skills currently needed to use machine learning models. That means learning to use software like TensorFlow or PyTorch; it might mean buying a few pricey GPUs (because your current computer won’t run these systems) or connecting to an AWS instance instead. None of these tasks are beyond the reach of non-coders, but they certainly take time and create a bottleneck for users. By comparison, Runway’s model is the perfect paint tube: just click and go.
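For a sense of what that do-it-yourself route involves, here is a minimal sketch of running a pretrained segmentation model (like the one in the coffee cup example above) by hand with PyTorch. The model choice, torchvision’s DeepLabV3, and the preprocessing values are standard library defaults picked for illustration; they are not anything Runway itself uses.

```python
# A rough sketch of the "traditional route": loading a pretrained
# segmentation model by hand with PyTorch/torchvision.
import torch
import torchvision
from torchvision import transforms
from PIL import Image

# Download and load a pretrained DeepLabV3 segmentation model.
model = torchvision.models.segmentation.deeplabv3_resnet101(pretrained=True)
model.eval()

# Standard ImageNet normalization expected by the pretrained weights.
preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

frame = Image.open("frame_0001.png").convert("RGB")  # one video frame
batch = preprocess(frame).unsqueeze(0)               # shape: (1, 3, H, W)

with torch.no_grad():
    scores = model(batch)["out"][0]  # per-pixel class scores

mask = scores.argmax(0)  # class label for every pixel, e.g. to isolate a cup
```

And this is only the inference step; it assumes Python, PyTorch, and a capable machine are already set up.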

The company is not the first to make AI models easier to use, of course. But earlier examples, like Lobe (which let users train AI systems using a visual interface before it was bought by Microsoft), have focused on business use cases rather than creative ones.

Runway’s target market is obvious when you load up its storefront, which lets you browse a range of models that run the gamut from text generation to motion tracking.

You click to see the details of each model, click to add it to your workspace, set up your inputs and outputs, then start the system running. There are hooks to connect these outputs to other apps (so you can send ML-processed images to Photoshop, for example), and users can import new models directly from GitHub with just a few lines of code.
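To give a flavor of those hooks, here is a minimal sketch of sending a video frame to a model running locally and reading back the processed result over HTTP. The port, route, and JSON field names below are hypothetical placeholders for illustration, not Runway’s documented interface.

```python
# Hypothetical sketch: POST a frame to a model running in the local
# Runway workspace and collect the processed output. Endpoint and
# field names are assumptions, not Runway's actual API.
import base64
import requests

# Encode one frame as base64 so it can travel as JSON.
with open("frame_0001.png", "rb") as f:
    frame_b64 = base64.b64encode(f.read()).decode("ascii")

resp = requests.post(
    "http://localhost:8000/query",  # hypothetical local model endpoint
    json={"image": f"data:image/png;base64,{frame_b64}"},
)
resp.raise_for_status()

result = resp.json()  # e.g. a processed image or keypoints, depending on the model
```

The same pattern is what would let another app, such as Photoshop, consume a model’s output without the user writing any training code.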

Exploring two pretrained generative adversarial networks (or GANs) generating images in the style of portraits and bedrooms using Runway ML.

This latter point is one of the most important for Runway. It’s hard to overstate just how fast-paced and collaborative the current AI art scene is, and how much individuals benefit from one another’s work and from professional research. No sooner does a new model get released than it’s pounced on by the masses, who use it in all sorts of unexpected ways.

Take GPT-2, the AI text generation system unveiled in February by research lab OpenAI. In the months since its launch, GPT-2 has been turned into accessible web apps, used to help write a novel, and even deployed to populate an entire subreddit with chatbots mimicking other subreddits.

In short: this is a bubbling and energetic scene, and Runway wants to stay as connected to it as possible. Valenzuela says his team is constantly responding to users’ feedback, adding new models to the program and updating the software’s interface every month.

It’s this connection to the community and speed of updating that he says will also stop Runway from getting overrun by the industry’s established players. “We’re just four people on our team,” he says. “We like to ship things fast, to get people excited and get them using things.” And although corporate giants like Adobe certainly have a lot of interest in similar AI applications, they’re necessarily going to be slower to integrate them into products.

The reception Runway has received so far gives credence to Valenzuela’s comments.

“I personally find it useful for trying out some models quickly,” says Mario Klingemann, an AI artist who recently sold his work at Sotheby’s. He says the program will be particularly useful for workshops and classes since it allows teachers to get a new machine learning model up and running in minutes.

Klingemann has reservations, too. He tells The Verge that the software is “too limited” for making his own art because it only offers pretrained models. That means if someone wants to customize the output of a generator system, like a generative adversarial network (GAN), they can’t train it on their own data. Valenzuela says this functionality will soon be added to the program, along with the ability to use the output of one model as the input of another.

Robbie Barrat, another AI artist who’s renowned for his work with GANs, also praised the program, singling out its potential to democratize the AI art scene as particularly positive.

“If Runway can lower the barrier for entry ... I’m all for it.”

“Right now, it’s easiest for rich tech bros who work at Google or something, went to a fancy computer science school, and have access to huge GPU clusters at work to make artwork with neural networks,” Barrat tells The Verge. “If Runway can lower the barrier for entry and let a more diverse group of people easily create art with AI, I’m all for it.”

But Barrat agrees that the program’s simplicity is a limiting factor. People who want to use AI to do more than just simplify an existing step in their process will need more powerful programs than Runway, he says. “No matter how good it is as a tool, people who code will still be able to make a bigger range of things and ultimately have more control over what they produce.”

Barrat and Klingemann also say price is a potential problem when it comes to training ML models on Runway. The software is free to download and comes with $10 of cloud credit, which is more than enough to get to know its potential. But after that, it costs users 5 cents per minute of computation. For artists like Klingemann and Barrat who need to train their own models, it’s likely cheaper to buy their own GPUs or connect to a cloud service. “I am not sure it will financially make sense to me,” says Klingemann.
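To put those numbers in context, here is a rough back-of-the-envelope comparison. The Runway rates come from the figures above; the $700 GPU price is an assumed placeholder, not a figure from either artist.

```python
# Back-of-the-envelope cost comparison. Runway figures are from the
# article; the GPU price is a hypothetical placeholder for illustration.
RATE_PER_MIN = 0.05   # dollars per minute of cloud computation
FREE_CREDIT = 10.00   # starting credit included with the download

print(f"Free credit covers {FREE_CREDIT / RATE_PER_MIN:.0f} minutes")  # 200 minutes

# GAN training can easily run for days; take a three-day run as an example.
train_minutes = 3 * 24 * 60
print(f"Three-day training run: ${train_minutes * RATE_PER_MIN:.2f}")  # $216.00

# Versus buying a consumer GPU outright (hypothetical $700 card), the
# card pays for itself after roughly this much continuous training.
GPU_PRICE = 700.00
days_to_break_even = GPU_PRICE / (RATE_PER_MIN * 60 * 24)
print(f"Break-even vs. owning a GPU: ~{days_to_break_even:.1f} days")  # ~9.7 days
```

On those assumed numbers, a heavy trainer recoups the cost of a GPU in under two weeks of compute, which is roughly the calculus behind Klingemann’s hesitation.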

But saying that Runway isn’t a perfect fit for all users is hardly a huge criticism. No one is surprised when their Swiss army knife can’t be used to dig a trench. And for Valenzuela, the primary mission isn’t catering to pros; it’s expanding the borders of the AI art world.

“These technologies are going to radically change the way we create content because algorithms are understanding text, video, and sound in the same way we do,” he says. “And if we put these tools in the hands of people who have never accessed these before, they’re going to start thinking of new ways to use them.”