Creative AI Is Generating Some Messy Problems

The hot new trend in tech circles comes with thorny legal and ethical challenges.

Bloomberg News

November 28, 2022

(Bloomberg Opinion/Parmy Olson) — A tense scene in the 2004 movie I, Robot shows the character played by Will Smith arguing with an android about humanity’s creative prowess. “Can a robot write a symphony?” he asks, rhetorically. “Can a robot turn a canvas into a beautiful masterpiece?”

“Can you?” the robot answers.

In our current reality, machines wouldn’t need the snarky reply. The answer would simply be “yes.”

In the last few years, artificial-intelligence systems have shifted from being able to process content — recognizing faces or reading and transcribing text — to creating digital paintings or writing essays. The digital artist Beeple was shocked in August when several Twitter users generated their own versions of one of his paintings with AI-powered tools. Similar software can create music and even videos. The broad term describing all this is “generative AI,” and as this latest lurch into our digital future becomes part of our present, some familiar tech industry challenges like copyright and social harm are already reemerging.

We’ll probably look back on 2022 as the year generative AI exploded into mainstream attention, as image-generating systems from OpenAI and the open source startup Stability AI were released to the public, prompting a flood of fantastical images on social media. The breakthroughs are still coming thick and fast. Last week, researchers at Meta Platforms Inc. announced Cicero, an AI system that could successfully negotiate with humans and generate dialogue in the strategy game Diplomacy. Venture capital investment in the field grew to $1.3 billion in deals this year, according to data from research firm PitchBook, even as it contracted for other areas in tech. (Deal volume grew almost 500% in 2021.)

Companies that sell AI systems for generating text and images will be among the first to make money, says Sonya Huang, a partner at Sequoia Capital who published a “map” of generative AI companies that went viral this month. An especially lucrative field will be gaming, already the largest category for consumer digital spending.

“What if gaming was generated by anything your brain could imagine, and the game just develops as you go?” asks Huang. Most generative AI startups are building on top of a few popular AI models that they either pay to access or get for free. OpenAI, the artificial intelligence research company co-founded by Elon Musk and mostly funded by Microsoft Corp., sells access to its image generator DALL-E 2 and its automatic text writer GPT-3. (GPT-4, the forthcoming iteration of the latter, is reputed by its developers to be freakishly proficient at mimicking human jokes, poetry and other forms of writing.)
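
What “building on top of” a model usually means in practice is calling a hosted API and paying per request. Below is a minimal sketch, assuming OpenAI’s Python client as it worked in late 2022; the model name, prompts and parameters are illustrative assumptions, not details from any particular startup.

    import os

    import openai

    # Assumes an API key issued by OpenAI; access to these models is paid.
    openai.api_key = os.environ["OPENAI_API_KEY"]

    # Text generation with a GPT-3-family model: billed per token.
    completion = openai.Completion.create(
        model="text-davinci-003",  # illustrative GPT-3 variant
        prompt="Write a two-line quest description for a fantasy game.",
        max_tokens=60,
    )
    print(completion.choices[0].text.strip())

    # Image generation with DALL-E 2: billed per image.
    image = openai.Image.create(
        prompt="a castle floating above a desert at dusk, digital art",
        n=1,
        size="512x512",
    )
    print(image["data"][0]["url"])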

But these advancements won’t carry on unfettered, and one of the thorniest problems to be resolved is copyright. Typing in “a dragon in the style of Greg Rutkowski” will churn out artwork that looks like it could have come from Rutkowski himself, a digital artist known for his fantasy landscapes. Rutkowski gets no financial benefit from that, even if the generated image is used for a commercial purpose, something the artist has publicly complained about.
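
How low the barrier is becomes clear when you run that exact prompt yourself. Here is a sketch using the open source diffusers library and a publicly released Stable Diffusion checkpoint; the specific model name is an illustrative assumption, not a claim about which version any given user ran.

    import torch
    from diffusers import StableDiffusionPipeline

    # Load a publicly released Stable Diffusion checkpoint (illustrative choice).
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        torch_dtype=torch.float16,
    ).to("cuda")

    # The artist's name alone steers the output toward his signature style.
    image = pipe("a dragon in the style of Greg Rutkowski").images[0]
    image.save("dragon.png")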

Popular image generators like DALL-E 2 and Stable Diffusion are shielded by America’s fair use doctrine, which treats free expression as a defense for certain uses of copyrighted work. Their AI systems are trained on millions of images, including Rutkowski’s, so arguably they benefit from directly exploiting the original work. But copyright lawyers and technologists are split on whether artists will ever be compensated.

In theory, AI firms could eventually copy the licensing model used by music-streaming services, but AI decisions are typically inscrutable: how would they track usage? One path might be to compensate artists whenever their name comes up in a prompt, but it would be up to the AI companies to set up that infrastructure and police its use. Ratcheting up the pressure is a class action lawsuit against Microsoft Corp., GitHub Inc. and OpenAI over copyright involving a code-generating tool called Copilot, a case that could set a precedent for the broader generative AI field.
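
No generator ships anything like that today, but a hypothetical sketch shows how simple the accounting half of prompt-based attribution could be; the registry, ledger and function below are invented for illustration only.

    from collections import Counter

    # Hypothetical opt-in registry of artist names; invented for illustration.
    REGISTERED_ARTISTS = {"greg rutkowski", "beeple"}

    # Running tally of generations that name each registered artist.
    royalty_ledger: Counter = Counter()

    def record_attribution(prompt: str) -> list[str]:
        """Credit any registered artist who is named in a prompt."""
        matches = [name for name in REGISTERED_ARTISTS if name in prompt.lower()]
        for artist in matches:
            royalty_ledger[artist] += 1  # one credit per generation
        return matches

    record_attribution("a dragon in the style of Greg Rutkowski")
    print(royalty_ledger)  # Counter({'greg rutkowski': 1})

The hard part is everything this sketch omits: misspelled or paraphrased names, style mimicry that names no one at all, and auditing systems whose inner workings are closed.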

Then there’s content itself. If AI can generate information faster than any human could, including, inevitably, porn, what happens when some of it is harmful or misleading? Facebook and Twitter have actually improved their ability to clean up misinformation on their sites in the last two years, but they could face a much greater challenge from text-generating tools, like OpenAI’s, that set their efforts back. The issue was recently underscored by a new tool from Facebook parent Meta itself.

Earlier this month Meta unveiled Galactica, a language system specializing in science that could write research papers and Wikipedia articles. Within three days, Meta shut it down. Early testers found it was generating nonsense that sounded dangerously realistic, including instructions on how to make napalm in a bathtub and Wikipedia entries on the benefits of being white or how bears live in space. The eerie effect was fact blended so finely with hogwash that it was hard to tell the two apart. Political and health-related misinformation is hard enough to track when it’s written by humans. What happens when it is generated by machines that sound increasingly like people?

That could turn out to be the biggest mess of all.
