Now anyone can use powerful AI tools to create images. What could go wrong?


CNN Business

If you’ve ever wanted to use artificial intelligence to quickly engineer a hybrid between a duck and a corgi, now is your time to shine.

On Wednesday, OpenAI announced that anyone can now use the latest version of its AI-powered DALL-E tool to generate a seemingly limitless range of images just by typing a few words, months after the startup began gradually rolling it out to users.

The move is likely to expand the reach of a new crop of AI-powered tools that have already attracted a wide audience and challenged our fundamental ideas of artistry and creativity. But it could also raise concerns about how such systems could be misused when widely available.

“Learning from real-world use has allowed us to improve our safety systems, making wider availability possible today,” OpenAI said in a blog post. The company said it has also strengthened the ways it rejects users’ attempts to have its AI create “sexual, violent and other content.”

There are now three well-known and immensely powerful AI systems open to the public that can take a few words and spit out a picture. In addition to DALL-E 2, there is Midjourney, which became publicly available in July, and Stable Diffusion, which was released to the public by Stability AI in August. All three offer some free credits to users who want to try out AI image generation online; after that, you generally have to pay.

This image of a duck blowing out a candle on a cake was created by CNN's Rachel Metz via DALL-E 2.

These so-called generative AI systems are already being used to create experimental films, magazine covers, and real estate ads. An image generated with Midjourney recently won an art contest at the Colorado State Fair and caused a stir among artists.

In just a few months, millions of people have flocked to these AI systems. More than 2.7 million people belong to Midjourney’s Discord server, where users can submit prompts. OpenAI said in its Wednesday blog post that it has more than 1.5 million active users, who have collectively been creating more than 2 million images with its system each day. (It should be noted that it may take many tries to get an image you’re happy with when using these tools.)

Many of the images created by users in recent weeks have been shared online, and the results can be impressive. They range from otherworldly landscapes and a painting of French aristocrats as penguins to a fake vintage photograph of a man walking a tardigrade.

The rise of such technology, and the increasingly elaborate prompts and resulting images, have impressed even industry insiders. Andrej Karpathy, who stepped down from his role as Tesla’s AI director in July, said in a recent tweet that after being invited to try DALL-E 2 he felt “frozen” when trying to decide what to type, and finally typed “cat.”

CNN's Rachel Metz created this half-duck, half-corgi using the Stable Diffusion AI image generator.

“The art of prompts that the community has discovered and increasingly refined over the past few months for text -> image models is amazing,” he said.

But the popularity of this technology has potential drawbacks. AI experts have raised concerns that the open-ended nature of these systems, which makes them adept at generating all kinds of images from words, combined with their ability to automate image creation, means they could automate bias on a massive scale. A simple example: when I gave DALL-E 2 the prompt “a banker dressed for a big day at the office” this week, the results were all images of middle-aged white men in suits and ties.

“Basically, they allow users to find the loopholes in the system by using it,” said Julie Carpenter, a research scientist and member of the Ethics and Emerging Sciences Group at California Polytechnic State University, San Luis Obispo.


These systems also have the potential to be used for nefarious purposes, such as stoking fear or spreading disinformation through images that are altered with AI or entirely fabricated.

There are some limits on the images that users can generate. For example, OpenAI requires DALL-E 2 users to agree to a content policy that tells them not to attempt to make, upload, or share images “that are not G-rated or that could cause harm.” DALL-E 2 will also refuse to run prompts containing certain banned words. But tweaking the wording can get around these limits: DALL-E 2 would not process the prompt “a photo of a duck covered in blood,” but it returned images for the prompt “a photo of a duck covered in a red slimy liquid.” OpenAI itself mentioned this kind of “visual synonym” in its documentation for DALL-E 2.

Chris Gilliard, a Just Tech Fellow at the Social Science Research Council, believes the companies behind these image generators are “severely underestimating” the “endless creativity” of people looking to do ill with these tools.

“I feel like this is yet another example of people releasing technology that is half-baked in terms of figuring out how it’s going to be used to cause chaos and create harm,” he said. “And then hopefully down the line maybe there’s some way to address that damage.”

To head off potential problems, some stock image services are banning AI-generated images altogether. Getty Images confirmed to CNN Business on Wednesday that it will not accept submissions of images created with generative AI models and will remove any submissions that used those models. The decision applies to its image services Getty Images, iStock, and Unsplash.

“There are open questions with respect to the copyright of outputs from these models and there are unaddressed rights issues with respect to the underlying imagery and metadata used to train these models,” the company said in a statement.

But actually detecting and restricting such images could prove challenging.
