Imagine if Siri could write you a college essay, or Alexa could spit out a Shakespearean-style movie review.
Last week, OpenAI opened up access to ChatGPT, an AI-powered chatbot that interacts with users in an eerily convincing, conversational way. Its ability to provide extensive, thoughtful, and comprehensive answers to questions and prompts, even when those answers are inaccurate, has surprised users, including academics and some in the tech industry.
The tool quickly went viral. On Monday, OpenAI co-founder Sam Altman, a prominent Silicon Valley investor, said on Twitter that ChatGPT had crossed one million users. It also caught the attention of some prominent tech leaders, such as Box CEO Aaron Levie.
“There’s a certain feeling that happens when a new technology adjusts the way you think about computing. Google did it. Firefox did it. AWS did it. iPhone did it. OpenAI is doing it with ChatGPT,” Levie said on Twitter.
But as with other AI-powered tools, it also raises potential concerns, including how it could disrupt creative industries, perpetuate bias, and spread misinformation.
ChatGPT is a large language model trained on a massive trove of online information to generate its responses. It comes from the same company behind DALL-E, which generates a seemingly limitless range of images in response to user prompts. It is also the next iteration of the GPT-3 text generator.
After signing up for ChatGPT, users can ask the AI system to answer a variety of questions, such as “Who was the president of the United States in 1955?”, or to boil down difficult concepts into something a second grader can understand. It’ll even tackle open-ended questions, like “What is the meaning of life?” or “What do I wear if it’s 40 degrees today?”
“It depends on the activities you plan to do. If you plan to be outside, you should wear a light jacket or sweater, long pants, and closed-toe shoes,” ChatGPT replied. “If you plan to be inside, you can wear a T-shirt and jeans or other comfortable clothing.”
But some users are getting very creative.
One person asked the chatbot to rewrite the ’90s hit song “Baby Got Back” in the style of “The Canterbury Tales”; another wrote a letter to remove a negative account from a credit report (instead of hiring a credit repair attorney). Other colorful examples include asking for fairy-tale-inspired home decorating tips and giving it an AP English exam question (it responded with a five-paragraph essay about “Wuthering Heights”).
In a blog post last week, OpenAI said that “the format makes it possible for the tool to answer follow-up questions, admit its mistakes, question incorrect premises, and reject inappropriate requests.”
As of Monday morning, the page for testing ChatGPT was down, citing “exceptionally high demand.” “Please stand by while we work to scale our systems,” the message read. (Now it appears to be back online.)
While ChatGPT successfully answered a variety of questions submitted by CNN, some of its answers were notably wrong. In fact, Stack Overflow, a question-and-answer platform for coders and programmers, temporarily banned users from sharing information generated by ChatGPT, noting that it is “substantially harmful to the site and to users who ask questions or seek correct answers.”
Beyond the problem of spreading misinformation, the tool could also threaten some writing professions, be used to explain problematic concepts, and, as with all AI tools, perpetuate biases based on the data set it was trained on. Writing a prompt involving a CEO, for example, could generate a response that assumes the person is white and male.
“While we’ve worked hard to make the model reject inappropriate requests, it will sometimes respond to harmful instructions or exhibit biased behavior,” OpenAI said on its website. “We are using the moderation API to warn or block certain types of unsafe content, but we expect it to have some false negatives and positives for now. We are eager to gather user feedback to assist in our continued work to improve this system.”
Still, Lian Jye Su, research director at market research firm ABI Research, cautions that the chatbot is operating “without a contextual understanding of the language.”
“It’s very easy for the model to give plausible but incorrect or nonsensical answers,” he said. “It guesses when it should ask for clarification, and sometimes responds to harmful instructions or exhibits biased behavior. It also lacks regional and country-specific understanding.”
At the same time, however, it offers insight into how companies could capitalize on the technology to develop stronger virtual assistants, as well as customer and patient care solutions.
While the DALL-E tool is free, it does put a limit on the number of prompts a user can submit before having to pay. When Elon Musk, a co-founder of OpenAI, recently asked Altman on Twitter about ChatGPT’s average cost per chat, Altman said: “We’ll have to monetize it somehow at some point; the computing costs are staggering.”