What Is ChatGPT? Key Things to Know About the AI Chatbot
AI-powered chatbots like OpenAI's ChatGPT and Microsoft's Bing are sparking an AI race that could transform the future of work.
The release of OpenAI’s ChatGPT in late November 2022 triggered a new global race in artificial intelligence. In March, the company upped the stakes even more with GPT-4, the AI model it used to upgrade ChatGPT’s capabilities.
The chatbot is part of a wave of so-called generative AI—sophisticated systems that produce content ranging from text to images—that has shaken up Big Tech and is set to transform industries and the future of work.
Microsoft Corp., OpenAI’s strategic partner, has already added the technology across its products, including the Microsoft 365 suite and its Bing search engine. Competitor Google has unveiled a similar search tool and said on March 21 that it is opening public access to the program, called Bard, while stopping short of integrating it into its flagship search engine. Chinese tech giant Baidu’s version made its debut on March 16.
Despite its sudden burst in popularity, the technology currently has serious limitations and potential risks that include spewing misinformation and infringing on intellectual property.
Here’s what to know.
What is ChatGPT?
ChatGPT is an artificial intelligence chatbot developed by the AI research company OpenAI. Released in November 2022, it can have conversations on topics from history to philosophy, generate lyrics in the style of Taylor Swift or Billy Joel, and suggest edits to computer programming code. In March 2023, OpenAI said it would upgrade it to also handle visual information, such as answering questions about the contents of a photo.
ChatGPT is trained on a vast compilation of articles, images, websites, and social media posts scraped from the internet as well as real-time conversations—primarily in English—with human contractors hired by OpenAI. It learns to mimic the grammar and structure of writing and reflects frequently used phrases. It also learns to recognize shapes and patterns in images, such as the contours of a cat, a child or a shirt. It can match words and phrases to those shapes and patterns as well, allowing users to ask about the contents of an image, such as what a cat is doing or the color of the shirt.
The chatbot isn’t always accurate. Its sources aren’t fact-checked, and it relies on human feedback to improve its accuracy. It may also misjudge the objects in a painting or photo.
Who created ChatGPT?
OpenAI is a San Francisco-based AI research firm, co-founded in December 2015 by Sam Altman, now its chief executive, and Elon Musk, who cut ties with the company three years later.
Messrs. Altman and Musk originally started the organization as a nonprofit, saying that such a structure would keep its research “free from financial obligations” and allow them to “better focus on a positive human impact.” As the firm’s research grew increasingly capital-intensive, its leadership changed course, creating a for-profit arm in 2019 to attract more investment.
OpenAI subsequently developed ChatGPT as part of a strategy to help the company turn a profit. In January, Microsoft unveiled a fresh multibillion-dollar investment in OpenAI and has since integrated the chatbot’s underlying technology into its Bing search engine and other products. In March, OpenAI said it would no longer open-source the technical details of its systems, as it had originally stated in its founding principles, to maintain its competitive advantage.
How do ChatGPT and other AI chatbots work?
The technology that underlies ChatGPT is referenced in the second half of its name, GPT, which stands for Generative Pre-trained Transformer. Transformers are specialized algorithms for finding long-range patterns in sequences of data. A transformer learns to predict not just the next word in a sentence but also the next sentence in a paragraph and the next paragraph in an essay. This is what allows it to stay on topic for long stretches of text.
Because a transformer requires a massive amount of data, it is trained in two stages: first, it is pretrained on generic data, which is easier to gather in large volumes, and then it is fine-tuned on tailored data for the specific task it is meant to perform. ChatGPT was pretrained on a vast repository of online text to learn the rules and structure of language; it was fine-tuned on dialogue transcripts to learn the characteristics of a conversation.
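The next-word prediction at the heart of a transformer can be seen with a small, openly available model such as GPT-2, an earlier and much smaller OpenAI model. The sketch below, which uses the Hugging Face transformers library rather than anything from ChatGPT itself, is purely illustrative:

```python
# Toy illustration of next-token prediction with a small pretrained
# transformer (GPT-2). Requires: pip install transformers torch
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "An AI chatbot answers questions by"
result = generator(prompt, max_new_tokens=25, num_return_sequences=1)

# The model extends the prompt one predicted token at a time.
print(result[0]["generated_text"])
```

GPT-2 has had no conversational fine-tuning, so its continuations read like generic web text rather than a chatbot's answers, which is exactly the gap the second training stage is meant to close.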
Keep in mind that OpenAI has access to your ChatGPT inputs and outputs, and its employees and contractors may read them as part of improving the service. Avoid providing private data or sensitive company information.
Other generative AI technologies, such as the image generators DALL-E 2 and Midjourney and the avatar generator Lensa, have become popular with internet users for producing fantastical images and illustrations. Independent writers are using them to create artwork for their articles, and architects to brainstorm new design ideas for clients.
What are the pitfalls of AI chatbots?
AI chatbots and other generative AI programs are mirrors to the data they consume. They regurgitate and remix what they are fed, to both great effect and great failure. Failures of transformer-based programs are particularly difficult to predict and control because the programs rely on such vast quantities of data that it is almost impossible for their developers to grasp everything that data contains.
ChatGPT, for example, will sometimes answer prompts correctly on topics where it ingested high-quality sources and frequently conversed with its human trainers. It will spew nonsense on topics that contain a lot of misinformation on the internet, such as conspiracy theories, and in non-English languages, such as Chinese.
Early user tests of Microsoft’s conversational AI Bing service have also shown that its comments can start to become unhinged, expressing anger, obsession and even threats. Microsoft said it discovered that Bing starts coming up with strange answers following chat sessions of 15 or more questions.
Meanwhile, some artists have said AI image generators plagiarize their artwork and threaten their livelihoods, while software engineers have said that code generators lift large chunks of their code.
For the same reasons, ChatGPT and other text generators can spit out racist and sexist outputs. OpenAI says it uses humans to continually refine the chatbot’s outputs to limit these mishaps. It also uses content-moderation filters to restrict ChatGPT’s responses and avoid politically controversial or unsavory topics.
Ridding the underlying technology of bias—which has for years been a recurring problem, including for an infamous Microsoft chatbot in 2016 known as Tay—remains an unsolved problem and a hot area of research.
“ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness,” tweeted OpenAI’s Mr. Altman shortly after the chatbot’s release, adding that it is “a mistake to be relying on it for anything important right now.”
What is Microsoft’s relationship to OpenAI?
Microsoft is OpenAI’s largest investor and exclusively licenses its technologies. The tech giant invested $1 billion into the AI startup in 2019, an undisclosed amount in 2021 and an additional amount of up to $10 billion in January, according to people familiar with the latest deal. Under the agreement, Microsoft can use OpenAI’s research advancements, including GPT-4 and ChatGPT, to create new or enhance existing products. It is the only company outside of OpenAI that can provide an API for these technologies.
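For developers, these technologies are exposed as hosted web APIs rather than downloadable software. As a rough illustration only, a request to OpenAI's chat API using the openai Python package as it worked in early 2023 looked something like the sketch below; the model name and call style are assumptions, not details drawn from the article:

```python
# Illustrative sketch of calling OpenAI's hosted chat API
# (openai Python package circa early 2023; needs an API key from OpenAI).
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; keep real keys secret

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # the model family behind ChatGPT
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain what a transformer model is in one sentence."},
    ],
)

print(response["choices"][0]["message"]["content"])
```

Under the arrangement described above, Microsoft offers access to the same models through its own Azure cloud services.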
Is AI going to replace jobs?
As with every wave of automation technologies, the latest will likely have a significant impact on jobs and the future of work. Whereas blue-collar workers bore the brunt of earlier waves, generative AI will likely have a greater effect on white-collar professions. A 2019 study from the Brookings Institution found that AI would most affect jobs such as marketing specialists, financial advisers and computer programmers.
Those effects will be mixed. Economists who study automation have found that three things tend to happen: Some workers improve their productivity, some jobs are automated or consolidated, and new jobs that didn’t previously exist are also created.
The final scorecard is difficult to predict. In company-level studies of automation, researchers have found that some companies that adopt automation may increase their productivity and ultimately hire more workers over time. But those workers can experience wage deflation and fewer career-growth opportunities.
Newly created jobs often go one of two ways: They either require more skill than the work that was automated, or far less. Self-driving cars, for example, create new demand for highly skilled engineers but also for low-skilled safety drivers, who sit in the driver’s seat to babysit the vehicle.