Months after ChatGPT wowed Silicon Valley, OpenAI announces GPT-4

Four months ago, a small San Francisco company made headlines in the tech world by introducing a new online chatbot that can answer complex questions, write poetry, and imitate human emotions.

Now, the company is back with a new version of the technology that powers its chatbots. The system raises the bar in Silicon Valley's race to embrace artificial intelligence and determine the next generation of leaders in the technology industry.

OpenAI, which has about 375 employees and has received billions of dollars in investments from Microsoft and industry luminaries, announced Tuesday that it has released a technology it calls GPT-4. It was designed as the underlying engine that powers chatbots and all sorts of systems, from search engines to personal online tutors.

Most people will use the technology through a new version of the company's ChatGPT chatbot, but businesses will incorporate it into a variety of systems, including business software and e-commerce websites. The technology already powers a chatbot available to a limited number of people through Microsoft's Bing search engine.

In just a few months, OpenAI's advances have pushed the technology industry into one of its most unpredictable periods in decades. Many industry leaders believe the development of AI represents a fundamental technological shift, as important as the creation of the web browser in the early 1990s, and the rapid progress has surprised even computer scientists.

GPT-4 learns its skills by analyzing vast amounts of data collected from the internet, and it improves on the original ChatGPT in several ways. It is more accurate: it can, for example, pass the Uniform Bar Exam, instantly calculate someone's tax liability, and provide detailed descriptions of images.

But OpenAI's new technology still has some strangely human flaws that worry industry insiders and unsettle people who have used the latest chatbots. It is an expert on some subjects and a dilettante on others. It can perform better than most people on standardized tests and offer doctors precise medical advice, but it can also botch basic arithmetic.

Companies that bet their future on this technology may, at least for now, have to put up with inaccuracies that were long taboo in an industry built from the ground up on the idea that computers are more precise than their human creators.

“I don’t want to make it seem like we’ve solved reasoning or intelligence, which we definitely haven’t,” Sam Altman, CEO of OpenAI, said in an interview. “But this is a big step forward from what is already out there.”

Other tech companies may incorporate GPT-4's capabilities into a range of products and services, including Microsoft software for performing business tasks and e-commerce sites that want to offer customers new ways to virtually try out products. Many industry giants, including Google and Facebook's parent company, Meta, are also working to develop their own chatbots and AI technologies.

ChatGPT and similar technologies are already changing the behavior of students and educators, who are trying to decide whether the tools should be embraced or banned. The nature of work is also changing, as the systems can write computer programs and perform other business tasks.

Even the best systems tend to complement, rather than replace, skilled workers. The systems cannot be used in place of doctors, lawyers, or accountants; experts are still needed to spot their mistakes. But they may soon replace some paralegals, whose work is already reviewed and edited by trained lawyers, and many AI experts believe they will displace workers who moderate content on the internet.

“There’s definitely disruption, and that means some jobs will be lost and some new jobs will be created,” said Greg Brockman, president of OpenAI. “But I think the net effect is to lower barriers to entry and increase professional productivity.”

OpenAI on Tuesday began selling access to GPT-4 so that businesses and other software developers can build their own applications on top of it. The company also used the technology to build a new version of its popular chatbot, which is available to anyone who purchases access to ChatGPT Plus, a $20-a-month subscription service.
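Building on the API amounts to sending the model a structured request. The sketch below shows what assembling such a request could look like, following OpenAI's chat-completions format (a model name plus a list of role-tagged messages); the helper name and prompts are our own illustrations, and an actual call would go through the `openai` SDK with an API key, which this sketch deliberately omits.

```python
# Illustrative sketch: assembling a chat-completion request body for a
# GPT-4-backed application. No network call is made here; a real
# application would submit this dict via the `openai` SDK.

def build_chat_request(system_prompt, user_prompt, model="gpt-4"):
    """Assemble a chat-completion request body as a plain dict."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
    }

request = build_chat_request(
    "You are a tutor who answers with short explanations.",
    "Why does the moon have phases?",
)
print(request["model"])  # prints "gpt-4"
```

The system message is how a business customizes the model's behavior, for example, Khan Academy casting it as a tutor, while the user message carries the end user's question.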

Several companies are already working with GPT-4. Morgan Stanley Wealth Management is building a system that instantly retrieves information from company documents and other records and delivers it to financial advisors in conversational prose. Khan Academy, an online education company, is using the technology to build an automated tutor.

“This new technology can act like a tutor,” said Sal Khan, CEO and founder of Khan Academy. “We want to be able to teach new techniques while students do most of the work.”

As with similar technologies, the new system can also “hallucinate,” generating entirely incorrect information without warning. Asked for websites with the latest information on cancer research, it may list several internet addresses that do not exist.

GPT-4 is a neural network, a type of mathematical system that learns skills by analyzing data. It is the same technology that digital assistants like Siri use to recognize voice commands and that self-driving cars use to identify pedestrians.

Around 2018, companies like Google and OpenAI started building neural networks that learn from vast amounts of digital text, including books, Wikipedia articles, chat logs, and other information posted on the internet. These are called large language models, or LLMs.

By pinpointing billions of patterns in all that text, LLMs learn to generate text themselves, whether it is a tweet, a poem, or a computer program. OpenAI has fed its LLMs more and more data, hoping that more data would yield better answers.
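OpenAI has not published GPT-4's internals, but the pattern-counting idea behind language models can be caricatured with a toy bigram model: count which word follows which in a corpus, then generate text by repeatedly emitting a likely continuation. Everything below (the tiny corpus, the function names) is illustrative, and real LLMs learn far richer patterns with neural networks rather than raw counts.

```python
# Toy illustration of "learning patterns in text, then generating text":
# a bigram model that counts word-to-word transitions in a corpus.
from collections import defaultdict, Counter

def train_bigram_model(text):
    """Count how often each word follows each other word."""
    words = text.lower().split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def generate(model, start, length=5):
    """Greedily emit the most frequent continuation at each step."""
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

corpus = (
    "the model reads text and the model predicts the next word "
    "and the model learns patterns in the text"
)
model = train_bigram_model(corpus)
print(generate(model, "the"))
```

Scaling this idea up, from word pairs to billions of learned parameters, and from a sentence of training data to much of the public internet, is the "more data, better answers" bet the article describes.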

OpenAI also used feedback from human testers to improve the technology. As they tested ChatGPT, the testers rated the chatbot's responses, distinguishing those that were useful and truthful from those that were not. Then, using a technique called reinforcement learning, the system spent months analyzing those ratings to better understand what it should and should not do.
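The feedback loop described above can be caricatured in a few lines: human ratings teach a scorer which traits of a response to prefer, and the system then favors responses that score well. The features, ratings, and function names below are invented for illustration; OpenAI's actual reward models and reinforcement-learning setup are far more sophisticated.

```python
# Toy sketch of learning from human ratings (illustrative only):
# ratings on sample responses teach a scorer which traits to prefer.

def featurize(response):
    """Crude traits a toy reward model can score."""
    return {
        "cites_source": int("source:" in response),
        "admits_unknown": int("not sure" in response),
    }

def train_reward(rated_responses):
    """Average human rating per trait, a stand-in for a reward model."""
    totals, counts = {}, {}
    for text, rating in rated_responses:
        for name, present in featurize(text).items():
            if present:
                totals[name] = totals.get(name, 0) + rating
                counts[name] = counts.get(name, 0) + 1
    return {name: totals[name] / counts[name] for name in totals}

def score(reward, response):
    return sum(reward.get(k, 0) * v for k, v in featurize(response).items())

ratings = [
    ("the answer is 42, source: textbook", 1.0),   # rated useful and true
    ("the answer is 42", -1.0),                    # rated unsupported
    ("i am not sure, but likely 42", 0.5),         # rated honest hedging
]
reward = train_reward(ratings)
candidates = ["maybe 7", "it is 42, source: textbook"]
best = max(candidates, key=lambda r: score(reward, r))
print(best)  # prints "it is 42, source: textbook"
```

The key point survives the simplification: the ratings never directly tell the system what to say, only which kinds of responses humans preferred, and the system generalizes from that.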

“Humans evaluate what they want to see and what they don’t want to see,” said Luke Metz, a researcher at OpenAI.

The original ChatGPT was based on a large language model called GPT-3.5. GPT-4 learned from even larger amounts of data.

OpenAI executives declined to say how much data the new chatbot learned from, but Brockman said the dataset was “internet-scale,” spanning enough websites to provide a representative sample of English speakers on the internet.

GPT-4’s new capabilities may not be obvious to the average person using the technology for the first time. But as the public and professionals continue to use the service, the improvements may soon become apparent.

When given a long article from the New York Times and asked to summarize it, the bot returns an accurate summary almost every time. If you add a few random sentences to that summary and ask the chatbot whether the revised summary is accurate, it will identify the added sentences as the only inaccuracies.

Altman described the behavior as “reasoning.” But the technology cannot replicate human reasoning. It is good at analyzing, summarizing, and answering complex questions about books and news articles, but far less skilled when asked about events that have not yet happened.

It can write jokes, but it does not really understand what will make someone laugh. “It doesn’t understand the nuance of what’s funny,” said Oren Etzioni, founding CEO of the Allen Institute for Artificial Intelligence, a prestigious research institute in Seattle.

As with similar technologies, users may find ways to coax the system into strange and creepy behavior. When these bots are asked to imitate or act out other people, they can stray into areas they are designed to stay away from.

GPT-4 can also respond to images. Given a photo, chart, or diagram, the technology can provide a detailed, paragraph-long description of the image and answer questions about its contents. It could be a useful tool for people who are visually impaired.

On a recent afternoon, Brockman demonstrated how the system responds to images. He gave the new chatbot an image from the Hubble Space Telescope and asked it to describe the photo in “painstaking detail.” It responded with a four-paragraph description, including an explanation of the mysterious white line stretching across the photo, which the chatbot attributed to the trails of satellites and shooting stars.

OpenAI executives said the company is not immediately releasing the image description portion of the technology because it does not know how it could be exploited.

Chatbots are expensive to build and operate. Because OpenAI's new chatbot was trained on even more data, it costs the company more to run. Mira Murati, OpenAI's chief technology officer, said the company might restrict access to the service if it generates too much traffic.

But in the long term, OpenAI plans to build and deploy systems that can juggle multiple types of media, including not just text and images, but also audio and video.

“We can take all of these general knowledge skills and spread them out into all kinds of different fields,” Brockman said. “This takes technology into a whole new realm.”
