Big Tech’s AI race threatens the “essence of truth”

If science is about the pursuit of truth, the commercialization of artificial intelligence over the past few months is clearly unscientific and may even be dangerous, according to a top AI expert.

OpenAI may have let the AI genie out of the bottle in November with the release of ChatGPT, a chatbot based on the company’s breakthrough generative AI system. Since then, big tech companies like Microsoft and Google have jumped into the fray, rushing to develop their own AI products, some of which have already been released.

But accelerating timelines can be risky, especially for technologies like AI, and experts remain divided on whether AI will be a net positive for humanity or whether it will evolve to destroy civilization. Even OpenAI CEO Sam Altman said in a Congressional hearing this week that AI could benefit from more regulation and government oversight rather than being left to companies alone. But competition is difficult to stop once it has started, and the AI race is rapidly becoming a “vicious cycle,” Yoshua Bengio, a professor at the University of Montreal and a leading expert on artificial intelligence and deep learning, told the Financial Times in an interview Thursday.

Bengio was one of more than 1,000 experts who signed an open letter in March calling for a six-month moratorium on advanced AI research. For his pioneering work in deep learning, Bengio shared the 2018 Turing Award, one of the highest honors in computer science, with Geoffrey Hinton and Yann LeCun; the three are often called the “godfathers of AI.”

But Bengio now warns that the current approach to AI development carries significant risks. He told the FT that tech companies’ competitive strategies around AI are “unhealthy,” adding that he was beginning to see “dangers to our political systems, our democracies, and the very nature of truth.”

A long list of dangers related to AI has come to light over the past few months. Current generative AI, which is trained on large amounts of data to predict text and images, has so far proved error-prone, contradictory, and prone to spreading misinformation. In his testimony this week, OpenAI’s Altman warned that tools like ChatGPT could be used to spread interactive disinformation during next year’s elections.

But as the technology evolves, the risks could grow even greater. If researchers crack the code of artificial general intelligence, also known as AGI, machines will be able to think and reason like humans. Technology executives suggest we are closer to AGI than previously thought, and AI experts, including Bengio’s fellow Turing laureate Hinton, warn that advanced AI could threaten humanity’s survival.

Bengio told the FT he believes that within the next decade, humans risk losing control over more advanced forms of AI, which may be capable of more independent thinking. In the meantime, he recommended that regulators crack down on existing AI systems and create rules for both the technology and the data used to train it. He also noted that while disagreements within the AI community are normal in scientific research, they should give companies reason to pause and reflect.

“There’s a lot of emotion and shouting within the broader AI community right now,” he said. “But we need more research and more thought about how we adapt to what’s coming. That’s the scientific method.”

Governments have been slow to embrace AI, but there have been recent signs of momentum. President Joe Biden invited technology leaders involved in AI research to the White House earlier this month to discuss future risks and best practices, shortly after announcing new initiatives to promote responsible AI development.

Regulatory action is accelerating in Europe, where lawmakers last week took an important step toward approving the European Union’s AI Act. The bill classifies the risks of AI and imposes additional obligations on companies developing the technology. Meanwhile, in China, where companies are developing their own versions of ChatGPT, regulators announced rules in early April requiring companies to use approved data to train their AI systems.
