“Big tech’s talk about regulating AI doesn’t match their actions.”

“One of the things that has bothered me is the myth that big technology companies are the main voices calling for regulation,” Søren Minderman said in a recent interview.

Minderman is currently working as a postdoctoral researcher on AI safety with MILA computer scientist Yoshua Bengio. Having just completed his PhD in machine learning at the University of Oxford, he wants to keep focusing on safety and risk in AI.

“They’re the ones getting the most attention right now. But many of the big tech companies, like Meta and IBM, are denying the risks, telling clever lies and lobbying against regulation. They call it ‘regulatory capture’ and pretend that other companies are the only ones seeking regulation. In fact, there is an emerging academic consensus calling for regulation and recognizing that the risks are real,” he said.

The AI researcher wrote his first paper on AI safety seven years ago. “I’ve taken detours into coronavirus modeling, scaling, deep learning, and statistical modeling, but my focus has always been safety. All of a sudden it became such a big topic, and a little sooner than I expected. I thought we needed a lot of time to prepare for these issues,” he said.

A month ago, Minderman published a paper with 22 academic co-authors from the US, China, the EU and the UK, including Geoffrey Hinton, Stuart Russell and Bengio. It calls on AI stakeholders to take immediate action and suggests that companies working on AI systems allocate at least one-third of their resources to ensuring the safe and ethical use of AI.

“This paper began with the realization that many AI scholars, including some of the most cited people in the field, are concerned about the risks posed by AI technologies,” Minderman said. He is currently focusing on an AI honesty project.

“People can’t always tell whether what AI says is true or not. So we developed a lie detector for language models that can determine with a fairly high degree of accuracy whether AI output is truthful,” he revealed.
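For readers curious what such a detector could look like, here is a minimal sketch, not Minderman’s actual method: it assumes access to a hidden-state vector for each model-generated statement and a small labelled set of truthful versus untruthful outputs, and fits a simple linear probe on them. The sizes and data below are synthetic and purely illustrative.

```python
# Minimal sketch of a truthfulness probe for language-model outputs.
# Assumption (not from the article): each statement the model produced is
# represented by a hidden-state vector, and a small set of statements is
# labelled truthful (1) or untruthful (0). Synthetic data stands in for both.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_statements, hidden_dim = 400, 64          # hypothetical sizes
activations = rng.normal(size=(n_statements, hidden_dim))
labels = rng.integers(0, 2, size=n_statements)

# Give the toy data a learnable "truthfulness direction" so the demo works.
activations[labels == 1, 0] += 2.0

X_train, X_test, y_train, y_test = train_test_split(
    activations, labels, test_size=0.25, random_state=0
)

# A linear probe: one common, lightweight way to classify model internals.
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Held-out accuracy: {probe.score(X_test, y_test):.2f}")
```

A probe like this is deliberately simple; in practice the synthetic arrays would be replaced with real model activations and human-labelled statements, and any such detector is only as reliable as that labelled data.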

A lack of focus, knowledge and researchers

Minderman is familiar with the safety teams working at Google and OpenAI. “Last I checked, we were talking about a small safety team, a small portion of the entire research team,” he noted. The State of AI Report for 2023 mentions similar numbers.

According to the report, Google DeepMind has the largest and most established AI alignment team, with 40 members, led by co-founder Shane Legg. By contrast, OpenAI has 11 team members, compared with just 10 at rival startup Anthropic.

However, companies are not solely responsible for this unfortunate situation. “Every company wants to get ahead of the competition, cut back on safety, and profit from AI while letting society deal with the risks. That’s why governments need to intervene. In addition to competition, there is a lack of risk awareness among AI developers,” Minderman said.

He also pointed out that understanding the risks posed by AI is not part of an AI researcher’s job description.

“No one really knows what AI is doing in sensitive areas, and regulation so far has been reactive, coming only after something bad has happened. It could end up like Chernobyl, with a shutdown. Some AI companies are asking for regulation in part because they don’t want something like that to happen to the AI industry,” he said.

Regulation can’t keep up with the pace

“Regulation is the main focus, but it is too slow given the speed of progress in AI,” the AI researcher suggested. Similar thinking was echoed eight months ago, when thousands of business and AI leaders signed an open letter calling for a six-month pause on training AI systems more powerful than OpenAI’s GPT-4; it resonated with the community.

Although the call was not heeded, it was not considered a failure, as it finally put AI safety on the public agenda. Minderman suggested that detailed, immediate commitments from companies are needed before they train the next generation of AI systems.

“If the government can assess the level of risk posed by a company’s capabilities, it can require the company to implement safety measures. If the company does not have safety measures in place, measures can be taken so that it may not introduce the system or develop it further,” he concluded.

