
Industry players promote open alternatives to Big Tech’s AI

Ali Farhadi is no tech rebel.

The 42-year-old computer scientist is a highly respected researcher, professor at the University of Washington, and founder of a startup company acquired by Apple, where he worked until four months ago.

But Farhadi, who took over as chief executive of the Allen Institute for Artificial Intelligence in July, is calling for “thorough openness” to democratize research and development in a new wave of artificial intelligence that many believe is the most important technological advance in decades.

The Allen Institute has launched an ambitious effort to build a freely available AI alternative to those of tech giants like Google and startups like OpenAI. Through an industry practice known as open sourcing, other researchers will be able to examine and use the new system and the data fed into it.

The position taken by the Allen Institute, an influential nonprofit research center in Seattle, puts it squarely on one side of a fierce debate over how open or closed the new AI should be. Will opening up the so-called generative AI that powers chatbots like OpenAI’s ChatGPT and Google’s Bard lead to more innovation and opportunity, or will it open a Pandora’s box of digital harm?

Definitions of what “open” means in the context of generative AI vary. Traditionally, software projects have exposed the underlying “source” code of the program. Anyone can then look at the code, find bugs, and make suggestions. There are rules that govern whether changes are made.

This is how the popular open source project behind the widely used Linux operating system, Apache web server, and Firefox browser works.

But generative AI technology involves more than code. AI models are trained and fine-tuned iteratively on vast amounts of data.

Experts warn that the Allen Institute’s path, no matter how well-intentioned, is inherently dangerous.

“Decisions regarding the openness of AI systems are irreversible and are perhaps the most consequential decisions of our time,” said Aviv Ovadya, a researcher at Harvard University’s Berkman Klein Center for Internet and Society. He believes an international agreement is needed to decide which technologies should and should not be made public.

Generative AI is powerful, but often unpredictable. It can instantly compose emails, poems and term papers, and answer virtually any question you can imagine with human-like fluency. But it also has a disturbing tendency to make things up, in what researchers call “hallucinations.”

Leading chatbot makers, Microsoft-backed OpenAI and Google, keep their new technology private and do not disclose how their AI models are trained and tuned. Google in particular has had a long history of publishing research and sharing AI software, but as it developed Bard, it increasingly kept that technology in-house.

The companies say this approach reduces the risk that bad actors will misuse the technology to flood the internet with misinformation and scams, or to do something even more dangerous.

Proponents of open systems acknowledge the risks but argue that the better solution is to have more smart people working to combat them.

Meta created a stir this year when it released an AI model called LLaMA (Large Language Model Meta AI). Farhadi praised Meta’s move but believes it does not go far enough.

“Their approach is basically, ‘I did some magic. I’m not going to tell you what it is,’” he said.

Farhadi proposes disclosing the technical details of an AI model, the data it was trained on, the fine-tuning that was done, and the tools used to evaluate its behavior.

The Allen Institute has taken a first step by publishing a massive dataset for training AI models. It consists of publicly available data from the web, books, academic journals, and computer code. The dataset is carefully curated to remove personally identifiable information and toxic language, such as racist or obscene content.

That curation involves judgment calls. Will removing some potentially harmful language reduce a model’s ability to detect hate speech?
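As a rough, hypothetical sketch of what such a curation step can look like, the Python snippet below scrubs obvious personally identifiable information and drops documents whose share of blocklisted words exceeds a small threshold. The regular expressions, blocklist and threshold are illustrative assumptions, not the Allen Institute’s actual pipeline.

```python
import re

# Illustrative PII patterns: obvious email addresses and US-style phone numbers.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

# Placeholder blocklist; a real pipeline would rely on curated lexicons or classifiers.
BLOCKLIST = {"blockedterm1", "blockedterm2"}

def scrub_pii(text: str) -> str:
    """Replace matched email addresses and phone numbers with placeholder tokens."""
    text = EMAIL_RE.sub("<EMAIL>", text)
    return PHONE_RE.sub("<PHONE>", text)

def keep_document(text: str, max_blocked_fraction: float = 0.001) -> bool:
    """Keep a document only if its share of blocklisted words stays below the threshold."""
    words = text.lower().split()
    if not words:
        return False
    blocked = sum(1 for word in words if any(term in word for term in BLOCKLIST))
    return blocked / len(words) < max_blocked_fraction

raw_docs = ["Reach me at jane@example.com or 555-123-4567 about the draft."]
cleaned = [scrub_pii(doc) for doc in raw_docs if keep_document(doc)]
print(cleaned)  # ['Reach me at <EMAIL> or <PHONE> about the draft.']
```

Even a toy filter like this illustrates the trade-off above: the same blocklist that strips toxic text also removes examples a model might need in order to recognize hate speech.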

Farhadi said the Allen Institute data set is the largest open dataset currently available. Since its release in August, it has been downloaded more than 500,000 times on Hugging Face, an open source AI resource and collaboration site.
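For readers who want to inspect such a corpus themselves, the snippet below shows one way to stream a few records from a dataset hosted on Hugging Face using the datasets library. The repository id is an assumption used for illustration; the actual identifier, configuration name and license terms should be checked on the dataset’s Hugging Face page.

```python
from itertools import islice

from datasets import load_dataset

# Hypothetical dataset id; confirm the real identifier and license terms first.
# Streaming avoids downloading the full corpus before looking at any records.
corpus = load_dataset("allenai/dolma", split="train", streaming=True)

for record in islice(corpus, 3):
    # Records are dictionaries; text corpora typically expose a "text" field.
    print(record.get("text", record))
```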

At the Allen Institute, this dataset will be used to train and fine-tune OLMo (Open Language Model), a large-scale generative AI program that will be released this year or early next year.

Farhadi said large-scale commercial AI models are “black box” technology. “We are promoting a glass box,” he said. “Once we expose it in its entirety, we can talk about how it works and partially explain what’s going on inside.”

There are only a few freely available generative AI models of the scale envisioned by the Allen Institute. They include Meta’s LLaMA and Falcon, a project backed by the Abu Dhabi government.

The Allen Institute seems like a logical home for large-scale AI projects. “It’s well-funded, it’s driven by academic values, and it has a history of supporting the advancement of open science and AI technology,” said Zachary Lipton, a computer scientist at Carnegie Mellon University.

The Allen Institute is collaborating with other institutions to advance its open vision. This year, the nonprofit Mozilla Foundation put $30 million into Mozilla.ai, a startup building open-source software that will initially focus on developing tools that make open AI engines, like the Allen Institute’s, easier to use, monitor and deploy.

Founded in 2003 to promote the Internet as a global resource open to all, the Mozilla Foundation is concerned about the further concentration of technology and economic power.

“A small group of players on the West Coast of the United States are trying to shut down the field of generative AI before it even gets started,” said Mark Surman, the foundation’s president.

Farhadi and his team have spent time thinking about how to control the risks of their openness strategy. For example, they are working on ways to evaluate a model’s behavior during the training phase and to prevent certain acts, such as racism or the production of biological weapons.

Farhadi sees the guardrails on large chatbot models as band-aids that a clever hacker can easily peel off. “My argument is that such knowledge should not be encoded into these models,” he said.

As with any powerful technology, people will do bad things with it, Farhadi said. The challenge for society is to better understand and manage risks, he added. Openness, he argues, is the best way to stay safe and share economic opportunities.

“Regulation alone will not solve this problem,” Farhadi said.

The Allen Institute’s efforts face some formidable hurdles. The main problem is that building and improving large-scale generative models requires a lot of computing firepower.

Farhadi and his colleagues say the software techniques they are developing are more efficient. Still, he estimates that the Allen Institute’s effort will require about $1 billion worth of computing over the next few years. He is trying to gather support from government agencies, private companies and technology philanthropists, but he declined to say whether he had lined up backers or to name them.

If he succeeds, the bigger test will be fostering a lasting community of support for the project.

“To really impact major companies, we need an open ecosystem of companies,” said the Mozilla Foundation’s Surman. “And the challenge in that type of play is just patience and persistence.”

