
How Big Tech’s AI Hype Cycle Swallowed Sam Altman and OpenAI

Photo illustration: Intelligencer. Photo: Getty Images

Last Friday, seemingly out of nowhere, OpenAI’s board of directors announced that it had fired celebrity CEO Sam Altman for failing to be “consistently candid in his communications.” Details were not disclosed. Things escalated from there: Microsoft announced that it had hired Altman, and OpenAI appeared to be on the verge of collapse. For the tech world, this was an earthquake on the order of Steve Jobs’s firing from Apple, except perhaps with the fate of humanity at stake. And like Jobs, Altman will return to the company he co-founded, after nearly all of its employees threatened to quit unless the board brought him back.

There are many theories about what happened here, some more believable than others. There are reports that Altman was trying to raise funds for a new venture, and indications that OpenAI’s nonprofit board — whose mission is to develop safe AI for the benefit of “humanity, not OpenAI investors” — was worried about the company’s direction. According to The Atlantic, since the release of ChatGPT a rift had deepened inside the company between a faction of true believers, represented by chief scientist Ilya Sutskever, who orchestrated the move against Altman and who worry about, for example, destroying the world, and a larger, less visible faction led by Altman that wanted to pursue growth.

These are different ways of looking at what happened. But they share one characteristic that makes the conversation around OpenAI, and AI in general, feel deeply disconnected from reality and, frankly, a little crazy: it is all speculation. These are bets — theories built on premises that got lost in the fog of a genuinely dramatic year of AI development and carried into the present with strange results.

There are dozens of nuanced positions along the spectrum of AI risk and opportunity, but the prevailing opinions among industry players can be summarized like this: AI is going to be huge, so we need to develop it as quickly and fully as possible to ensure a bright future. AI is going to be huge, so we need to be extremely careful to avoid a terrifying future. AI is going to be huge, so you need to invest, make a lot of money, and beat the others who are challenging you — and the rest will sort itself out.

These positions concern what will happen, what should happen, what shouldn’t happen, and what various parties need to happen. Despite their clearly stated differences, they have much in common: these are people debating different versions of what they believe is an inevitable future, one in which the kind of work OpenAI is doing will somehow matter most in the world. If you’re wondering why people treated OpenAI’s self-inflicted slapstick as the biggest event on Earth, it’s because many people close to OpenAI believe — or need to believe — that it is.

The novel, sci-fi-adjacent, speculative topics in AI discussions are getting the most mainstream attention, seemingly out of the blue: the definition and possibility of “artificial general intelligence,” which OpenAI describes as “highly autonomous systems that outperform humans at most economically valuable work”; concepts and prospects of a superintelligence that conquers humanity; thought experiments about misaligned AIs that destroy the earth to plant more strawberries, or end civilization to maximize the production of paper clips. A hodgepodge of recently popular terms and acronyms — x-risk, effective altruism, e/acc, decel, alignment, doomer — signals not simply the arrival of a new technology but an intellectual moment rising to meet it.

These are all interesting predictions, and the levels of evidence supporting them vary widely. Many are predictions about the technology itself (can sufficiently advanced large language models produce anything resembling human intelligence?) or about its social and economic impact (will AI destroy jobs, or create them?). But investing intelligently in them requires a certain amount of belief — again, we’re talking about highly speculative futures here — and defending them takes conviction. Working professionally toward or against them will surely foster something like faith, and at companies like OpenAI, which contain something like academic schools of thought and denominations within the same church, it is natural for factions to form along ideological and, to some extent, spiritual lines. (The extremes contain a little of both.)

The commercial AI projects are less exciting, but they are still speculative. People believe they could make unimaginable amounts of money, and OpenAI looked like the startup to end all startups. But the company has revenues of about $1 billion a year, was recently rumored to be raising funding at an $86 billion valuation, and is currently burning money on computing power. Microsoft is still in the early stages of commercializing OpenAI’s technology in Windows, Office, GitHub, and Bing. Google has invested billions of dollars in generative-AI products, most of which are still in testing. Meta, Amazon, and virtually every other big tech company are rolling out new tools, but nobody is actually making money from any of this yet. In fact, these companies are almost universally bleeding money on AI. Of course, they are doing this on purpose: investors and companies are making bets. Some are confident; some simply don’t want to miss out. Nobody knows what will actually happen. The opportunities people think they see have not yet been realized; the significant threats people predict have not yet materialized. In the future, we are talking about a wholly new world. Right now, we’re talking about bringing Copilot to Microsoft 365 and Excel.

And sure, fair enough: the impact of a new technology remains to be seen; nobody wants to be caught flat-footed; there is much to be gained by facing big changes and much to be lost by ignoring them; it’s only the beginning! Decisions in every industry, but especially in tech, are based on uncertain predictions. But much of the discussion of what’s going on with OpenAI takes its premises for granted. OpenAI is a major AI company, but it is one of many. It is a nonprofit-turned-startup that has become a subsidiary in all but name of a notoriously boring technology company, and it has inherited the industry’s narrow sense of destiny and inevitability, which gets expressed in a range of extreme outcomes: mass automation and sudden economic displacement; humanlike autonomous intelligence spinning out of control, or satisfying all human needs quickly and forever; the rapid domination of an industry by a single company; geopolitical advantage for the countries where AI research and companies flourish; rewards greater than anyone could have imagined.

https://x.com/sama/status/1484950632331034625?s=20

These are a variety of compelling possibilities, and it’s interesting to watch them compete against one another. They are all apocalyptic in the sense that they imagine a departure from, if not the end of, the current world. But while they seem to cover every extreme outcome, they don’t really consider the implicit scenario in which none of them comes true. It’s plausible that rather than the world following AI, AI follows the world, and there are countless less interesting scenarios: one in which OpenAI is just another big tech company, or is captured and marginalized by others in a crowded field; one in which AI doesn’t represent a radically different kind of technology but rather a new range of partial-automation possibilities, producing halting and uneven changes in the labor market; one in which “alignment” is just the new ESG; scenarios in which AI improves productivity, exaggerates rather than overturns the current features of capitalism, or subtly redistributes power and capital in ways that are hard for any single company to monetize. This would be no small thing. But it would be familiar. And at this point, it would come as a surprise to many in the industry — from doomer executives to profit-minded Microsoft leadership — and would be a real reputational and financial disaster.

What happened (and is still happening) at OpenAI is dramatic, fascinating, and potentially very important indeed. But it can also be understood as a governance breakdown at a late-stage startup held in a stranglehold by a small number of people with huge egos, different instincts about how to scale, and a beleaguered big partner. In trying to anticipate various AI-related slippery slopes and pitfalls, OpenAI built itself a strange, clumsy, and ultimately unhelpful organizational structure. In other words, it failed at the very thing it was set up to do: predict, in advance, how AI would play out in the real world. It’s clearly a mess, a huge failure from several different angles, and a setback for everyone involved. Perhaps Sam Altman is an exiled Steve Jobs or a pre-Trinity Robert Oppenheimer. But maybe he’s a Travis Kalanick.

OpenAI’s slapstick weekend should demonstrate the dangers of running a company according to what is essentially a collection of speculative short stories, and it should call into question the collective predictive power of companies and industries so deeply indebted, intellectually and financially, to particular visions of the future. It probably won’t. Instead, it demonstrated how deeply these stories about artificial intelligence (and financial gain) are embedded in the imagination of the entire tech industry, the news media, and, to a lesser extent, the broader public. The company’s sudden success has turned rough predictions into a kind of disjointed common sense. Much to Altman’s credit, the sense that what’s happening at OpenAI is much bigger than OpenAI has escaped containment.

Long before it got anywhere near AGI, OpenAI accomplished something incredibly valuable: by entertaining every speculative AI scenario at once, and by putting an impressive chatbot in front of the public, it made itself synonymous with the future of technology. OpenAI is too hyped to fail — not just for Microsoft, but for its peers and competitors, too.
