The OpenAI Governance Crisis: Early Tech Company Lessons

By Andrea Bonime-Blanc

01/25/2024


For five days in November 2023, technology observers had real-time, front-row seats via social and news media to something unique: the very public and dramatic unraveling and re-raveling of Silicon Valley darling OpenAI.

OpenAI, the creator of a wildly successful large language model (LLM) generative artificial intelligence tool called ChatGPT, suffered a short but very public, humiliating governance meltdown, the effects of which are still unfolding and will continue to have medium- and longer-term implications for OpenAI and for the greater tech ecosystem. There’s a spider web of governance issues to explore from before, during, and after OpenAI’s meltdown, as well as some preliminary governance lessons applicable to the broader world of tech denizens at the frontier of tech innovation. 

The Relevance of the OpenAI Example 

Silicon Valley—with its start-up, “move fast and break things,” “ask for forgiveness not permission,” “tech bro” mentality—has been notorious for its aversion to guardrails of any kind, though there are notable exceptions. It’s a mindset typical of innovators and technologists who, while inventing, don’t have the time or inclination to worry about governance, risk management, ethical implications, or stakeholder impact until doing so is perceived as needed (for a sale or initial public offering) or it is too late (a crisis). It’s only then that the “grown-ups” are let into the room—whether they be investors, partners, new management, board members, or, sadly, government prosecutors or regulators.

The OpenAI governance saga is a colorful, compressed, and dramatic microcosm of the larger governance and cultural struggle between humans focused on innovating frontier tech (“innovators”) and humans focused on good governance (“stewards”). This is not to say that we can’t innovate and govern simultaneously, but that we often don’t do both until forced to. 

Early Lessons From the OpenAI Governance Saga 

OpenAI’s public governance meltdown dramatically highlights several underlying, intertwined, and urgent technology governance challenges that the business and regulatory worlds face at several levels.

Entity governance: The governance of OpenAI as an organization must be reformed. Clearly, the internal mix of for-profit and nonprofit entities and missions, and of safety-focused versus growth-minded cultures within the same organization, is not only confusing but ultimately combustible. A new solution must be developed that makes the right choice for OpenAI as either a nonprofit or a for-profit. There are third-way solutions as well; consider the example of Mozilla, operator of the Firefox web browser, where a nonprofit foundation owns the for-profit business. It all starts with a properly arrayed and diverse board. OpenAI now has the opportunity to do governance right.

Tech sector governance: We must focus on the governance of tech companies that are at the cutting edge of developing and releasing exponential technology into the wild. In other words, if complex, contradictory, or poor governance was (or is) happening at OpenAI, it is probably commonplace at other technology companies. This has potentially negative and positive implications for stakeholders, including the public at large, who experience the upsides and suffer the downsides of tech innovation. The explosion and penetration of the good, the bad, and the ugly of the Internet in the 1990s and of social media in the 2010s are two examples that come to mind. Let’s find governance solutions for generative AI writ large now, not in 10 years when it’s too late. The same applies to other frontier technologies, such as quantum computing and synthetic biology.

Actual technology governance: The governance of the technology itself within the entities creating it must be a priority. At the most granular level, the OpenAI saga points to a dire need for good governance at the inception and earliest stages of the development of new technology. This requires tone from the top and empowered interdisciplinary teams. It applies not only to generative AI at OpenAI but to every company developing that and other cutting-edge technologies, such as quantum computing, synthetic biology, metaverse technology, and robotics.

Tech Governance Future-Proofing: Be Innovators and Stewards 

In an era of exponential technology with broad and deep implications and reverberations that we cannot even predict or fathom, good-to-great tech governance is no longer a nice-to-have or something to think about tomorrow. It’s a must-have that demanded attention yesterday and demands it today. Moreover, good-to-great tech governance cannot consist of merely grafting old practices and systems onto something so new and so fundamentally different. Tech governance requires a new form of adaptable, future-facing governance.

While the innovators are “moving fast and (possibly) breaking things”—things that may be unfixable once broken—in furtherance of discovery and riches, the stewards are also trying to move fast, racing against time to fix flaws and build or rebuild things. The recent adoption by the European Union of the AI Act and policy developments in China and the United States addressing the development of AI and generative AI guardrails speak volumes to the urgency of developing national and global tech governance standards applicable to persons, organizations, and nations in every sector.

While the innovators are more motivated by riches, influence, and power, mostly for themselves and their peers, the governance crowd is more motivated by safety, security, ethics, and guardrails, and thus by protecting a broader swath of stakeholders.

It’s not that the twain shall never meet—there are many of us who embody both the excitement and the concern, both the desire for innovation and the need for safety. We are human, after all. It’s not the tech that is dangerous or evil; it is the humans with negligent, dangerous, or evil intentions, who deploy the tech as a harmful weapon for their own ends, whom we must keep in mind. This includes the powerful human technologists who become potentially more careless, disconnected, or hubristic as they gain fame and fortune.

Picture drone swarms armed with synthetic biological agents, all created with the assistance of generative AI, inflicting a terror event upon an urban environment. That involves humans making the wrong choices. That is what we must be concerned about. That is what we must prepare for and prevent to the greatest extent possible at every level of governance within and across entities and jurisdictions while allowing for the unfurling of helpful, life-changing inventions. 

In an internal email quoted by multiple news outlets, a Microsoft executive stated that “speed is even more important than ever” and that it would be an “absolutely fatal error in this moment to worry about things that can be fixed later.”

To this, I would ask: But what about the things that cannot be fixed later?

We do not need to sacrifice innovation for governance, nor do we need to sacrifice governance for innovation. It is up to diverse, knowledgeable, learning-oriented, forward-thinking, and continuously curious tech company boards and directors to develop this new approach. 

Andrea Bonime-Blanc

Andrea Bonime-Blanc is founder and CEO of GEC Risk Advisory and a board member and advisor. An NACD 2022 Directorship 100 honoree, she is a global governance, risk, ethics, and technology strategist and counselor to businesses, governments, and nonprofits. The author of multiple books and a sought-after keynote speaker, she is a life member of the Council on Foreign Relations.