The rapid evolution of AI presents remarkable opportunities to accelerate scientific discovery, revolutionize industry and enhance access to information. Yet wider AI adoption also introduces novel risks ranging from bias and inaccuracy to misuse and weaponization.
Many business leaders are looking to U.S. policymakers to develop a framework of guardrails to guide AI deployment, with 62 percent of firms reporting they are awaiting clearer AI-specific regulations before proceeding with full adoption of the technology.
CEOs should not sit idly awaiting this development in Washington, however. Instead they should actively work with lawmakers and other officials to help establish such a structure. After all, the goals of leading in AI innovation and ensuring AI’s safety and security are not in opposition; they are complementary.
A strategic and clear AI framework will enhance U.S. leadership in AI, expediting the path to further advances, new tools and the breakthroughs that will follow. Pressure is mounting: Europe has finalized its Artificial Intelligence Act, while a growing number of U.S. states are proposing and enacting AI legislation in the absence of federal action, leaving businesses to navigate a patchwork of state regulations rather than a single uniform standard.
Business leaders should collaborate with policymakers to ensure the AI transition occurs in a manner that is safe for users and clearly defined for businesses, while supporting innovation.
The Administration’s Executive Order on Safe, Secure, and Trustworthy AI was a significant first step. It prompted the much-needed review of existing laws and regulations across government to ensure their applicability in the AI context. Existing structures and authorities can cover many AI concerns. For instance, current law already prohibits discrimination—whether or not AI or other tools are involved—including in employment, housing and credit decisions. The Federal Communications Commission affirmed that AI-generated robocalls without consent are banned, while the Food and Drug Administration outlined how its approval process for medical devices assesses AI-enabled products.
How should CEOs and policymakers approach a broader AI framework? The guardrails we and other tech leaders adopted address nine critical areas.
Business leaders should press Congress to codify early federal AI efforts and back their expansion, including initiatives such as the U.S. AI Safety Institute and the National AI Research Resource Pilot, which help coordinate safe AI deployment, widen access to research resources and promote interoperable international standards.
All this can be accomplished through public-private collaboration and bipartisan efforts to develop a clear and strategic U.S. framework to understand AI and address its implications. By updating current rules to account for AI technologies, investing in research and workforce initiatives, and establishing guardrails that prioritize high-risk AI applications, the U.S. can harness the benefits of AI while mitigating its risks. Policymakers and business leaders should work together to develop guardrails that promote security, safety and innovation simultaneously, paving the way to a future where AI serves as a catalyst for economic growth.