
AI 2024: Building Trust, Avoiding Disaster

Rolling out an AI strategy is tricky and a growing thicket of regulation around the world will make it even more so. Here's how to navigate.

Ask any board member or CEO what’s top of mind these days, and you’re certain to hear something about artificial intelligence. If your company isn’t using it already, you’re thinking about it, along with the risks that can accompany it.

One of the more underappreciated risks? The evolving legal landscape around AI, as nearly every jurisdiction in the world grapples with how it will—and won’t—regulate this explosive new technology.

For those running and governing companies, it’s exceptionally important to understand where all this is going, and what to do about it now to avoid thorny issues down the road.

In her just-released book, Trust. Responsible AI, Innovation, Privacy and Data Leadership, Dominique Shelton Leipzig lays out a detailed framework for corporate leaders trying to get a handle on the shifting rules and how best to navigate them.

A longtime partner at law firm Mayer Brown with a focus on privacy and cybersecurity, Shelton Leipzig has fast become a go-to authority for boards and C-suites looking to develop their AI strategy with the right structures and guardrails to avoid disaster. She also founded the Digital Trust Summit, hosted at Brown University, a daylong conference for CEOs and board members concerning generative AI and other emerging topics.

“If I were a CEO or a board member,” says Shelton Leipzig, “the first thing I would ask is, do we have any high-risk use cases in the company? Have we risk-ranked our AI? That’s a good question to ask because it’s not cheap to train your own applications; some of these things are seven figures, eight figures. Then, the second question I would ask is, how are we dealing with guardrails? How will we know if the model drifts, and how can we be proactive about making sure we know about it before our customers and our business partners are impacted by a drift?”

Corporate Board Member recently talked with Shelton Leipzig, and asked for a quick primer: What do CEOs and boards need to know about building trust in the age of AI—and what should we be watching for in the year to come? What follows was edited for length and clarity. 

What are board members and CEOs asking you about AI technology and trust, and what do you tell them?

What they’re [saying] is: “We’re concerned about making sure that our investments are actualized, that our AI is doing exactly what we expect it to do. We’re concerned that we’re hearing headlines about hallucinations, bias and so forth, and we don’t know how to prevent those things. So it is scary for us, but at the same time, we know we don’t want to miss this amazing technological revolution.”

What I tell them is that the good news is that humans control AI, not the other way around, and they don’t have to suddenly become data scientists or large language model experts in order to be amazing leaders in this area and inculcate a culture of trustworthy technology.

In the book you describe some 96 different legal jurisdictions around the world that are all developing rules about AI. What regulation are you starting to see rolled out?

Europe is going to be first. It’s going to be the high bar because it will be the first, I think, to roll out a comprehensive AI protocol, one that will most likely take effect in two years, so early 2026. There’s tremendous synergy for companies in terms of what they can do now to get ahead on governance, not just across the three drafts of the EU AI Act but across the 69 other countries that have regulatory frameworks and draft legislation in place.

So here’s what they’re saying: First, they want companies to engage in a risk ranking of their AI. I liken this to driving into an intersection.

There’s a red light to tap the brakes and stop. That category in AI land is called prohibited AI. There are actually 13 specific areas that are enumerated. Take social scoring, using an individual’s online activity to determine whether they can be your customer: governments around the world are saying they don’t want private companies doing that because they’re concerned about the risk of emotional or physical harm to individuals if private companies get involved in social scoring.

The other area is things like remote biometric monitoring of people in public spaces. Again, governments are saying they don’t want private companies doing that, only law enforcement, and if law enforcement does it through a private company, it must have judicial authority to do so. Every continent except Antarctica has this concept.

The green light is for low-risk AI, low or minimal risk. Those are things like chatbots, AI-enabled video games, AI used for spam filters and things of that nature. The view there is that people have been interacting with AI in those capacities for quite some time. Think of Siri, or the mapping app that talks to you: it might have been powered by predictive or machine-learning AI, but it was still AI.

As for the added risk of generative AI being a part of that, governments around the world are saying they don’t think that’s a particularly appreciable risk, so it’s low. The only governance expected for low-risk AI is to tell people that they’re interacting with an AI so they don’t think they’re talking to a person. Frankly, that’s already the law; it’s been the law in California since 2019. So this is just what the EU is doing and what we’re seeing around the world. They’re taking what California’s been doing and making it global.

The meat of the matter is in the high-risk AI. It’s more like the yellow light: you can go into the intersection, but proceed with caution. They’re very specific there. Health use cases, critical infrastructure use cases, children, those are all examples of high-risk AI.
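To make that triage concrete, here is a minimal sketch, in Python, of what an internal risk-ranking step could look like. The tier names mirror the traffic-light framework described above, but the example use cases and the rank_use_case helper are hypothetical, invented for illustration rather than taken from the EU AI Act itself.

    from enum import Enum

    class AIRiskTier(Enum):
        """Traffic-light tiers loosely following the EU AI Act's categories."""
        PROHIBITED = "red"    # e.g., social scoring, remote biometric monitoring
        HIGH_RISK = "yellow"  # e.g., health, critical infrastructure, children
        LOW_RISK = "green"    # e.g., chatbots, spam filters, AI video games

    # Hypothetical inventory; a real risk ranking would be a documented
    # legal and compliance review, not a lookup table.
    USE_CASE_TIERS = {
        "social_scoring_of_customers": AIRiskTier.PROHIBITED,
        "patient_triage_assistant": AIRiskTier.HIGH_RISK,
        "customer_service_chatbot": AIRiskTier.LOW_RISK,
    }

    def rank_use_case(name: str) -> AIRiskTier:
        """Default unclassified use cases to HIGH_RISK so that anything
        unknown gets the stricter governance path until reviewed."""
        return USE_CASE_TIERS.get(name, AIRiskTier.HIGH_RISK)

Defaulting unknown use cases to the high-risk tier mirrors the yellow-light posture: proceed, but with caution, until someone has actually reviewed the case.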

Basically, if your use case ranks as high-risk, they then want to make sure that the company has high-quality data for the training of its application. They want to make sure that the data you’re training with is relevant and material. Part of that relevance-and-materiality determination includes having rights to train with the data from the get-go: IP rights, privacy rights, business-critical information. Do you have rights to use that if it’s not yours?

So you’ve got the first step of risk ranking. If you have high-risk, your second step is to make sure you have high-quality data.
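Here is a sketch of what that second step could look like in practice, again with hypothetical field and dataset names. The point is simply that rights and relevance get checked per data source before any training run; no regulator prescribes this particular structure.

    from dataclasses import dataclass

    @dataclass
    class DataSource:
        """One training-data source and the rights attached to it."""
        name: str
        owned_or_licensed: bool     # IP rights to train with it
        privacy_basis: bool         # consent or another lawful basis
        relevant_to_use_case: bool  # relevance and materiality

    def cleared_for_training(sources: list[DataSource]) -> list[DataSource]:
        """Keep only sources whose rights and relevance are established up front."""
        return [s for s in sources
                if s.owned_or_licensed and s.privacy_basis and s.relevant_to_use_case]

    # A licensed dataset passes; scraped text with unconfirmed rights is
    # filtered out before it ever reaches a training run.
    sources = [
        DataSource("licensed_clinical_records", True, True, True),
        DataSource("scraped_forum_posts", False, False, True),
    ]
    print([s.name for s in cleared_for_training(sources)])
    # ['licensed_clinical_records']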

The third step is continuous pre- and post-deployment monitoring, testing and auditing to address model drift. What they’re calling for there is not a mystery: putting code into the AI tool itself and identifying parameters for things like safety, fairness (to avoid bias), cyber protection and IP rights. They want those parameters, and the acceptable ranges, basically inputted into the AI model itself so that companies can be alerted when the AI drifts out of specification. They’re also expecting that there will be technical documentation.

Step four is that the continuous testing, monitoring and auditing happening in the background is happening every second, so the code would alert the company if the model drifts outside of the safety parameters in these areas: bias, health and safety, IP, cyber and so forth. Then a human would go to the technical documentation, the metadata and the logging data from the AI tool itself, and identify exactly when the tool started to drift so that they could diagnose the problem and fix it. That means a human needs to go in, assess this and make the changes: change the model weights, look at the training data, adjust the AI.
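Steps three and four together describe a monitoring loop: parameters and acceptable ranges coded in, automated alerts on drift, then human diagnosis from the logs. Here is a minimal, hypothetical sketch of that loop; the metric names and thresholds are invented for the example and would in practice come from a company’s own risk assessment and technical documentation.

    import logging
    from dataclasses import dataclass

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("drift-monitor")

    @dataclass
    class Guardrail:
        """An acceptable range for one monitored metric."""
        metric: str
        low: float
        high: float

    # Illustrative parameters and ranges only.
    GUARDRAILS = [
        Guardrail("fairness_parity_gap", 0.0, 0.05),  # bias
        Guardrail("unsafe_output_rate", 0.0, 0.01),   # health and safety
        Guardrail("pii_leak_rate", 0.0, 0.0),         # privacy and cyber
    ]

    def check_drift(metrics: dict[str, float]) -> list[str]:
        """Compare fresh metrics to each guardrail's range, log anything
        out of spec, and return the drifted metrics so a human can pull
        the logging data and diagnose the model."""
        drifted = []
        for g in GUARDRAILS:
            value = metrics.get(g.metric)
            if value is not None and not (g.low <= value <= g.high):
                log.warning("DRIFT: %s=%.4f outside [%.2f, %.2f]",
                            g.metric, value, g.low, g.high)
                drifted.append(g.metric)
        return drifted

    # Run on every evaluation cycle, e.g.:
    # check_drift({"fairness_parity_gap": 0.08, "unsafe_output_rate": 0.002})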

What about partners? A lot of folks might not be making their own models. What do we need to know in order to not break trust there?

The good news is that our big tech companies are really on top of this and have been doing a lot in the area of trustworthy AI. Microsoft has a portal on responsible AI, as do Google and OpenAI, and I noticed IBM with its watsonx governance, so they’re doing a lot in these spaces.

But one thing that companies need to understand when they’re licensing these products is that there’s a continuum, from straightforward enterprise adoption to customizing the AI to make it as smart and as tailored to their environment as possible. The more you move into that territory, using an application and training it with your own data to make the results as targeted to your business as possible, the more you’re going to have to look at the governance steps I talked about yourself and not just rely on the provider.

What’s the sense that you have about enforcement?

We’ve got a couple of years, in my opinion, before you see any major enforcement in terms of huge fines or anything of that nature. The EU AI Act, for example, will have higher fines than GDPR: 7% of gross revenue, where GDPR is 4%, so we’re seeing a significant increase. But I don’t think you’re going to see billion-dollar fines right away. Just as we saw with privacy, and as we’re starting to see with cyber, it takes time; we’re five years out from GDPR, and only now are we really starting to see the nine-figure fines.

The main thing for companies to think about is not really the fines but brand value and the trust of customers, employees and business partners. Because the reality is, there’s going to be a first mover, and it might be one of your competitors, that embraces these governance concepts I’ve talked about. Those are going to be the trustworthy parties that everyone’s going to want to do business with, especially in uncharted territory. Don’t tether yourself to waiting for the law to pass and just doing the minimum.

