Artificial intelligence (AI) is driving the digital transformation sweeping business, yet many companies face foundational challenges getting started. The issue, our studies show, isn’t technological; it’s human.
Because of a long history of technophobia, perhaps dating back to Socrates, who warned that writing would “lead to forgetfulness and weaken the mind,” humans worry when they see technology that behaves like them or mimics their decision-making skills.
Yet CEOs need to battle through these barriers and ensure AI is deployed at scale in their companies. The business benefits of AI are too great to ignore. In addition to enabling automation at an unprecedented level and scale, AI drives better decision-making through data-driven insights, for instance by developing complex scenarios that improve forecasting and planning. Companies that hold back will be at a severe disadvantage, just like those that missed the first wave of digitalization in the early 2000s.
For many companies, the answer to concerns about AI is the framework known as Responsible AI, which goes beyond algorithmic fairness and bias to identify the potential effects of the technology on safety, privacy and society. However, following the principles of Responsible AI is just a starting point—plenty of firms using this framework have still had substantial problems with AI deployment.
Based on our experience, we at Boston Consulting Group (BCG) think that if a business wants to use AI at scale, it needs to go beyond responsibility in AI development and obtain society’s explicit approval to deploy it.
This explicit approval to use AI can be described as a social license. The concept is new to AI but has been used for decades in mining and other industries with high community impact. Companies cannot award themselves social licenses; they must earn them by demonstrating consistent, trustworthy behavior and stakeholder engagement. If they lose their social license, they risk higher costs or other threats to competitiveness, even if they comply with all formal license conditions.
Our studies show that a social license for AI rests on three pillars:
• Responsibility: Society must perceive the AI application as fair and transparent in its working and results. For example, a company that uses an AI-based recruitment system must demonstrate that candidates who provide similar responses receive similar ratings.
• Benefit: Stakeholders must perceive that the advantages of using AI systems outweigh, or at least equal, the costs of doing so. Society’s verdict will not always favor AI’s use. For instance, privacy concerns may prevent AI’s use in some healthcare applications.
• Social contract: Society must accept that companies that want to develop AI can be trusted with its use, as well as the acquisition and analysis of real-time data to feed their algorithms, and that they will be accountable for the decisions made by AI systems. Mistrust and a perceived lack of accountability are two reasons why society has been slow to approve unrestricted use of self-driving automobiles, for example.
Every business must find its own path to earning a social license for AI, based on the problem being solved and the stakeholders involved. But to start:
• Commit to stakeholders on how you will (and won’t) use AI and the standards you will follow.
• Deliver on that commitment with a Responsible AI program entirely consistent with your values.
• Demonstrate that commitment through transparency. Admit failures, explain what happened, and say what you will do about it.
This will likely take time and effort. But AI’s benefits to business are great enough to justify a process that increases the chances of successful deployment. CEOs should use the social license concept to help unlock the transformational benefits of AI at scale.