If you are a CEO who has stayed on top of AI, you probably spent the past 18 months building a team, cleaning the data, developing the algorithms and fine-tuning deployment. But have you thought about what happens next, when this intelligent technology starts to learn and make decisions on its own? No? Well, you are in good company. Consider what happened at Amazon when it had to fire its machine learning recruiting tool for discriminating against women.
According to Reuters, Amazon had been using artificial intelligence since 2014 to fly through resumes and pick out the best talent. Before long, Amazon noticed that the AI appeared to be sexist. Trained on resumes from a decade in which the majority came from men, the AI "learned" that men were dominant and evaluated new resumes using those criteria. If a resume even contained the word "women's," it was discarded. As Reuters reported, Amazon was concerned the AI would come up with other ways of being discriminatory, so rather than adjusting for the bias against women, it scrapped the project entirely.
The problem, of course, wasn’t the technology. A mathematical function can’t be sexist. It is humans who must take responsibility for the algorithms, for the data that powers them, and for the ways that they are – or are not – monitored and adjusted.
Like many other things about your job as CEO, AI is not a set-it-and-forget-it solution. Before you can start counting the positive impact on your P&L, you need to get involved and set a strategy for how the data and technology will be monitored and adjusted to produce the desired business impact without creating an undesirable ethical problem.
This starts with a protocol for how your team must build and deploy your machine learning systems to mitigate bias from the start. It doesn't stop there. If you want to ensure that your bottom-line business impact doesn't come with a backlash, you have to establish procedures to ensure that your AI systems will be continuously monitored.
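What might one piece of such a protocol look like in practice? As a minimal sketch, a team could run a pre-deployment check comparing selection rates across groups, in the spirit of the "four-fifths rule" sometimes used in hiring analysis. The group labels, data shape and 0.8 threshold below are illustrative assumptions, not a prescribed standard:

```python
# Hypothetical pre-deployment bias check: compare the model's selection
# rate for each group against the most-favored group (demographic parity).
# Group labels and the 0.8 "four-fifths" threshold are illustrative.

def selection_rates(outcomes):
    """outcomes: list of (group, selected) pairs -> selection rate per group."""
    totals, selected = {}, {}
    for group, was_selected in outcomes:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def parity_ok(outcomes, threshold=0.8):
    """True if every group's selection rate is at least `threshold`
    times the highest group's rate; False flags a disparity to review."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return all(rate >= threshold * top for rate in rates.values())
```

A check like this would not catch every form of bias (Amazon's tool keyed on words like "women's," not on explicit group labels), but it gives the team a concrete, repeatable gate to run before and after deployment.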
What Happens on Day 13?
Back when I was working on implementing large software systems, it could take a year of work and testing to get a system up and running. The day it went live, the teams would send out congratulatory notes that my boss was quick to dismiss based on his experience.
“Well, of course it went well,” my boss said. “You’ve got everybody looking at it on Day 1. What happens on Day 13?”
“What’s the significance of Day 13?” I asked.
“Exactly,” my boss replied. “There is no significance. Everybody goes back to doing their old jobs. Everybody is now ignoring the technology. Does it still work then? Is it still up? Is it producing the right results? Do the users know what’s going on?”
My boss was right and his wisdom is even more applicable today. Because machine learning continues to learn, the real danger is the trouble it can get into when everybody stops paying attention.
Ethics Become An Issue When ML Continues To Learn
You have to expect that at any point in the future you may discover algorithmic problems or data flaws you didn’t anticipate when you originally built the system.
Even if your team is proactive and gets the data, the development and the deployment right, the core ethical concern is: “What happens when machine learning keeps learning once it’s out in the real world?” This is when you really need to focus on the monitoring and adjusting of machine learning systems. After a machine learning solution is deployed, its learning will ideally continue in production. In other words, it will continue to get smarter—to produce more helpful results—by adjusting to the real-world data it’s using. This is what makes it so intelligent, so useful and also potentially dangerous.
As my old boss pointed out, a lot of human involvement goes into getting the system to the point where it can be deployed in the first place, but then drops off during the production phase. This lack of oversight can result in bad data going in and/or the algorithm processing good data in an unanticipated way. Unnoticed, this can lead to ethical problems or worse.
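One way to keep humans in the loop after Day 13 is an automated check that compares incoming production data against the statistics the model was trained on, and raises an alert when they diverge. The sketch below is illustrative only: the feature names, baseline numbers and z-score threshold are assumptions, and a real system would track many more signals (output distributions, outcome rates per group, and so on):

```python
import statistics

# Illustrative sketch: flag drift in production inputs against a training
# baseline. Feature names, baseline values and the threshold are assumptions.

TRAINING_BASELINE = {
    "years_experience": {"mean": 6.2, "stdev": 3.1},
    "word_count": {"mean": 480.0, "stdev": 120.0},
}

def drift_alerts(batch, baseline=TRAINING_BASELINE, z_threshold=3.0):
    """Return the features whose mean in this production batch sits more
    than z_threshold training standard deviations from the training mean."""
    alerts = []
    for feature, stats in baseline.items():
        values = [row[feature] for row in batch if feature in row]
        if not values:
            continue
        batch_mean = statistics.fmean(values)
        z = abs(batch_mean - stats["mean"]) / stats["stdev"]
        if z > z_threshold:
            alerts.append(feature)
    return alerts
```

The point is not the specific statistic but the discipline: somebody (or something) is still looking at the system on Day 13, and an alert routes a human back into the loop before bad data quietly reshapes the model's behavior.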
The Critical Importance of Creating a Monitoring System
Keeping an ethical eye on your algorithms can be a challenge; machine learning systems aren’t explicitly programmed and thus there is no human-written set of rules to reference when something goes wrong. Asking for a fully explainable algorithm holds the algorithm to a higher standard than humans, who may not be able to fully explain their own motivations or decisions.
Although machine learning may not be fully explainable or understandable, it certainly needs to be auditable. While the true motivations of the human mind can be murky, the ability to audit human behavior is a standard business requirement. Anything that makes a business decision has to be auditable, whether it’s financial numbers, compliance, employee behavior or anything else that happens in a company. That’s the reality of the business world. This audit capability is an ethical requirement—and the same rule applies to the use of intelligent technology.
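What does "auditable" mean concretely for a model that cannot fully explain itself? At minimum, every decision can be recorded with its inputs, the model version that produced it, and a timestamp, in a log that resists quiet after-the-fact edits. The sketch below is a hypothetical illustration, not an established standard; the record fields and the hash-chaining scheme are assumptions:

```python
import hashlib
import json
import time

# Hypothetical append-only audit log for model decisions. Each record is
# chained to the previous record's hash, so tampering with history is
# detectable. Field names and scheme are illustrative assumptions.

def audit_record(model_version, features, decision, log):
    """Append one auditable decision record to `log` and return it."""
    prev_hash = log[-1]["hash"] if log else ""
    body = {
        "ts": time.time(),              # when the decision was made
        "model_version": model_version, # which model produced it
        "features": features,           # the inputs it saw
        "decision": decision,           # what it decided
        "prev_hash": prev_hash,         # link to the prior record
    }
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    log.append(body)
    return body
```

Even this minimal record answers the auditor's basic questions: what did the system decide, on what inputs, with which model, and when. That is the same standard you would hold an employee's paper trail to.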
As a CEO, you wouldn't hire an employee or subcontractor who could never be questioned. So why would you put technology into your business to do critical thinking on your behalf that could never be investigated, especially when that technology is part of sensitive business processes that could become the focus of legal and public scrutiny?
This is uncharted territory and the onus is on the CEO to figure out how to get it right. The presence of bias has the potential to sabotage many AI solutions and only critical thinking can preemptively address this problem. If we want machine learning to be ethical, humans need to stay involved.
So do the ethical thing and build a team that knows how to monitor, audit and optimize your AI technology. Like most challenges CEOs confront, the solution is within your grasp: proactively and creatively put your people to work on it.