For AI to achieve its maximum benefits for our society, companies that design, implement and use AI will need not only to harness its innovation capacity, but also to do so in a business environment that maintains and promotes social responsibility. To that end, it is advisable for companies to take an enterprise-wide approach to the design and deployment of ethical AI that also supports its ethical use by embedding it in their daily business operations. That is, they need to ensure that all the people, process and technology elements related to AI are addressed across the entire organization.
A number of voluntary ethical codes and legislative proposals relating to AI have already started to appear. In a previous article I discussed the nine common responsibilities that are emerging in these ethical codes. Here I offer seven specific actions that your company can take to help operationalize and comply with these codes.
1. Establish consistent policies, procedures and records. Implementing relevant company-wide policies, procedures and record-keeping is foundational to the effective and ethical development, deployment and management of AI. These provide the structure and discipline necessary for consistent and measurable implementation. An AI system that appears to be functioning well but is not supported by good policies, procedures and documentation may well lack the ability to maintain consistent performance in the fast-changing environment in which AI operates.
2. Manage AI, and its ethical aspects, via a cross-functional team. In order for a company to design and deploy AI effectively and ethically, there needs to be someone, or a group, in charge. As with other key areas of business, cross-functional coordination and cooperation is key. The team managing AI needs to ensure that the organization’s controls and ethical requirements for AI development, deployment and use are implemented and followed consistently throughout the organization.
3. Adopt a systematic way of assessing, prioritizing and managing risk. Many companies already use the Enterprise Risk Management approach of Identify > Assess > Manage to address many types of business risks. When it comes to AI development and use, companies should likewise consider what new risks these present, what steps should be taken to mitigate such risks, and, when they cannot be mitigated cost-effectively, whether the residual risk is acceptable.
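The Identify > Assess > Manage loop described above can be sketched as a simple risk register. This is a minimal illustration, not any company's actual tooling: the field names, the likelihood-times-impact scoring, and the cost and score thresholds are all illustrative assumptions.

```python
# A minimal sketch of the Identify > Assess > Manage loop,
# using an illustrative likelihood x impact score. All field
# names and thresholds here are assumptions for illustration.

from dataclasses import dataclass

@dataclass
class AIRisk:
    name: str
    likelihood: int          # 1 (rare) to 5 (almost certain)
    impact: int              # 1 (negligible) to 5 (severe)
    mitigation_cost: float   # estimated cost to mitigate
    mitigation_budget: float # what the business will spend

    @property
    def score(self) -> int:
        # Assess: a simple likelihood x impact rating
        return self.likelihood * self.impact

    def disposition(self) -> str:
        # Manage: mitigate if cost-effective; otherwise either
        # accept the risk (low score) or escalate it (high score)
        if self.mitigation_cost <= self.mitigation_budget:
            return "mitigate"
        return "accept" if self.score < 10 else "escalate"

# Identify: enumerate risks arising from AI development and use
register = [
    AIRisk("biased training data", likelihood=4, impact=4,
           mitigation_cost=50_000, mitigation_budget=80_000),
    AIRisk("model drift in production", likelihood=3, impact=2,
           mitigation_cost=120_000, mitigation_budget=40_000),
]

# Prioritize: review the highest-scoring risks first
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{risk.name}: score={risk.score}, action={risk.disposition()}")
```

In practice the register, scoring scale and acceptance thresholds would come from the company's existing ERM framework; the point of the sketch is only that each identified risk gets a documented assessment and an explicit disposition.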
4. Train and communicate with staff. Successful development, implementation and management of AI ultimately depends on your staff's awareness and ability to make good decisions and take appropriate actions as they do their daily work. Training and communication are key to achieving this. Training can help build the necessary knowledge, skills and attitudes among your staff. Regular communication with employees can help reinforce the importance of company policies and ethical considerations beyond the formal training process.
5. Manage third parties. Good network links, data flows and alignment between your company and your suppliers, customers and contractors are critical to running your business in any number of areas. Your third parties’ AI policies and use are no different. It is important that you understand which of your external parties’ processes and business decisions incorporate or are delegated to AI, for what purpose, and under what management controls. You should also assess the risks of such third parties’ AI use to your own business and manage these risks accordingly.
6. Monitor and validate AI design and deployment activities. Monitoring is the ongoing process of evaluating the effectiveness of your management of ethical AI. From people to processes to technology, it is important to look at your own company and at relevant third parties on a regular basis to make sure the overall management systems, including those related to AI, are functioning well, and to identify and address any specific areas that need improvement.
7. Take corrective actions and make improvements if incidents occur. Finally, there needs to be a means of remediation if and when AI systems end up causing damage, failing to comply with your own or external requirements, or presenting other problems. This requires a consistent, systematic approach, including developing and following incident response plans, and implementing any corrective actions or management system changes needed to help prevent such problems from happening again.