MANY organizations are using artificial intelligence (AI) to improve their product and service offerings. A common example is the use of natural language processing in chatbots that answer customer questions without human assistance.
AI technology comes with a unique set of risks, however. In March 2016, Microsoft tested a new chatbot called Tay on Twitter. Within hours of its launch, Tay began tweeting highly inflammatory and offensive comments, including racist and anti-Semitic ones. Needless to say, Microsoft had to suspend the account. In another example, Amazon had to update Alexa after the voice assistant challenged a 10-year-old girl to touch a coin to the exposed prongs of a half-inserted electrical plug.
If your organization uses a chatbot to provide services to customers or uses AI to analyze data or adjudicate other matters, there may be legal and reputational risks that you need to consider protecting against.
Unlike traditional programming, where a human tells the computer what to do, AI uses machine learning and other techniques to create its own set of rules. AI therefore mimics the human brain by performing tasks normally carried out by humans, including sensing, analyzing and processing data to make decisions. AI is currently being used, among other things, to make medical diagnoses, predict the outcome of tax liability cases and review contracts.
Because AI is not recognized as a separate legal entity, actions taken by AI (although sometimes unforeseeable) can make your organization liable. Since there is no legislation specifically regulating AI in Jamaica, here are some things you should keep in mind when dealing with AI-related liability.
Transparency and human oversight
The first risk mitigation mechanism is to make appropriate disclosures. If AI is being used in aspects of a business that are important to customers and other stakeholders, they should be informed that AI is being used. While Jamaica has no equivalent regulations, the European Union has issued guidelines requiring documentation and record-keeping, transparency and information for users, and human supervision.
Behind the scenes, companies should continuously test their AI technologies so that bugs are fixed as they arise. Quick action by Microsoft and Amazon in the examples above may have saved millions of dollars in potential civil claims. There are also well-known incidents in which AI produced biased results because of the data on which it was trained. Monitoring and controlling AI is essential to avoid potential discrimination claims and civil liability.
In some cases the right solution may be to leave the final decision-making to humans. If you fear that the AI decision maker used in your business will produce biased or incomplete results, you should ensure that there is human oversight of the decision-making processes.
Use of contracts
A unique advantage of using contracts to regulate liability arising from the use of AI is that they allow the parties to allocate risk before the event causing loss occurs. For example, a contract may contain an indemnification provision in respect of loss caused by AI. This clause can be useful if a third-party AI provider is responsible for testing and monitoring. If your business is sued, it could rely on this provision to recover from the provider any losses it must pay customers as a result of AI malfunctions.
There may be other types of AI-related risks that can be regulated by contracts. For example, contracts can be used to impose obligations on AI providers, including maintaining the privacy and confidentiality of the data on which the AI is trained. This is also essential given the data protection obligations imposed by Jamaican law.
Insurance
Admittedly, AI liability is currently not easily insurable. However, aspects of loss caused by AI may be covered under certain business interruption policies. Your organization may want to consider whether there is insurance coverage for certain types of risks associated with its use of AI. As mainstream uses of AI proliferate, we will likely see more insurance coverage. In the UK, for example, legislation requires owners of autonomous vehicles (operated by AI) to obtain and maintain insurance against any loss resulting from such vehicles.
Legal and regulatory considerations
Your organization may also want to consider whether any non-AI-specific legislation or regulation may impact the use of AI in your business. If you operate in an industry that uses AI to sort and process customer data, you should consider whether there are data privacy or confidentiality risks that you should protect against. For example, if you are processing personal data, you should ensure that all actions taken comply with Jamaica's Data Protection Act.
Every company should consider the potential legal and reputational risks of using AI in business, and protect against those risks. Your organization should seek legal advice to determine the unique set of risks for your business.
Litrow Hickson is a partner at Myers, Fletcher & Gordon and a member of the firm's Litigation Department. Litrow can be contacted at [email protected] or www.myersfletcher.com. This article is for general information only and does not constitute legal advice.