Relying on AI as a Core Business Component Brings Layers of Risk

Despite the vast volumes of data used to generate responses to posted questions, the answers AI produces are often incorrect. But that's only the beginning of the risks enterprises face.

No Jitter, Martha Buyer

September 14, 2023


History has shown that there is often a long gap between the market availability of sexy new technologies and their wide adoption across enterprises and the economy as a whole. Although the analogy isn't perfect, for the same reason that most people choose not to buy a new car, bike, or TV the moment the latest and greatest products hit the market, many enterprises are leery of AI-driven products and services. Such enterprises are curious and optimistic, but cautious, not only out of good sense but because they have legal departments that demand due diligence. While AI holds the promise of increased efficiency and productivity, it also carries a huge risk of costly litigation and negative verdicts when things go wrong.

Already, in the relative infancy of AI deployment in the enterprise space, cases have been filed and outcomes determined based on violations of various areas of the law. And while the cases cited here are American, litigation is not—and will not be—limited to U.S. courts.

The single biggest risk of relying on AI systems is that despite the vast volumes of data used to generate responses to posted questions, the responses generated are incorrect. The "why" in this case is irrelevant. When bad data is relied upon to make decisions, bad things happen. Period. The next level of risk is that those who rely on AI-based "solutions" to make decisions use the AI-generated output improperly. The third level of risk is that the AI-based output may provide a correct answer in the short term, but an incorrect one in the long term. The fourth risk is that the questions used to generate the AI output reflect an unidentified or unknown bias, thus skewing the outcome. Wherever there's risk, there's liability.


Recently, the case of Roberto Mata v. Avianca Airlines brought well-documented and well-deserved negative publicity to one of the attorneys involved, who took lazy lawyering to a whole new level. Specifically, the attorney asked ChatGPT to do his legal research for him as part of his litigation prep. He then used the results to prepare documents that he submitted to the court, not only citing but generously quoting from the cases that research turned up. The problem: none of those cases existed. Both the cases and their citations were, um, fictitious. You've heard of "the dog ate my homework?" At least the guilty lawyer didn't blame ChatGPT. The bad news is that he did it in the first place. The first bit of good news is that the lawyer got caught. The second bit of good news is that this case proves that ChatGPT is not about to take over the profession. It may be a useful tool when used judiciously, but it is not a be-all and end-all legal research tool.

(Editor's note: For a look at how artificial intelligence might be used in legal research, read No Jitter's June 23, 2023, Q&A with Thomson Reuters' director of product, "New AI Capabilities Mean to Streamline Legal Professionals' Problem Solving.")

However, this is just one area where use (some might say overuse) of AI has resulted in litigation, sanctions, and plain exposure of undue reliance on an underlying technology that is not yet ready to replace human intellect and capacity.

In the unfortunate case of Mr. Mata’s litigation with Avianca Airlines, the problem was a direct result of an attorney’s undue reliance on ChatGPT for his legal research. But there are other sectors of the law that are dealing with AI issues as well.

While those selling AI systems claim that the systems rely on vast quantities of data, what they don't say—usually because they don't know—is whether the data being used has been properly vetted and verified to ensure that all legal requirements for privacy are met. This is particularly critical in matters involving medical decision-making, where the source of the data, and the protection provided to even the lowest level of it, must meet the strict requirements of HIPAA and other state, federal, and international privacy standards and legal obligations.

In other words, no AI vendor is telling its customers whether the underlying data is fully compliant with state or national privacy regulations, and that could present risks to those customers later.

Defamation is another area of law being used to hold responsible those whose AI tools erroneously defame an individual. At its most basic level, defamation is "the act of communicating false statements about a person that injure the reputation of that person." In the current case, Mark Walters v. OpenAI, LLC, the plaintiff filed suit against OpenAI over a ChatGPT "output" that, according to Bloomberg Law, "accused him of embezzling money from a gun rights group." The problem is that Mr. Walters has never worked for the gun rights group in question, and he has never embezzled money. ChatGPT provided the false information, in the form of a fabricated legal complaint, to the editor-in-chief of a gun publication who was covering a real-life legal case in Washington state. Mr. Walters has sued on grounds of defamation, arguing that he was harmed by an AI-generated document. Stay tuned.

Lastly, at least for now, is the risk of copyright violation. In the recent case of Thaler v. Perlmutter, Register of Copyrights and Director of the United States Copyright Office, the plaintiff, Mr. Thaler, attempted to secure a copyright for a work of art generated by a computer system he owns called the "Creativity Machine." The court's opinion makes very clear that only works of art created by humans can be copyrighted, and thus only such works are offered the protection of copyright law. This case, and ones similar to it, have been filed in multiple international venues as Mr. Thaler seeks a venue that will yield him a favorable result. So far, no takers. But it's early.

The takeaway is that AI remains a powerful tool. But the broad array of litigation based on claims originating from various areas of U.S. law should serve as a reminder that the technology is relatively new and not yet ready for prime time in many applications. The legal risks need to be part of any calculus when considering a platform with AI-powered tools; it might not hurt to ask counsel what risks you're assuming. Artificial intelligence technologies like machine learning, natural language processing, and generative AI are powerful, for sure. But like any new technology, they must be relied upon with great care.
