AI in IP Opens Questions of Liability

Lawyers should respond with caution, not aversion

Artificial intelligence is increasingly filling the role of attorneys at law firms, including in situations that could get a firm into trouble.

As artificial intelligence has become more sophisticated, it has been introduced into new industries and job roles. In the legal world, machine learning tools are being adopted for research, for crunching litigation data and for helping attorneys gain an upper hand in their cases. Popular tools can sift through public data to determine which arguments are most effective and before which judges. Other solutions review documents in ediscovery many times faster than humans can, while still others handle research involving private data and diligence-related work in transactional law. In every use, artificial intelligence carries its own liability risks that law firms should be aware of, but one AI expert says that shouldn’t be cause for alarm.

“It should be embraced,” said David London, an intellectual property transactional attorney and former global chair for AI at Hogan Lovells. “[Artificial intelligence] is the future. It’s going to hit every sector. … Law firms would be better served spending their time figuring out how to adopt artificial intelligence solutions and use them in the best interest of their clients rather than shy away from them.”

Spending that time developing artificial intelligence is not just key to staying at the crest of the wave of innovation; it is also essential to limiting liability risk. London said that because machine learning software learns from being trained, making it effective takes a time investment. The payoff is saving money by letting a computer do work that might otherwise be handed off to an associate. And the consequences of not using it properly or responsibly could be catastrophic, at least theoretically, London said.

The risk of AI arises when it is used in a major M&A transaction, for example, where it might handle private data. London described a scenario in which an attorney uses an AI tool to value an asset based on private data. If a client wants to do a transaction based on their belief about an asset they’re buying, and an AI tool wrongly confirms that belief about the condition of the asset or company, it could theoretically lead the client to close a transaction they otherwise wouldn’t have, or one in which they might have negotiated a lower price.

Or if AI is used to evaluate assets for purchase and wrongly determines they are transferable without third-party consent, a client could end up with unusable assets or even in breach of contract. As London describes it, the dystopian future caused by destructive AIs is one in which a client ends up spending millions of dollars to buy nothing or runs into serious legal trouble.

“And who’s liable?” he asked. “I don’t know that there’s a clear answer.”

To his knowledge, the question of liability for faulty AI is one that clients and law firms often do, and should, negotiate up front rather than take to court, though it is an issue a judge could resolve at some point. “I think the clients are winning,” he said. “I don’t get the sense that law firms are shifting the risk. Anecdotally, based on my experience in the industry, law firms are not having success shifting that risk.”

London said that law firms and individual lawyers should mitigate that risk, regardless of the AI’s accuracy, by doing their own diligence. While he described catastrophic hypotheticals of acquisitions led astray by faulty AI, attorneys should check the AI’s work before it gets to that point, he said, just as they would check the work of a young associate.

“It doesn’t matter, at the end of the day, what arrangement a law firm has made with a client around this risk issue,” London said. “At the end of the day, it’s incumbent on a law firm to deliver quality service and quality results to their clients. It’s incumbent on law firms to have their lawyers examine the results of the artificial intelligence solution and not just turn over the results, uninspected, to clients.”

If London’s confidence in AI tools is to be trusted, AI is one area where lawyers should channel their risk aversion into diligence rather than avoidance. The risks that come with AI solutions can be mitigated by due diligence, and adopting new technologies can ultimately help law firms. Rather than AI taking attorneys’ jobs, London expects it is only a matter of time before these solutions do a better job than attorneys on certain tasks, ultimately allowing lawyers to do more work in less time or to focus on higher-level work instead.

“It’s a client demand issue,” he said. “We will respond to what clients want. If it’s what they want, we’ll do it. It is what they want, and we’re doing it all over the planet.”

— Tony Flesor
