The AI Royal Flush: The Five Foundations of Artificial Intelligence, Part 2


This is Part 2 of a summary of Ward and Smith Certified AI Governance Professional and IP attorney Angela Doughty's comprehensive overview of the potential impacts of the use of Artificial Intelligence (AI) for in-house counsel.

Please see Part 1 here.

Generative AI Best Practices for In-House Counsel

Considering all of the risks and responsibilities outlined in Part 1 of this series, Doughty advises in-house counsel to mandate training, work with the management team to control which tools are approved, and organize a review of what data can be used with AI at each level of the company.

TRAINING

Training has the potential to bridge generational gaps and create a working environment where people are more comfortable sharing new ideas. "I am very proud of our firm for the way that we have adapted to the modern business landscape," noted Doughty. "We have some folks who have been practicing for 30 to 40 years, and we have some right out of law school. With everyone evolving and learning at the same rate, we're using training to build a more inclusive culture."

HUMAN REVIEW

Having an actual human review key decisions is another best practice. Even where an experienced person's work would not ordinarily require an additional layer of review, when AI is used to streamline the process, prudent companies should conduct a secondary review of the AI-generated work.

MONITOR REGULATIONS

Like the technology itself, the regulatory environment is in constant flux, and many of the proposed rules are not yet legally binding. For example:

  • The EU AI Act is a comprehensive legislative framework that aims to regulate AI technologies based on their level of risk. It's designed to ensure that AI systems used in the EU are safe and respect fundamental rights and EU values.
  • The U.S. Blueprint for an AI Bill of Rights outlines principles to protect the public from the potential harms of AI and automated systems, focusing on civil rights and democratic values in the U.S.
  • The FTC enforces consumer protection laws relevant to AI, focusing on issues like fairness, transparency, bias, and data privacy, but it currently operates within a more reactive and general deceptive trade practices legal framework.

The final rules, like the future itself, are impossible to predict. Doughty expects, however, that transparency, fairness, and explainability will be common themes.

Regulators will want to know how decisions were made, whether AI was involved, how data is processed, and how data is protected. "They will not hesitate to hold senior-level people accountable. This is partially why clear policies are an effective strategy for minimizing risk," commented Doughty.

Different regions have different ideas regarding ethics and bias. This increases the challenge of navigating the evolving regulatory framework.

COMPLIANCE

"Compliance with all of the standards is practically impossible, which makes this very similar to data protection and privacy. One of my worst nightmares is when a client asks me to make them compliant," added Doughty, "because, in most cases, it’s simply not feasible."

Penalties are likely to vary in proportion to the risk to society. Companies should consider whether using AI is worth the reputational damage and harm it could cause to individuals.

Businesses operating in high-risk sectors may see additional regulations compared to other businesses. It is a patchwork of inconsistent, overlapping laws, and that is unlikely to change. "If there is a positive to this, it is that it will keep us in business for a long time," joked Doughty.

Legal knowledge will continue to be vital for helping clients make decisions. Critical thinking skills and an understanding of jurisprudence will also continue to support job security for attorneys.

Remember: AI is a Tool, Not a Replacement

"As attorneys, we have empathy skills. People don't want to sit in front of a computer and talk about really difficult, hard things. They want to look you in the eyes," Doughty explained.

AI is just a tool, and fears of being replaced may be overblown. Doughty herself uses the technology daily: along with editing her presentation into bullet points for experienced in-house attorneys, she uses it to draft legal scenarios.

Doughty advises against using a person's real name for privacy reasons. "I also use it when I am frustrated with someone, so I draft how I really feel, then ask the AI to make it more professional," noted Doughty.

AI can quickly write an article if provided with a topic, a target audience, and a few links. The speed and accuracy are astonishing, but many believe it is difficult, if not impossible, to determine whether the copy was plagiarized. This is likely to be the subject of ongoing litigation.

Audience Questions Answered

In the Q&A portion of the presentation, the audience questions came in quickly. Doughty attempted to address all she could in the time allotted and offered to take calls regarding questions after the program.

In response to a question centered on navigating ongoing regulations, Doughty advised following the National Institute of Standards and Technology Cybersecurity Framework.

Another audience member wondered, "Can AI be trusted?"

"No, it cannot be trusted at all," said Doughty. "There is not a single tool that I would recommend using as the basis for legal decisions without substantial human oversight – same as you would with any other technology tool."

"What technology or change, if any, compares to the effect generative AI is having on the legal system and profession?"

"I don’t think we've seen anything like this since Word, Excel, and Outlook came out in terms of changing the way that we practice law and prepare legal work products. I remember having to go from the book stacks to Westlaw; it was just a different way to do research, but I still had to do all of those things. This is even more revolutionary than what we saw at that point."

"How do you mitigate the risk of harmful bias in a vendor agreement?"

"The short answer is to fully vet your vendors. Many vendors understand the risks and will include representations and warranties within their contracts, but this one is difficult. Understanding the training model and the data used for training can be key, as it was with the earlier examples of AI hiring tools trained on data from male-dominated industries that preferred male applicants."

"Any tips to bring up the topic of AI to organizational leadership?"

"Quantify the risk and discuss all of the penalties that could occur, along with the opportunity costs associated with ruining a deal."

"Are there any legal-specific AI tools that you see as a good value?"

"In terms of legal research, writing, or counsel, I would not advise using AI for any of that right now – outside of the (very) expensive, but known, traditional legal vendors, such as Westlaw and Thomson Reuters. This is partially because most of the AI tools people are using are openly available tools. This means every question and answer – right or wrong – is being used to train the technology. This is also partially because, to ethically use these tools, we must understand their strengths and weaknesses well enough to provide sufficient oversight, and many attorneys are not there yet."

"What about IP infringement?"

"If AI has been predominantly trained on existing content and you use it to create an article, does the original writer have an infringement claim? This is to be determined, and it's one of the biggest issues being litigated right now. A slew of artists are suing generative AI companies on exactly these grounds."

"What about the environmental impact of AI processing?"

"Generative AI significantly impacts the environment due to its high energy consumption, especially during the training and operation of large models, leading to substantial carbon emissions. The use of resource-intensive hardware and the cooling needs of data centers further exacerbate this impact."

"Any suggestions for when the IT department believes its understanding of the risks supersedes the opinions of the legal department?"

"This is when the C-suite needs to come in, because the legal risk and responsibility are already out there, and implementation is under a completely different department. It's a business decision. I look at the IT department no differently than the marketing or sales departments in determining the legal risk and making a recommendation."

"Any recommendations for AI-based tools to stay on top of the regulatory tsunami?"

"Not yet, but I spend a lot of time listening to demos and participating in vendor training sessions. Signing up for trade association newsletters is another way to stay current. These are free resources for training and help with staying current on industry trends and proposed regulations."

Conclusion

Doughty concluded the session by reminding the group of In-House Counsel that their ethical duties and responsibilities extend to governance, compliance, risk management, and an ongoing understanding of the ever-evolving landscape of Generative AI.

This article is part of a series highlighting insights from our 2024 In-House Counsel Seminar.

--
© 2025 Ward and Smith, P.A. For further information regarding the issues described above, please contact William S. Durr, E. Bradley Evans or Eileen A. Schnur.

This article is not intended to give, and should not be relied upon for, legal advice in any particular circumstance or fact situation. No action should be taken in reliance upon the information contained in this article without obtaining the advice of an attorney.

We are your established legal network with offices in Asheville, Greenville, New Bern, Raleigh, and Wilmington, NC.
