![AI Regulation and Justice](https://nmcdn.io/e186d21f8c7946a19faed23c3da2f0da/2198139c60484547ac05dbaa326cedbb/files/AI-Regulation-and-Justice-MAIN-process-sc720x385-t1738940786.jpg?v=5d0a09a3c3)
During Ward and Smith's annual In-House Counsel Seminar, Certified AI Governance Professional and privacy and data security attorney Angela Doughty provided a comprehensive overview of the potential impacts of Artificial Intelligence (AI), giving in-house attorneys an edge on this transformative technology.
The Seminar followed a gaming theme, and this session leaned on poker analogies.
Doughty is also a Certified Information Privacy Professional and a North Carolina State Bar Board Specialist in Trademark Law. As the Director of Legal Innovation for Ward and Smith, she routinely advises the law firm and clients on data privacy and security, intellectual property, artificial intelligence, and technology applications related to the legal field.
The Royal Flush: Five Key Topics for In-House Counsel to Consider for AI Ethics
To effectively serve internal clients, Doughty posited, in-house attorneys must understand the following:
- Ethical considerations associated with AI
- Implications for ongoing practice
- Compliance issues
- Regulatory framework
- Necessity of establishing damage control procedures
"Walking the line between leveraging this technology and mitigating risk is a constant challenge," noted Doughty. This challenge has evolved alongside technology.
Historically, the risks were fairly minimal: AI was used to classify and automate information, make predictions, analyze industry trends, generate business leads, and customize advertisements.
"Basically, AI was predicting human behavior based on past behavior, with close supervision," advised Doughty. "Now, it can imitate the way people think and generate large varieties of output. This is the evolution everyone's talking about, Generative AI."
Common modern-day applications include drafting business communications such as memos, letters, and presentations, as well as social media posts, web content, and HR and administrative tasks. From a risk management perspective, attorneys must understand the growing legal and ethical risks presented by AI, its potential biases, questions of accountability, and the importance of privacy safeguards.
The technology has also heightened concerns about navigating current litigation, contract disputes, and emerging legal claims. "Regardless of your practice area, you're going to need a basic understanding of how Generative AI works," Doughty said.
She anticipates an increased level of scrutiny from regulators. "More oversight and documentation will add to the difficulty in terms of managing risk without presenting an obstacle to creating innovative new products and services," Doughty explained.
Duties of In-House Counsel
Doughty outlined several key duties all in-house counsel will need to embrace to manage AI risks and benefits in their organizations:
- Competence: Ethical duty to represent clients competently, which now includes understanding AI's benefits, risks, and effects.
- Responsibility and Accountability: Remaining responsible for the accuracy of services, even when using AI.
- Confidentiality and Privilege: AI tools must have robust security measures to protect sensitive client information.
- Duty of Candor: Disclosing AI involvement when relevant, ensuring transparency about risks and limitations.
- Bias and Fairness: Address potential AI biases, such as algorithms reflecting biased data, to ensure fairness in legal services.
"Delegating is not an option."
The need to represent clients effectively presents an array of ethical implications for in-house counsel. "Delegating is not an option. An attorney's ethical demand of competency requires being current with all of these changes…all the state bars have been clear it's our duty to keep up with this, and we can't pass on the responsibility," added Doughty.
Some courts now require attorneys to certify that no AI was used in a submission; others are still determining whether its use may be permissible in some instances.
"We've had lots of conversations with litigators about how this could impact evidence verification, the potential need for a new type of expert, and other AI issues such as deepfakes," commented Doughty.
The use of AI does not eliminate the burden of accuracy. Similarly, it is vital for attorneys to understand that AI must have robust security to protect sensitive client information.
Must Attorneys Disclose the Use of AI?
An audience member cheekily asked, "Can attorneys hold their cards to their chest, or do they have to disclose the use of AI?"
"The answer is it depends," laughed Doughty. "Candor is a more difficult, nuanced area. You may not have to disclose it in every area, but there is a need to disclose whether it's being used for strategic decisions. Clients should also have a chance to opt-out."
Some clients have wondered if using AI to streamline operations will make legal services less expensive. However, the costs associated with purchasing and implementing the software are significant, so for many firms, the use of Generative AI has yet to make an impact on the scalability or the cost of providing services.
When Generative AI is factored into strategic decisions, bias and fairness become important. It is essential to work with reputable vendors and review their data-gathering practices.
To illustrate how AI can go wrong, Doughty shared the example of two companies using it to make hiring decisions. The AI tools were trained on historical data, and both companies operate in historically male-dominated industries, so the technology was found to be biased toward male candidates. This ran contrary to each company's goals and publicly stated hiring practices.
In-House Counsel and Job Security in the Face of AI
A common fear among attorneys is that AI will take their job. "Since AI is being used for things like research, reviewing, and drafting, many attorneys have asked about this," noted Doughty.
The traditional aspects of what makes an attorney valuable will continue to be relevant. "Clients are still going to turn to us for advice and advocacy… they're still going to want us to develop creative solutions," explained Doughty.
She believes that AI could free up time to focus on strategic objectives: "Layering on Generative AI within the context of a traditional legal claim could provide additional insight."
Product and service liability is an evolving issue since AI is being integrated in so many areas. Also, many cases are settling because there is an extensive learning curve associated with fully grasping the nuances of the technology.
In a car wreck involving a self-driving feature, for example, is the manufacturer to blame? Is it the consumer's fault for using the technology? Is the component manufacturer or the coder at fault?
Determining how to make a client whole represents an evolving challenge. "All of this takes legal knowledge, critical thinking, understanding the jurisprudence, having the context and the experience," said Doughty, "and AI doesn't have that."
Many of the contractual terms, such as indemnification and warranties, are the same when it comes to purchasing AI software. What is often not understood, however, is that these terms apply to AI differently, so relying on boilerplate contract language from other types of technology can be problematic.
"This is where the lines are drawn with regard to the allocation of risk," commented Doughty. "The more autonomy the AI has, the less control you have over the risk."
Doughty believes this factor could result in more liability and risk going to the AI provider. Another issue relates to the ownership of the data. There is no longer a clear delineation between the ownership of the output and the IP since AI inherently changes over time.
"Now, AI providers may offer a discount based on their ability to anonymize the data and continue to learn from it," noted Doughty. "It is critical to address that on the front end because the industry standard is that, if the data limitation is not in the contract, we're going to do what we want with it."
Who Needs Access to AI Tools in Your Company?
Minimizing access to AI is an important measure for mitigating risk. People across all organizational levels are using AI tools, which exposes the company to an array of negative outcomes, including data breaches, privacy claims, and discrimination and bias claims.
"This is difficult in a business situation where everyone is just trying to do their job. They want to get what they need out of it when they need it. The fact that many of us have prioritized convenience over security for many years makes it even more difficult to cordon off information," added Doughty.
"As Devon (Williams, Co-Managing Director of Ward and Smith) says, 'Direct is kind; people want to do what you ask them to do.' With that in mind, it's vital to have stringent policies, right down to the data types and tools," mentioned Doughty.
In HR, the technology is increasingly being used for hiring decisions, terminations, salaries, bonuses, screening out candidates, and workplace monitoring. AI is driving decision-making like never before, but accountability remains with the organization, so it is important to review every AI-assisted decision.
IP, Copyrights, and AI-informed Data
The ownership of intellectual property is likely to be an ongoing issue with Generative AI, and one in-house counsel should pay particular attention to. The courts decided long ago that a monkey cannot own the copyright to a picture, but what about a machine?
Doughty used AI to create every slide in her presentation: "Does that mean I own the copyright? If you reused this, could I sue you for infringement?"
Currently, the Copyright Office's position is that works generated by AI without human authorship receive no copyright protection. Many believe that AI plagiarizes the work of writers to generate articles, and there is a significant lack of trust in a technology that is essentially only able to pull ideas from existing content.
In terms of trade secrets, the court only protects what the company protects. "Sometimes, employees put trade secrets into these systems and come up with summaries, bullet points, or presentations. The cat's out of the bag in many cases, and that could be a very expensive loss of intellectual property," Doughty said.
Policies need to be direct and specific. The AI tool needs to be approved, and the person using it should also be tested and approved to ensure the tool is being used as intended.
It is advisable to denote whether AI can be used within a contract. "We're seeing a lot of this with marketing companies. You don't want something posted on your website that brings a demand letter. There are a lot of copyright trolls out there, and they are all eager to sue. This can be very expensive to deal with," commented Doughty.
For those with experience negotiating with insurance companies, it may not be a surprise to learn they often seek a loophole to avoid payment. Limiting employee access to information can be an effective way to strengthen a cyber-insurance claim.
AI is already having an impact on everything from risk assessment and insurance underwriting to policies and claims processing. Companies using the technology should review their cyber coverage to ensure that AI-related events are covered.
Notably, some cyber insurance policies do not cover data breaches and/or work stoppages caused by AI. In the never-ending search for a competitive advantage with the lowest possible risk, companies interested in AI should implement two strategies: one focused internally and another focused externally, because each carries a different level of risk. Each strategy also has the potential to move the company forward, so ignoring the technology is risky in terms of opportunity costs.
This article is part of a series highlighting insights from our 2024 In-House Counsel Seminar. More insights are below.
- The House Advantage: Wisdom from In-House Counsel
- Upping the Ante: New Rules and Regulations in Play for In-House Counsel
- The DEI Stalemate: Paying the Price for the Wrong Move Part 1 and Part 2
- On a Roll: Hot Topics for In-House Counsel
--
© 2025 Ward and Smith, P.A.
This article is not intended to give, and should not be relied upon for, legal advice in any particular circumstance or fact situation. No action should be taken in reliance upon the information contained in this article without obtaining the advice of an attorney.
We are your established legal network with offices in Asheville, Greenville, New Bern, Raleigh, and Wilmington, NC.