Can Attorneys Face Sanctions for Using AI?
Artificial Intelligence ("AI") is no longer science fiction. In the past few years, AI technology has made its way from TV screens and cell phones into the workplace. The possibility of having an always-available research assistant has sparked interest in the legal community, but it has also raised considerable ethical questions about how the technology should be integrated into legal practice. Federal courts have since made it clear that AI may be used as a research tool, not as a replacement for research itself.
In 2023, two New York lawyers, Steven Schwartz and Peter LoDuca, filed a legal document in federal court citing six non-existent cases; the lawyers only realized the cases were fake when opposing counsel informed the court that they could not locate the cited sources. When both parties appeared before the court, Schwartz admitted to using ChatGPT and unknowingly including false citations. U.S. District Judge P. Kevin Castel found that the lawyers acted in bad faith and made misleading statements to the court. He ordered the lawyers to pay a $5,000 fine.
Judge Castel’s reasoning in imposing this fine should serve as both a reminder and a warning to lawyers everywhere. It is a reminder that attorneys have professional responsibilities and a warning that they will be punished for neglecting those responsibilities. The problem here is not that Schwartz and LoDuca used AI assistance in their research. Rather, the problem is that the lawyers neglected their responsibility to verify the accuracy of their AI-assisted research.
No matter how advanced AI technology becomes, a lawyer’s duty to their client remains the same. AI is just another addition to a lawyer’s tool belt. As Judge Castel made clear, AI cannot replace a lawyer’s duty to adequately represent their client.