NEW YORK (AP) — It took only seconds for the judges on a New York appeals court to realize that the man addressing them from a video screen — a person about to present an argument in a lawsuit — not only had no law degree, but didn’t exist at all.
The latest bizarre chapter in the awkward arrival of artificial intelligence in the legal world unfolded March 26 under the stained-glass dome of New York State Supreme Court Appellate Division’s First Judicial Department, where a panel of judges was set to hear from Jerome Dewald, a plaintiff in an employment dispute.
“The appellant has submitted a video for his argument,” said Justice Sallie Manzanet-Daniels. “Ok. We will hear that video now.”
On the video screen appeared a smiling, youthful-looking man with a sculpted hairdo, button-down shirt and sweater.
“May it please the court,” the man began. “I come here today a humble pro se before a panel of five distinguished justices.”
“Ok, hold on,” Manzanet-Daniels said. “Is that counsel for the case?”
“I generated that. That’s not a real person,” Dewald answered.
It was, in fact, an avatar generated by artificial intelligence. The judge was not pleased.
“It would have been nice to know that when you made your application. You did not tell me that sir,” Manzanet-Daniels said before yelling across the room for the video to be shut off.
“I don’t appreciate being misled,” she said before letting Dewald continue with his argument.
Dewald later penned an apology to the court, saying he hadn’t intended any harm. He didn’t have a lawyer representing him in the lawsuit, so he had to present his legal arguments himself. And he felt the avatar would be able to deliver the presentation without his own usual mumbling, stumbling and tripping over words.
In an interview with The Associated Press, Dewald said he applied to the court for permission to play a prerecorded video, then used a product created by a San Francisco tech company to create the avatar. Originally, he tried to generate a digital replica that looked like him, but he was unable to accomplish that before the hearing.
“The court was really upset about it,” Dewald conceded. “They chewed me up pretty good.”
Even real lawyers have gotten into trouble when their use of artificial intelligence went awry.
In June 2023, two attorneys and a law firm were each fined $5,000 by a federal judge in New York after they used an AI tool to do legal research, and as a result wound up citing fictitious legal cases made up by the chatbot. The firm involved said it had made a “good faith mistake” in failing to understand that artificial intelligence might make things up.
Later that year, more fictitious court rulings invented by AI were cited in legal papers filed by lawyers for Michael Cohen, a former personal lawyer for President Donald Trump. Cohen took the blame, saying he didn’t realize that the Google tool he was using for legal research was also capable of so-called AI hallucinations.
Those were errors, but Arizona’s Supreme Court last month intentionally began using two AI-generated avatars, similar to the one that Dewald used in New York, to summarize court rulings for the public.
On the court’s website, the avatars — who go by “Daniel” and “Victoria” — say they are there “to share its news.”
Daniel Shin, an adjunct professor and assistant director of research at the Center for Legal and Court Technology at William & Mary Law School, said he wasn’t surprised to learn of Dewald’s introduction of a fake person to argue an appeals case in a New York court.
“From my perspective, it was inevitable,” he said.
He said it was unlikely that a lawyer would do such a thing because of tradition and court rules and because they could be disbarred. But he said individuals who appear without a lawyer and request permission to address the court are usually not given instructions about the risks of using a synthetically produced video to present their case.
Dewald said he tries to keep up with technology, having recently listened to a webinar sponsored by the American Bar Association that discussed the use of AI in the legal world.
As for Dewald’s case, it was still pending before the appeals court as of Thursday.