Artificial Intelligence in Mediation
Source: 多元化纠纷解决机制 | Published: 2026-03-24

Introduction

  In recent years, the rapid advancement of artificial intelligence (“AI”) has prompted growing debate about its potential role in dispute resolution, particularly in mediation. Legal technology products built on large language models (“LLMs”) have progressed beyond simple drafting functions to become more sophisticated tools capable of supporting mediation. One illustration is the eBRAM International Online Dispute Resolution Centre in Hong Kong, which has integrated real-time transcription and multilingual translation services. These developments show how AI can enhance communication, bridge language barriers, and improve overall efficiency in the mediation process.

  At the international level, professional bodies have begun to address the integration of AI into mediation and arbitration. In June 2025, the Mediation Committee of the International Bar Association (“IBA”) issued the Guidelines on the Use of Generative Artificial Intelligence in Mediation (“IBA Guidelines”), marking the first steps toward developing international standards in this area. By contrast, the discussion of AI in mediation in Hong Kong has remained relatively limited, and neither widespread adoption of AI in mediation nor the formulation of local guidelines has yet taken place. This gap makes it timely to consider both the potential benefits of AI for mediators in Hong Kong and the risks and challenges that must be addressed if AI is to be responsibly integrated into mediation practice.

  The Use of AI in Mediation

  Supporting Reframing through Neutral Language

  Neutral language is a cornerstone of effective mediation. It emphasises facts, avoids blame, and promotes constructive dialogue. One of the key skills of a mediator is the ability to reframe heated or accusatory statements into neutral language that reduces tension and encourages collaboration. Ideally, issues should be expressed in positive and balanced terms wherever possible.

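As an illustration only, a mediator-support tool might wrap a party's heated statement in a reframing instruction before sending it to a language model. The prompt wording and the `call_llm` callable below are assumptions made for this sketch, not any vendor's actual API:

```python
# Sketch: prompting an LLM to reframe an accusatory statement into
# neutral, fact-focused language. REFRAME_PROMPT and call_llm are
# illustrative placeholders, not a real product's interface.

REFRAME_PROMPT = (
    "Rewrite the following statement in neutral, fact-focused language. "
    "Remove blame and accusatory wording, keep every factual claim, and "
    "express the underlying issue in positive, balanced terms.\n\n"
    "Statement: {statement}"
)

def build_reframe_prompt(statement: str) -> str:
    """Build the reframing instruction sent to the language model."""
    return REFRAME_PROMPT.format(statement=statement)

def reframe(statement: str, call_llm=None) -> str:
    """Reframe a statement; call_llm is any callable mapping prompt -> text."""
    prompt = build_reframe_prompt(statement)
    if call_llm is None:
        # No model wired in: return the prompt so a human can inspect it.
        return prompt
    return call_llm(prompt)
```

Keeping the prompt construction separate from the model call lets the mediator review exactly what is sent to the model, which matters for the confidentiality concerns discussed later in this article.
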
  Assisting Mediators in Understanding the Case

  Mediation cases can range widely, from family disputes to complex corporate conflicts. Some matters may involve highly technical or professional subject matter, such as medical, technological, or legal terminology, which can be difficult for a mediator without specialised training to fully grasp. This can make it challenging for the mediator to follow the discussion or to respond effectively during the session.

  LLMs can assist by helping mediators prepare for such cases and by simplifying complex terminology into plain language. With this support, mediators are better equipped to grasp technical issues and to engage with the parties. Importantly, if a party introduces specialised terms during the session, an LLM can provide immediate clarification, allowing the mediator to stay on track and guide the discussion.

  For example, consider the following party statement: “The defendant breached the licensing agreement by reverse-engineering our encryption algorithm, which violates the non-compete clause and exposes us to GDPR compliance risks.”

  An LLM might render this into simpler terms for the mediator: “The concern is that the other side copied and used your data security system in a way that may breach the contract and could also raise legal issues about data protection.”

  By translating technical concepts into accessible language, AI enables mediators to maintain control of the process and to ensure that discussions remain clear and productive, even in highly specialised disputes.

  Facilitating the Right Questions

  Another critical skill in mediation is the ability to ask the “right” question. Such questions help uncover the parties’ underlying interests, test the reality of their positions, and encourage them to consider alternative options. However, identifying and framing the right question is not straightforward. A question is “right” not only if it is objectively relevant, but also if it is phrased in a way that the party can subjectively understand and meaningfully respond to. In practice, the effectiveness of a question often depends on the party’s education level, age, professional background, and communication style. A question that resonates with a university professor may not work for an investment banker, and vice versa.

  LLMs can provide valuable assistance in this area. As Judge Newsom observed in Snell v. United Specialty Insurance Co., 102 F.4th 1208 (11th Cir. 2024), these models are trained on vast amounts of ordinary-language input, ranging from Hemingway novels and Ph.D. dissertations to gossip rags and comment threads. This broad training enables them to tailor the style and tone of a question to suit the audience, while preserving its substance.

  For example, when a mediator is asking a university professor to uncover underlying interests, a more reflective and academic tone may be effective: “Is your main concern the immediate financial shortfall, or how this delay affects your reputation and future cooperation?”

  By contrast, when a mediator is asking a trader to uncover underlying interests, a more direct and practical approach may be preferable: “What’s more important now—quick cash in, or keeping the counterparty for future deals?”

  In this way, LLMs can support mediators by generating tailored questions that align with the parties’ backgrounds and communication styles, which increases the likelihood of constructive dialogue.

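One way to operationalise this tailoring is to pair the substance of the question with a style instruction chosen for the party's profile. The profile labels and prompt texts below are assumptions made for this sketch, not a standard taxonomy:

```python
# Sketch: selecting a style instruction that matches the party's
# communication style before asking an LLM to draft a question.
# The profiles and wording are illustrative assumptions.

STYLE_PROFILES = {
    "academic": "Use a reflective, measured tone suited to a reader who "
                "values nuance and careful qualification.",
    "direct":   "Use short, practical wording focused on concrete "
                "outcomes and trade-offs.",
}

def tailor_question_prompt(goal: str, profile: str) -> str:
    """Combine a mediation goal with a style instruction for the model."""
    style = STYLE_PROFILES.get(profile, STYLE_PROFILES["direct"])
    return (
        f"{style}\n"
        f"Draft one open question a mediator could ask to {goal}. "
        "Keep the substance of the question identical regardless of tone."
    )
```

The final instruction line reflects the point made above: only the style and tone should vary with the audience, never the substance of the question.
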
  Generating Settlement Options

  AI can also support mediators in generating settlement options. Because LLMs are trained on vast amounts of data, including examples of past disputes and resolutions, they can help mediators brainstorm a broad range of possible solutions. This can encourage parties to move beyond preliminary positions and consider creative or alternative approaches that they might not otherwise have identified on their own.

  Drafting Settlement Agreements

  An ideal outcome in mediation is for the parties to sign a settlement agreement promptly once consensus has been reached. In practice, however, delays often arise because of the time needed to prepare and refine the written agreement. LLMs, particularly legal-focused products such as Thomson Reuters CoCounsel, can assist mediators by generating draft settlement agreements and suggesting edits. This support helps streamline the process and allows the parties, after obtaining proper legal advice and carefully considering the terms, to finalise and sign the agreement without unnecessary delay.

  The Risks of Using AI in Mediation

  While AI offers promising benefits, its use in mediation is not without risks. Mediators must exercise caution and remain alert to potential challenges when integrating AI into the mediation process. The following four key risks deserve particular attention:

  1. AI Hallucinations

  Language models can generate answers that appear plausible but are in fact false. These “hallucinations” may arise even in response to simple questions and cannot be eliminated by prompting alone. If left unchecked, hallucinations may produce unreasonable or misleading statements, or even questions that inadvertently appear to favour one party. Such outputs could inflame emotions, damage trust, and ultimately derail the mediation.

  2. Bias in AI Algorithms

  LLMs are trained on vast datasets that may reflect historical or structural biases. As a result, their outputs can unintentionally reinforce stereotypes or discriminatory patterns. If a mediator were to adopt such outputs uncritically, this could compromise the impartiality of the process. Maintaining neutrality requires that mediators critically review and adapt any AI-generated suggestions rather than relying on them wholesale.

  3. Loss of Human Interaction

  AI tools currently lack the ability to fully capture or respond to human emotion. A skilled mediator can recognise subtle expressions of frustration, anger, or anxiety and adapt their approach accordingly. Over-reliance on AI-generated questions or statements risks overlooking these emotional dimensions, weakening the human connection that is essential to building trust and rapport. Mediation is ultimately a human-centred process, and no AI tool can replace the importance of empathy and interpersonal sensitivity.

  4. Confidentiality and Disclosure

  Confidentiality is a core principle of mediation, enshrined in Section 8 of the Mediation Ordinance (Cap. 620), which prohibits disclosure of mediation communications except in limited circumstances set out in subsections 8(2) and 8(3). It remains unclear whether sharing mediation communications with an LLM, particularly when the model is hosted on cloud servers, would constitute a breach of these provisions. To avoid any doubt, parties’ express consent should be obtained before AI is used to assist in mediation.

  The IBA Guidelines provide useful reference points. Part Two of the Guidelines stresses that users of AI must take reasonable steps to ensure that confidential information is not compromised, noting that data entered into proprietary or open-source LLMs may be vulnerable to data breaches and unintended disclosure. The Guidelines recommend measures such as anonymising inputs, limiting the amount of information entered to what is necessary to achieve the desired outputs, and reviewing the privacy policies of any AI tools before use. They also include a sample disclosure statement in Part Three, which mediators or parties can adopt to inform others that AI tools are being used and of the potential risks.

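The anonymisation step the IBA Guidelines recommend can be sketched as a simple redaction pass applied before any mediation text leaves the mediator's machine. The patterns below are illustrative assumptions; real redaction of personal data would require a far more careful, jurisdiction-aware approach:

```python
import re

# Sketch: a minimal anonymisation pass over mediation text before it is
# sent to an external LLM, in the spirit of the IBA Guidelines' advice
# to anonymise inputs. The regex patterns are deliberately simple
# illustrations, not a complete personal-data detector.

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s-]{6,}\d")

def anonymise(text: str, party_names: list[str]) -> str:
    """Replace known party names, emails, and phone numbers with placeholders."""
    for i, name in enumerate(party_names, start=1):
        text = text.replace(name, f"[PARTY_{i}]")
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text
```

For example, `anonymise("Contact Chan Tai Man at tmchan@example.com", ["Chan Tai Man"])` (a hypothetical name and address) would yield text with `[PARTY_1]` and `[EMAIL]` placeholders, limiting what the model provider ever sees.
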
  Managing the Risks

  Most of the technical limitations of LLMs can be managed by the human mediator. It is essential to remember that AI serves only as an assistance tool, not as a substitute for the mediator. From this perspective, the “Two Guiding Rules” set out in the Guidelines on the Use of Generative Artificial Intelligence for Judges and Judicial Officers and Support Staff of the Hong Kong Judiciary provide a useful reference point for mediators, even though they were drafted for the judiciary rather than mediation practice.

  The first rule is that mediator functions must never be delegated to AI. The second rule is that mediators must remain vigilant about the output generated by AI chatbots, in particular its factual accuracy, potential bias, and possible infringement of intellectual property rights, and must use such output at their own risk. The mediator remains responsible for the use of AI and for the end product.

  Conclusion

  AI has significant potential to assist mediators, but its limitations, including hallucinations, bias, loss of human interaction, and confidentiality concerns, demand vigilance and sound professional judgment. While the IBA has already issued guidelines setting out both applications and safeguards, Hong Kong has yet to develop a comparable framework, and establishing such guidance would help ensure that AI is integrated responsibly, in line with the principles of neutrality, confidentiality, and party autonomy that underpin mediation. The challenge ahead lies in striking the right balance between embracing innovation and safeguarding the integrity of the mediation process.

  Footnotes and citations have been omitted due to length constraints.

  Authors:

  Siu Wing Yee, Sylvia, JP, Chairlady of the Joint Mediation Helpline Office; Consultant, Hui Doe & Sum Law Firm LLP in Association with Yingke (Hong Kong) Law Firm

  Iu Kwan Yuen, Barrister-at-law; MPhil student (part-time), Institute of Advanced Legal Studies, UOL

Original title: Siu Wing Yee and Iu Kwan Yuen, "Artificial Intelligence in Mediation" (人工智能在调解中的应用)