Accuracy alone is insufficient in complex legal AI
In highly complex fields like law, accuracy is just the baseline, not the goal line. When people’s rights, contracts, or liabilities are on the line, precision is non-negotiable. But precision alone doesn’t make a system useful. Legal AI has to do more than get facts right; it must also understand authority, relevance, and context. In other words, it must know what matters most, why it matters, and whether what it finds is still valid law.
This is why perfect AI doesn’t exist. Accuracy levels can improve, but they’ll never reach 100% in complex, high-stakes industries. Min Chen, Senior Vice President and Chief AI Officer at LexisNexis, said it clearly: “There’s no such [thing] as ‘perfect AI’ because you never get 100% accuracy or 100% relevancy, especially in complex, high stake domains like legal.” Her view aligns with how serious AI systems should be designed: not for flawlessness, but for trustworthy, reliable performance under uncertainty.
Leaders should see this as a call to redefine performance metrics for AI. Accuracy is critical, but reliability, traceability, and legal soundness create real competitive advantage. Enterprises that integrate these dimensions build stronger trust and reduce risk exposure. It’s about building AI not just to answer, but to answer responsibly.
Advancements beyond standard retrieval-augmented generation (RAG)
LexisNexis is pushing AI beyond conventional retrieval-augmented generation (RAG). The company’s next move involves graph-based models (graph RAG and agentic graphs) that connect related information points instead of treating them as isolated data. This makes the system more capable of understanding relationships between laws, cases, and precedents, significantly improving how it determines authority and relevance.
Traditional semantic search does an adequate job of finding contextually related material. But in law, relevance without authority is useless. Min Chen explained how LexisNexis solves this by layering a “point of law” graph over semantic search results. This allows the AI to isolate documents that hold actual legal authority, reducing the risk of users relying on outdated or overruled information.
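The layering Chen describes can be sketched in a few lines. This is a minimal illustration only, assuming a simplified data model; the names (`LegalDoc`, `authority_graph`, `authoritative_results`) and the scoring formula are hypothetical, not LexisNexis APIs:

```python
# Hypothetical sketch: re-rank semantic search hits using a "point of law"
# authority graph. All names and weights here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class LegalDoc:
    doc_id: str
    relevance: float        # semantic similarity score in [0, 1]
    overruled: bool = False # flagged as no longer good law

def authoritative_results(hits, authority_graph, min_citations=2):
    """Keep hits that are still valid law and carry real authority.

    authority_graph maps doc_id -> list of doc_ids that cite it as
    controlling precedent ("point of law" edges).
    """
    ranked = []
    for doc in hits:
        if doc.overruled:
            continue  # drop overruled or outdated law outright
        citing = authority_graph.get(doc.doc_id, [])
        if len(citing) >= min_citations:
            # blend semantic relevance with graph-derived authority
            score = doc.relevance * (1 + len(citing) / 10)
            ranked.append((score, doc))
    return [doc for _, doc in sorted(ranked, key=lambda t: -t[0])]
```

The point of the sketch is the ordering of concerns: semantic relevance gets a document into the candidate set, but the authority graph decides whether it surfaces, which is why a heavily cited precedent can outrank a more textually similar but weakly cited one.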
Executives should view this as a step toward smarter enterprise data systems that not only retrieve information but interpret its worth. Deploying these graph-based AI architectures means moving beyond producing outputs to producing outcomes: decisions informed by verified, high-credibility data. That’s where AI drives real-world value: when it thinks more like an expert assistant than a search engine.
A project in mind?
Schedule a 30-minute meeting with us.
Senior experts helping you move faster across product, engineering, cloud & AI.
Emphasis on completeness and comprehensiveness in AI responses
Accuracy means little if an answer is incomplete. In law, missing even one dimension of a legal question can lead to wrong conclusions and real-world risk. That’s why LexisNexis has expanded its evaluation framework far beyond accuracy. Its teams assess legal AI outputs using sub-metrics that include authority, citation accuracy, hallucination rate, and comprehensiveness. This comprehensive scoring ensures that responses don’t just sound right; they cover every relevant legal angle.
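A multi-metric rubric like the one described can be sketched as a weighted score. The metric names follow the article, but the weights and the function itself are assumptions for illustration, not LexisNexis’s actual framework:

```python
# Illustrative sketch of a multi-metric evaluation rubric.
# Weights are assumed values, not LexisNexis's real configuration.
SUB_METRICS = {
    "accuracy": 0.3,
    "authority": 0.2,
    "citation_accuracy": 0.2,
    "comprehensiveness": 0.2,
    "hallucination_free": 0.1,  # 1.0 means no hallucinated content
}

def evaluate_response(scores):
    """Combine per-metric scores (each in [0, 1]) into one reliability score.

    Raises if any metric is missing: an answer cannot be judged
    reliable while a dimension goes unmeasured.
    """
    missing = set(SUB_METRICS) - set(scores)
    if missing:
        raise ValueError(f"unscored metrics: {sorted(missing)}")
    return sum(weight * scores[m] for m, weight in SUB_METRICS.items())
```

The design choice worth noting is the hard failure on missing metrics: a high score on four dimensions is not allowed to mask silence on the fifth, which mirrors the idea that completeness itself is a measurable requirement.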
Min Chen captured this idea clearly: “Completeness speaks directly to legal reliability.” Her statement reflects a principle that goes beyond legal AI. Decision-makers in any complex field must ensure that their AI systems provide complete, actionable insights, not partial answers that could mislead decisions.
For executives, this approach sets a precedent for evaluating AI as part of enterprise governance. Systems designed to measure and verify the completeness of answers reduce blind spots and increase organizational confidence in automation. Comprehensiveness becomes a measurable standard of reliability, something every leader can track, refine, and continuously improve.
Evolving human-AI collaboration through innovative tools
LexisNexis has built its next generation of AI tools around collaboration, not replacement. Its flagship solutions, Lexis+ AI (launched in 2023) and Protégé (launched in 2024), show how human expertise and AI capabilities can integrate seamlessly. Protégé, for example, combines knowledge graphs with semantic search to extract more authoritative content from the company’s extensive legal database.
The company is also developing “planner” and “reflection” AI agents that operate more dynamically. The planner agent breaks complex legal queries into smaller, manageable sub-questions that users can review and refine. The reflection agent drafts legal documents and then critiques its own work, making immediate, data-driven revisions. These are practical examples of how automation and human oversight can scale expertise without losing quality or accountability.
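The two agent roles described above follow a recognizable pattern that can be sketched generically. The `decompose`, `draft`, and `critique` callables stand in for LLM calls; their names and signatures are hypothetical, and nothing here reflects LexisNexis’s actual implementation:

```python
# Minimal sketch of the planner / reflection agent pattern.
# decompose, draft, and critique are placeholders for model calls;
# all names and signatures here are illustrative assumptions.

def planner(query, decompose):
    """Break a complex query into smaller, reviewable sub-questions."""
    sub_questions = decompose(query)
    # In practice, a user would review and refine this list before
    # the sub-questions are answered.
    return sub_questions

def reflection_loop(task, draft, critique, max_rounds=3):
    """Draft, self-critique, and revise until no issues remain."""
    text = draft(task)
    for _ in range(max_rounds):
        issues = critique(text)
        if not issues:
            break
        text = draft(task, feedback=issues)  # revise using the critique
    return text
```

The human stays in the loop at two points: reviewing the planner’s sub-questions before execution, and inspecting the final draft after the reflection loop exhausts its critiques, which is what keeps oversight intact as the automation scales.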
Min Chen emphasized this direction by saying, “I see the future [as] a deeper collaboration between humans and AI.” For executives, that statement points to a strategic takeaway: effective AI deployment isn’t about removing people from the process; it’s about enhancing their influence and reach. Human judgment, paired with adaptive automation, builds systems that can learn faster, operate more intelligently, and maintain trust within high-stakes industries.
Continuous iteration and improvement as drivers of AI reliability
Perfection is not the goal in AI; progress is. LexisNexis has built its approach to AI around continuous experimentation, iteration, and improvement. Every model update, evaluation cycle, and feedback loop brings the technology closer to producing higher-quality, more reliable results. This mindset acknowledges a key truth: uncertainty can never be eliminated, but it can be managed and reduced through disciplined development and testing.
Min Chen explained it clearly: “The quality of the AI outcome… is a continuous journey of experimentation, iteration and improvement.” This reflects the company’s view that consistent refinement, rather than one-time optimization, is what sustains customer trust and business value. In industries where accuracy and reliability have tangible financial and legal consequences, ongoing improvement is part of the product, not a stage of development.
For executives, this approach offers an operational model worth adopting. Building processes that incorporate feedback and continuous learning ensures technology remains aligned with evolving business needs and regulatory landscapes. Iteration keeps AI systems adaptable, sharp, and resilient in competitive environments. When improvement is institutionalized, reliability becomes a predictable outcome rather than a variable result.
Main highlights
- Accuracy isn’t enough in complex legal AI: AI systems in law must go beyond accuracy to ensure authority, contextual relevance, and reliability. Leaders should focus on multi-dimensional evaluation methods to reduce legal and reputational risk.
- Graph-based AI boosts reliability: By leveraging graph RAG and agentic graphs, LexisNexis improves how AI verifies authoritative information. Executives should invest in graph-enhanced systems to ensure data output is both relevant and defensible.
- Comprehensive answers drive trust: LexisNexis measures AI usefulness through completeness, ensuring responses fully address complex legal questions. Organizations should adopt similar metrics to increase confidence and accuracy in AI-assisted decision-making.
- Human-AI collaboration increases precision: Tools like Lexis+ AI and Protégé show how AI can augment, not replace, expert judgment. Leaders should design workflows where human oversight and machine intelligence continuously refine each other’s output.
- Iteration sustains AI reliability: Continuous experimentation and refinement are key to maintaining dependable outcomes in evolving markets. Executives should embed iterative improvement cycles into AI development to build systems that adapt and strengthen over time.