
Should law firms trust AI for legal research?




Introduction

Artificial Intelligence (AI) has emerged from the periphery of legal innovation into the core of legal operations. Nowhere is its influence more contentious—and consequential—than in the realm of legal research. While proponents extol AI’s ability to streamline precedent searches, interpret statutes, and even predict outcomes, sceptics cite concerns regarding accuracy, transparency, and ethical oversight. For law firms operating in Luxembourg and the broader European legal landscape, the question is not merely whether AI can assist, but whether it can be trusted to do so reliably, compliantly, and without undermining professional obligations.


The evolution of natural language processing (NLP) and large language models (LLMs) has significantly enhanced AI's ability to comprehend and interpret complex legal texts. These models can now process intricate legal language, enabling more accurate and efficient legal research.


The hallucination problem in LLMs: implications for legal research

Large language models, such as those underpinning contemporary AI tools used in legal practice, are trained on extensive corpora of textual data. Their capacity to generate coherent and contextually relevant language has led to their increasing deployment in tasks such as legal research, contract analysis, and predictive modelling. However, a fundamental and as yet unresolved issue persists within these systems: the tendency to hallucinate—a term that refers to the generation of factually incorrect or non-existent information that is presented in a syntactically plausible and authoritative manner.


In the context of legal research, where accuracy, traceability, and authority are paramount, this tendency presents significant risks to professional integrity, legal compliance, and client trust.


Understanding hallucination in LLMs

Hallucination in LLMs arises from the probabilistic nature of their architecture. These models do not possess intrinsic understanding or factual verification mechanisms. Instead, they rely on statistical pattern prediction—selecting the next word in a sequence based on likelihoods learned from training data.
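To make this concrete, the minimal Python sketch below (all scores invented) mimics how a model samples its next token from a probability distribution. Nothing in the procedure tests whether the resulting text is true, which is why a fluent but fabricated citation can be produced with non-trivial probability.

```python
import math
import random

def softmax(logits):
    """Convert raw model scores into a probability distribution."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical candidate continuations after the prompt
# "The leading case on depositary liability is ..."
candidates = ["Case A v B", "Case C v D", "a plausible but non-existent citation"]
logits = [2.1, 1.9, 1.4]  # invented scores for illustration

probs = softmax(logits)
choice = random.choices(candidates, weights=probs, k=1)[0]
print({c: round(p, 2) for c, p in zip(candidates, probs)})
print("sampled:", choice)  # the fabricated option is sampled roughly a fifth of the time
```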


There are two principal types of hallucination:

  • Intrinsic hallucination: output that contradicts or misrepresents the input prompt or source material.

  • Extrinsic hallucination: output that cannot be verified against the input, where the model fills gaps with fabricated facts in the absence of explicit cues.


The phenomenon is exacerbated in tasks that require precise citation, statutory interpretation, or jurisdictional specificity, which LLMs may not reliably perform unless constrained by a curated, up-to-date, and legally accurate dataset.

 

What is RAG?

Retrieval-Augmented Generation (RAG) is an advanced AI architecture that combines the generative capabilities of large language models (LLMs) with the accuracy and factual grounding of information retrieval systems. It is designed to overcome one of the key limitations of LLMs: their tendency to "hallucinate" or generate plausible but incorrect or unverifiable content.
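As a minimal sketch of the pattern (all names, documents, and the prompt format are hypothetical; a production system would use vector search and a real LLM call in place of the stubs below):

```python
def retrieve(query: str, corpus: dict, k: int = 2) -> list:
    """Rank documents by naive term overlap with the query; a real system
    would rank by embedding similarity. Returns the top-k (id, text) pairs."""
    q_terms = set(query.lower().split())
    scored = sorted(
        corpus.items(),
        key=lambda item: len(q_terms & set(item[1].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, passages: list) -> str:
    """Ground the generator in the retrieved passages, cited by identifier."""
    sources = "\n".join(f"[{doc_id}] {text}" for doc_id, text in passages)
    return (
        "Answer using ONLY the sources below, citing them as [id]. "
        "If the sources do not contain the answer, say so.\n\n"
        f"Sources:\n{sources}\n\nQuestion: {query}"
    )

corpus = {
    "art6-gdpr": "Article 6 GDPR lists the lawful bases for processing personal data",
    "art9-gdpr": "Article 9 GDPR restricts processing of special categories of data",
}
passages = retrieve("lawful basis for processing personal data", corpus)
prompt = build_prompt("When is processing of personal data lawful?", passages)
# `prompt` would now be sent to an LLM completion endpoint (not shown)
```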




[Figure: Retrieval-Augmented Generation (RAG) workflow: Documents → Retriever → Generator → Checks]

Integration of retrieval-augmented generation (RAG) and knowledge graphs

Recent research has introduced systems that combine RAG with knowledge graphs to improve legal information retrieval. These systems can identify and analyse complex connections among cases, statutes, and legal precedents, uncovering hidden relationships and predicting legal trends.


[Figure: User query workflow: User Query → Retriever → Knowledge Graph → Generator]

Such advancements aim to bridge the gap between traditional keyword-based searches and contextual understanding in legal research.


When retrieval-augmented generation (RAG) systems are integrated with knowledge graphs, the resulting architecture goes beyond simple document retrieval. Instead of fetching isolated passages of text, these enhanced systems extract logically structured and contextually relevant legal information, often revealing latent relationships between legal entities that would be difficult to identify manually or through keyword-based methods. This integration offers a more nuanced understanding of legal material, significantly improving the quality and accountability of outputs in professional practice.


One of the most significant advantages of this combination is that it yields more accurate and explainable outputs. In legal settings—where practitioners are obliged to justify their reasoning with reference to identifiable sources and legal logic—explainability is not a technical luxury but a regulatory necessity. For example, when researching data protection obligations under the GDPR, a traditional AI model might retrieve relevant provisions from Article 6 (lawful basis for processing). However, a RAG system enhanced with a knowledge graph can go further by identifying connected concepts such as "data controller," "special categories of data," and linked case law interpreting those terms. It can present these elements with explicit relational links—e.g., “Article 9 refers to conditions for processing special categories, which are subject to stricter safeguards”—giving the user a traceable and interpretable result.
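The toy sketch below, with a deliberately simplified graph and invented edge labels, illustrates how such relational links can be surfaced as a readable trace alongside the retrieved provisions:

```python
# Toy legal knowledge graph: nodes are provisions or concepts; each labelled
# edge records the relationship between them (content simplified for illustration).
graph = {
    "GDPR Art. 6": [("defines", "lawful bases for processing")],
    "GDPR Art. 9": [
        ("refers to", "special categories of data"),
        ("imposes", "stricter safeguards"),
    ],
    "special categories of data": [("interpreted in", "linked CJEU case law")],
}

def explain(node, depth=0, seen=None):
    """Walk outgoing edges and print each hop as a human-readable trace,
    so every retrieved concept carries an explicit relational link."""
    seen = seen if seen is not None else set()
    if node in seen:
        return
    seen.add(node)
    for relation, target in graph.get(node, []):
        print("  " * depth + f"{node} --{relation}--> {target}")
        explain(target, depth + 1, seen)

explain("GDPR Art. 9")
# GDPR Art. 9 --refers to--> special categories of data
#   special categories of data --interpreted in--> linked CJEU case law
# GDPR Art. 9 --imposes--> stricter safeguards
```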


This traceability enhances client trust and legal compliance by showing not only what the answer is, but why the system produced it.


Moreover, the integration of knowledge graphs helps bridge the gap between statistical pattern recognition and legal reasoning. Large language models (LLMs) excel at generating human-like text by predicting word sequences based on training data. However, they often lack an understanding of legal hierarchies, definitional dependencies, and doctrinal relationships, which are essential to reasoning accurately within legal frameworks. By embedding a knowledge graph—where legal concepts such as statutes, cases, and regulations are mapped as nodes and edges—the system can simulate legal inference. It can "understand," for instance, that a ruling from the Court of Justice of the European Union (CJEU) interpreting a directive must be read in light of the directive’s recitals and implementing acts in member states.


As a practical illustration, consider a legal query on the scope of financial reporting obligations for UCITS funds under Luxembourg law. A standard model might return a passage from the Law of 17 December 2010. However, a knowledge graph-aware RAG system could identify and retrieve not only the relevant articles from the law, but also:

  • Cross-references to the CSSF circulars interpreting the reporting timelines;

  • Applicable ESMA guidelines that supplement EU-level interpretation;

  • Related administrative sanctions involving reporting breaches;

  • Definitions of key terms such as “management company” and “depositary” as used in the specific context of UCITS.


This layered and interconnected output simulates the structure of legal analysis, rather than merely aggregating surface-level text. It aligns more closely with how legal practitioners actually research, reason, and advise.


[Figure: Knowledge graph-enhanced RAG for the UCITS example, linking the relevant law articles to circulars, guidelines, sanctions, and definitions]

In summary, when RAG systems are enhanced with knowledge graphs, they evolve from advanced search engines into assistive legal reasoning systems. Their outputs become not only more useful, but also more defensible, bridging the divide between raw textual retrieval and structured legal analysis. This is especially critical in high-stakes domains such as regulatory compliance, financial services, and cross-border advisory, where legal nuance is not optional and the credibility of reasoning must be demonstrable.


Applied use of RAG in legal research

The application of retrieval-augmented generation (RAG) in the legal domain marks a significant evolution in legal research methodology. Unlike traditional keyword-based search tools or standalone generative models, RAG systems combine the precision of document retrieval with the contextual fluency of language generation. This hybrid structure enables more nuanced and evidence-based legal outputs, which is particularly valuable in environments that demand both speed and reliability, such as litigation preparation, regulatory compliance, or transactional due diligence.


One of the most immediate benefits of RAG in legal research is its ability to identify relevant precedents and statutory materials in response to natural language queries. Unlike conventional legal databases that often require structured inputs or Boolean logic, a RAG system allows the user to input queries in the manner a lawyer might phrase them naturally. For instance, a query such as “Has the European Court of Justice addressed the compatibility of internal whistleblowing procedures with the EU Whistleblower Directive?” can be interpreted contextually by the model. It will then retrieve and rank relevant judgments, directives, and legal commentaries, not merely by keyword occurrence but by conceptual relevance to the legal issue presented. This process facilitates more comprehensive and accurate results, particularly in cross-border or multilingual contexts, where linguistic variation often complicates search precision.
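The sketch below illustrates, in simplified form, why this differs from keyword matching: `embed` is a stub standing in for any sentence-embedding model (the vectors are invented), and documents are ranked by cosine similarity to the query rather than by shared words.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norms

def embed(text):
    """Stub for a multilingual sentence-embedding model; these vectors
    are invented purely to illustrate conceptual proximity."""
    fake_vectors = {
        "compatibility of internal whistleblowing procedures with the directive": [0.9, 0.1, 0.3],
        "judgment on internal reporting channels, Directive (EU) 2019/1937": [0.8, 0.2, 0.4],
        "VAT treatment of cross-border supplies of services": [0.1, 0.9, 0.2],
    }
    return fake_vectors[text]

query = "compatibility of internal whistleblowing procedures with the directive"
docs = [
    "judgment on internal reporting channels, Directive (EU) 2019/1937",
    "VAT treatment of cross-border supplies of services",
]
ranked = sorted(docs, key=lambda d: cosine(embed(query), embed(d)), reverse=True)
print(ranked[0])  # the conceptually related judgment, despite limited word overlap
```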


Beyond retrieval, the generative component of RAG supports the drafting of legal summaries and argumentation that are grounded in the retrieved legal materials. In practice, this allows legal professionals to produce first drafts of legal memos, risk assessments, or advisory notes that are informed by actual statutes, case law, or regulatory texts. Because the generation process is anchored in verified sources, the risk of factual fabrication is substantially reduced compared to standalone large language models. Some platforms employing this architecture even embed direct links or citations to the source materials within the generated output, thereby facilitating traceability and streamlining the review process.
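One safeguard this sourcing makes straightforward is mechanical citation checking. The hypothetical sketch below flags any citation marker in a generated draft that does not correspond to a document the system actually retrieved:

```python
import re

def unverified_citations(draft: str, retrieved_ids: set) -> list:
    """Return citation markers like [art6-gdpr] appearing in the draft that
    do not match any retrieved document: candidates for fabrication."""
    cited = set(re.findall(r"\[([^\]]+)\]", draft))
    return sorted(cited - retrieved_ids)

draft = "Processing is lawful on the bases in [art6-gdpr]; see also [made-up-case]."
print(unverified_citations(draft, {"art6-gdpr", "art9-gdpr"}))
# ['made-up-case']  -> the reviewer is pointed straight at the suspect citation
```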


A particularly valuable feature of RAG systems is their ability to support due diligence and regulatory compliance through source traceability. Legal professionals are often required to demonstrate not only the conclusions they have reached, but the precise legal texts on which those conclusions are based. RAG systems respond to this need by preserving the connection between generated outputs and their documentary sources. This traceability is essential in contexts such as data protection assessments, anti-money laundering protocols, or financial sector compliance, where an evidentiary audit trail may be required by regulators or internal governance bodies.
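As a sketch of what such provenance might look like in code (all identifiers hypothetical), each AI-assisted answer can be stored together with the query, the grounding sources, and a timestamp, forming the evidentiary audit trail described above:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class ResearchRecord:
    """One auditable unit of AI-assisted research: what was asked, which
    sources grounded the answer, what was produced, and when."""
    query: str
    source_ids: tuple
    answer: str
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

record = ResearchRecord(
    query="Scope of UCITS reporting obligations under Luxembourg law",
    source_ids=("law-2010-12-17-art-147", "cssf-circular-xx"),  # hypothetical ids
    answer="Summary of obligations, citing [law-2010-12-17-art-147] ...",
)
print(record.created_at.isoformat(), record.source_ids)
```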


[Figure: Applied use of RAG in legal research: natural language query → RAG system → relevant precedents, grounded summaries, audit trail]

Human-in-the-loop without regression to manual research

One of the principal objections to the use of Human-in-the-Loop (HITL) systems in legal AI applications is the concern that such oversight may negate the very efficiencies AI seeks to deliver. If every AI-generated output must be reviewed, verified, and potentially rewritten by a qualified lawyer, what value does automation provide? This concern, while understandable, mischaracterises both the purpose and practical implementation of HITL frameworks. When designed correctly, HITL processes enhance reliability without reverting to manual research models.


Modular oversight rather than line-by-line review

When designed properly, a HITL model does not entail line-by-line review or wholesale rewriting of AI-generated content. Rather, it introduces structured checkpoints at key stages of the process, where human judgment is most valuable. For example, the initial query submitted to the AI system can be reviewed or refined by the user to ensure it is jurisdictionally appropriate and legally precise. This front-end intervention can significantly reduce the likelihood of irrelevant or misdirected outputs.

Once the AI tool has generated its response—often including citations, case summaries, or legal arguments—the human reviewer need not verify every sentence. Instead, the lawyer can focus on validating whether the cited authorities exist, whether they are accurately characterised, and whether the proposed legal reasoning holds up to scrutiny. This selective oversight, when guided by confidence scores or source attribution features built into the AI system, allows lawyers to exercise professional judgment without redoing the underlying research.

In practice, this means that the lawyer’s role evolves from that of primary researcher to that of reviewer and editor. The AI system handles much of the mechanical or repetitive labour, such as searching databases, synthesising similar clauses, or summarising case law across jurisdictions. The human user intervenes only at those points where accuracy, interpretation, or ethical context is required. This model preserves efficiency while safeguarding legal and professional standards.
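A minimal sketch of such a checkpoint, assuming the platform exposes a per-finding confidence score and source list (both hypothetical fields here), might route only low-confidence or unsourced findings to the lawyer:

```python
def route_for_review(findings, threshold=0.8):
    """Split AI findings into those a lawyer must inspect and those that can
    be accepted subject to spot checks. The threshold is a policy choice."""
    needs_review = [f for f in findings
                    if f["confidence"] < threshold or not f["sources"]]
    auto_accept = [f for f in findings if f not in needs_review]
    return needs_review, auto_accept

findings = [
    {"claim": "The directive covers entities with 50 or more workers",
     "confidence": 0.93, "sources": ["dir-2019-1937-art-8"]},
    {"claim": "National transposition extends the scope further",
     "confidence": 0.55, "sources": []},
]
flagged, accepted = route_for_review(findings)
print(f"{len(flagged)} finding(s) routed to the lawyer")  # 1
```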



[Figure: Human-in-the-loop research workflow: the human refines the query, the AI generates output and performs research tasks, the human validates sources and assesses results]

Assisted, not redundant, research

HITL should be viewed not as a fail-safe but as a strategic filter, one that optimises the output quality without re-imposing the burdens of traditional legal research. AI tools can:

  • Surface the large majority of potentially relevant cases, allowing the lawyer to verify and apply only the subset that is legally pertinent.

  • Provide automated cross-references across multilingual regulations, which the human operator can accept, modify, or dismiss.

This reduces the time lawyers spend locating sources, enabling them to focus on interpretation, synthesis, and application—functions that remain firmly in the human domain.


Workflow integration with legal-tech platforms

Many legal technology platforms are already incorporating human-in-the-loop capabilities into their design. For instance, generated outputs are often accompanied by direct links to the source material, allowing the reviewer to verify claims without leaving the platform. Some systems also offer versioning features, collaborative editing environments, or audit trails, which make it easier to integrate AI outputs into broader legal workflows involving multiple stakeholders.

It is important to recognise that the objective of human-in-the-loop design is not to duplicate work, but to strategically enhance reliability. Rather than undermining automation, it enables legal professionals to benefit from AI tools while maintaining control over the outcome. In this sense, HITL does not represent a regression to manual research, but a maturation of legal AI—one in which human oversight complements computational capacity in a way that is both responsible and efficient.


AI-powered legal research tools

Finally, it is worth noting that several AI tools have emerged to streamline legal research:

  • Casetext CoCounsel: An AI legal assistant that helps with document review, legal research, and contract analysis.

  • Hebbia: Uses AI to extract information from various document formats, aiding in due diligence and legal research.

  • vLex: Offers multimodal capabilities, transforming audio and video content into actionable legal intelligence.


Conclusion

As artificial intelligence continues to reshape the legal profession, its integration into legal research is no longer speculative but operational. Retrieval-augmented generation offers a practical model for producing faster, more accurate, and verifiable legal insights, while human-in-the-loop oversight ensures that efficiency never compromises rigour. Yet despite these advances, regulatory and policy frameworks must evolve in parallel. The exclusion of lawyers from public AI adoption support is not only short-sighted—it runs counter to the very innovation agenda it seeks to promote. Moving forward, the legal profession must embrace AI not with blind enthusiasm, but with informed conviction and institutional support. The tools are ready. The profession should be allowed—and encouraged—to use them.



