
Artificial Intelligence and the Law: Legal Research

Legal Research using AI

Although AI cannot conduct legal research for you, it can improve your efficiency on many supporting tasks. For example, it can help:

  • Provide a starting point
  • Summarize information and search results
  • Review, analyze, and process documents
  • Review and manage contracts
  • Draft legal memos and/or briefs

To use an AI tool effectively, visit the University of Windsor's "Guidance for Law Students on using AI in Legal Research and Writing Applications (Draft)" or apply the prompt engineering strategies listed in the box below.

Limitations of AI

It is important to keep in mind that AI tools can only generate content and make decisions based on the information they received during training.

Therefore, AI:

  • Does not always understand context 
  • Can provide fictitious information and present it as fact, known as hallucinations or confabulations (see the box below)
  • Can make mistakes
  • Lacks ethical judgment
  • Cannot reliably provide sources
  • May have outdated information (e.g., GPT-3.5, an earlier ChatGPT model, was last updated in 2022)
  • Is not fully reliable

*Always fact-check and verify the information an AI tool provides.*

Apply the ROBOT Test, found on the AI Tools page, to help you critically evaluate AI tools.

Prompt Engineering Strategies

Prompt engineering is the process of designing and refining inputs (prompts) that an AI tool can interpret to produce relevant outputs.
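
An effective prompt typically states a role, the relevant context, a specific task, and constraints on the format and scope of the answer. The example below is illustrative only; the topic and wording are hypothetical:

  "You are a research assistant helping a first-year law student. In plain language, summarize the general factors courts consider when assessing reasonable notice of termination in Canadian employment law. Present the factors as a short bullet list, note any significant exceptions, and remind me that this is general information, not legal advice."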

The links below provide examples, tips, and guidance on developing prompts to use in AI tools to obtain effective results:

Hallucinations

A hallucination occurs when an AI tool presents false or fabricated information as fact. Relying on such output can be dangerous if it is not thoroughly fact-checked, so always verify the information an AI tool provides.

See *My "Hallucinating" Experience with ChatGPT* by retired Judge Herbert B. Dixon for an example.

The IBM article, *Techniques for Avoiding Undesirable Output*, offers tips on preventing hallucinations and bias.