aialt/agentic-legal-reasoning

⚖️ Divide and Enhance - Legal Agent Framework

A Code Implementation of the "Divide and Enhance" Paper

This project provides a reference implementation of the "Divide" component of the paper "Divide and Enhance: Agentic Legal Reasoning with Domain-Adapted LLMs for Chinese Law". It establishes a complete client-server architecture: a local model deployment service and an intelligent agent application that calls this service.

🚀 Architecture Overview

The project is divided into two main components, which should be run as separate processes:

  1. Model_Server: A dedicated FastAPI server responsible for loading the large language model into GPU memory and exposing it via an API endpoint. Its sole job is to perform text generation.
  2. Main Application: The core client application that implements the "Divide" workflow. It handles user queries, orchestrates the Triage -> Decompose -> Execute -> Synthesize pipeline, and communicates with the model_server for AI reasoning.
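The four-stage workflow described above can be sketched as follows. This is an illustrative outline, not the repository's actual API: the function name `run_divide_pipeline` is hypothetical, and `call_llm` stands in for `services/llm_interface.py`, which would forward prompts to the model server.

```python
# Minimal sketch of the Divide pipeline: Triage -> Decompose -> Execute -> Synthesize.
# All names and prompt wordings here are illustrative assumptions.
from typing import Callable, List

def run_divide_pipeline(query: str, call_llm: Callable[[str], str]) -> str:
    # 1. Triage: decide whether the query needs decomposition at all.
    verdict = call_llm(f"Triage this legal question. Answer SIMPLE or COMPLEX: {query}")
    if "SIMPLE" in verdict.upper():
        return call_llm(f"Answer directly: {query}")

    # 2. Decompose: split the complex question into sub-questions (one per line).
    plan = call_llm(f"Break this legal question into sub-questions, one per line: {query}")
    sub_questions: List[str] = [line.strip() for line in plan.splitlines() if line.strip()]

    # 3. Execute: answer each sub-question (RAG and web-search tools would plug in here).
    sub_answers = [call_llm(f"Answer this sub-question: {q}") for q in sub_questions]

    # 4. Synthesize: merge the sub-answers into one final response.
    context = "\n".join(f"Q: {q}\nA: {a}" for q, a in zip(sub_questions, sub_answers))
    return call_llm(f"Synthesize a final answer to '{query}' from:\n{context}")
```

Because the model call is injected as a plain callable, the control flow can be exercised without a GPU or a running server.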

📂 Directory Structure

DIVIDE_AGENT_FRAMEWORK/
├── configs/
│   └── settings.py
├── divide/
│   ├── agent_workflow.py
│   ├── steps/                  # Implementations for Assess, Decompose, Execute, Synthesize
│   └── tools/                  # Tools available to the LangChain agent (RAG, Web Search)
├── Model/                      # Directory for storing model files
│   ├── Base_Model/
│   ├── Fine_Tuned_Model/
│   └── Lawformer/
├── model_server/               # FastAPI Model Server
│   ├── config.py               # Configuration for the model server (model path, port)
│   └── server.py               # Main FastAPI server application
├── Resources/                  # Data files for the RAG system
│   ├── laws_and_regulations.json
│   └── legal_dictionary.txt
├── retrieval/                  # Core implementation of the Hybrid RAG system
│   ├── document_loader.py
│   ├── bm25_retriever.py
│   ├── semantic_retriever.py
│   ├── hybrid_retriever.py
│   └── utils.py
├── services/
│   └── llm_interface.py        # Wrapper for all communications with the LLM service
├── main.py                     # Entry point for the agent application
├── Readme.md                   # This file
└── requirements.txt            # All project dependencies
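The retrieval/ directory above combines a lexical BM25 retriever with a semantic retriever into a hybrid system. One common way to fuse the two result lists is weighted Reciprocal Rank Fusion (RRF); the sketch below shows that technique as a self-contained example, and is not necessarily the exact scheme used in `hybrid_retriever.py`.

```python
# Weighted Reciprocal Rank Fusion: merge two ranked lists of document IDs.
# A document scores 1/(k + rank) in each list it appears in, weighted per list;
# documents ranked well by both retrievers rise to the top.
from collections import defaultdict
from typing import Dict, List

def rrf_fuse(bm25_ranking: List[str], semantic_ranking: List[str],
             bm25_weight: float = 0.5, k: int = 60) -> List[str]:
    """Return a single fused ranking of document IDs, best first."""
    scores: Dict[str, float] = defaultdict(float)
    for rank, doc_id in enumerate(bm25_ranking):
        scores[doc_id] += bm25_weight / (k + rank + 1)
    for rank, doc_id in enumerate(semantic_ranking):
        scores[doc_id] += (1.0 - bm25_weight) / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)
```

For example, `rrf_fuse(["a", "b", "c"], ["b", "c", "a"])` ranks `b` first, since it places well in both lists.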

🛠️ Setup and Installation

Prerequisites

  • Python 3.9+
  • NVIDIA GPU with CUDA installed
  • pip package manager

Step 1: Clone the Repository

git clone <url>
cd DIVIDE_AGENT_FRAMEWORK

Step 2: Install Dependencies

Install all dependencies for the entire project from the unified requirements.txt file.

pip install -r requirements.txt

Step 3: Configure the Project

This is the most crucial step: you must configure the paths and keys before running the project.

1. Configure the Model Server:

  • Open model_server/config.py.
  • Modify MODEL_PATH to the absolute path of your fine-tuned language model folder (e.g., pointing to the Model/Fine_Tuned_Model directory).
  • Adjust HOST, PORT, and MAX_GPU_MEMORY if needed.
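A `model_server/config.py` along these lines would satisfy the settings named above. The variable names come from the instructions; all values, and the memory-cap format, are placeholder assumptions to adapt to your machine.

```python
# Illustrative model_server/config.py -- example values only.
MODEL_PATH = "/abs/path/to/Model/Fine_Tuned_Model"  # absolute path to your model folder
HOST = "0.0.0.0"          # listen on all interfaces
PORT = 8000               # port the FastAPI server binds to
MAX_GPU_MEMORY = "20GiB"  # cap on GPU memory for model loading (format is an assumption)
```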

2. Configure the Agent Application:

  • Open configs/settings.py.
  • Ensure TRIAGE_DECOMPOSE_MODEL_URL and SYNTHESIZE_AGENT_MODEL_URL match the address of your running model server.
  • Modify all file paths (LAW_DATA_PATH, LEGAL_JIEBA_DICT_PATH, SEMANTIC_EMBEDDING_MODEL_PATH) to the absolute paths of your resource and model files.
  • Fill in your BOCHA_API_KEY for the web search tool.
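Put together, a `configs/settings.py` might look like the sketch below. The variable names come from the steps above; the values, including the `/generate` endpoint path on the model server URLs, are illustrative assumptions.

```python
# Illustrative configs/settings.py -- example values only.
TRIAGE_DECOMPOSE_MODEL_URL = "http://127.0.0.1:8000/generate"  # must match the model server
SYNTHESIZE_AGENT_MODEL_URL = "http://127.0.0.1:8000/generate"
LAW_DATA_PATH = "/abs/path/to/Resources/laws_and_regulations.json"
LEGAL_JIEBA_DICT_PATH = "/abs/path/to/Resources/legal_dictionary.txt"
SEMANTIC_EMBEDDING_MODEL_PATH = "/abs/path/to/Model/Lawformer"
BOCHA_API_KEY = "your-bocha-api-key"  # required by the web search tool
```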

▶️ How to Run

You need to start the two components in two separate terminal windows.

Terminal 1: Start the Model Server

# Navigate to the model server directory
cd model_server

# Start the server
python server.py

Wait until you see the log message indicating that the model has been successfully loaded and the server is running.

Terminal 2: Run the Agent Application

# From the root directory (DIVIDE_AGENT_FRAMEWORK)
python main.py

Once the agent application initializes the RAG system, you can start typing your legal questions in the console.
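Under the hood, each reasoning step is an HTTP round-trip to the model server. The sketch below shows how a thin client wrapper in the spirit of `services/llm_interface.py` might work; the JSON schema (`{"prompt": ...}` in, `{"text": ...}` out) and the endpoint URL are assumptions, not the repository's actual contract. The transport is injectable so the wrapper can be exercised without a running server.

```python
# Hypothetical client wrapper for the model server. Schema and URL are assumptions.
import json
import urllib.request
from typing import Callable, Optional

def build_request(prompt: str, max_new_tokens: int = 512) -> bytes:
    """Serialize a generation request body as JSON bytes."""
    return json.dumps({"prompt": prompt, "max_new_tokens": max_new_tokens}).encode("utf-8")

def parse_response(raw: bytes) -> str:
    """Extract the generated text from a JSON response body."""
    return json.loads(raw.decode("utf-8"))["text"]

def generate(prompt: str, url: str = "http://127.0.0.1:8000/generate",
             transport: Optional[Callable[[str, bytes], bytes]] = None) -> str:
    """Send a prompt to the model server and return the generated text."""
    if transport is None:
        def transport(u: str, body: bytes) -> bytes:
            # Default transport: a plain HTTP POST with a JSON body.
            req = urllib.request.Request(
                u, data=body, headers={"Content-Type": "application/json"})
            with urllib.request.urlopen(req) as resp:
                return resp.read()
    return parse_response(transport(url, build_request(prompt)))
```

Injecting a fake transport (any callable taking the URL and body and returning response bytes) makes the request/response handling testable offline.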
