This repository contains code to generate datasets and run experiments for using large language models (LLMs) to compute extensions of various abstract argumentation semantics.
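For intuition about the task (a minimal, hypothetical sketch, not code from this repository): an abstract argumentation framework is a set of arguments plus an attack relation, and a semantics selects *extensions*, i.e. sets of jointly acceptable arguments. Under the stable semantics, for instance, an extension is a conflict-free set that attacks every argument outside it, which can be enumerated by brute force on small AFs:

```python
from itertools import combinations

def stable_extensions(args, attacks):
    """Brute-force enumeration of the stable extensions of an AF.

    A set S is stable if no member attacks another member
    (conflict-free) and S attacks every argument outside S.
    `attacks` is a set of (attacker, target) pairs.
    """
    exts = []
    for r in range(len(args) + 1):
        for cand in combinations(args, r):
            s = set(cand)
            conflict_free = not any((a, b) in attacks for a in s for b in s)
            attacks_rest = all(any((a, b) in attacks for a in s)
                               for b in set(args) - s)
            if conflict_free and attacks_rest:
                exts.append(s)
    return exts

# a attacks b, b attacks c: the only stable extension is {a, c}
print(stable_extensions(["a", "b", "c"], {("a", "b"), ("b", "c")}))
```

Real generators and solvers scale far beyond this enumeration; the sketch only pins down the definitions the experiments target.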
First, create a Python 3 virtual environment (tested with Python 3.10), then install the required packages using pip or conda:

- Install PyTorch (tested with version 2.5.1), with the CUDA/CPU settings appropriate for your system.
- Install the other dependencies:

  ```sh
  pip install -r requirements.txt
  ```
Navigate to the `src/data/generators/vendor` directory and compile the Argumentation Framework generators (compiling requires Java, Ant, and Maven):

```sh
./install.sh
```
To generate Argumentation Frameworks (AFs), we use the generators compiled above. The `src/data` folder contains the data generation scripts:
- `generate_apx.py`: generates AFs in APX format.
- `apx_to_afs.py`: converts APX files to `ArgumentationFramework` objects and computes extensions and argument acceptance.
- `afs_to_enforcement.py`: generates and solves status and extension enforcement problems for an AF.
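For reference, APX is the ASPARTIX fact format, which lists arguments as `arg(a).` and attacks as `att(a,b).`. A minimal parser (a hypothetical sketch; the repository's actual `ArgumentationFramework` conversion may differ) looks like:

```python
import re

def parse_apx(text):
    """Parse APX facts like `arg(a).` and `att(a,b).` into
    a (arguments, attacks) pair of sets."""
    args = set(re.findall(r"arg\((\w+)\)\.", text))
    attacks = set(re.findall(r"att\((\w+),(\w+)\)\.", text))
    return args, attacks

args, attacks = parse_apx("arg(a).\narg(b).\natt(a,b).")
```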
To generate data, simply run:

```sh
./generate_data.sh
```
To fine-tune models, we use LLaMA-Factory:

- Download LLaMA-Factory and place it as the `llama_factory` folder.
- Enter the `llama_factory` directory and install the requirements following its `README.md`.
- Download models.
- Copy your train and test datasets into `llama_factory/data`, and update the dataset information accordingly.
- Go to `examples/train_lora`, edit `llama3_lora_sft.yaml`, then start training:

  ```sh
  llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml
  ```
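Updating the dataset information means registering each dataset in `llama_factory/data/dataset_info.json`. For example, an entry for a hypothetical Alpaca-style `af_train.json` (the dataset name and file name here are placeholders) might look like:

```json
{
  "af_train": {
    "file_name": "af_train.json",
    "columns": {
      "prompt": "instruction",
      "query": "input",
      "response": "output"
    }
  }
}
```

The registered name (`af_train` here) is what goes in the `dataset` field of the training YAML.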
This project builds on code and ideas from the following:
- Craandijk, Dennis, and Floris Bex. "Enforcement heuristics for argumentation with deep reinforcement learning." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 36. No. 5. 2022.
- Zheng, Yaowei, et al. "LlamaFactory: Unified efficient fine-tuning of 100+ language models." arXiv preprint arXiv:2403.13372 (2024).