Release PR: v0.4.0 #120

Merged: 43 commits, Feb 12, 2025
Commits
fb73c91
Merge pull request #103 from cvs-health/main
dylanbouchard Jan 15, 2025
7cafde2
fix typos in notebook
dylanbouchard Jan 24, 2025
760abdf
fixed issue #96 for customizable failure messages by adding failure_me…
Riddhimaan-Senapati Jan 26, 2025
ebc18cf
fixed formatting using ruff
Riddhimaan-Senapati Jan 26, 2025
4a957a9
added support for failure_message being a dict in counterfactual.py,g…
Riddhimaan-Senapati Jan 26, 2025
c657c8a
tweaked _async_api_call to handle the case when an exception is not i…
Riddhimaan-Senapati Jan 26, 2025
4dfe12e
fixed non-completion rate for the case that an exception pops up whic…
Riddhimaan-Senapati Jan 27, 2025
99f92fe
fixed the code according to the review I got
Riddhimaan-Senapati Jan 27, 2025
d74fb4c
changed data structure of failure_messages to set to make it run faster
Riddhimaan-Senapati Jan 27, 2025
f69af86
Merge pull request #109 from cvs-health/db/fix_typos
dylanbouchard Jan 27, 2025
f3364f2
minor refactoring
dylanbouchard Jan 31, 2025
a2046c6
update unit tests
dylanbouchard Jan 31, 2025
405afc1
update documentation
dylanbouchard Jan 31, 2025
b7edda0
refactored the code to remove failure_messages and just use suppresse…
Riddhimaan-Senapati Jan 31, 2025
ac5dd2d
fixed the definition of suppressed_exceptions to include Dict as well
Riddhimaan-Senapati Jan 31, 2025
1359633
Merge pull request #110 from Riddhimaan-Senapati/main
dylanbouchard Jan 31, 2025
3e60997
Merge branch 'develop' into db/use_n_param
dylanbouchard Jan 31, 2025
82a7fd6
include fallback logic if n fails
dylanbouchard Feb 3, 2025
9c9df00
update notebooks and refactor auto
dylanbouchard Feb 3, 2025
0d68332
fix helper
dylanbouchard Feb 3, 2025
f7ad863
Merge pull request #112 from cvs-health/db/use_n_param
dylanbouchard Feb 3, 2025
801d442
update notebooks
zeya30 Feb 5, 2025
779f515
Merge pull request #114 from zeya30/notebook-edits
dylanbouchard Feb 5, 2025
e37089e
adding jupyter libs for dev dep
dskarbrevik Feb 6, 2025
4499bdd
Merge pull request #115 from dskarbrevik/ds/dev-deps
dylanbouchard Feb 6, 2025
a9b1066
adding dotenv
dskarbrevik Feb 6, 2025
974b803
Merge branch 'develop' into ds/dev-deps
dskarbrevik Feb 6, 2025
126762c
new extra_llms dep
dskarbrevik Feb 6, 2025
67ccf29
Merge pull request #117 from dskarbrevik/ds/dev-deps
dylanbouchard Feb 6, 2025
e2e8375
update default use of n
dylanbouchard Feb 10, 2025
b63902b
update n handling
dylanbouchard Feb 10, 2025
f74578b
Merge pull request #119 from cvs-health/db/update_default_n
dylanbouchard Feb 10, 2025
a15b964
fix warning message
dylanbouchard Feb 10, 2025
f12975f
update version
dylanbouchard Feb 10, 2025
659f859
update warning message
dylanbouchard Feb 11, 2025
019f5e2
add new graphic to readme
dylanbouchard Feb 11, 2025
b6623c3
patch transformers security issue
dylanbouchard Feb 11, 2025
c501584
update docstring
dylanbouchard Feb 11, 2025
a81c643
fix dependencies and update docstring
dylanbouchard Feb 11, 2025
a04d2eb
docstring updates
dylanbouchard Feb 11, 2025
2cf798b
paragraph -> links in readme
dylanbouchard Feb 12, 2025
496d671
notebook update
dylanbouchard Feb 12, 2025
0ddbfcc
fix links
dylanbouchard Feb 12, 2025
17 changes: 14 additions & 3 deletions README.md
@@ -10,15 +10,26 @@
[![](https://img.shields.io/badge/arXiv-2407.10853-B31B1B.svg)](https://arxiv.org/abs/2407.10853)


LangFair is a comprehensive Python library designed for conducting bias and fairness assessments of large language model (LLM) use cases. This repository includes a comprehensive framework for [choosing bias and fairness metrics](https://github.com/cvs-health/langfair/tree/main#-choosing-bias-and-fairness-metrics-for-an-llm-use-case), along with [demo notebooks](https://github.com/cvs-health/langfair/tree/main/examples) and a [technical playbook](https://arxiv.org/abs/2407.10853) that discusses LLM bias and fairness risks, evaluation metrics, and best practices.
LangFair is a comprehensive Python library designed for conducting bias and fairness assessments of large language model (LLM) use cases. This repository includes various supporting resources, including

Explore our [documentation site](https://cvs-health.github.io/langfair/) for detailed instructions on using LangFair.
- [Documentation site](https://cvs-health.github.io/langfair/) with complete API reference
- [Comprehensive framework](https://github.com/cvs-health/langfair/tree/main#-choosing-bias-and-fairness-metrics-for-an-llm-use-case) for choosing bias and fairness metrics
- [Demo notebooks](https://github.com/cvs-health/langfair/tree/main#-example-notebooks) providing illustrative examples
- [LangFair tutorial](https://medium.com/cvs-health-tech-blog/how-to-assess-your-llm-use-case-for-bias-and-fairness-with-langfair-7be89c0c4fab) on Medium
- [Software paper](https://arxiv.org/abs/2501.03112v1) on how LangFair compares to other toolkits
- [Research paper](https://arxiv.org/abs/2407.10853) on our evaluation approach

## 🚀 Why Choose LangFair?
Static benchmark assessments, which are typically assumed to be sufficiently representative, often fall short in capturing the risks associated with all possible use cases of LLMs. These models are increasingly used in various applications, including recommendation systems, classification, text generation, and summarization. However, evaluating these models without considering use-case-specific prompts can lead to misleading assessments of their performance, especially regarding bias and fairness risks.

LangFair addresses this gap by adopting a Bring Your Own Prompts (BYOP) approach, allowing users to tailor bias and fairness evaluations to their specific use cases. This ensures that the metrics computed reflect the true performance of the LLMs in real-world scenarios, where prompt-specific risks are critical. Additionally, LangFair's focus is on output-based metrics that are practical for governance audits and real-world testing, without needing access to internal model states.

<p align="center">
<img src="https://raw.githubusercontent.com/cvs-health/langfair/release-branch/v0.4.0/assets/images/langfair_graphic.png" />
</p>

**Note:** This diagram illustrates the workflow for assessing bias and fairness in text generation and summarization use cases.
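
For illustration, a minimal BYOP sketch (not part of this diff) might look like the following. It assumes the `AutoEval` class from `langfair.auto` and a LangChain `BaseChatModel` such as `ChatOpenAI`; the model name and prompts are placeholders, so consult the documentation site for the exact API.

```python
# Illustrative BYOP sketch -- assumes langfair's AutoEval and a LangChain chat
# model; the model name and prompts below are placeholders.
import asyncio

from langchain_openai import ChatOpenAI
from langfair.auto import AutoEval

# Bring Your Own Prompts: sample prompts from your actual use case
prompts = [
    "Summarize the following customer note: ...",
    "Draft a polite reply to this message: ...",
]

llm = ChatOpenAI(model="gpt-4o-mini", temperature=1)

auto_object = AutoEval(
    prompts=prompts,      # use-case-specific prompts
    langchain_llm=llm,    # responses are generated from these prompts
)

# evaluate() is awaited in the demo notebooks; from a script, use asyncio.run
results = asyncio.run(auto_object.evaluate())
print(results)
```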

## ⚡ Quickstart Guide
### (Optional) Create a virtual environment for using LangFair
We recommend creating a new virtual environment using venv before installing LangFair. To do so, please follow instructions [here](https://docs.python.org/3/library/venv.html).
@@ -157,7 +168,7 @@ Explore the following demo notebooks to see how to use LangFair for various bias
## 🛠 Choosing Bias and Fairness Metrics for an LLM Use Case
Selecting the appropriate bias and fairness metrics is essential for accurately assessing the performance of large language models (LLMs) in specific use cases. Instead of attempting to compute all possible metrics, practitioners should focus on a relevant subset that aligns with their specific goals and the context of their application.

Our decision framework for selecting appropriate evaluation metrics is illustrated in the diagram below. For more details, refer to our [technical playbook](https://arxiv.org/abs/2407.10853).
Our decision framework for selecting appropriate evaluation metrics is illustrated in the diagram below. For more details, refer to our [research paper](https://arxiv.org/abs/2407.10853) detailing the evaluation approach.

<p align="center">
<img src="https://raw.githubusercontent.com/cvs-health/langfair/main/assets/images/use_case_framework.PNG" />
Binary file added assets/images/langfair_graphic.png
@@ -369,7 +369,7 @@
"$$ SERP\\text{-}K = \\frac{1}{N} \\sum_{i=1}^N \\min(\\psi(X_i', X_i''),\\psi(X_i'', X_i')) $$\n",
"where $\\hat{R}_i', \\hat{R}_i''$ respectively denote the generated lists of recommendations by an LLM from the counterfactual input pair $(X_i', X_i'')$, $v$ is a recommendation from $\\hat{R}_i'$, and $rank(v,\\hat{R}_i')$ denotes the rank of $v$ in $\\hat{R}_i'$. Note that the use of $\\min(\\cdot,\\cdot)$ is included to achieve symmetry.\n",
"\n",
"##### Pairwise Pairwise Ranking Accuracy Gap at K (PRAG-K)\n",
"##### Pairwise Ranking Accuracy Gap at K (PRAG-K)\n",
"PRAG-K reflects the similarity in pairwise ranking between two recommendation results. The pairwise adaptation of PRAG-K is defined as follows:\n",
"\n",
"$$rankmatch_i(v_1,v_2) = I(rank(v_1,\\hat{R}_i')<rank(v_2,\\hat{R}_i'))*I(rank(v_1,\\hat{R}_i'')<rank(v_2,\\hat{R}_i''))$$\n",
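
The rankmatch indicator above can be read as a pairwise agreement check between the two counterfactual recommendation lists. Below is a small illustrative sketch (not taken from the notebook); how items missing from a list are ranked is an assumption.

```python
# Illustrative sketch of the rankmatch_i indicator above (not library code).
# R1 and R2 stand in for the counterfactual recommendation lists R'_i and R''_i,
# given as ordered Python lists; missing items are assumed to rank last.

def rank(v, R):
    """1-indexed rank of item v in list R; items not in R rank after everything."""
    return R.index(v) + 1 if v in R else len(R) + 1

def rankmatch(v1, v2, R1, R2):
    """1 if v1 outranks v2 in both lists, else 0."""
    return int(rank(v1, R1) < rank(v2, R1)) * int(rank(v1, R2) < rank(v2, R2))

R1 = ["a", "b", "c"]
R2 = ["a", "c", "b"]
print(rankmatch("a", "b", R1, R2))  # 1: "a" outranks "b" in both lists
print(rankmatch("b", "c", R1, R2))  # 0: the order of "b" and "c" flips between lists
```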
12 changes: 7 additions & 5 deletions examples/evaluations/text_generation/auto_eval_demo.ipynb
@@ -129,8 +129,9 @@
"A list of input prompts for the model.\n",
"- `responses` - (**list of strings, default=None**)\n",
"A list of generated output from an LLM. If not available, responses are computed using the model.\n",
"- `langchain_llm` (**langchain llm (Runnable), default=None**) A langchain llm object to get passed to LLMChain `llm` argument. \n",
"- `suppressed_exceptions` (**tuple, default=None**) Specifies which exceptions to handle as 'Unable to get response' rather than raising the exception\n",
"- `langchain_llm` (**langchain llm (BaseChatModel), default=None**) A langchain llm (`BaseChatModel`). \n",
"- `suppressed_exceptions` (**tuple or dict, default=None**) If a tuple, specifies which exceptions to handle as 'Unable to get response' rather than raising the exception. If a dict, enables users to specify exception-specific failure messages with keys being subclasses of BaseException\n",
"- `use_n_param` (**bool, default=False**) Specifies whether to use `n` parameter for `BaseChatModel`. Not compatible with all `BaseChatModel` classes. If used, it speeds up the generation process substantially when count > 1.\n",
"- `metrics` - (**dict or list of str, default is all metrics**)\n",
"Specifies which metrics to evaluate.\n",
"- `toxicity_device` - (**str or torch.device input or torch.device object, default=\"cpu\"**)\n",
@@ -143,6 +144,7 @@
"1. `evaluate` - Compute supported metrics and, optionally, response-level scores.\n",
"\n",
" **Method Attributes:**\n",
" - `count` - (**int, default=25**) Specifies number of responses to generate for each prompt. \n",
" - `metrics` - (**dict or list of str, default=None**)\n",
" Specifies which metrics to evaluate if a change is desired from those specified in self.metrics.\n",
" - `return_data` : (**bool, default=False**)\n",
@@ -805,9 +807,9 @@
"uri": "us-docker.pkg.dev/deeplearning-platform-release/gcr.io/workbench-notebooks:m125"
},
"kernelspec": {
"display_name": "langchain",
"display_name": ".venv",
"language": "python",
"name": "langchain"
"name": "python3"
},
"language_info": {
"codemirror_mode": {
@@ -819,7 +821,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.10"
"version": "3.9.6"
}
},
"nbformat": 4,
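
To tie together the docstring changes in auto_eval_demo.ipynb above, here is a hedged sketch of how the dict form of `suppressed_exceptions`, the `use_n_param` flag, and the `count` argument to `evaluate` might be combined. It assumes `AutoEval` from `langfair.auto` and the OpenAI client exception classes; the model name, prompts, and failure messages are placeholders.

```python
# Sketch only -- follows the docstrings in this diff; exact behavior may differ by release.
import asyncio

from langchain_openai import ChatOpenAI
from openai import APIConnectionError, RateLimitError
from langfair.auto import AutoEval

llm = ChatOpenAI(model="gpt-4o-mini", temperature=1)

auto_object = AutoEval(
    prompts=["Summarize the following note: ..."],
    langchain_llm=llm,
    # dict form: exception-specific failure messages, keyed by BaseException subclasses
    suppressed_exceptions={
        RateLimitError: "Unable to get response: rate limited",
        APIConnectionError: "Unable to get response: connection error",
    },
    # use the model's `n` parameter to batch generations; not supported by every BaseChatModel
    use_n_param=True,
)

# `count` sets how many responses are generated per prompt (default 25)
results = asyncio.run(auto_object.evaluate(count=10))
```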