
Commit

Merge remote-tracking branch 'upstream/main' into re/response-caching-111
renzmann committed Feb 12, 2025
2 parents 3c9c3b5 + 6ceb521 commit aa02423
Showing 12 changed files with 1,033 additions and 978 deletions.
17 changes: 14 additions & 3 deletions README.md
@@ -10,15 +10,26 @@
[![](https://img.shields.io/badge/arXiv-2407.10853-B31B1B.svg)](https://arxiv.org/abs/2407.10853)


LangFair is a comprehensive Python library designed for conducting bias and fairness assessments of large language model (LLM) use cases. This repository includes a comprehensive framework for [choosing bias and fairness metrics](https://github.com/cvs-health/langfair/tree/main#-choosing-bias-and-fairness-metrics-for-an-llm-use-case), along with [demo notebooks](https://github.com/cvs-health/langfair/tree/main/examples) and a [technical playbook](https://arxiv.org/abs/2407.10853) that discusses LLM bias and fairness risks, evaluation metrics, and best practices.
LangFair is a comprehensive Python library designed for conducting bias and fairness assessments of large language model (LLM) use cases. This repository offers various supporting resources, including

Explore our [documentation site](https://cvs-health.github.io/langfair/) for detailed instructions on using LangFair.
- [Documentation site](https://cvs-health.github.io/langfair/) with complete API reference
- [Comprehensive framework](https://github.com/cvs-health/langfair/tree/main#-choosing-bias-and-fairness-metrics-for-an-llm-use-case) for choosing bias and fairness metrics
- [Demo notebooks](https://github.com/cvs-health/langfair/tree/main#-example-notebooks) providing illustrative examples
- [LangFair tutorial](https://medium.com/cvs-health-tech-blog/how-to-assess-your-llm-use-case-for-bias-and-fairness-with-langfair-7be89c0c4fab) on Medium
- [Software paper](https://arxiv.org/abs/2501.03112v1) on how LangFair compares to other toolkits
- [Research paper](https://arxiv.org/abs/2407.10853) on our evaluation approach

## 🚀 Why Choose LangFair?
Static benchmark assessments, which are typically assumed to be sufficiently representative, often fall short in capturing the risks associated with all possible use cases of LLMs. These models are increasingly used in various applications, including recommendation systems, classification, text generation, and summarization. However, evaluating these models without considering use-case-specific prompts can lead to misleading assessments of their performance, especially regarding bias and fairness risks.

LangFair addresses this gap by adopting a Bring Your Own Prompts (BYOP) approach, allowing users to tailor bias and fairness evaluations to their specific use cases. This ensures that the metrics computed reflect the true performance of the LLMs in real-world scenarios, where prompt-specific risks are critical. Additionally, LangFair's focus is on output-based metrics that are practical for governance audits and real-world testing, without needing access to internal model states.

<p align="center">
<img src="https://raw.githubusercontent.com/cvs-health/langfair/release-branch/v0.4.0/assets/images/langfair_graphic.png" />
</p>

**Note:** This diagram illustrates the workflow for assessing bias and fairness in text generation and summarization use cases.

## ⚡ Quickstart Guide
### (Optional) Create a virtual environment for using LangFair
We recommend creating a new virtual environment using venv before installing LangFair. To do so, please follow instructions [here](https://docs.python.org/3/library/venv.html).
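For reference, a minimal sketch of that workflow (standard venv commands from the linked Python docs; the activation step varies by OS and shell):

```python
# Shell steps (create and activate the environment first):
#   python -m venv .venv
#   source .venv/bin/activate        # Windows: .venv\Scripts\activate
# Then install LangFair, either from the shell (`pip install langfair`)
# or programmatically:
import subprocess, sys

subprocess.check_call([sys.executable, "-m", "pip", "install", "langfair"])
```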
@@ -157,7 +168,7 @@ Explore the following demo notebooks to see how to use LangFair for various bias
## 🛠 Choosing Bias and Fairness Metrics for an LLM Use Case
Selecting the appropriate bias and fairness metrics is essential for accurately assessing the performance of large language models (LLMs) in specific use cases. Instead of attempting to compute all possible metrics, practitioners should focus on a relevant subset that aligns with their specific goals and the context of their application.

Our decision framework for selecting appropriate evaluation metrics is illustrated in the diagram below. For more details, refer to our [technical playbook](https://arxiv.org/abs/2407.10853).
Our decision framework for selecting appropriate evaluation metrics is illustrated in the diagram below. For more details, refer to our [research paper](https://arxiv.org/abs/2407.10853) on our evaluation approach.

<p align="center">
<img src="https://raw.githubusercontent.com/cvs-health/langfair/main/assets/images/use_case_framework.PNG" />
Binary file added assets/images/langfair_graphic.png
2 changes: 1 addition & 1 deletion examples/evaluations/text_generation/auto_eval_demo.ipynb
@@ -130,7 +130,7 @@
"- `responses` - (**list of strings, default=None**)\n",
"A list of generated output from an LLM. If not available, responses are computed using the model.\n",
"- `langchain_llm` (**langchain llm (BaseChatModel), default=None**) A langchain llm (`BaseChatModel`). \n",
"- `suppressed_exceptions` (**tuple, default=None**) Specifies which exceptions to handle as 'Unable to get response' rather than raising the exception\n",
"- `suppressed_exceptions` (**tuple or dict, default=None**) If a tuple, specifies which exceptions to handle as 'Unable to get response' rather than raising the exception. If a dict, enables users to specify exception-specific failure messages with keys being subclasses of BaseException\n",
"- `use_n_param` (**bool, default=False**) Specifies whether to use `n` parameter for `BaseChatModel`. Not compatible with all `BaseChatModel` classes. If used, it speeds up the generation process substantially when count > 1.\n",
"- `metrics` - (**dict or list of str, default is all metrics**)\n",
"Specifies which metrics to evaluate.\n",
@@ -173,7 +173,7 @@
"**Class Attributes:**\n",
"\n",
"- `langchain_llm` (**langchain llm (BaseChatModel), default=None**) A langchain llm (`BaseChatModel`). \n",
"- `suppressed_exceptions` (**tuple, default=None**) Specifies which exceptions to handle as 'Unable to get response' rather than raising the exception\n",
"- `suppressed_exceptions` (**tuple or dict, default=None**) If a tuple, specifies which exceptions to handle as 'Unable to get response' rather than raising the exception. If a dict, enables users to specify exception-specific failure messages with keys being subclasses of BaseException\n",
"- `use_n_param` (**bool, default=False**) Specifies whether to use `n` parameter for `BaseChatModel`. Not compatible with all `BaseChatModel` classes. If used, it speeds up the generation process substantially when count > 1.\n",
"- `max_calls_per_min` (**deprecated as of 0.2.0**) Use LangChain's InMemoryRateLimiter instead."
]
@@ -192,7 +192,7 @@
"##### Class parameters:\n",
"\n",
"- `langchain_llm` (**langchain llm (BaseChatModel), default=None**) A langchain llm (`BaseChatModel`). \n",
"- `suppressed_exceptions` (**tuple, default=None**) Specifies which exceptions to handle as 'Unable to get response' rather than raising the exception\n",
"- `suppressed_exceptions` (**tuple or dict, default=None**) If a tuple, specifies which exceptions to handle as 'Unable to get response' rather than raising the exception. If a dict, enables users to specify exception-specific failure messages with keys being subclasses of BaseException\n",
"- `use_n_param` (**bool, default=False**) Specifies whether to use `n` parameter for `BaseChatModel`. Not compatible with all `BaseChatModel` classes. If used, it speeds up the generation process substantially when count > 1.\n",
"- `max_calls_per_min` (**Deprecated as of 0.2.0**) Use LangChain's InMemoryRateLimiter instead.\n",
"\n",
88 changes: 40 additions & 48 deletions examples/evaluations/text_generation/toxicity_metrics_demo.ipynb
@@ -185,8 +185,8 @@
"##### Class parameters:\n",
"\n",
"- `langchain_llm` (**langchain llm (BaseChatModel), default=None**) A langchain llm (`BaseChatModel`). \n",
"- `suppressed_exceptions` (**tuple, default=None**) Specifies which exceptions to handle as 'Unable to get response' rather than raising the exception\n",
"- `use_n_param` (**bool, default=False**) Specifies whether to use `n` parameter for `BaseLaBaseChatModelnguageModel`. Not compatible with all `BaseChatModel` classes. If used, it speeds up the generation process substantially when count > 1.\n",
"- `suppressed_exceptions` (**tuple or dict, default=None**) If a tuple, specifies which exceptions to handle as 'Unable to get response' rather than raising the exception. If a dict, enables users to specify exception-specific failure messages with keys being subclasses of BaseException\n",
"- `use_n_param` (**bool, default=False**) Specifies whether to use `n` parameter for `BaseChatModel`. Not compatible with all `BaseChatModel` classes. If used, it speeds up the generation process substantially when count > 1.\n",
"- `max_calls_per_min` (**Deprecated as of 0.2.0**) Use LangChain's InMemoryRateLimiter instead.\n",
"\n",
"##### Methods:\n",
@@ -211,7 +211,7 @@
"source": [
"Below we use LangFair's `ResponseGenerator` class to generate LLM responses, which will be used to compute evaluation metrics. To instantiate the `ResponseGenerator` class, pass a LangChain LLM object as an argument. \n",
"\n",
"**Important note: We provide three examples of LangChain LLMs below, but these can be replaced with a LangChain LLM of your choice.**\n",
"**Important note: We provide four examples of LangChain LLMs below, but these can be replaced with a LangChain LLM of your choice.**\n",
"\n",
"To understand more about how to instantiate the langchain llm of your choice read more here:\n",
"https://python.langchain.com/docs/integrations/chat/"
@@ -299,77 +299,69 @@
},
{
"cell_type": "markdown",
"id": "34047018-b43a-4a66-b5bb-024a3c659ab2",
"id": "c214ff7d",
"metadata": {},
"source": [
"Example 3: OpenAI on Azure"
"Example 3: OpenAI (non-azure)"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "8cb7cc8b-4634-4ea4-a23f-f3eea88528fe",
"metadata": {
"tags": []
},
"execution_count": 6,
"id": "93b26b73",
"metadata": {},
"outputs": [],
"source": [
"# # Run if langchain-openai not installed\n",
"# import sys\n",
"# !{sys.executable} -m pip install langchain-openai\n",
"\n",
"# import openai\n",
"# from langchain_openai import AzureChatOpenAI, AzureOpenAI\n",
"\n",
"# llm = AzureChatOpenAI(\n",
"# deployment_name=DEPLOYMENT_NAME,\n",
"# openai_api_key=API_KEY,\n",
"# azure_endpoint=API_BASE,\n",
"# openai_api_type=API_TYPE,\n",
"# openai_api_version=API_VERSION,\n",
"# temperature=1, # User to set temperature\n",
"# #rate_limiter=rate_limiter\n",
"# from langchain_openai import ChatOpenAI\n",
"\n",
"# rate_limiter = InMemoryRateLimiter(\n",
"# requests_per_second=.1, \n",
"# check_every_n_seconds=10, \n",
"# max_bucket_size=10, \n",
"# )\n",
"\n",
"# # Define exceptions to suppress\n",
"# suppressed_exceptions = (openai.BadRequestError, ValueError) # this suppresses content filtering errors"
"# # Initialize ChatOpenAI with the rate limiter\n",
"# llm = ChatOpenAI(model_name=\"gpt-3.5-turbo\", max_tokens=100, rate_limiter=rate_limiter)\n",
"# suppressed_exceptions = (openai.BadRequestError, ValueError)\n"
]
},
{
"cell_type": "markdown",
"id": "c214ff7d",
"id": "34047018-b43a-4a66-b5bb-024a3c659ab2",
"metadata": {},
"source": [
"Example 4: OpenAI (non-azure)"
"Example 4: OpenAI on Azure"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "93b26b73",
"metadata": {},
"execution_count": 4,
"id": "8cb7cc8b-4634-4ea4-a23f-f3eea88528fe",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"import openai\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"# llm = OpenAI(\n",
"# model=\"gpt-3.5-turbo\",\n",
"# temperature=1,\n",
"# rate_limiter=rate_limiter\n",
"# )\n",
"# Run if langchain-openai not installed\n",
"# import sys\n",
"# !{sys.executable} -m pip install langchain-openai\n",
"\n",
"rate_limiter = InMemoryRateLimiter(\n",
" requests_per_second=.1, \n",
" check_every_n_seconds=10, \n",
" max_bucket_size=10, \n",
"import openai\n",
"from langchain_openai import AzureChatOpenAI, AzureOpenAI\n",
"\n",
"llm = AzureChatOpenAI(\n",
" deployment_name=DEPLOYMENT_NAME,\n",
" openai_api_key=API_KEY,\n",
" azure_endpoint=API_BASE,\n",
" openai_api_type=API_TYPE,\n",
" openai_api_version=API_VERSION,\n",
" temperature=1, # User to set temperature\n",
" #rate_limiter=rate_limiter\n",
")\n",
"\n",
"# Initialize OpenAI with the rate limiter\n",
"# llm = OpenAI(model_name=\"text-davinci-003\", max_tokens=100, callbacks=[rate_limiter])\n",
"llm = ChatOpenAI(model_name=\"gpt-3.5-turbo\", max_tokens=100, rate_limiter=rate_limiter)\n",
"\n",
"suppressed_exceptions = (openai.BadRequestError, ValueError)\n"
"# Define exceptions to suppress\n",
"suppressed_exceptions = (openai.BadRequestError, ValueError) # this suppresses content filtering errors"
]
},
{
4 changes: 2 additions & 2 deletions examples/generators/response_generator_demo.ipynb
@@ -95,8 +95,8 @@
"\n",
"##### Class parameters:\n",
"\n",
"- `langchain_llm` (**langchain llm (Runnable), default=None**) A langchain llm object to get passed to LLMChain `llm` argument.\n",
"- `suppressed_exceptions` (**tuple, default=None**) Specifies which exceptions to handle as 'Unable to get response' rather than raising the exception\n",
"- `langchain_llm` (**langchain llm (BaseChatModel), default=None**) A langchain llm (`BaseChatModel`). \n",
"- `suppressed_exceptions` (**tuple or dict, default=None**) If a tuple, specifies which exceptions to handle as 'Unable to get response' rather than raising the exception. If a dict, enables users to specify exception-specific failure messages with keys being subclasses of BaseException\n",
"- `use_n_param` (**bool, default=False**) Specifies whether to use `n` parameter for `BaseChatModel`. Not compatible with all `BaseChatModel` classes. If used, it speeds up the generation process substantially when count > 1.\n",
"- `max_calls_per_min` (**Deprecated as of 0.2.0**) Use LangChain's InMemoryRateLimiter instead."
]
4 changes: 2 additions & 2 deletions langfair/auto/auto.py
@@ -68,8 +68,8 @@ def __init__(
relevant parameters to the constructor of their `langchain_llm` object.
suppressed_exceptions : tuple or dict, default=None
If a tuple,Specifies which exceptions to handle as 'Unable to get response' rather than raising the
exception. If a dict,enables users to specify exception-specific failure messages with keys being subclasses
If a tuple, specifies which exceptions to handle as 'Unable to get response' rather than raising the
exception. If a dict, enables users to specify exception-specific failure messages with keys being subclasses
of BaseException
use_n_param : bool, default=False
4 changes: 2 additions & 2 deletions langfair/generator/counterfactual.py
@@ -76,8 +76,8 @@ def __init__(
relevant parameters to the constructor of their `langchain_llm` object.
suppressed_exceptions : tuple or dict, default=None
If a tuple,Specifies which exceptions to handle as 'Unable to get response' rather than raising the
exception. If a dict,enables users to specify exception-specific failure messages with keys being subclasses
If a tuple, specifies which exceptions to handle as 'Unable to get response' rather than raising the
exception. If a dict, enables users to specify exception-specific failure messages with keys being subclasses
of BaseException
use_n_param : bool, default=False
10 changes: 7 additions & 3 deletions langfair/generator/generator.py
@@ -42,7 +42,11 @@ class GeneratedResponses(TypedDict):
metadata: ResponseMetadata


N_PARAM_WARNING = """Use of `n` parameter is not compatible with all BaseChatModels. Ensure your BaseChatModel is compatible."""
N_PARAM_WARNING = """
The 'use_n_param' parameter may not be compatible with all BaseChatModel instances.
Please ensure that your specific BaseChatModel has an 'n' attribute and supports setting 'n' to a value up to 'count'.
Note that some BaseChatModel instances only support 'n' up to a certain value. If 'count' exceeds this value, an error may occur.
"""


@final
@@ -66,8 +70,8 @@ def __init__(
relevant parameters to the constructor of their `langchain_llm` object.
suppressed_exceptions : tuple or dict, default=None
If a tuple,Specifies which exceptions to handle as 'Unable to get response' rather than raising the
exception. If a dict,enables users to specify exception-specific failure messages with keys being subclasses
If a tuple, specifies which exceptions to handle as 'Unable to get response' rather than raising the
exception. If a dict, enables users to specify exception-specific failure messages with keys being subclasses
of BaseException
use_n_param : bool, default=False
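For illustration, a hedged usage sketch of the `use_n_param` behavior that the warning above describes (assumes a model such as `ChatOpenAI` whose API accepts `n`):

```python
from langchain_openai import ChatOpenAI
from langfair.generator import ResponseGenerator

# With use_n_param=True, the generator sets the model's `n` attribute so that
# `count` completions per prompt come from a single call; models that cap `n`
# below `count` will raise an error, as the warning notes.
llm = ChatOpenAI(model_name="gpt-3.5-turbo")
generator = ResponseGenerator(langchain_llm=llm, use_n_param=True)
```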