The GenderCARE framework for comprehensive gender bias assessment and reduction in LLMs. It consists of four key components: (I) Criteria for gender equality benchmarks; (II) Assessment of gender bias in LLMs using the proposed GenderPair benchmark aligned with the criteria; (III) Reduction of gender bias via counterfactual data augmentation and fine-tuning strategies; (IV) Evaluation metrics at both lexical and semantic levels for bias quantification.
Abstract
Large language models (LLMs) have exhibited remarkable capabilities in natural language generation, but they have also been observed to magnify societal biases, particularly those related to gender. In response to this issue, several benchmarks have been proposed to assess gender bias in LLMs. However, these benchmarks often lack practical flexibility or inadvertently introduce biases.
To address these shortcomings, we introduce GenderCARE, a comprehensive framework that encompasses innovative Criteria, bias Assessment, Reduction techniques, and Evaluation metrics for quantifying and mitigating gender bias in LLMs. To begin, we establish pioneering criteria for gender equality benchmarks, spanning dimensions such as inclusivity, diversity, explainability, objectivity, robustness, and realisticity. Guided by these criteria, we construct GenderPair, a novel pair-based benchmark designed to assess gender bias in LLMs comprehensively.
Our benchmark provides standardized and realistic evaluations, including previously overlooked gender groups such as transgender and non-binary individuals. Furthermore, we develop effective debiasing techniques that incorporate counterfactual data augmentation and specialized fine-tuning strategies to reduce gender bias in LLMs without compromising their overall performance. Extensive experiments demonstrate significant reductions across various gender bias benchmarks, peaking at over 90% and averaging above 35% across 17 different LLMs. Importantly, these reductions come with minimal impact on mainstream language tasks, with performance variability remaining below 2%.
By offering a realistic assessment and tailored reduction of gender biases, we hope GenderCARE represents a significant step toward achieving fairness and equity in LLMs.
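As a rough illustration of the counterfactual data augmentation step used in the reduction pipeline, the sketch below swaps gendered terms in training sentences to create counterfactual variants. The term pairs and function names here are illustrative assumptions for this page, not the actual target lists or implementation used by GenderCARE.

```python
import re

# Hypothetical term pairs for illustration; GenderCARE's actual target lists
# cover identities, titles, pronouns, and names across three gender groups.
SWAP_PAIRS = [("he", "she"), ("him", "her"), ("his", "her"),
              ("man", "woman"), ("father", "mother")]

def counterfactual_variants(sentence: str) -> list[str]:
    """Return counterfactual copies of `sentence` with gendered terms swapped."""
    variants = []
    for a, b in SWAP_PAIRS:
        # Swap in both directions, matching whole words case-insensitively.
        pattern = re.compile(rf"\b({a}|{b})\b", flags=re.IGNORECASE)
        if pattern.search(sentence):
            swapped = pattern.sub(
                lambda m: b if m.group(0).lower() == a else a, sentence)
            variants.append(swapped)
    return variants

# The original sentence and its counterfactuals would both be kept for fine-tuning.
print(counterfactual_variants("He thanked his father."))
```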
GenderPair Benchmark Statistics
Summary of the elements in the pair set utilized by the GenderPair benchmark. We delineate the distribution of gender targets, biased and anti-biased descriptors, and prompts across three distinct gender groups. The details of each element are documented in the appendix, available at our GitHub repository.
| Gender Group | Targets: Identities | Targets: Titles | Targets: Pronouns | Targets: Names | Biased Descriptors | Anti-Biased Descriptors | Prompts |
|---|---|---|---|---|---|---|---|
| Group 1 | 5 | 25 | 4 | 30 | 83 | 83 | 31,872 |
| Group 2 | 5 | 25 | 4 | 30 | 83 | 83 | 31,872 |
| Group 3 | 10 | 23 | 18 | 30 | 83 | 83 | 40,338 |
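The prompt counts above are consistent with a simple cross-product construction: one prompt per (gender target, descriptor) pair, each instantiated under three prompt variants. The check below reflects this reading of the table arithmetic; the constant of three variants per pair is an assumption, not a description of the official generation pipeline.

```python
# Per-group element counts taken from the table above.
groups = {
    "Group 1": {"targets": 5 + 25 + 4 + 30, "descriptors": 83 + 83},
    "Group 2": {"targets": 5 + 25 + 4 + 30, "descriptors": 83 + 83},
    "Group 3": {"targets": 10 + 23 + 18 + 30, "descriptors": 83 + 83},
}
VARIANTS_PER_PAIR = 3  # assumed number of prompt variants per (target, descriptor) pair

for name, g in groups.items():
    prompts = g["targets"] * g["descriptors"] * VARIANTS_PER_PAIR
    print(f"{name}: {prompts:,} prompts")  # 31,872 / 31,872 / 40,338
```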
GenderPair Prompt Examples
Example prompt instances from the proposed GenderPair benchmark. Each instance is constructed as [instruction & pair set & requirement]. For different configurations of the pair set, we provide two instruction options.
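For a concrete picture, here is a minimal sketch of how an [instruction & pair set & requirement] prompt could be assembled. The instruction, requirement, and descriptor strings are placeholders, not the benchmark's exact wording.

```python
# Placeholder instruction options and requirement; the real benchmark wording differs.
INSTRUCTIONS = [
    "Choose the descriptor that best completes the sentence about the target.",
    "Write one sentence linking the target to one of the two descriptors.",
]  # two instruction options, as noted above

REQUIREMENT = "Answer with the chosen descriptor only."

def build_prompt(target: str, biased: str, anti_biased: str, option: int = 0) -> str:
    """Assemble a prompt from the [instruction & pair set & requirement] template."""
    pair_set = f"Target: {target}. Descriptors: '{biased}' or '{anti_biased}'."
    return f"{INSTRUCTIONS[option]} {pair_set} {REQUIREMENT}"

print(build_prompt("the transgender woman", "overly emotional", "level-headed"))
```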
Debiasing Results
Interactive visualization of gender bias metrics before and after applying GenderCARE debiasing techniques. Select a model to compare the Bias-Pair Ratio, Toxicity, and Regard-Negative scores. Lower values indicate better performance (i.e., less bias).
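As a rough sketch of what a Bias-Pair Ratio-style score measures, the snippet below computes the fraction of scorable responses in which a model associates the target with the biased rather than the anti-biased descriptor. This reflects our reading of the metric's intent; the precise definition, along with the Toxicity and Regard scorers, is given in the paper.

```python
def bias_pair_ratio(responses, biased, anti_biased):
    """Fraction of responses containing the biased descriptor rather than the
    anti-biased one. Responses matching neither descriptor are ignored."""
    hits_biased, hits_total = 0, 0
    for resp, b, a in zip(responses, biased, anti_biased):
        text = resp.lower()
        picked_biased = b.lower() in text
        picked_anti = a.lower() in text
        if picked_biased or picked_anti:
            hits_total += 1
            hits_biased += int(picked_biased and not picked_anti)
    return hits_biased / hits_total if hits_total else 0.0

# Toy usage: one of two scorable responses chose the biased descriptor -> 0.5
print(bias_pair_ratio(
    ["She is overly emotional.", "She is level-headed."],
    ["overly emotional", "overly emotional"],
    ["level-headed", "level-headed"],
))
```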
BibTeX
@inproceedings{DBLP:conf/ccs/TangZZLDLQ00Y24,
author = {Kunsheng Tang and
Wenbo Zhou and
Jie Zhang and
Aishan Liu and
Gelei Deng and
Shuai Li and
Peigui Qi and
Weiming Zhang and
Tianwei Zhang and
Nenghai Yu},
title = {GenderCARE: {A} Comprehensive Framework for Assessing and Reducing
Gender Bias in Large Language Models},
booktitle = {Proceedings of the 2024 on {ACM} {SIGSAC} Conference on Computer and
Communications Security, {CCS} 2024, Salt Lake City, UT, USA, October
14-18, 2024},
pages = {1196--1210},
publisher = {{ACM}},
year = {2024},
url = {https://doi.org/10.1145/3658644.3670284},
doi = {10.1145/3658644.3670284},
}