{ "info": { "author": "Center for Data Science and Public Policy", "author_email": "datascifellows@gmail.com", "bugtrack_url": null, "classifiers": [ "Development Status :: 4 - Beta", "Intended Audience :: Developers", "License :: OSI Approved :: MIT License", "Natural Language :: English", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.7", "Programming Language :: Python :: 3.8", "Programming Language :: Python :: 3.9" ], "description": "# *Aequitas*: Bias Auditing & Fair ML Toolkit\n\n[PyPI](https://pypi.org/project/aequitas/)\n[MIT License](https://opensource.org/licenses/MIT)\n[Code style: black](https://github.com/python/black)\n\n[comment]: <> (Add badges for coverage when we have tests, update repo for other types of badges!)\n\n
| Type | Method | Description |
|---|---|---|
| Pre-processing | Data Repairer | Transforms the data distribution so that a given feature distribution is marginally independent of the sensitive attribute, *s*. |
| Pre-processing | Label Flipping | Flips the labels of a fraction of the training data according to the Fair Ordering-Based Noise Correction method. |
| Pre-processing | Prevalence Sampling | Generates a training sample with controllable balanced prevalence for the groups in the dataset, either by undersampling or oversampling. |
| Pre-processing | Unawareness | Removes features that are highly correlated with the sensitive attribute. |
| Pre-processing | Massaging | Flips selected labels to reduce prevalence disparity between groups. |
| In-processing | FairGBM | Novel method where a boosting trees algorithm (LightGBM) is subject to pre-defined fairness constraints. |
| In-processing | Fairlearn Classifier | Models from the Fairlearn reductions package. Possible parameterization for ExponentiatedGradient and GridSearch methods. |
| Post-processing | Group Threshold | Adjusts the threshold per group to obtain a certain fairness criterion (e.g., all groups with 10% FPR). |
| Post-processing | Balanced Group Threshold | Adjusts the threshold per group to obtain a certain fairness criterion, while satisfying a global constraint (e.g., Demographic Parity with a global FPR of 10%). |
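To make the post-processing idea concrete, here is a minimal sketch (not the Aequitas API; `group_thresholds` is a hypothetical helper) of per-group thresholding: for each group, pick the score cutoff that yields a target false-positive rate on that group's negatives.

```python
import numpy as np

def group_thresholds(scores, labels, groups, target_fpr=0.10):
    """Illustrative sketch: per group, choose the score threshold whose
    FPR on that group's negative examples is closest to target_fpr."""
    thresholds = {}
    for g in np.unique(groups):
        mask = groups == g
        # negatives of this group, scores sorted from highest to lowest
        neg_scores = np.sort(scores[mask][labels[mask] == 0])[::-1]
        # predicting positive for the top k negatives gives FPR = k / n_neg
        k = int(round(target_fpr * len(neg_scores)))
        # threshold at the k-th highest negative score (or accept none)
        thresholds[g] = neg_scores[k - 1] if k > 0 else np.inf
    return thresholds
```

Applying `scores >= thresholds[g]` within each group then equalizes the FPR across groups at roughly the target level, at the cost of using group-specific decision rules.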
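Similarly, the pre-processing idea behind prevalence sampling can be sketched in a few lines (again, not the Aequitas API; `equalize_prevalence` is a hypothetical helper): oversample positives within each group until every group matches the highest observed prevalence.

```python
import numpy as np

rng = np.random.default_rng(0)

def equalize_prevalence(X, y, groups):
    """Illustrative sketch: oversample each group's positives (with
    replacement) until all groups share the highest positive prevalence."""
    prevalences = {g: y[groups == g].mean() for g in np.unique(groups)}
    target = max(prevalences.values())
    keep = [np.arange(len(y))]  # start from the full sample
    for g, p in prevalences.items():
        if p >= target or p == 0:
            continue
        pos_idx = np.where((groups == g) & (y == 1))[0]
        n_g, pos = (groups == g).sum(), len(pos_idx)
        # solve (pos + e) / (n_g + e) == target for the extra positives e
        n_extra = int(np.ceil((target * n_g - pos) / (1 - target)))
        keep.append(rng.choice(pos_idx, size=n_extra, replace=True))
    sel = np.concatenate(keep)
    return X[sel], y[sel], groups[sel]
```

Undersampling negatives instead would follow the same arithmetic; which variant is preferable depends on sample size and how much duplication the downstream model tolerates.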