This page presents our preprocessing-based methods for bias mitigation and fairness enhancement in machine learning models. Preprocessing methods operate directly on the training data before any machine learning (ML) model is built, aiming to reduce or eliminate bias at its source: because ML models learn statistical patterns from the data they are trained on, biased training data can lead to unfair or discriminatory predictions. By addressing bias prior to model training, preprocessing methods help ensure that downstream models produce fairer and more responsible outcomes.
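
To make the idea concrete, here is a minimal sketch of one well-known preprocessing technique, reweighing (Kamiran and Calders): each training example is assigned a weight so that, under the weighted distribution, the protected attribute becomes statistically independent of the label. This is an illustrative example only, not necessarily one of the methods presented on this page; all names (`reweighing_weights`, the toy data) are hypothetical.

```python
# Illustrative sketch of a classic preprocessing technique: reweighing.
# Each example gets the weight P(A=a) * P(Y=y) / P(A=a, Y=y), which makes
# the protected attribute A independent of the label Y in the weighted data.
from collections import Counter

def reweighing_weights(protected, labels):
    """Return one weight per example: P(A=a) * P(Y=y) / P(A=a, Y=y)."""
    n = len(labels)
    count_a = Counter(protected)                  # marginal counts of A
    count_y = Counter(labels)                     # marginal counts of Y
    count_ay = Counter(zip(protected, labels))    # joint counts of (A, Y)
    return [
        (count_a[a] / n) * (count_y[y] / n) / (count_ay[(a, y)] / n)
        for a, y in zip(protected, labels)
    ]

# Toy data: group "f" receives the positive label less often than group "m".
protected = ["m", "m", "m", "f", "f", "f"]
labels    = [1,   1,   0,   1,   0,   0]
weights = reweighing_weights(protected, labels)
# Under-represented (group, label) pairs get weights > 1, over-represented
# pairs get weights < 1; the weights can then be passed as sample weights
# when fitting any downstream model.
```

The resulting weights are used unchanged by the learning algorithm (e.g. via a `sample_weight` argument), so no model internals need to be modified, which is the defining convenience of preprocessing approaches.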