DESARROLLO DE METODOLOGIAS DE CLASIFICACION Y VALIDACION PARA DATOS COMPLEJOS
PID2021-128314NB-I00
Funding agency name: Agencia Estatal de Investigación
Funding agency acronym: AEI
Programme: Programa Estatal para Impulsar la Investigación Científico-Técnica y su Transferencia
Subprogramme: Subprograma Estatal de Generación de Conocimiento
Call: Proyectos de I+D+I (Generación de Conocimiento y Retos Investigación)
Call year: 2021
Management unit: Plan Estatal de Investigación Científica y Técnica y de Innovación 2021-2023
Beneficiary institution: UNIVERSIDAD DE VALLADOLID
Persistent identifier: http://dx.doi.org/10.13039/501100011033
Publications
Total results (including duplicates): 1
Preserving the fairness guarantees of classifiers in changing environments: a survey
Academica-e. Repositorio Institucional de la Universidad Pública de Navarra
- Barrainkua, Ainhize
- Gordaliza Pastor, Paula
- Lozano, José Antonio
- Quadrianto, Novi
The impact of automated decision-making systems on human lives is growing, emphasizing the need for these systems to be not only accurate but also fair. The field of algorithmic fairness has expanded significantly in the past decade, with most approaches assuming that training and testing data are drawn independently and identically from the same distribution. However, in practice, differences between the training and deployment environments exist, compromising both the performance and fairness of decision-making algorithms in real-world scenarios. A new area of research has emerged to address how to maintain fairness guarantees in classification tasks when the data generation processes differ between the source (training) and target (testing) domains. The objective of this survey is to offer a comprehensive examination of fair classification under distribution shift by presenting a taxonomy of current approaches. The taxonomy is formulated based on the information available from the target domain, distinguishing between adaptive methods, which adapt to the target environment based on available information, and robust methods, which make minimal assumptions about the target environment. Additionally, this study emphasizes alternative benchmarking methods, investigates the interconnection with related research fields, and identifies potential avenues for future research.

This research was supported by a European Research Council (ERC) Starting Grant for the project “Bayesian Models and Algorithms for Fairness and Transparency”, funded under the European Union’s Horizon 2020 Framework Programme (grant agreement no. 851538); by the Basque Government under grant IT1504-22 and through the BERC 2022-2025 program; by the Spanish Ministry of Science and Innovation under grants PID2022-137442NB-I00 and PID2021-128314NB-I00; and through BCAM Severo Ochoa accreditation CEX2021-001142-S / MICIN / AEI / 10.13039/501100011033.