Themis: Modeling, Measuring and Mitigating Bias in Online Information Platforms
We live in a world where most of our information and communication needs are satisfied by online information platforms (OIPs) such as search engines, social networks, and social media. These platforms play an important role in shaping the opinions and guiding the decisions of users, on matters both trivial and important. Their operation relies on sophisticated machine learning and AI algorithms for filtering, ranking, and recommendation, trained on massive amounts of data collected from the behavior and contributions of users online. However, the function of these platforms can be compromised by bias, both in the behavior of human users and in the decisions of the automated AI algorithms that OIPs employ. These biases can result in unfair algorithmic decisions in content recommendation, result ranking, and user profiling, as well as in the emergence of echo chambers and filter bubbles in social networks.
The goal of the Themis project is to develop a formal framework for modeling, measuring, and mitigating bias online.
Themis will consider different types of bias in online information platforms such as search engines, social networks, and social media. It will provide novel definitions of bias and fairness for problems such as opinion formation and community detection, and it will provide models for the emergence of bias in social media. It will also measure bias empirically on real datasets. The project is structured along three axes, aligned with the three goals above.
Modeling Bias
The goal in this axis is to define models that capture different aspects of bias and fairness, metrics for quantitatively measuring bias and fairness in OIPs, and models for understanding how bias emerges in OIPs. The focus will be on defining bias in opinion formation processes and in community detection in social networks.
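The specific bias definitions the project will develop are not spelled out on this page, but a common starting point in the opinion formation literature is the Friedkin-Johnsen (FJ) model. The sketch below is purely illustrative: it computes FJ equilibrium opinions on a toy graph and reports one hypothetical group-level metric (the gap in mean equilibrium opinion between two protected groups). The graph, attribute assignment, and metric are assumptions for illustration, not project definitions.

```python
# Illustrative sketch: Friedkin-Johnsen (FJ) opinion dynamics with a
# hypothetical group-level bias metric (not the project's actual model).
import numpy as np
import networkx as nx

def fj_equilibrium(G, innate):
    """Equilibrium opinions of the FJ model: z = (I + L)^{-1} s."""
    L = nx.laplacian_matrix(G).toarray().astype(float)
    I = np.eye(G.number_of_nodes())
    return np.linalg.solve(I + L, innate)

def group_opinion_gap(z, groups):
    """Absolute difference in mean equilibrium opinion between two groups."""
    return abs(z[groups == 0].mean() - z[groups == 1].mean())

# Toy example: a small random graph, random innate opinions in [0, 1],
# and a random binary protected attribute per node.
rng = np.random.default_rng(0)
G = nx.erdos_renyi_graph(n=50, p=0.1, seed=0)
innate = rng.uniform(0, 1, size=50)
groups = rng.integers(0, 2, size=50)

z = fj_equilibrium(G, innate)
print("group opinion gap:", group_opinion_gap(z, groups))
```

A formal treatment would replace the random attribute with real demographic or ideological labels and study how the network structure, rather than the innate opinions alone, drives the gap.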
Measuring Bias
The goal in this axis is to measure bias in OIPs in practice. The focus will be on Large Language Models (LLMs), which are currently used for answering questions, retrieving information, and generating new content. In particular, the work will aim to detect stereotyping behavior of LLMs with respect to gender, race, or religion.
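As a rough illustration of what a stereotyping probe can look like, the sketch below queries a masked language model with templates that differ only in the demographic term and compares the scores it assigns to stereotype-associated occupation words. The templates, target words, and model choice are assumptions for illustration, not the project's methodology; probing a generative LLM would instead compare completions or token likelihoods.

```python
# Illustrative sketch: a template-based stereotyping probe for a masked
# language model (model, templates, and targets are illustrative choices).
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")

TEMPLATE = "The {group} worked as a [MASK]."
GROUPS = ["man", "woman"]
OCCUPATIONS = ["nurse", "engineer"]  # stereotype-associated target words

for group in GROUPS:
    sentence = TEMPLATE.format(group=group)
    # Restrict predictions to the occupation targets and read their scores.
    preds = unmasker(sentence, targets=OCCUPATIONS)
    scores = {p["token_str"].strip(): p["score"] for p in preds}
    print(group, scores)

# A large, systematic gap in the score assigned to "nurse" vs. "engineer"
# that depends only on the demographic term is one simple signal of
# stereotyping behavior in the model.
```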
Mitigating Bias
The goal in this axis is to design fair algorithms that mitigate bias. The focus will again be on making opinion formation processes and community detection algorithms fair.
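One fairness notion that mitigation methods for community detection often target is the demographic balance of each community. The sketch below is only an illustration of that notion, assuming a standard modularity-based algorithm and a randomly assigned binary protected attribute; it measures the balance a fair algorithm would try to improve, and is not the project's mitigation method.

```python
# Illustrative sketch: the "balance" fairness notion for detected communities,
# which a fair community detection algorithm could aim to maximize.
import random
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

random.seed(0)
G = nx.karate_club_graph()
# Hypothetical binary protected attribute assigned at random to each node.
attr = {v: random.randint(0, 1) for v in G.nodes()}

def balance(community, attr):
    """min(#group0/#group1, #group1/#group0); 1.0 means perfectly balanced."""
    counts = [sum(1 for v in community if attr[v] == g) for g in (0, 1)]
    if min(counts) == 0:
        return 0.0
    return min(counts[0] / counts[1], counts[1] / counts[0])

communities = greedy_modularity_communities(G)
for i, c in enumerate(communities):
    print(f"community {i}: size={len(c)}, balance={balance(c, attr):.2f}")

# A fair community detection method would trade some modularity for a
# higher minimum balance across communities.
print("minimum balance:", min(balance(c, attr) for c in communities))
```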
Project information:
- Start date: November 2023
- End date: November 2025
- Project Number: 016636
- Principal Investigator: Panayiotis Tsaparas
- Sub-action: Sub-action 2. Funding Projects in Leading-Edge Sectors – RRF: Basic Research Financing (Horizontal support for all Sciences)
- Scientific Area: ThA4. Mathematics & Information Sciences
- Scientific Field: 4.1 Artificial intelligence and robotics
- Total Budget (€): 170,000
- Host Institution: University of Ioannina
- Cooperative Organizations:
- Funded under: 2nd Call for H.F.R.I.’s Research Projects to Support Faculty Members & Researchers
Team

Panayiotis Tsaparas
Principal Investigator
Associate Professor, University of Ioannina, Department of Computer Science & Engineering
Panagiotis Papadakos
Research Assistant
Post-doctoral Researcher at the Institute of Computer Science of the Foundation for Research and Technology - Hellas and the University of Ioannina, Department of Computer Science & Engineering, Greece
Christos Karanikolopoulos
Research Assistant
MSc student at University of Ioannina, Department of Computer Science & Engineering, Greece
Glykeria Toulina
Research Assistant
MSc student at University of Ioannina, Department of Computer Science & Engineering, Greece
Spyridon Tzimas
Research Assistant
Undergraduate student at University of Ioannina, Department of Computer Science & Engineering, Greece; PhD in Mathematics, University of Ioannina, Greece
Christos Gartzios
Research Assistant
Undergraduate student at University of Ioannina, Department of Computer Science & Engineering, Greece
Evaggelia Pitoura
Advisory Board
Professor, University of Ioannina, Department of Computer Science & Engineering
Aristides Gionis
Advisory Board
Professor at KTH Royal Institute of Technology, Sweden
Carlos Castillo
Advisory Board
Professor at Universitat Pompeu Fabra, Spain
Stavros Sintos
Advisory Board
Assistant Professor at University of Illinois Chicago, U.S.A.