
Shaping Responsible and Inclusive AI in Recruitment
The BIAS project is a European research initiative dedicated to building fairer, more transparent AI in recruitment. Funded under the EU Horizon programme and developed by a consortium of universities, civil society organisations and industry partners, this MOOC brings together cutting-edge research, policy insight and lived experience. It reflects a shared commitment to ensuring AI serves people equitably, making it essential learning for anyone operating in or affected by today's evolving world of work.
General project information and certificate requirements: participants receive the certificate upon completing 100% of the course content, which includes viewing all videos and PDFs and scoring at least 75% on the assessment tests.
About this course
AI is quietly reshaping who gets hired and who doesn't. As AI-powered recruitment tools become standard practice across industries, the risk of encoding existing societal biases into automated decisions is growing. The BIAS project exists to change that, combining advanced research in Natural Language Processing, Case-Based Reasoning, law, and human resources to develop tools and knowledge that make AI-driven hiring more transparent and fair.
This MOOC translates that research into practical knowledge for everyone. Backed by a nine-partner European consortium and aligned with the EU AI Act, it equips learners with the critical lens needed to understand, question, and influence how AI is used in the workplace, turning awareness into action.
Instructors
Prof. Roger A. Søraa
Roger A. Søraa is a Professor in Science and Technology Studies (STS) at NTNU's Center for Technology and Society. He leads the interdisciplinary research group DigiKULT, which investigates how digital cultures shape society, and two large research projects: BIAS - Mitigating Diversity Biases in the Labour Market, and Sociomaterial Transformations in Norway and East Asia (SoMaT). Dr. Søraa's main research focuses on how technologies such as Artificial Intelligence and robotics impact daily lives and work. He is the author of “AI for Diversity” and co-author of “Digitalization and Social Change: A Guide to Digital Thinking”, both published by Routledge.
Maria Sangiuliano
Maria Sangiuliano is Research Director at Smart Venice, with over 20 years of experience as a senior gender and innovation researcher, project manager, and learning designer in EU-funded projects. Her work bridges social and technological innovation, focusing on making policies and innovation processes sustainable and inclusive through diversity-sensitive co-design and participatory approaches. She holds a PhD in Cognitive and Learning Sciences and an MA in Philosophy and Social Sciences. In the BIAS Project, she serves as Learning Designer for the Capacity Building Programme, developing educational frameworks to strengthen awareness and competencies in algorithmic bias, fairness, and inclusivity in AI systems.
Alexandre Puttick
Alexandre Puttick is a data science researcher, writer, and educator based in Biel. He earned a PhD in Pure Mathematics from ETH Zurich in 2019 and works at the intersection of data, AI, and society. In 2020, he taught a course on data-driven activism in Berlin. Since 2021, he has been a postdoctoral researcher in the Applied Machine Intelligence group at the Bern University of Applied Sciences. His projects include BurnoutWords, developing clinical tools to detect burnout from text, and BIAS. From 2021 to 2024, he was a research associate on the Latent Spaces project at the Zurich University of the Arts.
Mascha Kurpicz-Briki
Mascha Kurpicz-Briki is a professor of data engineering at the Bern University of Applied Sciences in Biel, Switzerland, and co-leader of the research group Applied Machine Intelligence and the BFH Generative AI Lab. Her research investigates bias in word embeddings and language models, with a focus on European languages and values; in this context, she leads the corresponding tasks in the EU-funded BIAS project. She is also the author of the book “More than a Chatbot: Language Models Demystified”, which explains the background behind recent language models to a less technical audience.
Carlotta Rigotti
Carlotta Rigotti is Assistant Professor at the eLaw – Center for Law and Digital Technologies at Leiden University. Her research explores the intersection of law, gender, and technology, examining how digital and AI systems shape structural and intersectional inequalities. She focuses on online and technology-facilitated violence against women, platform governance of sexual and intimate content, and diversity bias in AI systems. She earned her PhD from Vrije Universiteit Brussel in 2023, studying the regulation of sex robotics and the co-construction of gender, sexuality, law, and technology. She collaborates with international organisations and civil society to inform policy and practice.
Aída Ponce Del Castillo
Aída Ponce Del Castillo is a lawyer. She obtained her European Doctorate in Law, focusing on the regulatory issues of human genetics, from the Universities of Valencia and Bonn. She also holds a Master’s degree in Bioethics. At ETUI’s Foresight Unit, her research addresses strategic foresight and the legal, ethical, social, and regulatory implications of emerging technologies. She is a member of the European Commission’s Competent Authorities Sub-Group on nanomaterials and the OECD Working Party on Bio, Nano and Convergent Technologies and AI Governance. She previously led ETUI’s Health and Safety Unit and coordinated the Workers’ Interest Group.
Maria Giovanna Zamburlini
Maria Giovanna Zamburlini is a Project Manager at Smart Venice with over a decade of experience in EU project development and management. She has collaborated with public authorities and international organisations, including the Council of European Municipalities and Regions, and previously worked at the Municipality of Schaerbeek in Brussels, overseeing financial reporting, project coordination, and grant proposal support in compliance with EU funding rules. She holds a bachelor’s degree in Political Science from the University of Padua and a master’s degree in European Studies from Brussels. She contributes to project coordination, monitoring, and implementation activities.
Rogério Moreira
Rogério Moreira Junior holds a master’s degree in European and Global Studies from the University of Padua, with expertise in European policies, global communication, and socio-political analysis. During his studies, he gained practical experience in EU-funded projects, contributing to their design, implementation, monitoring, and evaluation. Before joining Smart Venice, he spent eight years in Brazil’s private sector, developing skills in data analysis, team coordination, management, and communication. At Smart Venice, he supports financial administration, communication, and programme management, contributing to project coordination, stakeholder engagement, and strategic planning.
Course structure
Understanding Bias in AI: between detection and mitigation
2. Defining and positioning intersectional bias between human and AI
- What is bias?
- Bias through the lenses of intersectionality
- BIAS from human to AI
- The Identity Wheel: a self-reflection exercise
- Chapter material
- Identity Wheel
3. AI in recruitment: existing tools and case studies of bias in recruitment using AI
- AI in recruitment: an introduction
- Case studies of bias in recruitment using AI
- Impact of Algorithmic Decision-Making on AI Recruitment
- Chapter material
4. Elements of AI
- Algorithms and Artificial Intelligence
- Examples of Machine Learning
- Training Language Models
- Behind the Scenes: Word Embeddings
- Chapter material
5. Ethical and social implications of AI in terms of bias
- Introduction: Biases in AI
- Societal Biases in Word Embeddings
- Bias Mitigation in Word Embeddings
- Chapter material
6. Sociotechnical implications of AI and fairness approaches
- Bias in AI Systems
- Analysis of fairness approaches
- Data-level Bias Mitigation
- Model-level Bias Mitigation and Evaluation Metrics
- Chapter material
7. Value-Sensitive Design
- Value Sensitive Design: a definition
- Values in the Data
- Values in the Model and Applications
- Case-Based Reasoning design pipeline
- Chapter material
Policy Framework
8. Policy Framework: General Data Protection Regulation
- The legal ecosystem for AI in the labour market
- GDPR basics for AI in the labour market
- Critiques and limits of the GDPR
- The GDPR: Challenges and future developments
- Chapter material
9. Policy Framework: Artificial Intelligence Act
- Overview of the AI Act
- High-risk AI systems in the labour market
- Regulation of non–high-risk AI in the labour market
- The AI Act: Limitations and outlook
- Chapter material
10. Policy Framework: Case studies
- Trade union (TU) and civil society organisation (CSO) case studies
- The power of collective bargaining
- Chapter material
Reflective Practice on AI fairness
11. Challenge the AI: tutorial for playing with the Black Box
- The hiring “Black Box”: a candidate ranking exercise
- Step-by-step: ranking exercise
- Chapter material
- Exercise material
12. Prompting exercise
- Introduction to Prompting
- Different Prompting Styles
- Bias Mitigation with Prompting
- Chapter material
13. VSD Challenge applied to designing an AI-based tool
- Introducing the envisioning cards
- Operationalising values in the design pipeline
- Chapter material
- Exercise material
14. Course Conclusions and BIAS Resources
- Course Conclusions and BIAS Resources
- PDF - course conclusion
15. Additional resources: Comics
- Comics
References
16. Bibliography
- Chapter material