Responsible AI and Society - IDSIA

Scientific area

Responsible AI and Society

The rapid advancement and widespread use of AI technologies are transforming many aspects of human life, from healthcare and education to mobility and decision-making. Understanding and governing the relationship between AI and society has therefore become increasingly urgent. Key challenges include the epistemic authority of AI systems, ethical responsibility in human–machine interactions, algorithmic transparency, risks of bias and discrimination, and the social roles attributed to generative AI. At the same time, AI is emerging as a creative and interpretive force that reshapes how knowledge, culture, and human experience are produced and represented. At IDSIA USI-SUPSI, we address these issues through interdisciplinary research that integrates computer science with philosophy, psychology, social sciences, interaction design, education, and the humanities. Our work investigates how AI systems operate in real-world contexts, how people understand and experience them, and how they can be designed to support autonomy, literacy, and meaningful human expression.

Area co-coordinators:
Alessandro Facchini (SUPSI)
Monica Landoni (USI)


The philosophical foundations of modern AI remain largely underexplored, yet they are essential to the responsible development of intelligent technologies. This pillar examines fundamental questions about the nature and epistemic status of machine learning systems, the relationship between theoretical frameworks and empirical data, and the social roles that AI systems play, including the dynamics that lead users to attribute epistemic authority to them, correctly or otherwise. A central focus of our research lies at the intersection of epistemology and ethics: we investigate how the comprehensibility and transparency of intelligent systems shape their legitimacy and trustworthiness in real-world applications, with particular attention to the conceptual and epistemological foundations of explainable AI and the challenge of algorithmic opacity. By bringing rigorous philosophical analysis to bear on these questions, we aim to build the conceptual groundwork that responsible AI development demands.

For more information, see the respective websites of the groups involved in the pillar:
  • The Responsible AI & Society group (link tba)
The success of AI systems depends not only on their technical sophistication, but above all on the quality of the interaction between humans and intelligent technologies. Our Human-AI Interaction pillar is dedicated to studying user experience, accessibility, and inclusivity, with particular attention to educational environments and the needs of diverse populations. By placing people at the center of the design process, we aim to develop systems that are not only technically excellent, but genuinely usable and accessible to all — ensuring that the benefits of AI can be meaningfully experienced across a broad spectrum of users, contexts, and abilities.

For more information, see the respective websites of the groups involved in the pillar:
The AI and Education pillar addresses the relationship between intelligent technologies and learning from two complementary directions. On the one hand, it investigates how AI can enhance learning environments, while carefully examining the moral, cognitive, and epistemic concerns that such integration raises. On the other, it recognises that both educators and the broader public require a sophisticated understanding of AI's capabilities and limitations, and works to equip individuals and communities with the critical tools needed to engage effectively and responsibly with intelligent systems. By combining research into AI-enhanced learning with a commitment to meaningful AI literacy, this pillar prepares people not only to benefit from intelligent technologies, but also to interrogate, contextualise, and shape them.

For more information, see the respective websites of the groups involved in the pillar:
The future of AI must be built on systems that are not only technically robust but also fair, transparent, and aligned with human values. Through the Trustworthy AI pillar, IDSIA USI-SUPSI develops and implements AI systems that humans can genuinely rely on — designing for equity, preventing discrimination, and carefully considering the broader social consequences of algorithmic decision-making. Our work in this area addresses the foundational challenge of building AI that earns trust: from ensuring transparency in how systems reach their conclusions, to guaranteeing safety in high-stakes environments, to aligning AI behaviour with the core values of the individuals and communities it serves.

For more information, see the respective websites of the groups involved in the pillar:
Artificial intelligence is increasingly becoming a creative and interpretive force, reshaping how knowledge is produced, culture is expressed, and human experience is represented. This pillar explores the intersection of AI with creative computation, the arts, and the humanities, investigating both how computational methods can enrich humanistic inquiry and how humanistic perspectives can critically inform the design of AI systems. From generative models applied to music and the visual arts to computational approaches to language and narratives, our research embraces a bidirectional dialogue between technical innovation and human expression. By bridging computer science with the humanistic disciplines and the arts, we aim to expand the boundaries of what AI can create, interpret, and preserve, while keeping human meaning and cultural diversity at the core.

For more information, see the respective websites of the groups involved in the pillar:
  • The Responsible AI & Society group (link tba)

Projects

European University Alliance EUonAIR (2025-2028)

Bridging the Gap: Empowering Teachers about AI Education (2025-2027)

Noi e l’IA (in collaboration with L’Ideatorio) (2025-2027)

REXASI-PRO (2022-2025)

BBC - Breaking Boundaries in K-12 Classrooms: Fostering Gender Inclusion in STEM Teaching (2024-2025)

SOL - Scaffolding to foster independence when children search Online for Learning (2024-2028)

TADAA - Tools for Assessing and Developing Affecting & Attractive Narratives for Girls in Informatics (2022-2026)

Publications

Termine, A., Ratti, E., & Facchini, A. (2026). Machine learning and theory-ladenness: A phenomenological account. Synthese, 207(3), 94.

Cangiano, S., Moruzzi, C., Tremblay, P. A., & Facchini, A. (2026). SOUND DIALOGUES. The Responsible Use of AI in Artistic Creation. Zenodo. https://doi.org/10.5281/zenodo.18338772

Ferrario, A., Termine, A., & Facchini, A. (2025). Social Misattribution in Conversations with Large Language Models. In Proceedings of the 8th AAAI/ACM Conference on AI, Ethics, and Society.

Starke, G., Gille, F., Termine, A., Aquino, Y. S. J., Chavarriaga, R., Ferrario, A., ... & Ienca, M. (2025). Finding Consensus on Trust in AI in Health Care: Recommendations From a Panel of International Experts. Journal of Medical Internet Research, 27, e56306.

Dominici, G., Barbiero, P., Espinosa Zarlenga, M., Termine, A., Gjoreski, M., Marra, G., & Langheinrich, M. (2025). Causal concept graph models: Beyond causal opacity in deep learning. Proceedings of the 13th International Conference on Learning Representations (ICLR 2025).

Facchini, A., & Mangili, F. (2024). Human-centered AI (Also) for humanistic management. In Humanism in Marketing: Responsible Leadership and the Human-to-Human Approach (pp. 225-255). Cham: Springer Nature Switzerland.

Ferrario, A., Facchini, A., & Termine, A. (2024). Experts or authorities? The strange case of the presumed epistemic superiority of artificial intelligence systems. Minds and Machines, 34(3), 30.

Grigioni, C., Corradini, F., Antonucci, A., Guzzi, J., & Flammini, F. (2024). Safe Road-Crossing by Autonomous Wheelchairs: A Novel Dataset and Its Evaluation. In A. Ceccarelli, M. Trapp, A. Bondavalli, E. Schoitsch, B. Gallina, & F. Bitsch (Eds.), Computer Safety, Reliability, and Security. SAFECOMP 2024 Workshops (Lecture Notes in Computer Science, Vol. 14989). Cham: Springer. https://doi.org/10.1007/978-3-031-68738-9_4

Facchini, A., & Termine, A. (2022). Towards a taxonomy for the opacity of AI systems. In V. Müller (Ed.), Philosophy and Theory of Artificial Intelligence (pp. 73-89). Cham: Springer International Publishing.
