Human-Computer Interaction Seminar - Spring semester 2026
This is the main seminar of the Interactive Intelligence lab of the Human-IST Institute. The topics will change each semester and will be proposed by I&I lab members. Each participant will be supervised by one researcher.
With the advent of autonomous cars, mobile devices and conversational agents, the question of how we interact with digital devices in everyday life is becoming ever more relevant. Through embedded objects, AI is penetrating our daily routines and raises important questions of agency and privacy. The aim of the HCI seminar is to examine this question across several specific contexts and to expose students to state-of-the-art research in interactive embedded AI.
Through a set of topics for the participants to choose from, the different fields studied within the I&I lab will be reviewed and discussed. Each specific topic is proposed by a member of the I&I team who will be available during the semester to follow the work of the student.
Prof. Denis Lalanne
Dr Julien Nembrini (contact person)
The introductory lecture will be held on Monday 23.02 at 15h15 in person in PER21 A420. This session can also be attended online; the sessions during the semester are in person. Please contact Dr Julien Nembrini julien.nembrini@unifr.ch, with copy to Prof. Denis Lalanne denis.lalanne@unifr.ch, if you wish to attend.
Semester topics
Current trends in interactive embedded AI
The general aim of the semester is to map current research in interactive embedded AI. Please find below the topics related to research fields of I&I researchers.
UPDATE: The topics will be presented at the first session on Monday 23.02.2026 (please see details above)
Gamification and AI-Driven Personalization for Sustainable Mobility
Promoting sustainable transport requires not only technological innovation but also effective behaviour change strategies. Two prominent approaches have emerged in mobile mobility systems: gamification, which uses game elements to motivate engagement, and AI-driven personalization, which leverages sensing and user profiling to deliver tailored nudges.
In this seminar, the student will analyze a literature review on gamified green transportation systems and a data-driven personalized mobility intervention based on mobile sensing and persuadability profiling.
The student will:
- Compare gamification-based and AI-based personalization strategies
- Examine how persuasive mechanisms are operationalized in each approach
- Identify conceptual and empirical gaps between gameful engagement and algorithmic nudging
- Formulate a concrete research gap emerging from this comparison
- Propose an experimental study to investigate how gamification and AI personalization could be combined or systematically compared
The final output is an article that develops an experiment proposal to advance interactive embedded AI systems for sustainable mobility.
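To make the contrast concrete, a minimal sketch of the AI-personalization side: selecting a tailored nudge from a persuadability profile. The profile dimensions, scores, and nudge texts below are invented for illustration and are not taken from the cited papers.

```python
# Illustrative sketch (not from the cited papers): picking a sustainable-
# mobility nudge based on a simple persuadability profile. The strategy
# names and message texts are invented for illustration.

NUDGES = {
    "competition": "You are 2 km behind Alex this week -- bike to close the gap!",
    "self_monitoring": "You cycled 12 km this week, 3 km more than last week.",
    "suggestion": "Taking the tram tomorrow would save 1.4 kg of CO2.",
}

def select_nudge(profile: dict[str, float]) -> str:
    """Pick the nudge strategy with the highest persuadability score."""
    strategy = max(profile, key=profile.get)
    return NUDGES[strategy]

profile = {"competition": 0.8, "self_monitoring": 0.3, "suggestion": 0.5}
print(select_nudge(profile))  # the "competition" nudge for this profile
```

A gamified system, by contrast, would typically apply the same game elements (points, leaderboards, badges) to all users; the comparison the student makes is precisely between such uniform mechanics and profile-driven selection of this kind.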
References
[1] Bassanelli, S., Belliato, R., Bonetti, F., Vacondio, M., Gini, F., Zambotto, L., & Marconi, A. (2025). Gamify to persuade: A systematic review of gamified sustainable mobility. Acta Psychologica, 252, 104687.
[2] Anagnostopoulou, E., Urbančič, J., Bothos, E., Magoutas, B., Bradesko, L., Schrammel, J., & Mentzas, G. (2020). From mobility patterns to behavioural change: leveraging travel behaviour and personality profiles to nudge for sustainable transportation. Journal of Intelligent Information Systems, 54(1), 157-178.
Supervisor
- Moreno Colombo
From Object Recognition to Actionable Space for Low-Vision People: Designing Affordance-Aware Embedded AI in Augmented Reality
For people with low vision, navigating and interacting with physical environments can be challenging due to reduced visual acuity, contrast sensitivity, or field of view. Recent advances in computer vision and Augmented Reality (AR) enable real-time object detection and spatial localization, allowing virtual overlays to highlight objects directly within a user’s field of view. Such systems can enhance spatial awareness by visually labeling objects and landmarks in situ. However, merely identifying objects (e.g., “chair”, “door”, “elevator”, “staircase”) may not fully support action or decision-making. Moving beyond object recognition toward communicating what can be done with detected elements (e.g., “guide toward”, “sit here”) and/or their status (e.g., “available”, “closed”, “not accessible”)—i.e., their affordances—may fundamentally reshape how embedded AI augments interaction with the real world.
This seminar topic explores how embedded AI systems integrated into physical environments (e.g., camera networks or distributed sensing infrastructure) combined with AR headsets could transform object detection into actionable spatial augmentation for Low-Vision (LV) users. By leveraging a network of cameras that collectively build a spatial map of objects and their positions, AI-driven perception could move beyond labeling toward encoding and communicating affordances—the possible actions associated with objects and spatial structures.
Taking inspiration from recent work, the student will:
- Research:
  - How AI perception could be embedded into environmental infrastructure (e.g., distributed cameras, edge computing)
  - How object identity combined with spatial mapping could be transformed into actionable meaning (e.g., “chair” → “available”, “staircase” → “accessible route”, “room” → “provide guidance”)
  - The conceptual and interactional difference between labeling objects and augmenting affordances
  - Which affordances could meaningfully support LV users in navigation and daily interaction
- Reflect on:
  - How affordance modeling could affect the user’s interaction with the physical world
  - The tradeoff between assistance, cognitive overload, and autonomy: should the system reveal all possible affordances or selectively guide behavior based on user needs and capabilities?
  - Ethical concerns, including surveillance (infrastructure-based sensing), dependency, and transparency
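The step from labels to affordances can be sketched very simply: a lookup from (object, sensed state) pairs to actionable cues, falling back to the plain label when nothing more is known. The object classes, states, and messages below are invented for illustration and do not come from the cited systems.

```python
# Illustrative sketch (not from the cited systems): turning object
# detections plus sensed state into affordance messages for an AR
# overlay. All classes, states, and messages are invented examples.

AFFORDANCES = {
    ("chair", "free"): "sit here",
    ("chair", "taken"): "occupied",
    ("staircase", "open"): "accessible route",
    ("elevator", "out_of_service"): "not accessible -- use staircase",
    ("door", "closed"): "closed",
}

def affordance_label(obj: str, state: str) -> str:
    """Map a detected object and its sensed state to an actionable cue,
    falling back to the plain object label when no affordance is known."""
    return AFFORDANCES.get((obj, state), obj)

print(affordance_label("chair", "free"))  # "sit here"
print(affordance_label("lamp", "on"))     # falls back to "lamp"
```

Even this toy version makes the design questions visible: who maintains the table, how state is sensed, and which of the many possible cues should actually be shown to the user.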
References
[1] Ruijia Chen, Junru Jiang, Pragati Maheshwary, Brianna R Cochran, and Yuhang Zhao. 2025. VisiMark: Characterizing and Augmenting Landmarks for People with Low Vision in Augmented Reality to Support Indoor Navigation. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (CHI '25). Association for Computing Machinery, New York, NY, USA, Article 298, 1–20. https://doi.org/10.1145/3706598.3713847
[2] Jaewook Lee, Andrew D. Tjahjadi, Jiho Kim, Junpu Yu, Minji Park, Jiawen Zhang, Jon E. Froehlich, Yapeng Tian, and Yuhang Zhao. 2024. CookAR: Affordance Augmentations in Wearable AR to Support Kitchen Tool Interactions for People with Low Vision. In Proceedings of the 37th Annual ACM Symposium on User Interface Software and Technology (UIST '24). Association for Computing Machinery, New York, NY, USA, Article 141, 1–16. https://doi.org/10.1145/3654777.3676449
For additional context:
[3] Yong-Joon Thoo, Karim Aebischer, Nicolas Ruffieux, and Denis Lalanne. 2026. Real-Time Object Detection and Augmented Reality to Support Low-Vision Navigation and Object Localization: A Demonstration. (To appear in ACM AlpCHI'26).
Supervisor
- Yong-Joon Thoo
Digital Information Accessibility in Constrained Contexts
While large language models offered as cloud-based AI services are commonly used by blind and low-vision users for accessing digital information [1], interactive and embedded AI can improve accessibility in more constrained situations (e.g., privacy requirements, no internet access). Also called on-device AI or tiny machine learning (TinyML), embedded AI refers to the integration of AI into resource-limited devices or systems, such as wearable devices, smartphones, smart home devices, industrial automation systems, and robotic systems. Interactive AI focuses on building AI around users to facilitate their needs and improve automation. This seminar topic explores the use of interactive and embedded AI deployed on laptops or tablets with limited resources, in particular in combination with assistive technology and in the context of productivity applications [2].
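One common way models are fitted onto such resource-limited devices is weight quantization. The sketch below is a deliberately minimal, pure-Python illustration of 8-bit quantization; real on-device toolchains (e.g., TensorFlow Lite) use far more sophisticated schemes.

```python
# Minimal sketch of 8-bit weight quantization, one common technique for
# squeezing models onto resource-limited devices (illustrative only).

def quantize(weights: list[float]) -> tuple[list[int], float]:
    """Map float weights to int8 values with a single scale factor."""
    scale = max(abs(w) for w in weights) / 127 or 1.0
    return [round(w / scale) for w in weights], scale

def dequantize(q: list[int], scale: float) -> list[float]:
    """Recover approximate float weights from the int8 values."""
    return [v * scale for v in q]

q, scale = quantize([0.52, -1.27, 0.08])
approx = dequantize(q, scale)  # close to the originals, at a quarter of the size
```

The point for this topic is the tradeoff such techniques expose: smaller, faster models that run offline (preserving privacy) at the cost of some accuracy.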
Based on two selected papers, the student will:
- Identify relevant and focused use cases in which accessibility to digital information can be improved
- Understand interactions of blind and low-vision people with embedded AI within selected use cases
- Document limitations of embedded AI in selected use cases
References
[1] Xinru Tang, Ali Abdolrahmani, Darren Gergle, and Anne Marie Piper. 2025. Everyday Uncertainty: How Blind People Use GenAI Tools for Information Access. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (CHI '25). Association for Computing Machinery, New York, NY, USA, Article 63, 1–17. https://doi.org/10.1145/3706598.3713433
[2] Minoli Perera, Swamy Ananthanarayan, Cagatay Goncu, and Kim Marriott. 2025. The Sky is the Limit: Understanding How Generative AI can Enhance Screen Reader Users' Experience with Productivity Applications. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (CHI '25). Association for Computing Machinery, New York, NY, USA, Article 1165, 1–17. https://doi.org/10.1145/3706598.3713634
Supervisor
- Maximiliano Jeanneret Medina
Sport Tracking Devices and Training Management
The continuous development of applications and wearable devices is profoundly transforming the way athletes, both professional and recreational, manage their training. Sport tracking devices, particularly smartwatches, collect a wide range of physiological signals through embedded sensors (e.g., heart rate, heart rate variability, respiratory rate, etc.). These data are processed to generate numerous performance indicators such as VO₂ max, training load, and training status, which are presented to users to support training optimization and performance improvement.
This topic explores how such tracking technologies, along with the user interactions they foster, influence athletes’ training management. More specifically, it examines the extent to which users rely on computed metrics to adjust their training behaviors and sporting habits.
Taking inspiration from selected papers, the student will:
- Identify the different data processing methods used to generate performance metrics;
- Analyze and discuss the implications of interaction design on user experience and decision-making;
- Explore potential improvements in user interaction to promote positive and sustainable impacts of personalized tracking systems.
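As a concrete example of how raw physiological signals become a training-load score, a sketch of Banister's TRIMP (training impulse), one classic metric in this family. The coefficients shown are the commonly cited ones for male athletes, and the resting/maximum heart-rate defaults are illustrative assumptions.

```python
# Illustrative sketch: Banister's TRIMP, one classic way tracking devices
# turn heart-rate data into a training-load score. Coefficients 0.64 and
# 1.92 are the commonly cited values for male athletes; the default
# resting and maximum heart rates below are illustrative assumptions.
import math

def trimp(duration_min: float, hr_avg: float,
          hr_rest: float = 60.0, hr_max: float = 190.0) -> float:
    """Training impulse from session duration and average heart rate."""
    hr_reserve = (hr_avg - hr_rest) / (hr_max - hr_rest)
    return duration_min * hr_reserve * 0.64 * math.exp(1.92 * hr_reserve)

load = trimp(duration_min=45, hr_avg=150)  # a moderate endurance session
```

Commercial devices typically layer proprietary processing on top of such formulas, which is exactly why the question of how much users should trust and act on the resulting scores is open.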
References
[1] Armağan Karahanoğlu, Aykut Coskun, Dees Postma, Bouke Leonard Scheltinga, Rúben Gouveia, Dennis Reidsma, and Jasper Reenalda. 2024. Is it just a score? Understanding Training Load Management Practices Beyond Sports Tracking. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems (CHI '24). Association for Computing Machinery, New York, NY, USA, Article 313, 1–18. https://doi.org/10.1145/3613904.3642051
[2] Mollie Brewer, Nadia S. J. Morrow, Xuening Peng, Oluwatomisin Obajemu, Kristy Elizabeth Boyer, and Juan E. Gilbert. 2025. Investigating How Recreational Cyclists Engage with Sport Tracking Technologies to Prepare for Endurance Events. In Proceedings of the First Annual Conference on Human-Computer Interaction and Sports (SportsHCI '25). Association for Computing Machinery, New York, NY, USA, Article 10, 1–14. https://doi.org/10.1145/3749385.3749389
Supervisor
- Célina Treuillier
Using smartwatches to monitor heat stress
Due to climate change, the occurrence of heatwaves is predicted to increase, which could have a detrimental effect on vulnerable populations, such as the elderly and those with mental health conditions. Wearable technologies, such as smartwatches, could help to continuously monitor personal heat stress and provide early warnings tailored to the user's needs.
However, repurposing consumer-grade technologies initially designed for non-health applications is notoriously hampered by limited access to raw data and by accuracy issues stemming from sensor quality and placement. Recently, machine learning techniques have been proposed to address these issues. The two papers below explore using smartwatches or smartwatch-mounted sensors to monitor heat stress or ambient temperature with the help of machine learning.
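The general shape of this approach is to learn a mapping from wrist-sensor readings to the quantity of interest. A deliberately minimal sketch, with synthetic data and a univariate least-squares fit standing in for the real sensor streams and the machine-learning pipelines of the cited papers:

```python
# Illustrative sketch: fit a simple model mapping wrist skin temperature
# to ambient temperature. The calibration data is synthetic and the
# univariate least-squares fit stands in for the richer models (more
# sensors, more features) used in the cited papers.

def fit_line(x: list[float], y: list[float]) -> tuple[float, float]:
    """Ordinary least squares for y = a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return a, my - a * mx

# Synthetic calibration data: wrist skin temperature (°C) vs ambient (°C)
skin = [31.0, 32.5, 33.8, 35.1, 36.0]
ambient = [18.0, 22.0, 26.0, 30.0, 33.0]

a, b = fit_line(skin, ambient)
predict = lambda t_skin: a * t_skin + b  # estimate ambient from a wrist reading
```

The papers go well beyond this (personalization, additional sensors, uncertainty), but the sketch shows where the accuracy issues bite: the fit is only as good as the calibration data and the sensor's coupling to the environment.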
References
[1] Parchami, M., Nazarian, N., Hart, M. A., Liu, S., & Martilli, A. (2025). Can your smartwatch measure ambient air temperature?. Environmental Research Letters, 20(6), 064016. https://iopscience.iop.org/article/10.1088/1748-9326/add6b5/meta
[2] Nazarian, N., Liu, S., Kohler, M., Lee, J. K., Miller, C., Chow, W. T., ... & Norford, L. K. (2021). Project Coolbit: can your watch predict heat stress and thermal comfort sensation?. Environmental Research Letters, 16(3), 034031. https://iopscience.iop.org/article/10.1088/1748-9326/abd130/meta
Reference person:
- Julien Nembrini
Wearable AI-Assistants
Wearable devices with embedded AI capabilities have existed for some time and promise to support users in their daily activities [1, 1.1]. More than 15 years ago, the SixthSense prototype offered gesture-based interactions to enable more natural engagement with digital information in outdoor environments. With recent advances in AI, such devices can now provide far more sophisticated forms of support, including voice processing, image recognition, and complex conversational interaction [2]. Meanwhile, concerns are growing about new kinds of devices reaching the market that are ever more ubiquitous and potentially intrusive in everyday life. As a result, interaction paradigms may need to be fundamentally rethought (notably through the emerging concept of "Designing with Friction").
In this context, how can we design interactions that strike the right balance between support and intrusiveness? How should this balance be considered from a personal perspective (individual autonomy, cognitive load, privacy) and from a public perspective (social acceptability, visibility, collective norms)?
Depending on the student’s interests, different research questions related to wearable AI assistants could also be explored within this broader framework.
References
[1] Pranav Mistry and Pattie Maes. 2009. SixthSense: a wearable gestural interface. In ACM SIGGRAPH ASIA 2009 Sketches (SIGGRAPH ASIA '09). Association for Computing Machinery, New York, NY, USA, Article 11, 1. https://doi.org/10.1145/1667146.1667160
[1.1] Pranav Mistry, Pattie Maes, and Liyan Chang. 2009. WUW - wear Ur world: a wearable gestural interface. In CHI '09 Extended Abstracts on Human Factors in Computing Systems (CHI EA '09). Association for Computing Machinery, New York, NY, USA, 4111–4116. https://doi.org/10.1145/1520340.1520626
[2] Valdemar Danry, Pat Pataranutaporn, Yaoli Mao, and Pattie Maes. 2020. Wearable Reasoner: Towards Enhanced Human Rationality Through A Wearable Device With An Explainable AI Assistant. In Proceedings of the Augmented Humans International Conference (AHs '20). Association for Computing Machinery, New York, NY, USA, Article 23, 1–12. https://doi.org/10.1145/3384657.3384799
Reference person:
- Simon Ruffieux
AI-Driven Adaptive Learning: Personalized Task Difficulty and Tool Integration for Enhanced Educational Outcomes
This research explores the role of AI-driven adaptive learning systems in enhancing educational outcomes by combining personalized task difficulty models and integrated AI learning tools. Building on the dynamic difficulty adjustment model proposed by Kassenkhan, Mendes, and Bekarystankyzy (2026) [1] and the AI tool integration framework by Madanchian et al. (2025) [2], the study aims to develop a comprehensive framework for personalized, technology-enhanced education. The framework seeks to optimize learner engagement, manage cognitive load, and improve knowledge retention, providing insights for designing effective AI-powered learning environments.
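To give a feel for what dynamic difficulty adjustment means in practice, a toy staircase rule (not the model from [1]): difficulty rises after correct answers and falls after mistakes, keeping the learner near a target success rate. The levels, step sizes, and bounds are invented for illustration.

```python
# Illustrative sketch (not the model from [1]): a simple staircase rule
# for dynamic difficulty adjustment. Difficulty climbs slowly on correct
# answers and drops faster on mistakes, so the learner stays challenged
# without being overwhelmed. All parameters are invented examples.

def adjust_difficulty(level: int, correct: bool,
                      lo: int = 1, hi: int = 10) -> int:
    """Return the next difficulty level, clamped to [lo, hi]."""
    step = 1 if correct else -2  # fall faster than you climb
    return max(lo, min(hi, level + step))

level = 5
for answer in [True, True, False, True]:
    level = adjust_difficulty(level, answer)
# level has moved 5 -> 6 -> 7 -> 5 -> 6
```

The model in [1] replaces this hand-tuned rule with a learned, reader-specific one; the seminar work would examine what that buys in engagement, cognitive load, and retention.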
References
[1] Kassenkhan, Aray M., Mateus Mendes, and Akbayan Bekarystankyzy. "An Adaptive Task Difficulty Model for Personalized Reading Comprehension in AI-Based Learning Systems." Algorithms 19.2 (2026): 100. https://doi.org/10.3390/a19020100
[2] Madanchian, Mitra, et al. "Integrating AI Tools to Enhance Learning Outcomes in Modern Education Systems." Procedia Computer Science 263 (2025): 514-521. https://doi.org/10.1016/j.procs.2025.07.062
Reference person:
- Ilyes Kadri
AI nudges for energy reduction in buildings
Occupant behaviour is a significant contributor to energy consumption in buildings. While changes in occupant behaviour might help reduce building-related energy use, selecting effective and acceptable interventions is complex, requiring a balance between human comfort, social practices, and technical system adaptation. The proposed topic examines how interactive and embedded AI systems can support energy reduction in buildings, and critically evaluates their effectiveness and overall impact.
Students should pay attention to privacy implications, fairness between various occupants such as comfort disparities, and the advantages and limitations of using advanced AI technologies to address climate-related issues.
References
[1] Xu, Y., Zhu, S., Cai, J., Chen, J., & Li, S. (2025). A large language model-based platform for real-time building monitoring and occupant interaction. Journal of Building Engineering, 100, 111488. https://doi.org/10.1016/j.jobe.2024.111488
[2] Gonçalves, V. P., Serrano, A. L. M., Rodrigues, G. A. P., de Oliveira, M. N., Meneguette, R. I., Bispo, G. D., ... & Filho, G. P. R. (2025). An Empirically Validated Framework for Automated and Personalized Residential Energy-Management Integrating Large Language Models and the Internet of Energy. Energies, 18(14), 3744. https://doi.org/10.3390/en18143744
Reference person:
- Marion Schoenenweid
Seminar schedule
(may be subject to changes)
- 23.02 15h15 1st introductory session, in person
- 02.03 (at the latest) communication of preferred topics (1st, 2nd and 3rd choice)
- 30.03 15h15 1st presentation (concepts, papers, research gaps) (7 minutes per student), in person
- 13.04 1st complete draft (no session)
- 20.04 15h15 2nd presentation, in person
- 11.05 2nd draft (no session)
- 18.05 15h15 final presentation, in person
- 08.06 final draft (no session)
Work to be done
Students will be asked to:
- Study in depth the papers proposed for the chosen topic, including relevant references
- Discuss their findings with their respective tutor
- Present the papers, their methodology and results
- Identify potential research gaps and imagine a user experiment addressing one of them
- Write a 4-page article summarizing their review work
- Present their article to the seminar participants
Interested computer science students are invited to participate by expressing their interest in three specific topics (1st, 2nd, 3rd choices) among the topics presented above.
Learning Outcomes
At the end of the seminar, students will have practiced writing a scientific article. Further, they will have built a general knowledge of the field of interactive embedded AI and its current techniques and trends, as well as an in-depth understanding of their chosen topic.
Registration
Attend the introductory session and express your interest in your chosen topics with a short text (3-5 sentences) to Dr Julien Nembrini, mentioning (1) your preferred topic, (2) your second choice, and (3) your third choice. Each reference person will then contact you if you are selected to participate in the seminar. Others will receive a notification email.
emails : julien.nembrini@unifr.ch and denis.lalanne@unifr.ch
Registration process
- Participate in the first introductory session on Monday 23.02 at 15h15, in person in PER21 A420 or online.
- Express your interest in the topics as described above by 02.03 at the latest (first come, first served).
- Wait for confirmation of your participation.
Seminar Process
In addition to the first session, there will be only three plenary sessions during the semester, which consist of participants presenting their work. These sessions will be in person and will take place on Mondays at 15h15. Reference persons will organize additional bilateral meetings during the semester.
The seminar process is as follows:
- Select state-of-the-art references relevant to the chosen topic, synthesize these references to identify research gaps, discuss and refine your approach with your topic reference person.
- Present your findings to the other participants of the seminar for the intermediate presentation.
- Synthesize the selected bibliographic references in a written 4-page article, authored in LaTeX following the ACM SIGCHI format.
- Define a user experiment within the chosen topic that is relevant with regard to existing research.
- Discuss and refine your article with your topic reference person.
- Present your article in the final presentation session (end of semester)
Evaluation
The evaluation will be conducted by the topic reference person and the person responsible for the seminar.
The evaluation will be based on the quality of :
- Your written article: readability, argumentation, English
- Your review work: reference selection and field structure
- Your final presentation
- Your initial draft
- Your review of a colleague's first draft
While not every step described above is formally marked, each contributes towards producing a good final paper.
Date: Spring 2026