Human-Computer Interaction Seminar - Spring semester 2025
This is the main seminar of the HCI group of the Human-IST Institute. The topics will change each semester and will be proposed by Human-IST members. Each participant will be supervised by one Human-IST member.
With the advent of autonomous cars, mobile devices and conversational agents, the question of interaction with digital devices in everyday life is becoming ever more relevant. The aim of the HCI seminar is to look at this question across several specific contexts and expose students to state-of-the-art research in Human-Computer Interaction.
Through a set of topics for the participants to choose from, the different research fields studied within the Human-IST institute will be reviewed and discussed. Each specific topic is proposed by a member of the Human-IST team who will be available during the semester to follow the work of the student.
Prof. Denis Lalanne
Dr Julien Nembrini (contact person)
The introductory lecture will be held on Monday 17.02 at 14h in presence in PER21 A420. This session can be attended online; the sessions during the semester are in presence. Please contact Dr Julien Nembrini, julien.nembrini@unifr.ch, with copy to Prof. Denis Lalanne, denis.lalanne@unifr.ch, if you wish to attend.
Semester topics
Interfaces supporting human co-creation with AI (Human-AI co-creativity)
This research shall explore the design and development of interactive interfaces that facilitate human-AI co-creativity, focusing on how artificial intelligence can augment, enhance, and collaborate with human users in creative processes.
Interfaces supporting human-AI co-creativity are essential for enhancing human creativity, making creative tools more accessible, and improving collaboration between humans and AI. Well-designed interfaces ensure users retain control, trust, and interpretability in AI-assisted creation while optimizing workflow efficiency in various creative fields. As AI becomes more integrated into creative industries, understanding how to design intuitive and effective co-creative systems is crucial for fostering innovation, inclusion, and new forms of artistic expression.
The student shall review existing work from the literature on Human-AI co-creativity, focusing on the notion of interface. Building on this review, the student shall propose a potential design and user experiment to explore how an interface, be it virtual, physical or hybrid, can impact Human-AI co-creativity.
References
[1] Muller, M., Kantosalo, A., Maher, M. L., Martin, C. P., & Walsh, G. (2024). GenAICHI 2024: Generative AI and HCI at CHI 2024. Conference on Human Factors in Computing Systems - Proceedings. https://doi.org/10.1145/3613905.3636294
[2] Cetinic, E., & She, J. (2022). Understanding and Creating Art with AI: Review and Outlook. ACM Transactions on Multimedia Computing, Communications and Applications, 18(2). https://doi.org/10.1145/3475799
[3] Ling, L., Chen, X., Wen, R., Li, T. J.-J., & LC, R. (2024). Sketchar: Supporting Character Design and Illustration Prototyping Using Generative AI. Proceedings of the ACM on Human-Computer Interaction, 8(CHI PLAY), 1–28. https://doi.org/10.1145/3677102
[4] Hosanagar, K., & Ahn, D. (2024). Designing Human and Generative AI Collaboration. https://arxiv.org/abs/2412.14199v1 (unpublished)
Reference person:
- Simon Ruffieux
Augmenting Novice Researchers within the Literature Review Process
Over the past years, the production of scientific contributions has experienced unprecedented growth [3]. Paradoxically, while we have never had such easy and rapid access to scientific knowledge, it has never been so difficult to identify relevant knowledge on any subject of interest [5,6]. In this context, several digital solutions based on generative artificial intelligence (GAI), such as Artirev, Consensus, Elicit, or Scite, aim to help researchers find relevant scientific documents and summarize a corpus of research [1,2,5]. Given the nascent state of the literature on using such tools to augment the researcher [4], little is known about novice profiles such as Master's or PhD students [7]. This work aims to explore how novice researchers interact with these tools, their benefits, and their limitations.
Keywords:
generative artificial intelligence; literature review; novice researchers
References:
- Francisco Bolaños, Angelo Salatino, Francesco Osborne, and Enrico Motta. 2024. Artificial intelligence for literature reviews: opportunities and challenges. Artificial Intelligence Review 57, 10: 259. https://doi.org/10.1007/s10462-024-10902-3
- Nicholas Fabiano, Arnav Gupta, Nishaant Bhambra, Brandon Luu, Stanley Wong, Muhammad Maaz, Jess G. Fiedorowicz, Andrew L. Smith, and Marco Solmi. 2024. How to optimize the systematic review process using AI tools. JCPP Advances 4, 2. https://doi.org/10.1002/jcv2.12234
- Mark A Hanson, Pablo Gómez Barreiro, Paolo Crosetto, and Dan Brockington. 2024. The strain on scientific publishing. Quantitative Science Studies 5, 4: 823–843. https://doi.org/10.1162/qss_a_00327
- Kien Nguyen-Trung, Alexander K Saeri, and Stefan Kaufman. 2024. Applying ChatGPT and AI-Powered Tools to Accelerate Evidence Reviews. Human Behavior and Emerging Technologies 2024, 1: 8815424. https://doi.org/10.1155/2024/8815424
- Fabian Tingelhoff, Micha Brugger, and Jan Marco Leimeister. 2024. A guide for structured literature reviews in business research: The state-of-the-art and how to integrate generative artificial intelligence. Journal of Information Technology: 02683962241304105. https://doi.org/10.1177/02683962241304105
- Gerit Wagner, Roman Lukyanenko, and Guy Paré. 2021. Artificial intelligence and the conduct of literature reviews. Journal of Information Technology 37, 2: 209–226. https://doi.org/10.1177/02683962211048201
- Jiyao Wang, Haolong Hu, Zuyuan Wang, Song Yan, Youyu Sheng, and Dengbo He. 2024. Evaluating Large Language Models on Academic Literature Understanding and Review: An Empirical Study among Early-stage Scholars. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems (CHI ’24). https://doi.org/10.1145/3613904.3641917
Reference person:
- Maximiliano Jeanneret Medina
Explaining Automatic Document Remediations to People with Diverse Visual Abilities
Advances in the field of artificial intelligence (AI) and deep learning (DL) offer new possibilities to improve the accessibility of digital documents. For instance, intelligent systems [4] performing automatic remediation of scientific documents have emerged [5]. However, while explainable AI is largely covered in scientific literature [6] and many AI risks are formalized [3], little is known about how the outputs of those AI-based computer systems are interpreted by people with diverse visual abilities [1,2]. Focusing on web-based user interfaces, this work aims to explore explainable AI and accessibility.
Keywords:
blind or low-vision people; accessibility; document intelligence; adaptation; remediation; explainability
References:
- Jeffrey P. Bigham, Irene Lin, and Saiph Savage. 2017. The effects of “not knowing what you don’t know” on web accessibility for blind web users. In ASSETS 2017 - Proceedings of the 19th International ACM SIGACCESS Conference on Computers and Accessibility, 101–109. https://doi.org/10.1145/3132525.3132533
- Sandra Fernando, Chiemela Ndukwe, Bal Virdee, and Ramzi Djemai. 2025. Image Recognition Tools for Blind and Visually Impaired Users: An Emphasis on the Design Considerations. ACM Trans. Access. Comput. 18, 1. https://doi.org/10.1145/3702208
- Peter Slattery, Alexander K Saeri, Emily A C Grundy, Jess Graham, Michael Noetel, Risto Uuk, James Dao, Soroush Pour, Stephen Casper, and Neil Thompson. 2024. The AI Risk Repository: A Comprehensive Meta-Review, Database, and Taxonomy of Risks From Artificial Intelligence. Retrieved from https://arxiv.org/abs/2408.12622
- Sarah Theres Völkel, Christina Schneegass, Malin Eiband, and Daniel Buschek. 2020. What is “intelligent” in intelligent user interfaces? a meta-analysis of 25 years of IUI. In Proceedings of the 25th International Conference on Intelligent User Interfaces (IUI ’20), 477–487. https://doi.org/10.1145/3377325.3377500
- Lucy Lu Wang, Isabel Cachola, Jonathan Bragg, Evie Yu Yen Cheng, Chelsea Haupt, Matt Latzke, Bailey Kuehl, Madeleine van Zuylen, Linda Wagner, and Daniel Weld. 2021. Improving the Accessibility of Scientific Documents: Current State, User Needs, and a System Solution to Enhance Scientific PDF Accessibility for Blind and Low Vision Users. Retrieved from http://arxiv.org/abs/2105.00076
- Ziming Wang, Changwu Huang, and Xin Yao. 2024. A Roadmap of Explainable Artificial Intelligence: Explain to Whom, When, What and How? ACM Trans. Auton. Adapt. Syst. 19, 4. https://doi.org/10.1145/3702004
Reference person:
- Maximiliano Jeanneret Medina
(Visual) Representations of Personal CO₂ Emissions: Enhancing Comprehension Without Reinforcing Misconceptions
Visualizing personal carbon footprints is a powerful tool for raising awareness of one’s impact on the climate. Such visualizations are commonly used to influence behavior, particularly in mobility-related emissions, where daily transportation choices can significantly affect environmental outcomes.
However, current approaches to CO₂ emissions visualization face two key challenges. First, presenting emissions as raw numerical values (e.g., grams or kilograms of CO₂) can be cognitively difficult for users to interpret, making it hard to grasp their real-world significance. Second, alternative metaphors—such as the number of trees required to absorb a given amount of CO₂—can unintentionally reinforce a "compensatory" mindset, suggesting that emissions can simply be offset rather than reduced (e.g., by planting trees), which is not a sustainable long-term solution.
This seminar topic aims to explore and evaluate existing metaphors for CO₂ emissions, identifying those that best support intuitive understanding without promoting misleading perceptions. Additionally, students will design an experiment to assess the effectiveness of different visual representations in terms of comprehension, emotional engagement, and potential for behavior change in the context of daily mobility choices.
Keywords: eco-feedback; carbon emission equivalences; CO₂ metaphors; green mobility; behavior change
References:
- Vikram Mohanty, Alexandre L. S. Filipowicz, Nayeli Suseth Bravo, Scott Carter, and David A. Shamma. 2023. Save A Tree or 6 kg of CO2? Understanding Effective Carbon Footprint Interventions for Eco-Friendly Vehicular Choices. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI '23). Association for Computing Machinery, New York, NY, USA, Article 238, 1–24. https://doi.org/10.1145/3544548.3580675
- Park, H., Sanguinetti, A., and Castillo Cortes, G. 2017. EcoTrips. In: Marcus, A., Wang, W. (eds) Design, User Experience, and Usability: Designing Pleasurable Experiences. DUXU 2017. Lecture Notes in Computer Science, vol. 10289. Springer, Cham. https://doi.org/10.1007/978-3-319-58637-3_5
- Angela Sanguinetti, Kelsea Dombrovski, and Suhaila Sikand. 2018. Information, timing, and display: A design-behavior framework for improving the effectiveness of eco-feedback. Energy Research & Social Science 39: 55–68. https://doi.org/10.1016/j.erss.2017.10.001
Reference person:
- Moreno Colombo
AR Interfaces for Querying and Visualizing Spatial Information to Support Low-Vision People in Indoor Environments
To help low-vision (LV) individuals navigate unfamiliar indoor environments, robotics-inspired solutions provide spatial maps and guidance (e.g., [1, 2]). However, these solutions typically distinguish only between free space and obstacles, offering little insight into objects users may want to interact with. To address this, we are developing a robotic system that combines 3D mapping, object detection, and additional sensing to create enriched spatial maps containing valuable information (e.g., tables, chairs, lighting, temperature). A key question is how to make this information accessible and meaningful for LV users.
Augmented reality (AR) headsets have been proposed for LV navigation (e.g., [3, 4]) due to their ability to enhance real-world elements. By overlaying spatial information in real time, AR can provide a more intuitive way for users to explore their environment. This seminar will focus on designing an AR-based interface for querying and visualizing spatial information and proposing a user experiment to evaluate how different representations of spatial data support spatial understanding and interaction with objects.
Keywords: indoor navigation; augmented reality; assistive technology; blind and low-vision people; accessibility
References
- Daisuke Sato, Uran Oh, Kakuya Naito, Hironobu Takagi, Kris Kitani, and Chieko Asakawa. 2017. NavCog3: An Evaluation of a Smartphone-Based Blind Indoor Navigation Assistant with Semantic Features in a Large-Scale Environment. In Proceedings of the 19th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS '17). Association for Computing Machinery, New York, NY, USA, 270–279. https://doi.org/10.1145/3132525.3132535
- Bing Li, Juan Pablo Munoz, Xuejian Rong, Qingtian Chen, Jizhong Xiao, Yingli Tian, Aries Arditi, and Mohammed Yousuf. 2019. Vision-Based Mobile Indoor Assistive Navigation Aid for Blind People. IEEE Transactions on Mobile Computing 18, 3 (March 2019), 702–714. https://doi.org/10.1109/TMC.2018.2842751
- Hein Min Htike, Tom H. Margrain, Yu-Kun Lai, and Parisa Eslambolchilar. 2021. Augmented Reality Glasses as an Orientation and Mobility Aid for People with Low Vision: a Feasibility Study of Experiences and Requirements. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI '21). Association for Computing Machinery, New York, NY, USA, Article 729, 1–15. https://doi.org/10.1145/3411764.3445327
- Yuhang Zhao, Elizabeth Kupferstein, Hathaitorn Rojnirun, Leah Findlater, and Shiri Azenkot. 2020. The Effectiveness of Visual and Audio Wayfinding Guidance on Smartglasses for People with Low Vision. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI '20). Association for Computing Machinery, New York, NY, USA, 1–14. https://doi.org/10.1145/3313831.3376516
Supervisor
- Yong-Joon Thoo
Boosts and Nudges to induce energy efficient behaviour
Buildings tend to consume more energy than simulations usually predict. This can undermine efforts to reduce energy consumption as part of energy transition goals. This work will explore how to help individuals make better use of existing infrastructure.
Research shows that changes in occupant behaviour can lead to significant reductions in energy consumption, and much research addresses how to induce such changes. "Nudges" have been identified as effective, but with limited effects over longer time periods. "Boosts" seem to have more lasting effects, and combined interventions thus appear potentially more effective in the context of limiting energy consumption in buildings. These types of interventions also raise ethical questions related to transparency, autonomy, and equity. Boosts and nudges have long been studied in other fields, but combining boosts and nudges with digital tools in the context of building use remains understudied.
This work aims to explore the different types of boosts and nudges used in the context of energy efficiency in buildings. The second part will consist of designing an experiment that uses HCI knowledge to combine a digital boost/nudge intervention, mindful of the challenges mentioned above. The work should finally help understand the advantages and disadvantages of using digital tools in this domain.
Keywords: behavior change; boosts and nudges; energy consumption in buildings; social justice
References:
- Caballero, N., & Ploner, M. (2022). Boosting or nudging energy consumption? The importance of cognitive aspects when adopting non-monetary interventions. Energy Research & Social Science, 91, 102734. https://doi.org/10.1016/j.erss.2022.102734
- Paunov, Y., & Grüne-Yanoff, T. (2023). Boosting vs. nudging sustainable energy consumption: a long-term comparative field test in a residential context. Behavioural Public Policy, 1-26. https://doi.org/10.1017/bpp.2023.30
- Igei, K., Kurokawa, H., Iseki, M., Kitsuki, A., Kurita, K., Managi, S., ... & Sakano, A. (2024). Synergistic effects of nudges and boosts in environmental education: Evidence from a field experiment. Ecological Economics, 224, 108279. https://doi.org/10.1016/j.ecolecon.2024.108279
- Cellina, F., Fraternali, P., Gonzalez, S. L. H., Novak, J., Gui, M., & Rizzoli, A. E. (2024). Significant but transient: The impact of an energy saving app targeting Swiss households. Applied Energy, 355, 122280. https://doi.org/10.1016/j.apenergy.2023.122280
Supervisor
- Marion Schoenenweid
Seminar schedule
(may be subject to changes)
- 17.02 14h00 1st session, in presence
- 03.03 (at the latest) communication of preferred topics (1st, 2nd and 3rd choice)
- 24.03 14h00 1st presentation (ideas, papers, structure) in presence
- 14.04 1st complete draft (no session)
- 28.04 14h00 2nd presentation in presence
- 19.05 2nd draft (no session)
- 26.05 14h00 final presentation in presence
- 09.06 final draft (no session)
Work to be done
Students will be asked to:
- Conduct an in-depth review of the state-of-the-art of their chosen topic
- Discuss their findings with their respective tutor
- Imagine a user experiment relevant to existing research
- Write a 4-page article summarizing their review work
- Present their article to the seminar participants
Interested computer science students are invited to participate by expressing their interest in two specific topics (preferred and secondary choices) among the topics presented above.
Learning Outcomes
At the end of the seminar, students will know how to conduct bibliographic research and will have practiced writing a scientific article. Further, they will build a general knowledge of the field of HCI and its current techniques and trends, as well as an in-depth understanding of their chosen topic.
Registration
Attend the introductory session and express your interest in both your chosen topics with a short text (3-5 sentences) to Dr Julien Nembrini, mentioning (1) your preferred topic, (2) your second-preferred topic. Each reference person will then contact you if you are chosen to participate in the seminar. Others will receive a notification email.
emails : julien.nembrini@unifr.ch and denis.lalanne@unifr.ch
Registration Process
- Participate in the first introductory session on Monday 17.02 14h00 in presence in PER21 A420 or online.
- Express your interest in specific topics as mentioned above, by 01.03 at the latest (first come, first served).
- Wait for confirmation of your participation.
Seminar Process
In addition to the first session, there will be only three plenary sessions during the semester, which consist of participants presenting their work. These sessions will be in presence and will take place on Mondays at 14h00. Reference persons will organize additional bilateral meetings during the semester.
The seminar process is as follows:
- Select state-of-the-art references relevant to the chosen thematic, synthesize these references to structure the research field, discuss and refine your approach with your topic reference person.
- Present the structure developed to the other participants of the seminar for the intermediate presentation.
- Synthesize the selected bibliographic references in a written 4-page article, authored in LaTeX following the ACM SIGCHI format.
- Define a user experiment within the chosen topic that is relevant to existing research.
- Discuss and refine your article with your topic reference person.
- Present your article in one of the final presentation sessions (end of semester)
Evaluation
The evaluation will be conducted by the topic reference person and the person responsible for the seminar.
The evaluation will be based on the quality of :
- Your written article: readability, argumentation, English
- Your review work: reference selection and field structure
- Your final presentation
- Your initial draft
- Your review of a colleague's first draft
While not every step described below is formally marked, each contributes towards producing a good final paper (75% of the grade).
Guidelines
First presentation
- Introduce your theme
- Present your selection of at least 3 papers most relevant to your theme, if possible different from the ones provided in the topic description. Don't forget to give author names, journal and date!
- Look in each paper for:
- Theme: How does the paper fit in the larger picture of your theme?
- Main contribution: how does the paper contribute to the field?
- Methods: what are the methods used? which kind of experimental setup, if any?
- Outlook: strengths / weaknesses, research opportunities, etc
- Presentation time 10'.
First draft
- Your draft should :
- present the research context of your topic with relevant publications
- state a research question and a hypothesis
- propose an experiment to test the hypothesis, and discuss its expected results
- Provide a complete draft. Enough content will allow reviewers to give meaningful feedback, although some parts may still be in bullet point style.
Rules:
- You MUST use the LaTeX template https://www.overleaf.com/latex/templates/association-for-computing-machinery-acm-sigchi-proceedings-template/nhjwrrczcyys
- Maximum 4 pages excluding references (it is acceptable to concentrate on some sections and leave one or two others in bullet point style)
- Polish your English grammar and orthography
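A minimal skeleton along the following lines can serve as a starting point. This is a sketch, not the template itself: the class option (`sigconf` here, since the acmart class has absorbed the older SIGCHI formats) and the bibliography file name are assumptions — use whatever the linked Overleaf template actually provides.

```latex
% Minimal sketch of a seminar article using the acmart class.
% Class options and file names may differ in the Overleaf template linked above.
\documentclass[sigconf]{acmart}

\title{Your Seminar Topic}
\author{Your Name}
\affiliation{\institution{University of Fribourg}\country{Switzerland}}

\begin{document}
\maketitle

\section{Introduction}
% Research context of your topic, with citations, e.g. \cite{somekey}.

\section{Proposed Experiment}
% Research question, hypothesis, experiment design, expected results.

\bibliographystyle{ACM-Reference-Format}
\bibliography{references} % a references.bib with your selected papers
\end{document}
```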
Review
- Rephrase the content in 1 short paragraph
- List the positive points
- List the negative points, provide examples
- Propose improvements
- Do not insist on orthography/grammar unless it is especially disturbing
Provide your review either as comments in the pdf or as a separate document.
Second presentation
Present your first draft.
- Your presentation should include:
- Introduction to the topic
- Structure of your research field (literature review outcome topic-by-topic instead of paper-by-paper)
- Research questions/hypotheses
- User experiment proposal and expected outcomes
- Include details (example papers, shared methods/approaches, experiments conducted by others, etc)
- Include full references (preferably not at the end)
- Presentation time 10'
Prepare also oral feedback on your fellow student's first draft (structure, strengths/weaknesses, English, style, etc.)
Second draft
Your second draft should combine feedback from your supervisor and from your fellow student.
Final presentation
Same guidelines as the second presentation, with comments addressed.
Final draft
Integrate all feedback received and finalize your paper. This final version will be published on the website (have a look here for previous examples).
Date: Spring 2025