Spring semester 2024

Human-Computer Interaction Seminar

This is the main seminar of the HCI group of the Human-IST Institute. The topics will change each semester and will be proposed by Human-IST members. Each participant will be supervised by one Human-IST member.

With the advent of autonomous cars, mobile devices and conversational agents, the question of interaction with digital devices in everyday life is becoming ever more relevant. The aim of the HCI seminar is to look at this question across several specific contexts and expose students to state-of-the-art research in Human-Computer Interaction.

Through a set of topics for the participants to choose from, the different research fields studied within the Human-IST institute will be reviewed and discussed. Each specific topic is proposed by a member of the Human-IST team who will be available during the semester to follow the work of the student.

Prof. Denis Lalanne

Dr Julien Nembrini (contact person)

The introductory lecture will be held on Thursday 07.03 at 10h15 in presence in PER21 A420. This session can be attended online; the sessions during the semester are in presence. Please contact Dr Julien Nembrini (julien.nembrini@unifr.ch), with copy to Prof. Denis Lalanne (denis.lalanne@unifr.ch), if you wish to attend.

Semester topics

The Impact of Visual Cues in News Platforms on Users’ Epistemic Agency

Epistemic welfare can be defined as users’ right and ability to access and evaluate trustworthy, independent, and diverse information. The quantity of content in today's digital media landscape is overwhelming, and media content providers rely on algorithmic gatekeepers to filter, rank, and recommend content. Neglecting this aspect may compromise users’ epistemic agency, leaving them with access to information that may be biased, invalid, drawn from poor sources, overly negative in tone, or confined to their current beliefs (filter bubbles). Epistemic agency can be enhanced by providing users with visual cues, i.e. graphical elements that convey information, emotion, or intention.

The goal of this seminar topic is to explore how visual cues can enhance epistemic agency in news platforms and how this affects epistemic welfare.


Epistemic welfare, algorithmic gatekeepers, visual cues, news platforms, information visualisation


[1] Md Momen Bhuiyan, Hayden Whitley, Michael Horning, Sang Won Lee, and Tanushree Mitra. 2021. Designing Transparency Cues in Online News Platforms to Promote Trust: Journalists' & Consumers' Perspectives. Proc. ACM Hum.-Comput. Interact. 5, CSCW2, Article 395 (October 2021), 31 pages. https://doi.org/10.1145/3479539

[2] Constantine Stephanidis, Gavriel Salvendy, Margherita Antona, Jessie Y. C. Chen, Jianming Dong, Vincent G. Duffy, Xiaowen Fang, Cali Fidopiastis, Gino Fragomeni, Limin Paul Fu, Yinni Guo, Don Harris, Andri Ioannou, Kyeong-ah (Kate) Jeong, Shin’ichi Konomi, Heidi Krömker, Masaaki Kurosu, James R. Lewis, Aaron Marcus, Gabriele Meiselwitz, Abbas Moallem, Hirohiko Mori, Fiona Fui-Hoon Nah, Stavroula Ntoa, Pei-Luen Patrick Rau, Dylan Schmorrow, Keng Siau, Norbert Streitz, Wentao Wang, Sakae Yamamoto, Panayiotis Zaphiris & Jia Zhou. 2019. Seven HCI Grand Challenges. International Journal of Human–Computer Interaction, 35(14), 1229-1269. https://doi.org/10.1080/10447318.2019.1619259

[3] Bo Li, Peng Qi, Bo Liu, Shuai Di, Jingen Liu, Jiquan Pei, Jinfeng Yi, and Bowen Zhou. 2023. Trustworthy AI: From Principles to Practices. ACM Comput. Surv. 55, 9, Article 177 (September 2023), 46 pages. https://doi.org/10.1145/3555803

[4] "Epistemic Agency." Learning Discourses. Accessed February 8, 2024. https://learningdiscourses.com/subdiscourse/epistemic-agency/

Reference person:

  • Dr Raphaël Tuor

Exploring Shared Gaze Behaviors for Co-located Social Interactions and Collaboration in Mixed/Augmented Reality

Augmented reality (AR) technology enhances the real world with interactive virtual components. For instance, with headsets such as the HoloLens 2, users can interact with virtual objects via gestures or voice commands and make them interact with physical objects. Furthermore, optical see-through (OST) devices facilitate face-to-face (co-located) communication while providing users with contextual awareness of their surroundings, thus fostering opportunities for collaborative engagement.

To further facilitate collaboration between participants, we would like to investigate the potential of shared gaze behaviors for enhancing collaborative interactions. Whilst this has been explored to some extent in remote settings, often involving virtual reality (VR), this seminar topic aims to investigate the implications of shared gaze behaviors within co-located AR environments. To achieve this, the student is expected to: (1) review studies on shared gaze behaviors in mixed reality (AR and VR); (2) conceptualize/design an experiment to evaluate the impact of shared gaze behaviors in a co-located AR setting.


[1] Allison Jing, Kieran May, Brandon Matthews, Gun Lee, and Mark Billinghurst. 2022. The Impact of Sharing Gaze Behaviours in Collaborative Mixed Reality. Proc. ACM Hum.-Comput. Interact. 6, CSCW2, Article 463 (November 2022), 27 pages. https://doi.org/10.1145/3555564

[2] Markus Wieland, Michael Sedlmair, and Tonja-Katrin Machulla. 2023. VR, Gaze, and Visual Impairment: An Exploratory Study of the Perception of Eye Contact across different Sensory Modalities for People with Visual Impairments in Virtual Reality. In Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems (CHI EA '23). Association for Computing Machinery, New York, NY, USA, Article 313, 1–6. https://doi.org/10.1145/3544549.3585726

[3] Shi Qiu, Jun Hu, Ting Han, Hirotaka Osawa & Matthias Rauterberg (2020) Social Glasses: Simulating Interactive Gaze for Visually Impaired People in Face-to-Face Communication, International Journal of Human–Computer Interaction, 36:9, 839-855, DOI: 10.1080/10447318.2019.1696513

Reference person:

  • Yong-Joon Thoo

Dealing with Imperfect Document Adaptation

For several years, many schools and universities in North America and Europe have employed fully automated self-service software designed to enable students to convert educational material (e.g., PDF documents) into alternative formats [3]. However, these solutions are limited by the technology's inability to adequately process, identify, and describe the structural elements and content of STEM digital files and images. Recent advances in the field of deep learning (DL) offer new possibilities to improve the accessibility of digital documents for people with visual impairments (PVI). For instance, alt-text descriptions can be automatically generated [4], while DL-based document object detection can support the remediation of PDF documents (i.e., converted to document images) [5,6]. More precisely, objects detected in such documents can be used as input to a transcoding system [1]. However, DL-based systems can produce imperfect results, which highlights the problem of “not knowing what you don't know” [2]. The aim of this project is to study ways of informing PVI students about imperfect adaptations of documents.
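As a toy illustration of the “not knowing what you don't know” problem, a transcoding pipeline could separate detected document objects by confidence and explicitly surface the uncertain ones to the reader. The function, data layout, and threshold below are hypothetical assumptions for discussion, not part of any cited system:

```python
def flag_uncertain_regions(detections, threshold=0.6):
    """Split document-object detections, given as (label, confidence) pairs,
    into regions considered reliable and regions the reader should be
    warned about. The 0.6 threshold is an illustrative assumption."""
    reliable, uncertain = [], []
    for label, confidence in detections:
        (reliable if confidence >= threshold else uncertain).append((label, confidence))
    return reliable, uncertain

# Hypothetical detections for one page of a STEM document
page = [("heading", 0.97), ("paragraph", 0.91), ("formula", 0.42), ("figure", 0.55)]
reliable, uncertain = flag_uncertain_regions(page)
```

A user-facing adaptation could then render the reliable regions normally while announcing, for each uncertain region, that its content may be incomplete or wrong — which is precisely the kind of disclosure this project would study.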


blind or low-vision people; accessibility; document object detection; adaptation; remediation


  1. Chieko Asakawa, Hironobu Takagi, and Kentarou Fukuda. 2019. Transcoding. In Web Accessibility: A Foundation for Research, Yeliz Yesilada and Simon Harper (eds.). Springer London, London, 569–602. https://doi.org/10.1007/978-1-4471-7440-0_30
  2. Jeffrey P. Bigham, Irene Lin, and Saiph Savage. 2017. The effects of “not knowing what you don’t know” on web accessibility for blind web users. In ASSETS 2017 - Proceedings of the 19th International ACM SIGACCESS Conference on Computers and Accessibility, 101–109. https://doi.org/10.1145/3132525.3132533
  3. Lars Ballieu Christensen and Tanja Stevns. 2015. Universal access to alternate media. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 406–414. https://doi.org/10.1007/978-3-319-20678-3_39
  4. Cole Gleason, Amy Pavel, Emma McCamey, Christina Low, Patrick Carrington, Kris M Kitani, and Jeffrey P Bigham. 2020. Twitter A11y: A Browser Extension to Make Twitter Images Accessible. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, 1–12. Retrieved from https://doi.org/10.1145/3313831.3376728
  5. Lucy Lu Wang, Isabel Cachola, Jonathan Bragg, Evie Yu Yen Cheng, Chelsea Haupt, Matt Latzke, Bailey Kuehl, Madeleine van Zuylen, Linda Wagner, and Daniel Weld. 2021. SciA11y: Converting Scientific Papers to Accessible HTML. In ASSETS 2021 - 23rd International ACM SIGACCESS Conference on Computers and Accessibility. https://doi.org/10.1145/3441852.3476545
  6. Lucy Lu Wang, Isabel Cachola, Jonathan Bragg, Evie Yu Yen Cheng, Chelsea Haupt, Matt Latzke, Bailey Kuehl, Madeleine van Zuylen, Linda Wagner, and Daniel Weld. 2021. Improving the Accessibility of Scientific Documents: Current State, User Needs, and a System Solution to Enhance Scientific PDF Accessibility for Blind and Low Vision Users. Retrieved from http://arxiv.org/abs/2105.00076

Reference person:

  • Maximiliano Jeanneret Medina

Automated Shared Bibliography Management

Managing a bibliography at the level of a research group or an organization is a tedious task, even more so when multiple authors store different metadata for the same bibliographic item. Shared bibliographies can be managed with dedicated software, namely bibliographic or reference management software (RMS) [3]; Mendeley and Zotero [3] are commonly used. More generally, an RMS helps scholars organize their work, improve workflows and ultimately save time [3]. Such tools provide numerous features such as data import, collaboration, metadata extraction, and duplicate checking [3]. However, obtaining a clean bibliography, or maintaining several personalized bibliographies, still requires tedious manual work from the researcher.
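To make the duplicate-checking issue concrete, a shared bibliography could merge records that refer to the same item by normalizing their DOIs. The sketch below uses assumed field names and a naive “most complete record wins” heuristic; it does not describe the behavior of any particular RMS:

```python
def normalize_doi(doi):
    """Normalize a DOI string so that syntactic variants compare equal."""
    doi = doi.strip().lower()
    for prefix in ("https://doi.org/", "http://doi.org/", "doi:"):
        if doi.startswith(prefix):
            doi = doi[len(prefix):]
    return doi

def merge_entries(entries):
    """Keep one entry per normalized DOI (falling back to the title),
    preferring the record with the most non-empty metadata fields."""
    merged = {}
    for entry in entries:
        key = normalize_doi(entry.get("doi", "")) or entry.get("title", "").lower()
        best = merged.get(key)
        if best is None or sum(1 for v in entry.values() if v) > sum(1 for v in best.values() if v):
            merged[key] = entry
    return list(merged.values())

# Two authors stored the same item with different metadata
records = [
    {"doi": "https://doi.org/10.1087/20120404", "title": "ORCID", "year": ""},
    {"doi": "10.1087/20120404", "title": "ORCID: A system to uniquely identify researchers", "year": "2012"},
]
merged = merge_entries(records)
```

Real deduplication is harder (missing DOIs, preprints vs. published versions, field-level merging), which is part of what makes the topic interesting.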

Managing bibliographic metadata related to publications of a research group can benefit from an authority source and (partial) automation. For example, ORCID (Open Researcher & Contributor ID) provides a registry of persistent unique identifiers for researchers and scholars, owned and controlled by them [4]. Moreover, ORCID is integrated with CrossRef, thus offering complementary infrastructures for uniquely identifying researchers and enabling them to connect with their publications [1]. The management of bibliographic information by the researcher reduces the risk of errors (e.g., reference ambiguity [2]), while the automated link with CrossRef enables automatic population of the researcher’s bibliography [3, 5].

While numerous works have investigated the adoption and use of RMS [5], no study has investigated the benefits of automatic, personalized, and shared bibliography management. This project aims to elucidate user needs for managing personalized bibliographies (e.g., per category, keyword, or project) and to provide guidelines for evaluating such computing systems.


bibliography management; workflow; collaborative tools


[1] Haak, L.L. et al. 2012. ORCID: A system to uniquely identify researchers. Learned Publishing. 25, 4 (Oct. 2012), 259–264. DOI:https://doi.org/10.1087/20120404.

[2] Kim, J. and Owen-Smith, J. 2021. ORCID-linked labeled data for evaluating author name disambiguation at scale. Scientometrics. 126, 3 (Mar. 2021), 2057–2083. DOI:https://doi.org/10.1007/s11192-020-03826-6.

[3] Nilashi, M. et al. 2019. An interpretive structural modelling of the features influencing researchers’ selection of reference management software. Journal of Librarianship and Information Science. 51, 1 (Mar. 2019), 34–46. DOI:https://doi.org/10.1177/0961000616668961.

[4] ORCID: 2022. https://orcid.org/. Accessed: 2022-05-09.

[5] Rempel, H.G. and Mellinger, M. 2015. Bibliographic Management Tool Adoption and Use: A Qualitative Research Study Using the UTAUT Model. Reference & User Services Quarterly. 54, 4 (2015), 43–53. DOI:https://doi.org/10.2307/refuseserq.54.4.43.

Reference person:

  • Maximiliano Jeanneret Medina

Can a Large Language Model produce a new International Auxiliary Language?

Large Language Models (LLMs) and chatbots have evolved rapidly since their inception about 10 years ago. Now, with the arrival and public success of ChatGPT, new models are released every couple of days. These models already achieve outstanding results in many domains: they are able to search and aggregate complex information, translate languages, perform computations, and even solve complex problems. An International Auxiliary Language (IAL) is a constructed language designed to facilitate communication between speakers of different native languages. Such languages, Esperanto for example, are usually inspired by multiple languages. In the context of this seminar, the student will investigate the notions of LLMs and auxiliary languages and concretely investigate the feasibility of using LLMs to produce a new IAL through a documented methodology.


[1] Ye, S., Kim, D., Kim, S., Hwang, H., Kim, S., Jo, Y., Thorne, J., Kim, J., & Seo, M. (2023). FLASK: Fine-grained Language Model Evaluation based on Alignment Skill Sets. https://arxiv.org/abs/2307.10928v1 (not peer-reviewed)

[2] Hadi, M. U., Al-Tashi, Q., Qureshi, R., Shah, A., Muneer, A., Irfan, M., Zafar, A., Shaikh, M. B., Akhtar, N., Wu, J., & Mirjalili, S. (2023). A Survey on Large Language Models: Applications, Challenges, Limitations, and Practical Usage. https://doi.org/10.36227/techrxiv.23589741.v1 (not peer-reviewed)

[3] Pereltsvaig, A. (2017). Esperanto linguistics. Language Problems and Language Planning, 41(2), 168–191. https://doi.org/10.1075/lplp.41.2.06per

[4] Gentry A. (2017). The Secret of International Auxiliary Languages. Medium article https://medium.com/salvo-faraday/the-secret-of-international-auxiliary-languages-7d45a1f1e312

Reference person:

  • Simon Ruffieux

Smartphone-based analysis of personal mobility: technological challenges and privacy concerns

Smartphones are packed with sensors that can be used for the most disparate tasks. One such task is the analysis of a person's mobility patterns and modes of transportation. This information can be exploited to improve the user experience with contextual information, for example by activating contextual settings on the phone based on the modality in use, or to provide suggestions on how to modify one's patterns to promote healthy and sustainable mobility and save money.

The choice of sensors for this type of analysis, however, is not trivial. Using more sensitive data (e.g., precise location) or a higher number of sensors can provide a more precise analysis of mobility, but raises issues for the user, such as security and privacy concerns. This work therefore investigates which set of sensors, or alternative methods, offers a good compromise between accuracy and user satisfaction or willingness to adopt. This will be done through an analysis of the existing literature from both technical and social perspectives, as well as through the design of a user experiment.
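To illustrate the trade-off, a deliberately coarse classifier could rely only on aggregate speed and accelerometer variance instead of precise location traces. The features and thresholds below are illustrative assumptions, not a validated model from the cited literature:

```python
import statistics

def classify_mode(accel_magnitudes, speeds_kmh):
    """Guess a transport mode from accelerometer magnitude samples (m/s^2)
    and coarse speed samples (km/h). Thresholds are illustrative only."""
    jitter = statistics.stdev(accel_magnitudes)  # body movement raises variance
    speed = statistics.mean(speeds_kmh)
    if speed >= 25:
        return "motorized"
    if jitter > 1.5:
        return "walking" if speed < 7 else "cycling"
    return "still"
```

Such a heuristic needs only coarse, less privacy-sensitive inputs, but it would clearly misclassify many situations (e.g., a slow tram vs. cycling); quantifying how much accuracy users are willing to trade for privacy is exactly the question this topic raises.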


[1] Kaptchuk, G., Goldstein, D. G., Hargittai, E., Hofman, J. M., & Redmiles, E. M. (2022). How good is good enough? Quantifying the impact of benefits, accuracy, and privacy on willingness to adopt COVID-19 decision aids. Digital Threats: Research and Practice (DTRAP), 3(3), 1-18.

[2] Hasan, R. A., Irshaid, H., Alhomaidat, F., Lee, S., & Oh, J. S. (2022). Transportation mode detection by using smartphones and smartwatches with machine learning. KSCE Journal of Civil Engineering, 26(8), 3578-3589.

Reference person:

  • Dr Moreno Colombo

Seminar schedule

(may be subject to changes)

  • 07.03 10h15 1st session, in presence
  • 11.03 Communication of preferred topics (1st and 2nd choice)
  • 28.03 10h15 1st presentation (ideas, papers, structure) in presence
  • 18.04 1st complete draft (no session)
  • 25.04 10h15 2nd presentation in presence
  • 16.05 2nd draft (no session)
  • 23.05 10h15 final presentation in presence
  • 06.06 final draft (no session)

Work to be done

Students will be asked to:

  • Conduct an in-depth review of the state-of-the-art of their chosen topic
  • Discuss their findings with their respective tutor
  • Imagine a user experiment relevant to existing research
  • Write a 4-page article summarizing their review work
  • Present their article to the seminar participants

Interested computer science students are invited to participate by expressing their interest in two specific topics (preferred and secondary choice) among the topics presented above.

Learning Outcomes

At the end of the seminar, students will know how to conduct bibliographic research and will have practiced writing a scientific article. Further, they will build general knowledge of the field of HCI and its current techniques and trends, as well as an in-depth understanding of their chosen topic.


Attend the introductory session and express your interest in both of your chosen topics with a short text (3-5 sentences) to Dr Julien Nembrini, mentioning (1) your preferred topic and (2) your second-preferred topic. Each reference person will then contact you if you are chosen to participate in the seminar. Others will receive a notification email.

emails: julien.nembrini@unifr.ch and denis.lalanne@unifr.ch

Registration Process

  1. Participate in the first introductory session on Thursday 07.03 at 10h15, in presence in PER21 A420 or online.
  2. Express your interest in specific topics as mentioned above before 11.03 (first come, first served).
  3. Wait for confirmation of your participation.

Seminar Process

In addition to the first session, there will be only three plenary sessions during the semester, which consist of participants presenting their work. These sessions will be in presence and will take place on Thursdays at 10h15. Reference persons will organize additional bilateral meetings during the semester.

The seminar process is as follows:

  1. Select state-of-the-art references relevant to the chosen thematic, synthesize these references to structure the research field, discuss and refine your approach with your topic reference person.
  2. Present the structure developed to the other participants of the seminar for the intermediate presentation.
  3. Synthesize the selected bibliographic references in a written 4-page article, authored in LaTeX following the ACM SIGCHI format.
  4. Define a user experiment within the chosen topic that is relevant with regard to existing research.
  5. Discuss and refine your article with your topic reference person.
  6. Present your article in one of the final presentation sessions (end of semester).


The evaluation will be conducted by the topic reference person and the person responsible for the seminar.

The evaluation will be based on the quality of:

  • Your written article: readability, argumentation, English
  • Your review work: reference selection and field structure
  • Your final presentation
  • Your initial draft
  • Your review of a colleague's first draft

While each step described below is not formally marked, each contributes towards producing a good final paper (75% of the grade).


First presentation

  • Introduce your theme
  • Present your selection of at least 3 papers most relevant to your theme, if possible different from the ones provided in the topic description. Don't forget to give author names, journal, and date!
  • Look in each paper for:
    • Theme: How does the paper fit in the larger picture of your theme?
    • Main contribution: how does the paper contribute to the field?
    • Methods: what are the methods used? which kind of experimental setup, if any?
    • Outlook: strengths / weaknesses, research opportunities, etc
  • Presentation time 10'.

First draft

  • Your draft should:
    • present the research context of your topic with relevant publications
    • state a research question and a hypothesis
    • propose an experiment to test the hypothesis, and discuss its expected results
  • Provide a complete draft. Enough content will allow reviewers to give meaningful feedback, although some parts may still be in bullet point style.


  1. You MUST use the LaTeX template https://www.overleaf.com/latex/templates/association-for-computing-machinery-acm-sigchi-proceedings-template/nhjwrrczcyys
  2. Maximum 4 pages excluding references (it is acceptable to concentrate on some sections and leave one or two others in bullet-point style)
  3. Polish your English grammar and orthography


First draft review

  • Rephrase the content in 1 short paragraph
  • List the positive points
  • List the negative points, provide examples
  • Propose improvements
  • Do not insist on orthography/grammar unless it is especially disturbing

Provide your review either as comments in the pdf or as a separate document.

Second presentation

Present your first draft.

  • Your presentation should include:
    • Introduction to the topic
    • Structure of your research field (literature review outcome topic-by-topic instead of paper-by-paper)
    • Research questions/hypotheses
    • User experiment proposal and expected outcomes
  • Include details (example papers, shared methods/approaches, experiments conducted by others, etc)
  • Include full references (preferably not at the end)
  • Presentation time 10'

Also prepare oral feedback on your fellow student's first draft (structure, strengths/weaknesses, English, style, etc.)

Second draft

Your second draft should combine feedback from your supervisor and from your fellow student.

Final presentation

Same guidelines as the second presentation, with comments addressed.

Final draft

Integrate all feedback received and finalize your paper. This final version will be published on the website (have a look here for previous examples).
