Fall semester 2023

Human-Computer Interaction Seminar

This is the main seminar of the HCI group of the Human-IST Institute. The topics will change each semester and will be proposed by Human-IST members. Each participant will be supervised by one Human-IST member.

With the advent of autonomous cars, mobile devices and conversational agents, the question of how we interact with digital devices in everyday life is becoming ever more relevant. The aim of the HCI seminar is to look at this question across several specific contexts and to expose students to state-of-the-art research in Human-Computer Interaction.

Through a set of topics for the participants to choose from, the different research fields studied within the Human-IST institute will be reviewed and discussed. Each specific topic is proposed by a member of the Human-IST team who will be available during the semester to follow the work of the student.

Prof. Denis Lalanne

Dr Julien Nembrini (contact person)

The introductory lecture will be held on Thursday 28.09 at 10h45, in person, in PER21 A420. This session can also be attended online; the sessions during the semester will be held in person. Please contact Dr Julien Nembrini (julien.nembrini@unifr.ch), with a copy to Prof. Denis Lalanne (denis.lalanne@unifr.ch), if you wish to attend.

Semester topics

On-Site Augmented Reality Collaboration Tasks

Augmented reality (AR) technology enhances the real world with interactive virtual components. For instance, with headsets such as the HoloLens 2, users can interact with virtual objects via gestures or voice commands and make them interact with physical objects. Moreover, optical see-through (OST) devices enable users to maintain situational awareness and engage in face-to-face communication with people in their surroundings. This offers tremendous opportunities for collaboration between two AR users viewing shared or overlapping scenes/experiences with applications in design, education, training, healthcare, gaming, etc.

Although other forms of AR collaboration have been investigated in recent years, on-site AR collaboration has received limited attention in research. This topic aims to explore the potential of on-site AR collaboration and how the interaction between users could be evaluated. To achieve this, the student will: (1) review studies on collaborative AR tasks/games and associated evaluation methods; (2) conceptualize a "simple" on-site AR collaboration task (or adapt an existing one); and (3) design a user experiment.

References:

[1] T. Buckers, B. Gong, E. Eisemann, and S. Lukosch. 2018. VRabl: stimulating physical activities through a multiplayer augmented reality sports game. In Proceedings of the First Superhuman Sports Design Challenge: First International Symposium on Amplifying Capabilities and Competing in Mixed Realities (SHS '18). Association for Computing Machinery, New York, NY, USA, Article 1, 1–5. https://doi.org/10.1145/3210299.3210300

[2] Pidel, C., Ackermann, P. (2020). Collaboration in Virtual and Augmented Reality: A Systematic Overview. In: De Paolis, L., Bourdot, P. (eds) Augmented Reality, Virtual Reality, and Computer Graphics. AVR 2020. Lecture Notes in Computer Science, vol 12242. Springer, Cham. https://doi.org/10.1007/978-3-030-58465-8_10

[3] Lukosch, S., Billinghurst, M., Alem, L. et al. Collaboration in Augmented Reality. Comput Supported Coop Work 24, 515–525 (2015). https://doi.org/10.1007/s10606-015-9239-0

Reference person:

  • Yong-Joon Thoo

Benchmarking tests for AI-based Chatbots

Large language models and chatbots have been evolving rapidly since their inception about 10 years ago. Now, with the arrival and public success of ChatGPT, new models are released every couple of days. Benchmarking these models is not trivial, and many different dimensions can be taken into account. Various methodologies and tests have been proposed to measure performance in terms of question-answering accuracy. The student shall start by reviewing common benchmarking methodologies for chatbots and LLMs and continue by listing, classifying and describing the main tests currently employed to compare the accuracy of models. The student shall then come up with an innovative solution of their own to provide a useful characterization of model performance.

References:

[1] Przegalinska, A., Ciechanowski, L., Stroz, A., Gloor, P., & Mazurek, G. (2019). In bot we trust: A new methodology of chatbot performance measures. Business Horizons, 62(6), 785–797. https://doi.org/10.1016/J.BUSHOR.2019.08.005

[2] Chan, C.-M., Chen, W., Su, Y., Yu, J., Liu, Z., Fu, J., Xue, W., Kong, H., & Zhang, S. (2023). ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate. https://arxiv.org/abs/2308.07201v1

[3] Clark, C., Lee, K., Chang, M. W., Kwiatkowski, T., Collins, M., & Toutanova, K. (2019). BoolQ: Exploring the Surprising Difficulty of Natural Yes/No Questions. NAACL HLT 2019 - 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies - Proceedings of the Conference, 1, 2924–2936. https://arxiv.org/abs/1905.10044v1

Reference person:

  • Simon Ruffieux


Smartphone-based analysis of personal mobility: technological challenges and privacy concerns

Smartphones are packed with sensors that can be used for the most disparate tasks. One such task is the analysis of a person's mobility patterns and the modes of transportation they use. This information can be exploited to improve the user experience with contextual information, for example by activating contextual settings on the phone based on the transport mode in use, or to provide suggestions about how to modify one's patterns to promote healthy and sustainable mobility and help save money.

The choice of the sensors to be used for this type of analysis, however, is not trivial. Indeed, using more sensitive data (e.g., precise location) or a higher number of sensors can provide a more precise analysis of mobility, but raises issues for the user, such as security and privacy concerns. This work therefore investigates the set of sensors, or alternative methods, that provides a good compromise between accuracy and user satisfaction or willingness to adopt. This will be done through an analysis of existing literature from both the technical and social perspectives, as well as through the design of a user experiment.

References

Kaptchuk, G., Goldstein, D. G., Hargittai, E., Hofman, J. M., & Redmiles, E. M. (2022). How good is good enough? quantifying the impact of benefits, accuracy, and privacy on willingness to adopt covid-19 decision aids. Digital Threats: Research and Practice (DTRAP), 3(3), 1-18.

Hasan, R. A., Irshaid, H., Alhomaidat, F., Lee, S., & Oh, J. S. (2022). Transportation mode detection by using smartphones and smartwatches with machine learning. KSCE Journal of Civil Engineering, 26(8), 3578-3589.

Reference person:

  • Dr Moreno Colombo

Understanding Algorithmic Gatekeepers to Promote Epistemic Welfare

In today's evolving media landscape, the quantity of content has become overwhelming. Media content providers are compelled to use algorithmic gatekeepers to filter, rank and recommend content.

The concept of epistemic welfare is defined as the individuals’ right to know and be exposed to trustworthy, independent and diverse information while respecting individual rights to their own data.

The goal of this topic is to examine the influence of algorithmic gatekeepers on epistemic welfare within the digital media landscape.

References

Stephanidis, Constantine, et al. "Seven HCI grand challenges." International Journal of Human–Computer Interaction 35.14 (2019): 1229-1269. https://www.db-thueringen.de/servlets/MCRFileNodeServlet/dbt_derivate_00050401/1532-7590_35_2019_14_1229-1269.pdf

Li, Bo, et al. "Trustworthy ai: From principles to practices." ACM Computing Surveys 55.9 (2023): 1-46. https://arxiv.org/pdf/2110.01167.pdf

Reference person:

  • Dr Raphael Tuor

Exploring the Use of Deep Learning Document Object Detection for Accessibility

Recent advances in the field of deep learning (DL), such as automatic text generation from images [1], have significantly improved the accessibility of computer systems for people with visual impairments (PVI). Moreover, advances in document object detection [4] have made it possible to improve the accessibility of PDF documents (i.e., converted to document images) or scanned document images. In that sense, the detected labels can be used to transcode such documents into a more accessible version [2] or to enhance their navigability. However, DL-based systems can produce imperfect results [3], and little is known about how PVI deal with the results of such systems.

References

[1] Cole Gleason, Amy Pavel, Emma McCamey, Christina Low, Patrick Carrington, Kris M Kitani, and Jeffrey P Bigham. 2020. Twitter A11y: A Browser Extension to Make Twitter Images Accessible. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, 1–12. Retrieved from https://doi.org/10.1145/3313831.3376728

[2] Simon Harper, Sean Bechhofer, and Darren Lunn. 2006. SADIe: Transcoding based on CSS. In Proceedings of the Eighth International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS 2006), 259–260. https://doi.org/10.1145/1168987.1169044

[3] Haley MacLeod, Cynthia L Bennett, Meredith Ringel Morris, and Edward Cutrell. 2017. Understanding Blind People’s Experiences with Computer-Generated Captions of Social Media Images. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17), 5988–5999. https://doi.org/10.1145/3025453.3025814

[4] Zejiang Shen, Ruochen Zhang, Melissa Dell, Benjamin Charles Germain Lee, Jacob Carlson, and Weining Li. 2021. LayoutParser: A Unified Toolkit for Deep Learning Based Document Image Analysis. Lecture Notes in Computer Science, vol 12821, 131–146. https://doi.org/10.1007/978-3-030-86549-8_9

Reference person:

  • Maximiliano Jeanneret

Turing Test: A Review of the Turing Test and its Applications for Chatbots

The recent evolution of chatbot systems such as ChatGPT has exposed some of the limits of the Turing Test. In this work, the student is expected to review the historical background of the Turing Test and its most recent applications with chatbots. We notably expect the student to review the different variations of the Turing Test that have been proposed in research, and the contexts in which they have been used.

The student shall then propose a new variation or apply an existing historical variant with chatbots. We are particularly interested in variations that involve only a single human participant.

References:

[1] French, R. M. (2000). The turing test: The first 50 years. Trends in Cognitive Sciences, 4(3), 115–122. https://doi.org/10.1016/S1364-6613(00)01453-4

[2] Sejnowski, T. J. (2023). Large Language Models and the Reverse Turing Test. Neural Computation, 35(3), 309–342. https://doi.org/10.1162/NECO_A_01563

[3] Ishida, Y., & Chiba, R. (2017). Free Will and Turing Test with Multiple Agents: An Example of Chatbot Design. Procedia Computer Science, 112, 2506–2518. https://doi.org/10.1016/J.PROCS.2017.08.190

Reference person:

  • Simon Ruffieux

Assistive robotic environments

Assistive robotics for mobility-impaired persons has mainly focused on proximal help to the user, in the form of robotic wheelchairs and arms or, more recently, exoskeletons. Recent developments in mobile robotics make it possible to imagine teams of robots collaborating to provide assistance at a distance, with use cases such as space reconfiguration or object transport.

The student shall review the proposed systems, how they relate to users' needs, and how they have been tested. The aim is to propose a novel approach together with the design of a user experiment that addresses shortcomings of existing research.

References:

[1] A. Fallatah, B. Stoddard, M. Burnett, and H. Knight, “Towards user-centric robot furniture arrangement,” in IEEE International Conference on Robot & Human Interactive Communication (Ro-Man), pp. 1066–1073, 2021

[2] Verma, S., Gonthina, P., Hawks, Z., Nahar, D., Brooks, J. O., Walker, I. D., ... & Green, K. E. (2018, May). Design and evaluation of two robotic furnishings partnering with each other and their users to enable independent living. In Proceedings of the 12th EAI International Conference on Pervasive Computing Technologies for Healthcare (pp. 35-44).

[3] D. Zhao, C. Yang, T. Zhang, J. Yang, and Y. Hiroshi, “A Task Allocation Approach of Multi-Heterogeneous Robot System for Elderly Care,” Machines, vol. 10, p. 622, Aug. 2022.

Reference person:

  • Julien Nembrini

Seminar schedule

(may be subject to change)

  • 1st session 28.09 10h45
  • Communication of preferred topics (1st and 2nd choice) 01.10
  • 1st presentation (ideas, papers, structure) 19.10 10h45 (one week earlier than the previous schedule)
  • First complete draft 09.11 (no session)
  • Second presentation 16.11 10h45
  • Second draft 14.12 (no session)
  • Final presentation 21.12 10h45
  • Final draft 11.01.2024 (no session)

Work to be done

Students will be asked to:

  • Conduct an in-depth review of the state-of-the-art of their chosen topic
  • Discuss their findings with their respective tutor
  • Imagine a user experiment relevant to existing research
  • Write a 4-page article summarizing their review work
  • Present their article to the seminar participants

Interested computer science students are invited to participate by expressing their interest in two specific topics (preferred and secondary choice) among the topics presented above.

Learning Outcomes

At the end of the seminar, students will know how to conduct bibliographic research and will have practiced writing a scientific article. Furthermore, they will have built general knowledge of the field of HCI and its current techniques and trends, as well as an in-depth understanding of their chosen topic.

Registration

Attend the introductory session and express your interest in both of your chosen topics with a short text (3-5 sentences) sent to Dr Julien Nembrini, mentioning (1) your preferred topic and (2) your second-preferred topic. The reference person will then contact you if you are selected to participate in the seminar. Other applicants will receive a notification email.

Emails: julien.nembrini@unifr.ch and denis.lalanne@unifr.ch

Registration Process

  1. Attend the first introductory session on Thursday 28.09 at 10h45, in person in PER21 A420 or online.
  2. Express your interest in specific topics as described above before 05.10 (first come, first served).
  3. Wait for confirmation of your participation.

Seminar Process

In addition to the first session, there will be only three plenary sessions during the semester, which consist of participants presenting their work. These sessions will be held in person on Thursdays at 10h45. Reference persons will organize additional bilateral meetings during the semester.

The seminar process is as follows:

  1. Select state-of-the-art references relevant to the chosen topic, synthesize these references to structure the research field, and discuss and refine your approach with your topic reference person.
  2. Present the structure developed to the other participants of the seminar in the intermediate presentation.
  3. Synthesize the selected bibliographic references in a written 4-page article, authored in LaTeX following the ACM SIGCHI format.
  4. Define a user experiment within the chosen topic that is relevant with regard to existing research.
  5. Discuss and refine your article with your topic reference person.
  6. Present your article in one of the final presentation sessions (end of semester).

Evaluation

The evaluation will be conducted by the topic reference person and the person responsible for the seminar.

The evaluation will be based on the quality of:

  • Your written article: readability, argumentation, English
  • Your review work: reference selection and field structure
  • Your final presentation
  • Your initial draft
  • Your review of a colleague's first draft

While the individual steps described below are not formally marked, each contributes towards producing a good final paper (75% of the grade).

Guidelines (will be updated)

First presentation

  • Introduce your theme
  • Present your selection of at least 3 papers most relevant to your theme, if possible different from those provided in the topic description. Don't forget to give author names, journal and date!
  • Look in each paper for:
    • Theme: How does the paper fit in the larger picture of your theme?
    • Main contribution: how does the paper contribute to the field?
    • Strengths and weaknesses of the proposed approach
    • Outlook: what research opportunities does it open?
    • Methods: what methods are used? What kind of experimental setup, if any?
  • Presentation time 10'.

First draft

  • your draft should:
    • present a research field with relevant publications
    • state a research question and a hypothesis
    • propose an experiment to test the hypothesis, and discuss its expected results
  • provide an (almost) complete draft: enough content will allow reviewers to give meaningful feedback
  • polish your English grammar and spelling
  • you MUST use the LaTeX template https://www.overleaf.com/latex/templates/association-for-computing-machinery-acm-sigchi-proceedings-template/nhjwrrczcyys (a minimal skeleton is sketched after this list)
  • maximum 4 pages excluding references (do not hesitate to concentrate on some sections and leave others in bullet-point style)
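
For orientation, a minimal skeleton of such a paper could look as follows. This is only a sketch assuming the standard acmart class behind the Overleaf template; the title, author, email and bibliography file names are placeholders, and the template itself remains the authoritative reference for class options and metadata commands.

  \documentclass[sigconf]{acmart}

  % Placeholder metadata -- replace with your own.
  \title{Seminar Paper Title}
  \author{Student Name}
  \affiliation{%
    \institution{University of Fribourg}
    \city{Fribourg}
    \country{Switzerland}}
  \email{student.name@unifr.ch}

  \begin{document}

  % In acmart, the abstract is declared before \maketitle.
  \begin{abstract}
  Short summary of the review and of the proposed user experiment.
  \end{abstract}

  \maketitle

  \section{Introduction}
  % Introduce the topic, the research question and the hypothesis.

  \section{Related Work}
  % Synthesize the selected references and structure the research field.

  \section{Proposed User Experiment}
  % Describe the experiment and discuss its expected results.

  \bibliographystyle{ACM-Reference-Format}
  \bibliography{references} % assumes a references.bib file

  \end{document}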

Review

  • restate the paper's content in your own words in one short paragraph
  • list the positive points
  • list the negative points, provide examples
  • propose improvements
  • do not insist on spelling/grammar unless it is especially distracting

Provide your review either as comments in the PDF or as a separate document.

Second presentation

  • your presentation should include:
    • Introduction to the topic
    • Research questions/hypotheses
    • General structure of your research field (literature review outcome)
    • User experiment proposal
  • include details (example papers, shared methods/approaches, experiments conducted by others, etc.)
  • include full references (preferably not at the end)
  • present your review of your fellow student's first draft (see guidelines above)
  • experiment with your presentation style
  • Presentation time 8' + 2' (review)

Second draft

  • combine feedback from your supervisor and from your fellow student

Final presentation

  • your presentation should include:
    • Introduction to the topic
    • Research questions/hypotheses
    • General structure of your research field (literature review outcome)
    • User experiment proposal
  • include details (example papers, shared methods/approaches, experiments conducted by others, etc.)
  • include full references (preferably not at the end)
  • timing 10'

Final draft

  • integrate all feedback received and finalize your paper. This final version will be published on the website (have a look here for previous examples).

Date: Summer 2023