Human-Computer Interaction Seminar - Fall semester 2025

This is the main seminar of the HCI group of the Human-IST Institute. The topics will change each semester and will be proposed by Human-IST members. Each participant will be supervised by one Human-IST member.

With the advent of autonomous cars, mobile devices and conversational agents, the question of interaction with digital devices in everyday life is becoming ever more relevant. The aim of the HCI seminar is to examine this question across several specific contexts and expose students to state-of-the-art research in Human-Computer Interaction.

Through a set of topics for the participants to choose from, the different research fields studied within the Human-IST institute will be reviewed and discussed. Each specific topic is proposed by a member of the Human-IST team who will be available during the semester to follow the work of the student.

Prof. Denis Lalanne

Dr Julien Nembrini (contact person)

The introductory lecture will be held on Monday 22.09 at 13h in presence in PER21 A420. This session can be attended online; the sessions during the semester are in presence. Please contact Dr Julien Nembrini (julien.nembrini@unifr.ch), with copy to Prof. Denis Lalanne (denis.lalanne@unifr.ch), if you wish to attend.

Semester topics

MORE TO COME ...

Generative AI for Web Layout Design: Integrating Optimization, Personalization, and Human–AI Collaboration

The rapid evolution of digital platforms has increased the need for optimized, adaptable, and user-centered web layouts. Traditional manual design approaches struggle with scalability, personalization, and responsiveness, particularly when balancing multiple objectives such as alignment, usability, aesthetics, and real-time adaptability. Recent studies highlight different approaches to this challenge. GRIDS demonstrates the potential of integer programming for efficient grid layout generation [1], while Layout as a Service (LaaS) shows how optimization enables scalable, automated personalization [2]. In addition, studies on Human–AI co-design emphasize the importance of dynamic responsiveness and collaborative creativity in web layout generation [3].

This research aims to conduct a comprehensive review of these contributions, alongside broader literature in computational design, generative AI, and layout optimization. The review will identify recurring design criteria such as spatial coherence, personalization, adaptability, and Human–AI interaction. Based on this review, the aim is to propose a prototype that takes into consideration the identified design criteria for web layouts. This prototype would be a generative AI tool capable of automatically creating and evolving layouts while integrating optimization-based algorithms. It would (1) maintain structural balance and alignment, (2) provide personalization, and (3) adapt dynamically to changing content and context. Additionally, it would support co-design, enabling designers to interactively guide and refine the generated layouts.

References

[1] Niraj Ramesh Dayama, Kashyap Todi, Taru Saarelainen, and Antti Oulasvirta. 2020. GRIDS: Interactive Layout Design with Integer Programming. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI '20). Association for Computing Machinery, New York, NY, USA, 1–13. https://doi.org/10.1145/3313831.3376553

[2] Laine, M., Nakajima, A., Dayama, N., Oulasvirta, A. (2020). Layout as a Service (LaaS): A Service Platform for Self-Optimizing Web Layouts. In: Bielikova, M., Mikkonen, T., Pautasso, C. (eds) Web Engineering. ICWE 2020. Lecture Notes in Computer Science, vol 12128. Springer, Cham. https://doi.org/10.1007/978-3-030-50578-3_2

[3] Jiayi Zhou, Renzhong Li, Junxiu Tang, Tan Tang, Haotian Li, Weiwei Cui, and Yingcai Wu. 2024. Understanding Nonlinear Collaboration between Human and AI Agents: A Co-design Framework for Creative Design. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems (CHI '24). Association for Computing Machinery, New York, NY, USA, Article 170, 1–16. https://doi.org/10.1145/3613904.3642812

Reference person:

  • Ilyes Kadri

HCI Skills Evolution with GenAI Advances (GenAI HCI Deskilling)

This research shall explore how Generative AI (GenAI) is transforming the skills required in Human-Computer Interaction (HCI). As GenAI technologies become more prevalent, they are reshaping the way users interact with systems, potentially leading to a deskilling effect where traditional HCI skills may become less relevant or obsolete. This study aims to identify which HCI skills are being affected by GenAI advancements, how these changes impact user experience and system design, and what new skills are emerging as essential in this evolving landscape.

The student shall review existing work from the literature on Human-AI co-creativity, focusing on the notions of interface, design, and human-computer interaction. Finally, the student shall propose a potential design and user experiment to explore how GenAI reshapes skills and practices for UX designers.

References

[1] Shukla, P., Bui, P., Levy, S.S., Kowalski, M., Baigelenov, A., Parsons, P. (2025). De-skilling, cognitive offloading, and misplaced responsibilities: Potential ironies of AI-assisted design. In Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (pp. 1–7). New York, NY, USA: Association for Computing Machinery. Retrieved from https://dl.acm.org/doi/10.1145/3706599.3719931

[2] Li, J., Cao, H., Lin, L., Hou, Y., Zhu, R., El Ali, A. (2024). User experience design professionals' perceptions of generative artificial intelligence. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems. New York, NY, USA: Association for Computing Machinery. Retrieved from https://doi.org/10.1145/3613904.3642114

Reference person:

  • Simon Ruffieux

Characterize and Visualize Learner-AI Collaboration

Generative Artificial Intelligence (GAI) is increasingly present in our daily lives, including in education. While it offers opportunities to enhance learning through continuous and personalized feedback, it also raises concerns related to learner autonomy, engagement, and skill development. Despite its rapid adoption, the way learners use AI, i.e. learner–AI collaboration, remains insufficiently described, as does its impact on learning outcomes. Automatically characterizing this collaboration could provide valuable insights into current GAI usage in educational contexts. Going further, making this information visible to learners could help them become aware of their own behavior and adapt it to improve their learning. This requires the design of an innovative interface that integrates dedicated visualizations, making learner–AI collaboration more transparent and fostering engagement that supports successful learning.

This project focuses on two interconnected objectives: (1) exploring how machine learning models can be used to automatically characterize learner–AI collaboration from interaction traces, and (2) designing an interface that incorporates interactive visualizations of this collaboration. As part of the seminar, the student will conduct a literature review on learner–AI collaboration and related interface design. Finally, the student will propose a potential user study aimed at investigating how interactive visualizations can promote more reflective, informed, and active use of GAI in educational contexts.

Keywords: Generative Artificial Intelligence; Digital Learning; Learner-AI collaboration; Learning Analytics; Interactive interface

References

[1] Kasneci, E., Seßler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F., ... & Kasneci, G. (2023). ChatGPT for good? On opportunities and challenges of large language models for education. Learning and Individual Differences, 103, 102274. Retrieved from https://doi.org/10.1016/j.lindif.2023.102274

[2] Cheng, Y., Lyons, K., Chen, G., Gašević, D., & Swiecki, Z. (2024, March). Evidence-centered assessment for writing with generative AI. In Proceedings of the 14th learning analytics and knowledge conference (pp. 178-188). Retrieved from https://dl.acm.org/doi/10.1145/3636555.3636866

[3] Lee, M., Gero, K. I., Chung, J. J. Y., Shum, S. B., Raheja, V., Shen, H., ... & Siangliulue, P. (2024, May). A design space for intelligent and interactive writing assistants. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems (pp. 1-35). Retrieved from https://doi.org/10.1145/3613904.3642697

[4] Hoque, M. N., Mashiat, T., Ghai, B., Shelton, C. D., Chevalier, F., Kraus, K., & Elmqvist, N. (2024, May). The HaLLMark effect: Supporting provenance and transparent use of large language models in writing with interactive visualization. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems (pp. 1-15). Retrieved from https://doi.org/10.1145/3613904.3641895

Reference person:

  • Célina Treuillier

Benchmarking Toolkits for Developing Multimodal User Interfaces

Multimodal interaction (MMI) has been extensively studied in the field of Human-Computer Interaction, demonstrating significant benefits across a variety of application domains [2,5] such as accessibility [4,7]. In recent years, HCI scholars and practitioners have developed several toolkits such as Geno [8], Multisensor-Pipeline [1], Ummi [9], or ReactGenie [12] for developing multimodal user interfaces (MUI). Moreover, recent developments in artificial intelligence open new directions for interaction [3,6,10,11]. In this context, this work will focus on benchmarking such toolkits. Criteria of comparison may include the ease of integrating an additional modality recognizer or the effectiveness of support for modality fusion.

Keywords: multimodal user interface; benchmarking; artificial intelligence

References

  1. Michael Barz, Omair Shahzad Bhatti, Bengt Lüers, Alexander Prange, and Daniel Sonntag. 2021. Multisensor-Pipeline: A Lightweight, Flexible, and Extensible Framework for Building Multimodal-Multisensor Interfaces. In Companion Publication of the 2021 International Conference on Multimodal Interaction (ICMI ’21 Companion), 13–18. https://doi.org/10.1145/3461615.3485432

  2. Bruno Dumas, Denis Lalanne, and Sharon Oviatt. 2009. Multimodal interfaces: A survey of principles, models and frameworks. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) 5440 LNCS, January: 3–26. https://doi.org/10.1007/978-3-642-00437-7_1

  3. Jiewen Hu, Leena Mathur, Paul Pu Liang, and Louis-Philippe Morency. 2025. OpenFace 3.0: A Lightweight Multitask System for Comprehensive Facial Behavior Analysis. Retrieved from https://arxiv.org/abs/2506.02891

  4. Maximiliano Jeanneret Medina, Yong-Joon Thoo, Cédric Baudet, Jon E Froehlich, Nicolas Ruffieux, and Denis Lalanne. 2025. Understanding Research Themes and Interactions at Scale within Blind and Low-vision Research in ACM and IEEE. ACM Trans. Access. Comput. 18, 2. https://doi.org/10.1145/3726531

  5. Denis Lalanne, Laurence Nigay, Philippe Palanque, Peter Robinson, Jean Vanderdonckt, and Jean-François Ladry. 2009. Fusion engines for multimodal input: a survey. In Proceedings of the 2009 International Conference on Multimodal Interfaces (ICMI-MLMI ’09), 153–160. https://doi.org/10.1145/1647314.1647343

  6. Jaewook Lee, Jun Wang, Elizabeth Brown, Liam Chu, Sebastian S Rodriguez, and Jon E Froehlich. 2024. GazePointAR: A Context-Aware Multimodal Voice Assistant for Pronoun Disambiguation in Wearable Augmented Reality. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems (CHI ’24). https://doi.org/10.1145/3613904.3642230

  7. Zahra J Muhsin, Rami Qahwaji, Faruque Ghanchi, and Majid Al-Taee. 2024. Review of substitutive assistive tools and technologies for people with visual impairments: recent advancements and prospects. Journal on Multimodal User Interfaces 18, 1: 135–156. https://doi.org/10.1007/s12193-023-00427-4

  8. Ritam Jyoti Sarmah, Yunpeng Ding, Di Wang, Cheuk Yin Phipson Lee, Toby Jia-Jun Li, and Xiang “Anthony” Chen. 2020. Geno: A Developer Tool for Authoring Multimodal Interaction on Existing Web Applications. In Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology (UIST ’20), 1169–1181. https://doi.org/10.1145/3379337.3415848

  9. Thibaut Septon, Santiago Villarreal-Narvaez, Xavier Devroey, and Bruno Dumas. 2024. Exploiting Semantic Search and Object-Oriented Programming to Ease Multimodal Interface Development. In Companion Proceedings of the 16th ACM SIGCHI Symposium on Engineering Interactive Computing Systems (EICS ’24 Companion), 74–80. https://doi.org/10.1145/3660515.3664244

  10. Anoop K Sinha, Chinmay Kulkarni, and Alex Olwal. 2024. Levels of Multimodal Interaction. In Companion Proceedings of the 26th International Conference on Multimodal Interaction (ICMI Companion ’24), 51–55. https://doi.org/10.1145/3686215.3690153

  11. Radu-Daniel Vatavu. 2024. AI as Modality in Human Augmentation: Toward New Forms of Multimodal Interaction with AI-Embodied Modalities. In Proceedings of the 26th International Conference on Multimodal Interaction (ICMI ’24), 591–595. https://doi.org/10.1145/3678957.3678958

  12. Jackie (Junrui) Yang, Yingtian Shi, Yuhan Zhang, Karina Li, Daniel Wan Rosli, Anisha Jain, Shuning Zhang, Tianshi Li, James A Landay, and Monica S Lam. 2024. ReactGenie: A Development Framework for Complex Multimodal Interactions Using Large Language Models. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems (CHI ’24). https://doi.org/10.1145/3613904.3642517

Reference person:

  • Maximiliano Jeanneret Medina

Evaluating Wearable Devices for Long-Term Field Monitoring of Heat Stress: Validity, Usability, and Trade-offs

Heatwaves are becoming more frequent with climate change, but their health effects are still not fully understood. To study them, researchers need to collect physiological data continuously over long periods, covering both heatwaves and cooler days. Wearable sensors make this possible, but different devices vary widely in the data they provide, how open their data access is, and how willing people are to wear them for months.

Reviews show that wearables have been used both in occupational settings and in broader climate-related health research. However, practical issues remain: sensor accuracy, battery life, comfort, and user acceptance all strongly affect data quality. Designing a successful study requires finding the right trade-off between accuracy and usability.

In this seminar topic, students will review the literature on which physiological parameters are most important for modeling heat stress, and analyze the devices that have been used in past research. They will compare their advantages and disadvantages in terms of accuracy, completeness, usability, and acceptance. The goal is to identify devices that offer the best balance for use in a multi-month heatwave study, ensuring reliable and complete data. Finally, students will outline a protocol for an experiment that could compare several candidate devices.

Keywords: heat stress monitoring; wearables; longitudinal studies; user acceptance

References:

  1. Cannady, R., Warner, C., Yoder, A., Miller, J., Crosby, K., Elswick, D., & Kintziger, K. W. (2024). The implications of real-time and wearable technology use for occupational heat stress: A scoping review. Safety Science, 177.

  2. Koch, M., Matzke, I., Huhn, S., Gunga, H. C., Maggioni, M. A., Munga, S., Obor, D., Sié, A., Boudo, V., Bunker, A., Dambach, P., Bärnighausen, T., & Barteit, S. (2022). Wearables for Measuring Health Effects of Climate Change-Induced Weather Extremes: Scoping Review. JMIR mHealth and uHealth, 10(9). https://doi.org/10.2196/39532

  3. Cheong, S. M., & Gaynanova, I. (2024). Sensing the impact of extreme heat on physical activity and sleep. Digital Health, 10.

  4. Keogh, A., Dorn, J. F., Walsh, L., Calvo, F., & Caulfield, B. (2020). Comparing the Usability and Acceptability of Wearable Sensors Among Older Irish Adults in a Real-World Context: Observational Study. JMIR mHealth and uHealth, 8(4). https://doi.org/10.2196/15704

Reference person:

  • Moreno Colombo

Interfaces for Understanding and Engaging in Local Energy Communities

With the new federal law for a secure electricity supply coming into force in January 2026 in Switzerland [4], residents will be allowed to form Local Electricity Communities to produce, sell and share renewable energy (electricity). This should help reduce energy costs and encourage local synergies. However, this raises several challenges: How can occupants know when it makes sense to consume or share the energy they produce? How can the community communicate and identify opportunities to realign energy production and consumption? Energy-consumption visualization at the individual level, and its impact on consumption, is well studied; how to present information about electricity shared within a community, however, remains to be explored.

For this subject, the student will explore how to communicate shared energy dynamics clearly and transparently to community members, fostering trust and awareness. The student will first review the relevant literature to understand how people in a local electricity community can engage in it, see their production and consumption, and decide how they would share energy, as well as critical perspectives on techno-solutionist approaches [2]. They will then propose an experiment to test a solution and propose ways to evaluate it. The student is encouraged to think about fair solutions that might help to reduce energy consumption.

References:

  1. Georgia Panagiotidou, Enrico Costanza, Kyrill Potapov, Sonia Nkatha, Michael J. Fell, Farhan Samanani, and Hannah Knox. 2024. SolarClub: Supporting Renewable Energy Communities through an Interactive Coordination System. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems (CHI '24). Association for Computing Machinery, New York, NY, USA, Article 490, 1–19. https://doi.org/10.1145/3613904.3642449

  2. Victor Vadmand Jensen, Kristina Laursen, Rikke Hagensby Jensen, and Rachel Charlotte Smith. 2024. Imagining Sustainable Energy Communities: Design Narratives of Future Digital Technologies, Sites, and Participation. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems (CHI '24). Association for Computing Machinery, New York, NY, USA, Article 223, 1–17. https://doi.org/10.1145/3613904.3642609

  3. Victor Vadmand Jensen, Emmelie Christensen, Nicolai Brodersen Hansen, and Rikke Hagensby Jensen. 2024. A Year in Energy: Imagining Energy Community Participation with a Collaborative Design Fiction. In Proceedings of the 13th Nordic Conference on Human-Computer Interaction (NordiCHI '24). Association for Computing Machinery, New York, NY, USA, Article 22, 1–15. https://doi.org/10.1145/3679318.3685355

  4. For information, the explanatory report on the law (DE, FR, IT): https://www.newsd.admin.ch/newsd/message/attachments/91803.pdf and https://www.swissmint.ch/fr/nsb?id=104172

Reference person:

  • Marion Schoenenweid

Transparent Tools for Music Collaboration: Authorship and Project History in Music Production

Asynchronous collaboration is now common in music production. Producers each have their own working styles, with activity spread across different setups and time zones. This often results in fragmented project version histories, missing stems or metadata, unclear credits, and uncertainty about whether human- or AI-generated material has been used with consent. Such hurdles can slow down production, compromise the creation of new ideas, and make producers more likely to disengage from collaborative work.

This seminar invites students to investigate which tools can make collaboration more transparent without interrupting the creative flow. The focus is on authorship timelines that show who contributed what and when, lightweight credit and consent markers that remain attached to music project files, and visual cues that communicate the rhythm of collaboration and versioning information to the stakeholders involved in these projects. Such tools should support producers in understanding both their own creative processes and their collaborators’ working methods, intentions, values, and contributions, while respecting privacy and country-specific legal constraints. A central question for this topic is: What minimal set of visual cues—such as timeline events, credit tags, or legal attribution indicators—helps collaborators reliably understand contributions and terms without adding unnecessary cognitive load, interrupting creative flow, or requiring expensive or proprietary tools?

Keywords: digital audio workstations; music production; computer-supported cooperative work; authorship tracking; asynchronous collaboration; information visualization

References:

  1. Kjus, Y. (2024). The platformization of music production: How digital audio workstations are turned into platforms of labor market relations. New Media & Society, 0(0). https://doi.org/10.1177/14614448241304660

  2. Emilia Rosselli Del Turco and Peter Dalsgaard. 2023. "I wouldn’t dare losing one": How music artists capture and manage ideas. In Proceedings of the 15th Conference on Creativity and Cognition (C&C '23). Association for Computing Machinery, New York, NY, USA, 88–102. https://doi.org/10.1145/3591196.3593338

  3. Samuel Rhys Cox, Helena Bøjer Djernæs, and Niels van Berkel. 2025. Beyond Productivity: Rethinking the Impact of Creativity Support Tools. In Proceedings of the 2025 Conference on Creativity and Cognition (C&C '25). Association for Computing Machinery, New York, NY, USA, 735–749. https://doi.org/10.1145/3698061.3726924

  4. Reuter, A. (2021). Who let the DAWs Out? The Digital in a New Generation of the Digital Audio Workstation. Popular Music and Society, 45(2), 113–128. https://doi.org/10.1080/03007766.2021.1972701

Reference person:

  • Raphael Tuor

Seminar schedule

(may be subject to changes)

  • 22.09 13h00 1st session, in presence
  • 01.10 (at the latest) communication of preferred topics (1st, 2nd and 3rd choice)
  • 27.10 13h00 1st presentation (ideas, papers, structure) in presence
  • 17.11 1st complete draft (no session)
  • 24.11 13h00 2nd presentation in presence
  • 08.12 2nd draft (no session)
  • 15.12 13h00 final presentation in presence
  • 12.01 final draft (no session)

Work to be done

Students will be asked to:

  • Conduct an in-depth review of the state-of-the-art of their chosen topic
  • Discuss their findings with their respective tutor
  • Imagine a user experiment relevant to existing research
  • Write a 4-page article summarizing their review work
  • Present their article to the seminar participants

Interested computer science students are invited to participate by expressing their interest in three specific topics (1st, 2nd and 3rd choices) among the topics presented above.

Learning Outcomes

At the end of the seminar, students will know how to conduct bibliographic research and will have practiced writing a scientific article. Further, they will build general knowledge of the field of HCI and its current techniques and trends, as well as an in-depth understanding of their chosen topic.

Registration

Attend the introductory session and express your interest in your chosen topics with a short text (3-5 sentences) sent to Dr Julien Nembrini, mentioning (1) your preferred topic, (2) your second choice, and (3) your third choice. Each reference person will then contact you if you are selected for the seminar. Others will receive a notification email.

emails : julien.nembrini@unifr.ch and denis.lalanne@unifr.ch

Registration Process

  1. Participate in the first introductory session on Monday 22.09 at 13h, in presence in PER21 A420 or online.
  2. Express your interest in the topics as described above by 01.10 at the latest (first come, first served).
  3. Wait for confirmation of your participation.

Seminar Process

In addition to the first session, there will be only three plenary sessions during the semester, which consist of participants presenting their work. These sessions will be in presence and will take place on Mondays at 13h. Reference persons will organize additional bilateral meetings during the semester.

The seminar process is as follows:

  1. Select state-of-the-art references relevant to the chosen topic, synthesize these references to structure the research field, and discuss and refine your approach with your topic reference person.
  2. Present the structure developed to the other participants of the seminar in the intermediate presentation.
  3. Synthesize the selected bibliographic references in a written 4-page article, authored in LaTeX following the ACM SIGCHI format.
  4. Define a user experiment within the chosen topic that is relevant to existing research.
  5. Discuss and refine your article with your topic reference person.
  6. Present your article in one of the final presentation sessions (end of semester).

Evaluation

The evaluation will be conducted by the topic reference person and the person responsible for the seminar.

The evaluation will be based on the quality of :

  • Your written article: readability, argumentation, English
  • Your review work: reference selection and field structure
  • Your final presentation
  • Your initial draft
  • Your review of a colleague's first draft

While not every step described below is formally marked, each contributes towards producing a good final paper (75% of the grade).

Guidelines

First presentation

link to the literature survey methods slides

  • Introduce your theme
  • Present your methodology for selecting papers
  • Present a selection of at least 3 papers most relevant to your theme, different from the ones provided in the topic description. Don't forget to give author names, journal and date!
  • Look in each paper for:
    • Theme: How does the paper fit in the larger picture of your theme?
    • Main contribution: how does the paper contribute to the field?
    • Methods: what are the methods used? which kind of experimental setup, if any?
    • Outlook: strengths/weaknesses, research opportunities, etc.
  • Present the outcome of your literature survey
  • Presentation time 10'.

First draft

  • Your draft should:
    • present the research context of your topic with relevant publications
    • state a research question and a hypothesis
    • propose an experiment to test the hypothesis, and discuss its expected results
  • Provide a complete draft. Enough content will allow reviewers to give meaningful feedback, although some parts may still be in bullet point style.

Rules:

  1. You MUST use the LaTeX template https://www.overleaf.com/latex/templates/association-for-computing-machinery-acm-sigchi-proceedings-template/nhjwrrczcyys
  2. Maximum 4 pages excluding references (it is acceptable to concentrate on some sections and leave one or two others in bullet-point style)
  3. Polish your English grammar and orthography
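
To illustrate the expected structure, here is a minimal draft skeleton. It assumes the linked Overleaf template resolves to the standard acmart class; the exact class options and metadata commands in the template may differ, so copy the preamble from the template itself rather than from this sketch:

```latex
\documentclass[sigconf]{acmart} % class option may differ in the linked template

\begin{document}

\title{Your Seminar Topic Title}

\author{Your Name}
\affiliation{%
  \institution{University of Fribourg}
  \city{Fribourg}
  \country{Switzerland}}
\email{your.name@unifr.ch}

\begin{abstract}
A short summary of your review and proposed experiment.
\end{abstract}

\maketitle

\section{Introduction}
Research context of your topic, with relevant publications.

\section{Literature Review}
Structure of the research field, presented topic by topic.

\section{Proposed Experiment}
Research question, hypothesis, proposed setup, and expected results.

% References do not count towards the 4-page limit
\bibliographystyle{ACM-Reference-Format}
\bibliography{references}

\end{document}
```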

Review

  • Rephrase the content in 1 short paragraph
  • List the positive points
  • List the negative points, provide examples
  • Propose improvements
  • Do not insist on orthography/grammar unless it is especially disturbing

Provide your review either as comments in the pdf or as a separate document.

Second presentation

Present your first draft.

  • Your presentation should include:
    • Introduction to the topic
    • Structure of your research field (literature review outcome topic-by-topic instead of paper-by-paper)
    • Research questions/hypotheses
    • User experiment proposal and expected outcomes
  • Include details (example papers, shared methods/approaches, experiments conducted by others, etc.)
  • Include full references (preferably not at the end)
  • Presentation time 10'

Also prepare oral feedback on your fellow student's first draft (structure, strengths/weaknesses, English, style, etc.)

Second draft

Your second draft should combine the feedback from your supervisor and from your fellow student.

Final presentation

Same guidelines as for the second presentation, with comments addressed.

Final draft

Integrate all feedback received and finalize your paper. This final version will be published on the website (have a look here for previous examples).

Disclaimer of AI use

You may use AI tools to help you with your writing and your literature search. However, take care that the use of AI tools does not become a substitute for your own thinking and writing. Furthermore, be aware that AI tools, notably for literature reviews, have a strong tendency to introduce errors and biases into your writing. They also tend to propose non-existent and/or irrelevant papers, to summarize papers incorrectly, to misinterpret their content, and to introduce factual errors.

Importantly, we ask you to explicitly mention any use of AI tools in an 'AI Disclaimer' section, specifying which tools you used and for what purpose. For example, if you used an AI tool to help you with grammar correction, please state that. If you used an AI tool to help you summarize a paper, please state that as well. This section will not count within the 4-page limit of the article. Additionally, you can also provide a short meta-comment on how you used AI tools and how they helped you (or not) in your work.

Date: Summer 2025