The TC13 Open Symposium is part of a series of open events on Human-Computer Interaction organized by the Technical Committee TC13 on Human-Computer Interaction of the International Federation for Information Processing (IFIP).

This event is addressed to students and researchers who wish to discuss topics related to Human-Computer Interaction. It is also an opportunity to discover the recent activities carried out by IFIP and to meet the members of TC13.

The 2020 edition of the IFIP TC13 Open Symposium will take place on March 25th in Milan (Italy) at the Department of Computer Science of Università degli Studi di Milano.

IMPORTANT: This event is free of charge; however, for logistical purposes, we kindly ask you to register for the event using this link: https://forms.gle/udssfHgQj3Mg2sfr7

Organizers: Barbara Rita Barricelli and Marco Winckler.

Università degli Studi di Brescia – Dipartimento di Ingegneria dell'Informazione · Università degli Studi di Milano · IFIP TC13



The TC13 Open Symposium 2020 is organized by the Department of Information Engineering of Università degli Studi di Brescia but will take place in Milan, for logistical reasons, at this address:

Department of Computer Science – Università degli Studi di Milano
Via Celoria, 18
20133 Milan

(See it on Google Maps)

The Open Symposium (25 March) will be held in Aula Magna “Alberto Bertoni”, ground floor (208 seats).

Getting there

There are three main airports:

  • Linate [LIN] (website): a 15-minute drive from the venue, which you can reach by cab.
  • Malpensa [MXP] (website): about 50 km from the venue; it is connected by frequent trains and buses to the Central Train Station (the journey takes about 40 minutes). From there, the venue can be reached in 15 minutes by metro or cab.
  • Orio al Serio [BGY] (website): connected by a shuttle service to the Central Train Station (about a 45-minute journey). From there, the venue can be reached in 15 minutes by metro or cab.


We have agreed rates with four hotels and a residence in the area (please quote “IFIP TC13 – Università degli Studi di Milano” in your email when booking):

  • Hotel Gamma
    • Double room for single use: €99 per night
    • Double room for two persons: €123 per night 

    The price includes: breakfast, WiFi, and city tax. For booking send an email to: info@hotelgammamilano.it

  • Hotel Città Studi
    • Double room for single use: €74 per night
    • Double room for two persons: €85 per night 

    The price includes: breakfast, WiFi, and city tax. For booking send an email to: info@hotelcittastudi.it

  • Hotel Dieci
    • Double room for single use: €115 per night
    • Double room for two persons: €140 per night 

    The price includes: breakfast, WiFi, and city tax. For booking send an email to: info@hoteldieci.it

  • Hotel Lombardia
    • Double room for single use: €110 per night
    • Double room for two persons: €135 per night 

    The price includes: breakfast, WiFi, and city tax. For booking send an email to: info@hotellombardia.com

  • Residence
    The price includes: WiFi and the use of a small kitchen in the apartment. The price does not include breakfast (not available at the Residence) or the city tax, which should be paid directly to the hotel (€3/night). For booking send an email to: argonnepark@ih-hotels.com

There are a variety of other hotels in the area (map). 

Home rental services, such as Airbnb, are also quite active in the area (link).


09h00-09h30: Opening session, chair Barbara Rita Barricelli, UNIBS, Italy

Overview of TC13 by Philippe Palanque (Chair of IFIP TC13), Université Paul Sabatier, Toulouse, France


09h30-10h30: Session 1

10h30-11h00: Coffee break

11h00-12h15: Session 2 

12h15-13h30: Lunch break

13h30-14h30: Session 3

14h30-15h00: Coffee break

15h00-17h00: Session 4

17h00-17h30: Closing and Wrap-up

List of presentations

(to be completed)

Robot-mediated free play for children with severe motor disabilities,
by Julio Abascal, Sandra Espín and Xabier Gardeazabal, Egokituz Laboratory of HCI for Special Needs, University of the Basque Country/Euskal Herriko Unibertsitatea.
Abstract: Children with severe motor disabilities cannot manipulate toys by themselves. Therefore, their play must be mediated by other people, usually family members or educators. This play is frequently oriented toward education, training, or rehabilitation objectives. Nevertheless, the possibility of enjoying free play with a purely ludic objective is also highly desirable. The Egokituz Laboratory is applying Human-Robot Interaction techniques to provide free-play alternatives to children with severe motor disabilities. To this end, we use a bi-manual robot (YuMi) that allows mediated toy manipulation. Children are provided with a user interface that allows them to control a number of specific toys at different difficulty levels. The simplest level allows starting and stopping the play by means of a user-controlled action (e.g. pushing a push-button). At higher levels, the child can choose the toys and decide the actions to be performed with them. We are currently conducting studies with children with disabilities in order to verify the impact of free play on their cognitive development. In parallel, we are working on a more intelligent interface that, by applying shared initiative, will allow the child to perform more complex manipulation tasks that were not previously programmed.

Supporting Design of Cognition-based Cultural Heritage Activities,
by Nikolaos Avouris, University of Patras, Interaction Design Lab
Abstract: Cultural heritage institutions, such as museums, galleries, and archives, attract wide and heterogeneous audiences, which need to be supported in order to have access to meaningful content. This introduces various challenges when designing such experiences, given that people have different cognitive characteristics, which influence the way they process information, experience, behave, and acquire knowledge. Our recent studies provide evidence that human cognition should be considered a personalization factor within cultural heritage contexts, and we have therefore developed a framework that delivers cognition-centered personalized activities. The efficiency and efficacy of the framework have been assessed through two user studies, which identified the difficulties faced by non-technical professionals (e.g., designers) when using it. In this talk, we present the framework and report a user study with seventeen professional designers who used our tool to design activities for people with different cognitive characteristics.

Using Sensory Substitution to Provide Depth Perception for the Visually Impaired
by James de Klerk, Dieter Vogts and Janet Wesson, Department of Computing Sciences, Nelson Mandela University, Port Elizabeth, South Africa
Abstract: The visually impaired do not have the ability to localize objects in three-dimensional space and rely on their other senses to gain depth perception. Sensory substitution is the concept of substituting one sense for another, normally substituting an impaired sense with a functioning sense. Visual-to-auditory sensory substitution substitutes an impaired visual sense with a functioning auditory sense. This research aimed to investigate and develop techniques for visual-to-auditory sensory substitution, using sound localization as a sensory substitution technique for depth perception. The research investigated the characteristics of human audition, with a particular focus on how humans localize sounds. It then looked at existing visual-to-auditory sensory substitution systems and the techniques they use. From the existing systems, a system known as MeloSee was chosen as a baseline for developing further sensory substitution prototypes. The baseline prototype (Prototype 0) was then implemented and a preliminary study performed. Based on the results of the preliminary study, a set of recommendations was generated. The next iteration (Prototype 1) was then developed based on these recommendations. A comparative study between Prototype 0 and Prototype 1 was then performed, and another set of recommendations generated. From these recommendations, a final prototype (Prototype 2) was developed. A last comparative study was then performed between Prototype 0 and Prototype 2, with a third set of recommendations being generated as a result. The results showed that the participants preferred Prototype 2 to Prototype 0, and that Prototype 2 was more reliable and more accurate. Based on the three sets of recommendations, a set of visual-to-auditory sensory substitution techniques was derived.
These techniques aim to facilitate visual-to-auditory sensory substitution systems, which would provide the visually impaired with the ability to localize objects in three-dimensional space through sound.
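To make the idea of visual-to-auditory substitution concrete, the sketch below shows one simple way a pixel could be mapped to a spatialized tone. This is a toy illustration only, not the MeloSee algorithm or any of the prototypes described above; the mapping scheme and all constants are assumptions.

```python
# Toy visual-to-auditory mapping: each pixel column is mapped to stereo
# pan (left..right), each pixel row to a tone frequency (top rows give
# the highest pitch), and 8-bit pixel intensity to loudness.
# The scheme and constants are illustrative assumptions, not MeloSee.

def pixel_to_sound(row, col, intensity, height=8, width=8,
                   f_min=200.0, f_max=2000.0):
    """Map one pixel to a (frequency_hz, pan, amplitude) triple.

    pan ranges from -1.0 (full left) to 1.0 (full right);
    amplitude ranges from 0.0 to 1.0, derived from 8-bit intensity.
    """
    # Linear pitch interpolation: row 0 (top) -> f_max, bottom row -> f_min.
    frequency = f_max - (f_max - f_min) * row / (height - 1)
    # Linear pan: leftmost column -> -1.0, rightmost column -> 1.0.
    pan = 2.0 * col / (width - 1) - 1.0
    # Loudness proportional to pixel brightness.
    amplitude = intensity / 255.0
    return frequency, pan, amplitude

# Example: a bright pixel in the top-left corner of an 8x8 frame maps
# to the highest pitch, panned fully left, at full loudness.
freq, pan, amp = pixel_to_sound(row=0, col=0, intensity=255)
```

A real system would additionally synthesize and mix these tones in real time from a camera feed; the point here is only the shape of the pixel-to-sound mapping.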

Interaction Design for Smart Products
by Marcin Sikorski, Polish-Japanese Academy of Information Technology, Warsaw, Poland
Abstract: Interactive products and services nowadays constitute a large part of digital innovations. These services are usually delivered by mobile apps and are designed within rather informal agile design frameworks. Moreover, these products and services often contain a significant share of Artificial Intelligence components, aimed at making them “smart”, which are not subject to certification or institutional quality and security control. As “smart” products and services learn and adapt by using data captured from the user and from the environment, they may change the nature of user-system interaction and the role of the user/customer in the interaction context, at both the individual and social levels. This talk is aimed at discussing the main challenges related to designing extended interaction for “smart” products and services: risk assessment in the design process, designers’ social responsibility, possible institutional control, and building awareness among users/customers of potential threats to their privacy and safety.

CASPER project (AI, HCI, and security)
by Aleksandar Jevremovic, Singidunum University, Serbia
Abstract: The main goal of CASPER is to identify and apply potentials of using artificial intelligence to protect young people on the internet. Different types of content are analysed, including text, images, video and audio, as well as the different types of online threats. The resulting system is meant to be modular, extensible, multi-platform, cloud-enabled, and compatible with already existing solutions. A special challenge is to support the collaborative use of results while preserving privacy.

Digitalization of Post-Stroke Training Tasks to allow Assistance by a Humanoid Robot Pepper
by Peter Forbrig, University of Rostock, Germany
Abstract: The number of people affected by stroke has increased during the last decades. However, the number of therapists is not large enough to fulfil the demand for specific training for stroke survivors. Within the project E-BRAiN (Evidence-based Robot-Assistance in Neurorehabilitation) we want to develop software that allows a humanoid robot to replace a human therapist after the first training sessions. If necessary, the robot can give instructions on how to perform tasks. More important are the observation of task performance and the delivery of feedback. Additionally, patients have to be motivated to continue their training tasks. In this talk we focus on the Arm Ability Training (AAT). Conventionally, some AAT exercises are performed with paper and pencil under the supervision of therapists. To improve the collaboration between patient and humanoid robot, the training tasks have to be digitalized. Such a digitalization of three training tasks is discussed using apps on a tablet computer. Alternative design decisions are discussed for the aiming task, the crossing-out task, and the labyrinth task. Some exercises are performed with a mirror positioned in such a way that movements of the healthy arm look like movements of the handicapped arm. Patients are asked to imagine that this is really their handicapped arm. This sometimes leads to the intended changes in the brain. For these training tasks, a task pattern was identified for the activities of the robot. Communication between the tablet apps and the robot is established via MQTT, which allows the social humanoid robot to obtain detailed information about task performance. Each training task is introduced by some instructions. Additionally, two pictures showing typical situations are displayed on the robot’s tablet. Afterwards, a video showing a task execution is presented. In addition, the humanoid robot can aid and motivate during the training.
Comments on the long-term results of the training tasks related to the individual goals of stroke survivors are planned as well.
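The MQTT link mentioned in the abstract could, for example, carry task-performance events from the tablet app to the robot as small JSON payloads. The sketch below is purely illustrative: the topic name, field names, and helper functions are hypothetical, not taken from the E-BRAiN project. An actual deployment would publish the serialized payload with an MQTT client library (such as paho-mqtt) and decode it in the robot's on-message callback.

```python
import json

# Hypothetical topic for crossing-out task results (illustrative only).
TOPIC = "ebrain/aat/crossing_out/performance"

def make_performance_message(task, targets_hit, targets_total, duration_s):
    """Serialize one training-task result as a compact JSON payload,
    as the tablet app might publish it over MQTT."""
    return json.dumps({
        "task": task,
        "hit": targets_hit,
        "total": targets_total,
        "duration_s": duration_s,
        "success_rate": round(targets_hit / targets_total, 2),
    })

def decode_performance_message(payload):
    """Decode a payload as the robot's MQTT on-message callback might,
    so the robot can tailor feedback and motivation to the result."""
    return json.loads(payload)

# Example round trip: 18 of 20 targets crossed out in 42.5 seconds.
msg = make_performance_message("crossing_out", 18, 20, 42.5)
result = decode_performance_message(msg)  # result["success_rate"] -> 0.9
```

Keeping the payload self-describing like this lets the robot react to each exercise (e.g. praising a high success rate) without the tablet and robot sharing any state beyond the broker connection.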