|8:40am||Invited Talk Carlos Duarte The (Dumb) Internet of (Smart) Things|
|9:15am||Paper Session I (each 10 min presentation + 5 min discussion)|
- Proactivity in Spoken Dialog Systems
- SAsSy - Making Decisions Transparent with Argumentation and Natural Language Generation
- A Sign-to-Speech Glove
|10:30am||Paper Session II|
- Smart Objects in Accessible Warehouses for the Visually Impaired
- avatAR - Tangible interaction and augmented reality in character animation
- Combining multi-touch surfaces and tangible interaction towards a continuous interaction space
- A Context Aware Music Player: A Tangible Approach
- Electroluminescent based Flexible Screen for Interaction with Smart Objects and Environment
- On Aiding Supervision of Groups in the Mobile Context
|1:20pm||Paper Session III|
- Tangible User Interfaces applied to Cognitive Therapies
- Preserving Privacy in Social City Networks via Small Cells
|1:50pm||Selection of topics to discuss|
|2:05pm||In-depth discussion on selected topics I|
|3:10pm||In-depth discussion on selected topics II|
|4:00pm||Leaving for guided walking tour|
Carlos Duarte - The (Dumb) Internet of (Smart) Things
With the proliferation of connected sensors, appliances and applications, the Internet of Things is finally breaking out of the lab and reaching consumer homes. Smart sensors and smart appliances are now capable of controlling their operation based on environmental data, and owners can check the status of their homes while away through mobile applications. However, a truly useful Internet of Things is not here yet. First, most sensors and appliances live in their own cloud, unaware of and unable to communicate with other sensors or appliances. Second, we have not really understood how to take advantage of the potential that a connected environment of smart devices can bring to its inhabitants.
This talk addresses the problem of designing the Internet of Things from the people's perspective, not from the Things' perspective. While in certain contexts we expect an intelligent environment to operate without requiring input from its users, this will not be true in every context; furthermore, most of the actions of such an intelligent environment will impact its inhabitants. Consequently, we need to understand how people react to intelligent interactive environments, which they are not accustomed to, in order to properly design these environments and seamlessly integrate the Internet of Things into our lives.
Short Bio: Carlos Duarte is a senior researcher at LaSIGE, as a member of the HCIM (Human Computer Interaction and Multimedia) Group, and an Assistant Professor at the Department of Informatics of the Faculty of Sciences of the University of Lisbon. His research interests are centred on how interaction design can contribute to improved interaction between people and technology, with a particular focus on accessibility. Currently, he focuses on assisting the two ends of the user spectrum, children and older adults, through the use of novel user interface technologies, mainly exploring natural interaction mechanisms.
Paper Session I
Hansjörg Hofmann, Ute Ehrlich, André Berton - Proactivity in Spoken Dialog Systems
Proactive speech interfaces have been a hot research topic for many years. However, to date, no precise definition of proactive behavior in spoken dialog systems (SDSs) and its influencing factors has been established. Therefore, this paper aims at defining the characteristics of proactivity with a focus on SDSs. The definitions are derived from other research fields and then transferred to SDSs.
A general proactive system model, which describes the relevant system components and their interactions, is presented. A proactive system receives information from a knowledge source and notifies the user about an incoming event without a user request. The system has to behave in a user-friendly manner and take the current user state and the environment into account. Thus, proactive behavior can be characterized as anticipatory, change-oriented and self-initiated. A proactive human-machine speech dialog can be structured in three stages: first, the user is notified about an incoming event; then, the problem-solving process is started; finally, the new task is completed and any paused tasks are resumed.
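The three-stage flow described in the abstract (notify, solve, resume) can be sketched as a small dialog controller. This is an illustrative sketch only, not the paper's system; the class name, method names and transcript strings are assumptions made for the example.

```python
# Illustrative sketch of the three-stage proactive dialog flow:
# notify -> problem solving -> complete and resume paused tasks.
# Names and messages are hypothetical, not from the paper.

class ProactiveDialog:
    def __init__(self):
        self.paused_tasks = []

    def handle_event(self, event, current_task=None):
        transcript = []
        # Stage 1: self-initiated notification about the incoming event.
        transcript.append(f"System: You have a new event: {event}.")
        # Any ongoing task is paused so the new problem can be addressed.
        if current_task is not None:
            self.paused_tasks.append(current_task)
            transcript.append(f"System: Pausing '{current_task}'.")
        # Stage 2: start the problem-solving sub-dialog for the event.
        transcript.append(f"System: Let's handle '{event}'. How would you like to proceed?")
        # Stage 3: complete the new task, then resume paused tasks.
        transcript.append(f"System: '{event}' is done.")
        while self.paused_tasks:
            resumed = self.paused_tasks.pop()
            transcript.append(f"System: Resuming '{resumed}'.")
        return transcript

dialog = ProactiveDialog()
for line in dialog.handle_event("low fuel warning", current_task="navigation"):
    print(line)
```

The point of the sketch is the ordering constraint the paper's model imposes: the notification always precedes problem solving, and resumption of interrupted tasks always comes last.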
Nava Tintarev and Roman Kutlak - SAsSy - Making Decisions Transparent with Argumentation and Natural Language Generation
An autonomous system consists of one or more physical or virtual agents that can perform tasks without continuous human guidance. In order to realise their promise, techniques for making such autonomous systems scrutable and transparent are required. To address this issue, the Scrutable Autonomous Systems (SAsSy) demo shows how argumentation and natural language can be combined to generate a human-understandable dialog explaining the operation of an autonomous system. On the one hand, argumentation theory is used to simulate human-understandable reasoning mechanisms; on the other, natural language generation tools are used to translate logical statements into simple plain English. The idea is to generate a dialog that enables the user to understand and question the reasoning present in autonomous systems.
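To make the "logical statements into plain English" step concrete, here is a minimal template-based verbalisation sketch. It is not the SAsSy implementation; the argument representation (a premises/conclusion pair) and the sentence template are assumptions for illustration.

```python
# Illustrative sketch: rendering a simple argument structure
# (premises, conclusion) as a plain-English explanation via a template.
# Representation and wording are hypothetical, not from SAsSy.

def verbalise(argument):
    """Turn a (premises, conclusion) pair into one English sentence."""
    premises, conclusion = argument
    if len(premises) == 1:
        body = premises[0]
    else:
        body = ", ".join(premises[:-1]) + " and " + premises[-1]
    return f"Because {body}, the system concluded that {conclusion}."

arg = (["the battery is low", "no charging station is nearby"],
       "the drone should return to base")
print(verbalise(arg))
```

A real argumentation-based system would also surface counter-arguments so the user can question the conclusion; this sketch only covers the generation direction.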
Solange Karsenty and Olga Katzenelson - A Sign-to-Speech Glove
In this paper, we describe a smart glove, JhaneGlove, that turns sign language gestures into vocalized speech via a computer, helping deaf people communicate easily with people who do not understand sign language. We have developed a handmade device, a glove with sensors, and software that transforms signs into text; the text is then converted into speech using standard software. A neural-network-based agent can learn gestures interactively and allows users to define new signs. As a result, each user can have a custom sign language independent of other users. We present the system and early experimental results.
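The learn-then-recognise loop the abstract describes can be sketched as follows. The paper uses a neural-network agent; this sketch substitutes a nearest-neighbour classifier over flex-sensor readings so the example stays self-contained. The class name, sensor count and readings are assumptions.

```python
# Illustrative sketch of interactive gesture learning for a sensor glove.
# A nearest-neighbour classifier stands in for the paper's neural network.
import math

class GestureRecogniser:
    def __init__(self):
        self.templates = {}  # sign label -> flex-sensor reading vector

    def learn(self, label, reading):
        """Interactively register a new sign (user-defined vocabulary)."""
        self.templates[label] = reading

    def recognise(self, reading):
        """Return the label whose stored template is closest to the reading."""
        return min(self.templates,
                   key=lambda label: math.dist(self.templates[label], reading))

glove = GestureRecogniser()
glove.learn("hello", [0.9, 0.8, 0.1, 0.1, 0.2])    # five flex sensors, bent fingers
glove.learn("thanks", [0.1, 0.2, 0.9, 0.9, 0.8])
print(glove.recognise([0.85, 0.75, 0.15, 0.1, 0.25]))  # closest to "hello"
```

Because each user trains their own templates, the vocabulary is per-user, which mirrors the abstract's point that each user can have a custom sign language.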
Paper Session II
Tobias Grosse-Puppendahl, Justus Weiss, Pia Weiss, Sebastian Herber and Hansjörg Lienert - Smart Objects in Accessible Warehouses for the Visually Impaired
The inclusion of persons with disabilities in working life is an important concern of our society. Besides every person's wish to work and realize their full potential, policy demands inclusive design and accessible workplaces, also in consideration of our aging society. In this work-in-progress paper, we investigate the use of smart objects in warehouse workplaces for visually impaired persons. Based on our observations of an accessible commissioning system in production use, we develop a novel warehouse concept based on smart compartments. We intend to equip these smart objects with various sensing and actuating technologies that enable a time-efficient and easily accessible commissioning process.
Adso Fernández-Baena and David Miralles - avatAR - Tangible interaction and augmented reality in character animation
In this paper, we present a novel interaction system that combines tangible interaction and augmented reality for controlling a virtual avatar. By physically interacting with a cube, users can drive the motion of an avatar situated in the real world. The cube acts both as a motion controller and as an AR marker, serving input and rendering purposes, and it helps users position the avatar and customize its motion, providing fine control over both. In this first version, the avatar is able to stand, walk and run; the current motion state is selected by rotating the cube in the plane on which the avatar stands. We have implemented two scenarios in our prototype: a sketch-based controller and an interactive controller. The first enables users to draw paths on the floor for the avatar to follow, while the second allows the user to drive the avatar's position continuously. Using tangible objects in augmented reality environments to control avatars strengthens the link between the user and the avatar, providing a better sense of control and immersion.
David Miralles, Judith Amores, Xavier Benavides, Michel Comín, Anna Fusté and Pol Pla - SmartAvatars
This paper focuses on a new interaction model named SmartAvatars, based on a mixed reality environment in which virtual avatars mediate the interaction between the user and the smart objects of the environment. As a prototype, we introduce Flexo. With Flexo, we investigate the virtues and obstacles that arise when an AR avatar is used to interact with a simple smart object. We conclude with the advantages and disadvantages of this kind of interaction, with a view to creating richer interactions in further prototypes.
Rafael Nunes and Carlos Duarte - Combining multi-touch surfaces and tangible interaction towards a continuous interaction space
Multi-touch interaction scenarios are usually limited to one surface, even when combined with tangibles.
Traditional scenarios in which people interact with physical objects on and above a table have not been fully translated into existing technologies, such as multi-touch set-ups, which do not support natural interactions that combine the surface and the area above it into one continuous interaction space.
We aim to build and explore a set-up that allows users to benefit from a continuous interaction space on and above the table, with multi-touch and tangible support. We expect to find and solve problems that can arise in various scenarios, both individual and collaborative.
A set of different existing technologies will be integrated to monitor user interactions, and an accompanying API will be developed and presented as a tool for future development of applications based on this kind of set-up. This API will be tested and validated to ensure it meets the desired goals.
David Sellitsch and Hilda Tellioglu - A Context Aware Music Player: A Tangible Approach
In this paper we explore new ways of interacting to configure and personalize music selection in ambient environments. We propose a prototype for a context-aware music player and a novel interaction concept for it. In this work, context information refers both to the user, especially their mood, and to the environment. The interaction concept lets the user express their mood and current activity in a subtle way and customize the system to their music preferences. This is achieved by using sensors to capture environment data and a tangible user interface to enter and modify the user-related context information. In usability tests of the prototype and in the analysis of interviews we conducted with test users, we found that customization options and making autonomous decisions transparent are two key factors for enhancing user experience in context-aware music systems.
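One way such a player could map the captured context onto a music selection is a simple rule table keyed on mood and activity, with a neutral fallback. This is purely an illustrative sketch under assumed rules and genre tags; the paper's actual mapping is not described in the abstract.

```python
# Illustrative sketch only: mapping (mood, activity) context, as entered via
# the tangible interface or inferred from sensors, onto music genres.
# The rule table and tags are assumptions, not from the paper.

RULES = {
    ("relaxed", "reading"):    ["ambient", "classical"],
    ("energetic", "cleaning"): ["pop", "dance"],
    ("sad", "resting"):        ["soft rock", "blues"],
}

def select_genres(mood, activity):
    """Pick genres for the current context, with a neutral fallback."""
    return RULES.get((mood, activity), ["easy listening"])

print(select_genres("relaxed", "reading"))  # rule match
print(select_genres("angry", "driving"))    # falls back to the default
```

An explicit, inspectable table like this also illustrates the abstract's finding: making the system's autonomous decisions transparent (here, showing which rule fired) matters for user experience.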
Orkhan Amiraslanov, Jingyuan Cheng, Peter Chabrecek and Paul Lukowicz - Electroluminescent based Flexible Screen for Interaction with Smart Objects and Environment
In this paper we propose an adjustable structure for flexible screens based on the electroluminescence phenomenon. The final product is thin, flat, flexible, long-lasting, and easy to modify, reproduce and install. When combined with a pressure matrix, it can become a touchscreen. The changeable pixel number and pixel size, together with the flatness and flexibility, make this structure ideal for prototyping interaction interfaces for smart objects, where surface size, shape and flatness are the main requirements. As a demonstration, we show this flexible screen on a window, on a bottle and on a gymnastics mat.
Daniel Auferbauer and Hilda Tellioglu - On Aiding Supervision of Groups in the Mobile Context
In this work we introduce and examine the possibility of aiding the oversight of mobile groups by supporting the supervisor's awareness of the physical presence of members. The subject of this paper is to find out whether or not that is viable, and why. Our approach is as follows: first, we led interviews with users representative of the target audience to gather information on group supervision and to define requirements. Secondly, we assessed five wireless technologies for use in an actual implementation. As a third step, we engineered a prototype based on the information gathered thus far. Lastly, this device was evaluated both under laboratory conditions and in the field. We find high acceptance and demand among prospective users and conclude from the evaluation that there are strong indications of the viability of reducing the workload of supervising mobile groups by supporting the person in charge with awareness of the physical presence of members.
Paper Session III
Elena de La Guía, Maria-Dolores Lozano and Victor M. R. Penichet - Tangible User Interfaces applied to Cognitive Therapies
Interactive games to support cognitive training are increasingly becoming an indispensable resource in cognitive therapies. At the same time, technological advances are giving rise to new paradigms and styles of interaction. In this paper, we take advantage of real physical objects and the benefits that new technologies offer in order to design a new way to interact with interactive games in cognitive therapies. The system is based on physical objects that integrate NFC technology and allow the end user to interact with Distributed User Interfaces. We analyze the effects of interacting with smart objects in Multi-Device Environments developed for people with intellectual disabilities.
Geert Vanderhulst and Fahim Kawsar - Preserving Privacy in Social City Networks via Small Cells
An increasingly large number of small cells (e.g., WiFi hotspots) are being deployed in residential areas to connect a plethora of smart devices to the Internet. In this paper, we present a social city network that leverages small cells for sharing content geographically and temporarily whilst preserving the privacy of its users. Unlike a social network built around friends, we propose a social city network addressing geographically co-located people and smart objects, e.g. those residing in a street, on a square, or around a building. Our goal is to facilitate interaction with smart cities by easily sharing short-lived data fragments with others in a given area for a limited time span. To this end, we designed an architecture in which small cells deliver location proofs that grant access to location-restricted content.
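A location proof of the kind the abstract mentions could, for example, be an expiring token that the small cell authenticates with a key shared with the content service. This is a hedged sketch of one plausible design, not the paper's architecture; the token format, field names and shared-key setup are all assumptions.

```python
# Illustrative sketch: a small cell signs (client_id, cell_id, expiry) with
# a key shared with the content service; the service verifies presence and
# freshness before serving location-restricted content. All names are
# hypothetical, not from the paper.
import hashlib
import hmac
import time

SECRET = b"cell-and-service-shared-key"  # assumed pre-shared key

def issue_proof(client_id, cell_id, lifetime=300, now=None):
    """Small cell side: mint a short-lived, authenticated location proof."""
    now = int(time.time()) if now is None else now
    expiry = now + lifetime
    msg = f"{client_id}|{cell_id}|{expiry}".encode()
    tag = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return (client_id, cell_id, expiry, tag)

def verify_proof(proof, cell_id, now=None):
    """Service side: check the cell, the expiry, and the authentication tag."""
    client_id, proof_cell, expiry, tag = proof
    now = int(time.time()) if now is None else now
    msg = f"{client_id}|{proof_cell}|{expiry}".encode()
    expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return (proof_cell == cell_id and now < expiry
            and hmac.compare_digest(tag, expected))

proof = issue_proof("alice-phone", "wifi-cell-42")
print(verify_proof(proof, "wifi-cell-42"))  # valid at the matching cell
print(verify_proof(proof, "other-cell"))    # rejected elsewhere
```

The expiry field captures the "limited time span" requirement: once it passes, the proof, and with it access to the short-lived content, lapses on its own.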