ASL4GUP 2017 - First International Workshop on Adaptive Shot Learning for Gesture Understanding and Production
held in conjunction with IEEE FG 2017, Washington DC, USA, May 30, 2017

Accepted papers will be included in the IEEE FG proceedings, and some of them may be selected for a special issue of IEEE Transactions on Pattern Analysis and Machine Intelligence.

To achieve natural interaction with machines, a framework must be developed that incorporates the adaptability humans display in understanding gestures from context, from a single observation, or from multiple observations. This is also referred to as adaptive shot learning: the ability to adapt the recognition mechanism to a gesture that is barely seen, well known, or entirely unknown. Of particular interest to the community are zero-shot and one-shot learning, given that most prior work has addressed the N-shot scenario.

Experiencing touchless interaction with augmented content on wearable head-mounted displays in cultural heritage applications

Personal and Ubiquitous Computing

In this study, an interactive wearable AR system that augments the environment with cultural information is described. To make the interface robust, a strategy that exploits both depth and color data to select the most reliable information in each frame is introduced. Moreover, the results of an ISO 9241-9 user study performed in both indoor and outdoor conditions are presented and discussed.

Human skin detection through correlation rules between the YCb and YCr subspaces based on dynamic color clustering

Computer Vision and Image Understanding

This paper presents a novel rule-based skin detection method that works in the YCbCr color space. The method is based on correlation rules that evaluate combinations of chrominance values to identify skin pixels in the YCb and YCr subspaces. The correlation rules depend on the shape and size of dynamically generated skin color clusters, which are computed on a statistical basis in the YCb and YCr subspaces for each image, and which represent the areas that include most of the candidate skin pixels.
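As background for the approach above, a minimal sketch of the classic fixed-range YCbCr skin-detection baseline is shown below (the paper's contribution replaces such static ranges with per-image dynamic clusters and correlation rules, which are not reproduced here). The threshold values `cb_range` and `cr_range` are the commonly cited fixed ranges, not parameters from the paper:

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Convert an (..., 3) RGB array to full-range BT.601 YCbCr."""
    rgb = np.asarray(rgb, dtype=np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def skin_mask(rgb, cb_range=(77, 127), cr_range=(133, 173)):
    """Boolean mask of candidate skin pixels using fixed chrominance ranges."""
    ycbcr = rgb_to_ycbcr(rgb)
    cb, cr = ycbcr[..., 1], ycbcr[..., 2]
    return ((cb >= cb_range[0]) & (cb <= cb_range[1]) &
            (cr >= cr_range[0]) & (cr <= cr_range[1]))
```

A dynamic variant, as described in the abstract, would estimate the cluster shape in the Cb and Cr subspaces per image instead of relying on these fixed intervals.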