
MobileHCI 2017: Workshop on Object Recognition for Input and Mobile Interaction

This workshop will be held on September 4th in Vienna, Austria, in conjunction with MobileHCI 2017.

We thank our sponsor, GaussToys.

Key Dates:

Submission deadline: June 1st (extended)
Acceptance notification: June 9th
Camera-ready papers due: July 21st
Papers available online: August 4th
Workshop date: September 4th

Location

Aula der Wissenschaft
Wollzeile 27a
A-1010 Wien


List of Accepted Workshop Papers

The best paper will be invited for publication in the International Journal of Mobile Human Computer Interaction (IJMHCI).

Final Program (last updated 7.30am, September 4th)

9.00 – 9.30 Setup time (speakers should test their machines and demos)
9.30 – 9.35 Welcome (introduction and a brief outline of expected outcomes)
Keynote/Panel
9.35 – 9.50 Nick Gillian, Google ATAP, USA
9.50 – 10.05 Mike Chen, National Taiwan University
10.05 – 10.20 Gierad Laput, CMU, USA
10.20 – 10.30 Q&A (please visit sli.do, event #sachi, to post your questions)
10.30 – 11.00 Coffee break
** during the coffee break, participants may wish to continue discussions with the panel speakers
** anyone who has not yet tested their setup should do so now
Talks
11.00 – 11.10 Facilitating Object Detection and Recognition through Eye Gaze
11.10 – 11.20 AquaCat: Radar and Machine Learning for Fluid and Powder Identification
11.20 – 11.30 Pinpoint: Multi-Scale Gestural Interaction for AR
11.30 – 11.40 Non-Invasive Glucose Monitoring Utilizing Electromagnetic Waves
11.40 – 11.50 EyeLogging: Hybrid Eye Tracker Combining Deep Learning and Crowdsourcing Approaches
11.50 – 12.00 Machine Learning for Recognizing and Controlling Smart Devices
12.00 – 12.10 EchoTube: Modular, Flexible and Multi-Point Pressure Sensors using Waveguided Ultrasound
12.10 – 12.30 Q&A and Discussion
12.30 – 13.30 Lunch (plus demo and poster setup)
13.30 – 14.00 Demos and posters: RadarCat / GaussSense / XAI
14.00 – 14.20 Brainstorming session
14.20 – 14.57 Work: three parallel breakout / work-group sessions
14.57 – 15.00 One-minute rapid report from each group
15.00 – 15.30 Coffee break (breakout discussions may continue)
15.30 – 16.30 Work: breakout sessions continue
16.30 – 16.45 Reports (goal: to produce materials – sketches, demos of new concepts, gap analyses, critiques, taxonomies, and other results of the group work – that we can add to a poster to be printed tonight and displayed at the conference)
16.45 – 17.00 Wrap-up and next steps
Optional: group dinner (venue to be found and numbers to be confirmed by lunchtime)
Breaks are at 10.30, 12.30 and 15.00. The full programme is here: https://mobilehci.acm.org/2017/program.html

Introduction

Today we are seeing the emergence of devices that incorporate sensing capabilities going beyond the traditional suite of hardware (e.g., touch or proximity sensing). These devices offer a more fine-grained level of contextual information, such as object recognition, and they vary widely in size, portability, embeddability, and form factor. Despite this diversity, these new-generation sensing approaches will inevitably unlock many of the ubiquitous, tangible, mobile, and wearable computing ecosystems that promise to improve people’s lives.
These systems are brought together by a variety of technologies, including computer vision, radar (e.g., Google ATAP’s Project Soli), acoustic sensing, fiducial tagging, and, more generally, IoT devices embedded with computational capabilities. Such systems open up a wide range of application spaces and novel forms of interaction. For instance, object-based interactions offer rich, contextual information that can power many user-centric applications (e.g., factory-line optimisation and safety, automatic grocery checkout, new forms of tangible interaction). Where and how these interactions are applied also adds a new dimension (e.g., if a mobile device can detect which part of your body it is tapped against, it could launch a food app when tapped against your stomach).
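To make the body-tap example concrete, the sketch below shows one minimal way such a pipeline could be structured: a classifier maps sensor-derived feature vectors to a tap location, which is then mapped to a contextual action. This is a hypothetical illustration (here using scikit-learn), not the pipeline of any specific system discussed in this workshop; the feature values, labels, and actions are invented for illustration.

    # Minimal, hypothetical sketch: "tap location" classification driving a
    # contextual action. Feature values, labels and actions are invented.
    from sklearn.ensemble import RandomForestClassifier

    # Toy feature vectors (e.g., derived from capacitive, acoustic or radar
    # signals), labelled with the body location the device was tapped against.
    train_features = [[0.9, 0.1, 0.3], [0.2, 0.8, 0.5], [0.4, 0.4, 0.9]]
    train_labels = ["stomach", "wrist", "thigh"]

    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    clf.fit(train_features, train_labels)

    # Map each recognised location to a contextual action.
    ACTIONS = {
        "stomach": "launch food app",
        "wrist": "show notifications",
        "thigh": "open fitness tracker",
    }

    def on_tap(feature_vector):
        """Classify the tap location and dispatch the associated action."""
        location = clf.predict([feature_vector])[0]
        print("Detected tap on %s -> %s" % (location, ACTIONS[location]))

    on_tap([0.85, 0.15, 0.25])  # e.g. "Detected tap on stomach -> launch food app"

In practice the feature extraction (from the raw signal) and the choice of classifier are exactly the kinds of design decisions this workshop aims to discuss.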
Although the last few years have seen a growing amount of research in this area, knowledge about the subject remains under-explored and fragmented, cutting across a set of related but heterogeneous issues. This workshop brings together researchers and practitioners interested in the challenges posed by “Object Recognition for Input and Mobile Interaction”.

Objective

This workshop aims to bring together researchers active and interested in sensing techniques, particularly those that advance input and interaction through novel capabilities (e.g., object recognition, radar sensing) across a range of modalities (e.g., mobile devices, wearables). It will foster a scholarly environment for sharing approaches and experiences, identifying research and deployment challenges, and envisioning the next generation of applications that rely on widely deployed sensor systems, paying close attention not just to a single device but to an ecosystem of touchless interaction devices.

Challenges

There are many challenges in building the underlying system infrastructures for object recognition for mobile interaction. For example, what are the standards and operating-system requirements? How can vision, radar, acoustic sensing, or tagging systems distributed across multiple mobile and wearable devices act in a coordinated manner to reliably determine object interaction? What types of software libraries, simulators, and IDEs are required for development? How can we ensure inter-device interoperability in the face of heterogeneous device configurations and varying approaches to object recognition?
In addition, there are significant challenges in the design and deployment of such object-based interfaces. For example, what are the implications of interfaces that rely on sensing the movement and type of passive, physical objects? How might multiple points of sensing on and around the body open up new design considerations? Which elements of the interface are best distributed across output devices? What new interaction techniques are necessary in these environments? What are the performance, comfort, and preference consequences of relying on object interaction as a mobile interface?

Submissions

We invite short (up to 2 pages) and longer (up to 6 pages) contributions from researchers and practitioners working in the area of object recognition and interaction. Example topics of interest are listed below. Our goal is to assemble a group who can share approaches and experiences, identify research and deployment challenges, and envision the next generation of applications that rely on object recognition and interaction. Such systems can be embedded in personal, wearable, and mobile devices, i.e., physically decoupled in different ways yet virtually coupled by the interactions they support. We expect attendees to be working on different views of the problem (e.g., at the interaction-technique, application, middleware, or hardware level), with a wide range of sensing systems and interaction technologies (e.g., Project Soli, computer vision, alternative hardware, …), and in a wide range of object-interaction applications (e.g., education, on-the-go interaction, medicine, smart sensing).

Send submissions to hsy@st-andrews.ac.uk by the end of Friday, May 19th (24:00 AoE), using the ACM SIGCHI format. Submissions will be peer reviewed by an international program committee and do not have to be anonymous.

Topics of interest

Broadly speaking, this workshop is interested in object recognition work based on computer vision, radar (e.g., Project Soli by Google ATAP (Advanced Technology and Projects)), acoustic sensing, tagging, smart objects, and more. Participants are encouraged to bring mobile and desktop systems suitable for object recognition for mobile interaction. Topics of interest include, but are not limited to:

  • Understanding the design space and identifying factors that influence user interactions in this space
  • Developing evaluation strategies to cope with the complex nature of computer vision or radar-based object interaction
  • Ethnography and user studies of tangible and object interaction
  • Examples of applications of object interaction
  • Social factors that influence the design of suitable interaction techniques for object interaction
  • Exploring interaction techniques that facilitate multi-person object interaction
  • Novel input mechanisms for single- and multi-point sensing systems (e.g., in mobile, tablet, and wrist-worn devices)
  • SDKs/APIs, IDEs, and hardware platforms for developing object recognition systems

Workshop Details

The workshop lasts a full day and is structured to provide maximum time for group discussion and brainstorming. Each participant is expected to be familiar with all position papers, which will be made available well in advance of the event. The workshop is structured around four sessions, separated by the morning break, lunch, and the afternoon break. In the first session the participants briefly introduce themselves and engage in a brainstorming session to outline key discussion topics for the two midday sessions. In the second and third sessions the group is divided into sub-groups, moderated by the workshop organisers, for focused discussions on some of the key topics identified earlier. In the fourth session the group reconvenes to summarise the advances identified in the breakout discussions and to identify next steps. Next steps may include plans for bilateral exchanges, a COST Action submission, a special issue of a journal or a book, and research collaborations.

Organisers

Hui-Shyong Yeo, University of St Andrews
Gierad Laput, Carnegie Mellon University
Nicholas Gillian, Google (ATAP)
Aaron Quigley, University of St Andrews

International Program Committee

Last updated August 24th.