Speaker: Jim Young, University of Manitoba, Canada
Date/Time: 1-2pm, June 12, 2012
Location: 1.33a Jack Cole, University of St Andrews
Abstract:
Human-Robot Interaction (HRI), broadly, is the study of how people and robots can work together. This includes core interaction design problems of creating interfaces for effective robot control and communication with people, as well as sociological and psychological studies of how people and robots can share spaces or work together. In this talk I will introduce several of my past HRI projects, ranging from novel control schemes for collocated or remote control, to programming robotic style by demonstration, to developing foundations for evaluating human-robot interaction, and I will briefly discuss my current work on robotic authority and on gender in human-robot interaction. In addition, I will introduce the JST ERATO Igarashi Design Interface Project, a large research project directed by Dr. Takeo Igarashi, with which I have been closely involved over the last several years.
About Jim:
James (Jim) Young is an Assistant Professor at the University of Manitoba, Canada, where he founded the Human-Robot Interaction lab and is involved with the Human-Computer Interaction lab alongside Dr. Pourang Irani and Dr. Andrea Bunt. He received his BSc from Vancouver Island University in 2005, and completed his PhD in Social Human-Robot Interaction at the University of Calgary in 2010 with Dr. Ehud Sharlin, co-supervised by Dr. Takeo Igarashi at the University of Tokyo. His background is rooted strongly in the intersection of sociology and human-robot interaction, and in developing robotic interfaces that leverage people’s existing skills rather than making them learn new ones.
News
Tristan Henderson is co-editing a special issue of the International Journal of Human-Computer Studies on Privacy Methodologies in HCI.
http://www.journals.elsevier.com/international-journal-of-human-computer-studies/call-for-papers/special-issue-of-international-journal-of/
Topic:
Privacy has become one of the most contested social issues of the information age. For researchers and practitioners of human-computer interaction (HCI), interest in privacy is sparked not only by changes in the scale and scope of the personal information collected and stored about people, but also by the increasing ubiquity, sociability and mobility of personal technology. However, privacy has proven to be a particularly difficult construct to study: it is open to investigation from multiple perspectives and ontological approaches, with key research coming from law, psychology, computer science and economics.
The special issue on privacy methodologies in HCI invites high-quality research papers that use a variety of methods and in which the authors reflect on and evaluate the method itself, both as applied in their specific context and more widely, as well as the privacy aspect under consideration.
Authors are asked to consider these key questions in their papers:
- What was the privacy context being researched?
- Why was the particular methodology chosen for a given context?
- What selection criteria were used? What were the advantages and disadvantages of the methodology?
- How were bias and priming avoided? Was there evidence of a ‘measurement problem’?
- How did the researcher ensure the sample was representative, avoiding sample-based biases?
- What were the results? How could this method be used to study other aspects of HCI and privacy?
Submission instructions:
Manuscripts should generally not exceed 8000 words. Papers should be prepared according to the IJHCS Guide for authors, and should be submitted online according to the journal’s instructions. The IJHCS Guide for authors and online submission are available at http://www.elsevier.com/locate/ijhcs.
Important dates:
- Submission deadline: October 15, 2012
- Notify authors: January 5, 2013
- Publication date: late 2013
Guest Editors:
- Dr. Asimina Vasalou (University of Birmingham)
- Dr. Tristan Henderson (University of St Andrews)
- Dr. Adam Joinson (University of Bath)

Prior to this workshop Professor Quigley was asked to comment on some of the grand challenges he saw for User Modelling and Ubiquitous Computing. The following are the challenges he posed:
- Are user models and context data so fundamental that future UbiComp operating systems need to have them built in as first order features of the OS? Or in your opinion is this the wrong approach? Discuss.
- There are many facets of a ubiquitous computing system from low-level sensor technologies in the environment, through the collection, management, and processing of context data through to the middleware required to enable the dynamic composition of devices and services envisaged. Where do User Models reside within this? Are they something only needed occasionally (or not at all) for some services or experiences or needed for all?
- Ubicomp is a model of computing in which computation is everywhere and computer functions are integrated into everything. It will be built into the basic objects, environments, and activities of our everyday lives in such a way that no one will notice its presence. If so, how do we know what the system knows, assumes or infers about us in its decision making?
- Ubicomp represents an evolution from the notion of a computer as a single device, to the notion of a computing space comprising personal and peripheral computing elements and services all connected and communicating as required; in effect, “processing power so distributed throughout the environment that computers per se effectively disappear” or the so-called Calm Computing. The advent of ubicomp does not mean the demise of the desktop computer in the near future. Is Ubiquitous User Modelling the key problem to solve in moving people from desktop/mobile computing into UbiComp use scenarios? If not, what is?
- Context data can be provided, sensed or inferred. Context includes information from the person (physiological state), the sensed environment (environmental state) and the computational environment (computational state) that can be provided to alter an application's behaviour. How much or little of this should be incorporated into individual UbiComp user models? (A sketch of one possible structure for such context follows this list.)
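To make the last question concrete, here is a minimal Python sketch of how context data might sit inside a user model if user models were a first-order OS feature, as the first challenge asks. Every name and field here is an illustrative assumption, not something proposed at the workshop.

```python
# A sketch (assumptions only) of context data feeding a user model.
from dataclasses import dataclass, field

@dataclass
class Context:
    physiological: dict = field(default_factory=dict)   # e.g. heart rate (provided or sensed)
    environmental: dict = field(default_factory=dict)   # e.g. location, noise level (sensed)
    computational: dict = field(default_factory=dict)   # e.g. nearby devices, services (inferred)

@dataclass
class UserModel:
    user_id: str
    preferences: dict = field(default_factory=dict)
    context: Context = field(default_factory=Context)

    def explain(self) -> str:
        """One answer to 'how do we know what the system knows about us':
        make the model's contents inspectable on demand."""
        return f"user {self.user_id}: prefs={self.preferences}, context={self.context}"

# Usage: an application altering its behaviour from sensed context.
model = UserModel("alice", preferences={"notifications": "quiet-hours"})
model.context.environmental["location"] = "office"
print(model.explain())
```

Making `explain` a first-class operation is one way to address the transparency question above: whatever the system infers, the user can inspect it.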
This week three members of SACHI, Aaron Quigley, Miguel Nacenta and Umar Rashid, are attending the 11th Advanced Visual Interfaces International Working Conference in Italy. “AVI 2012 is held on the island of Capri (Naples), Italy from May 21 to 25, 2012. Started in 1992 in Roma, and held every two years in different Italian towns, the Conference traditionally brings together experts in different areas of computer science who have a common interest in the conception, design and implementation of visual and, more generally, perceptual interfaces.”
We are presenting two full papers.
FatFonts: Combining the symbolic and visual aspects of numbers, Miguel Nacenta, Uta Hinrichs and Sheelagh Carpendale.
Abstract: “In this paper we explore numeric typeface design for visualization purposes. We introduce FatFonts, a technique for visualizing quantitative data that bridges the gap between numeric and visual representations. FatFonts are based on Arabic numerals but, unlike regular numeric typefaces, the amount of ink (dark pixels) used for each digit is proportional to its quantitative value. This enables accurate reading of the numerical data while preserving an overall visual context. We discuss the challenges of this approach that we identified through our design process and propose a set of design goals that include legibility, familiarity, readability, spatial precision, dynamic range, and resolution. We contribute four FatFont typefaces that are derived from our exploration of the design space that these goals introduce. Finally, we discuss three example scenarios that show how FatFonts can be used for visualization purposes as valuable representation alternatives.”
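As a rough illustration of the proportional-ink idea (and only that: the published FatFonts typefaces solve the much harder problem of shaping that ink into legible Arabic numerals), a Python sketch might allocate dark pixels in proportion to each digit's value:

```python
# Sketch of the FatFonts proportional-ink principle: the number of dark
# pixels in a glyph cell is proportional to the digit's value, so a grid
# of numbers doubles as a greyscale picture of the data. Illustrative only.

CELL = 9 * 9  # pixels per glyph cell; 81 divides evenly for digits 1..9

def ink_budget(digit: int) -> int:
    """Dark pixels allotted to a digit; the value 9 fills the whole cell."""
    return digit * (CELL // 9)

def render_cell(digit: int, width: int = 9) -> list[str]:
    """Fill the cell pixel by pixel until the digit's ink budget is spent."""
    budget = ink_budget(digit)
    rows = []
    for _ in range(width):
        row = ""
        for _ in range(width):
            row += "#" if budget > 0 else "."
            budget -= 1
        rows.append(row)
    return rows

# Side-by-side cells for three data values: darker cell = larger value.
for lines in zip(*(render_cell(v) for v in [1, 3, 9])):
    print("  ".join(lines))
```

Reading exact values back requires recognisable digit shapes, which is precisely the design space of legibility, familiarity and dynamic range that the paper explores.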
Read the FatFonts paper here. FatFonts also features in the New Scientist.
and
The cost of display switching: A comparison of mobile, large display and hybrid UI configurations, Umar Rashid, Miguel Nacenta and Aaron Quigley.
Abstract: “Attaching a large external display can help a mobile device user view more content at once. This paper reports on a study investigating how different configurations of input and output across displays affect performance, workload and subjective preferences in map, text and photo search tasks. Experimental results show that a hybrid configuration where visual output is distributed across displays is worst or equivalent to worst in all tasks. A mobile device-controlled large display configuration performs best in the map search task and equal to best in text and photo search tasks (tied with a mobile-only configuration). After conducting a detailed analysis of the performance differences across different UI configurations, we give recommendations for the design of distributed user interfaces.”
Read the Cost of Display Switching paper here.
Along with our colleagues in Nottingham and Birmingham we are chairing and organising the Workshop on Infrastructure and Design Challenges of Coupled Display Visual Interfaces PPD’12. The proceedings can be downloaded here. Finally, Aaron is the session chair for the Augmented Reality/Virtual Reality papers at AVI.
Speaker: Tristan Henderson, SACHI
Date/Time: 1-2pm, May 29, 2012
Location: 1.33a Jack Cole, University of St Andrews
Abstract:
The prevalence of social network sites and smartphones has led to many people sharing their locations with others. Privacy concerns are seldom addressed by these services; the default privacy settings may be either too restrictive or too lax, resulting in under-exposure or over-exposure of location information.
One mechanism for alleviating over-sharing is through personalised privacy settings that automatically change according to users’ predicted preferences. This talk will describe how we use data collected from a location-sharing user study (N=80) to investigate whether users’ willingness to share their locations can be predicted. We find that while default settings match actual users’ preferences only 68% of the time, machine-learning classifiers can predict up to 85% of users’ preferences. Using these predictions instead of default settings would reduce the over-exposed location information by 40%.
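As a sketch of what such preference prediction could look like (the features, data and classifier here are hypothetical stand-ins; the study's actual methodology is in the paper), one might train a standard classifier on labelled sharing decisions:

```python
# Hypothetical sketch of predicting location-sharing preferences with a
# classifier, in the spirit of the study described above.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Made-up per-request features: hour of day, day of week (0-6),
# place category id, requester relationship id.
X = np.array([
    [9,  1, 0, 0],   # weekday morning, home, close friend
    [14, 2, 1, 1],   # weekday afternoon, work, colleague
    [23, 5, 2, 2],   # weekend night, bar, acquaintance
    [11, 6, 1, 0],
    [20, 4, 2, 1],
    [8,  0, 0, 2],
])
y = np.array([1, 1, 0, 1, 0, 0])  # 1 = willing to share this location

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("learned-preference accuracy:", cross_val_score(clf, X, y, cv=3).mean())

# Baseline analogous to a static default setting ("always share"),
# the comparison point the talk describes.
print("static default accuracy:", (y == 1).mean())
```

The talk's numbers (68% for defaults versus up to 85% for learned classifiers) come from exactly this kind of comparison, on real study data rather than the toy rows above.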
This work has mainly been performed by my PhD student Greg Bigwood, but I will be presenting the paper (at the AwareCast workshop at Pervasive) because Greg will be busy graduating in St Andrews!
About Tristan:
The next TayViz meeting of the Tayside and Fife network for data visualisation will take place in St Andrews (School of Computer Science) on Tuesday May 15th at 6:30.
Read all the details on this page.
Sign up for the TayViz Google group (it is free and everybody is welcome to join).
Send any questions by e-mail to miguel.nacenta@st-andrews.ac.uk.
We are pleased to announce and welcome Uta Hinrichs, who will be joining SACHI in the School of Computer Science at the University of St Andrews from August of this year as a research fellow. Originally from Lübeck in Germany, Uta is currently a PhD candidate at the University of Calgary in Canada, where she works in the Innovis group under the supervision of Sheelagh Carpendale. Her research interests include interaction with large displays in public spaces, information visualization, graphic design, and art.
She will be working with Professor Quigley on a number of projects, including our JISC project (Trading Consequences) and our SFC Smart Tourism project (SMART), and with Dr Nacenta on the LADDIE project, in addition to many other fun new projects in time!
We are looking forward to Uta's arrival and wish her well in her final months as a graduate student.
Everyone in SACHI would like to congratulate Miguel on being awarded a Marie Curie Career Integration Grant for a project on gaze-based perceptual augmentation called DeepView. Miguel will be recruiting a PhD student for this project, so please contact him if you are interested in the position.
The analysis and visualisation of ever-increasing amounts of data is pervasive and indispensable to crucial activities in countless professions, and the amount and types of data available for visual inspection and analysis keep growing. The DeepView project proposes using gaze-tracking technology (i.e., hardware and software that can judge where on a screen the user is looking) to extend the basic perceptual abilities of the user. The project will iterate on prototypes and empirical evaluations to explore the space of gaze-contingent manipulations that can improve perceptual performance in common tasks such as colour differentiation, visual search, and maxima finding. It will also seek to apply the results of the initial phases to applied scenarios in disciplines other than Human-Computer Interaction and Information Visualisation.
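To give a flavour of what a gaze-contingent manipulation might look like, here is a small Python sketch that boosts colour saturation in a region around an estimated gaze point to support colour differentiation. The function, parameters and gaze input are illustrative assumptions, not DeepView's actual design.

```python
# Sketch of a gaze-contingent enhancement: increase colour saturation
# near the gaze point, leaving the periphery untouched. Illustrative only.
import numpy as np

def enhance_around_gaze(hsv_image: np.ndarray, gaze_xy: tuple[int, int],
                        radius: int = 60, boost: float = 1.5) -> np.ndarray:
    """hsv_image: HxWx3 float array in [0, 1] (hue, saturation, value);
    gaze_xy: (x, y) pixel coordinates reported by a gaze tracker."""
    h, w, _ = hsv_image.shape
    ys, xs = np.ogrid[:h, :w]
    gx, gy = gaze_xy
    mask = (xs - gx) ** 2 + (ys - gy) ** 2 <= radius ** 2  # circular region
    out = hsv_image.copy()
    out[mask, 1] = np.clip(out[mask, 1] * boost, 0.0, 1.0)  # saturation only
    return out

# Usage with a random test image and a fixed "gaze" at the centre;
# in practice gaze_xy would stream from the tracker every frame.
img = np.random.rand(480, 640, 3)
enhanced = enhance_around_gaze(img, gaze_xy=(320, 240))
```

In a real gaze-contingent display this would run per frame, and the interesting empirical questions (which DeepView's evaluations target) are whether such manipulations measurably improve tasks like colour differentiation or visual search.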
We all wish Miguel well as this project starts later this year. If you are interested in this research, please contact him directly or keep an eye on this page for future blog posts.
The current issue of the New Scientist features an article called “Font for digits lets numbers punch their weight” on Miguel’s work on FatFonts which says, “The symbols we use to represent numbers are, mathematically speaking, arbitrary. Now there is a way to write numbers so that their areas equal their numerical values. The font, called FatFonts, could transform the art of data visualisation, allowing a single infographic to convey both a visual overview and exact values.
‘Scientific figures might benefit from this hybrid nature because scientists want both to see and to read data,’ says Miguel Nacenta, a computer scientist at the University of St Andrews, UK, who developed the concept with colleagues at the University of Calgary, Canada.”
Congratulations to Miguel and his colleagues on having their work highlighted in this venue.
Speaker: Helen Purchase, University of Glasgow
Date/Time: 1-2pm, May 15, 2012
Location: 1.33a Jack Cole, University of St Andrews
Abstract:
The visual design of an interface is not merely an ‘add-on’ to the functionality provided by a system: it is well-known that it can affect user preference, engagement and motivation, but does it have any effect on user performance? Can the efficiency or effectiveness of a system be improved by its visual design? This seminar will report on experiments that investigate whether any such effect can be quantified and tested. Key to this question is the definition of an unambiguous, quantifiable characterisation of an interface’s ‘visual aesthetic’: ways in which this could be determined will be discussed.
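As one example of the kind of quantifiable characterisation the abstract calls for, the sketch below computes a simple visual-balance score over a layout's rectangular elements, in the spirit of well-known layout aesthetics metrics (e.g., Ngo et al.). It is an illustrative assumption, not the measure used in the seminar's experiments.

```python
# Sketch of one quantifiable 'visual aesthetic' measure: how centred the
# area-weighted mass of a layout's elements is within its frame.

def balance(elements, frame_w, frame_h):
    """elements: list of (x, y, w, h) rectangles; returns a score in [0, 1],
    where 1 means the layout's visual weight sits exactly at the centre."""
    total_area = sum(w * h for _, _, w, h in elements)
    # Area-weighted centre of mass of the layout.
    cx = sum((x + w / 2) * w * h for x, y, w, h in elements) / total_area
    cy = sum((y + h / 2) * w * h for x, y, w, h in elements) / total_area
    # Normalised distance of that centre from the frame's centre.
    dx = abs(cx - frame_w / 2) / (frame_w / 2)
    dy = abs(cy - frame_h / 2) / (frame_h / 2)
    return 1 - (dx + dy) / 2

# A symmetric layout scores 1.0; one crowding the top-left corner scores lower.
print(balance([(100, 100, 200, 100), (500, 400, 200, 100)], 800, 600))  # 1.0
print(balance([(0, 0, 200, 100), (50, 150, 200, 100)], 800, 600))       # ~0.36
```

Whether such scores actually predict user performance is, of course, the empirical question the seminar addresses.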
About Helen:
Dr Helen Purchase is Senior Lecturer in the School of Computing Science at the University of Glasgow. She has worked in the area of empirical studies of graph layout for several years, and also has research interests in visual aesthetics, task-based empirical design, collaborative learning in higher education, and sketch tools for design. She is currently writing a book on empirical methods for HCI research.