News

SACHI Seminar: Jason Alexander (Lancaster University) – What would you do if you could touch your data?


Event details

  • When: 29th November 2018 14:00 - 15:00
  • Where: Cole 1.33a

Title:  What would you do if you could touch your data?

Abstract: Data Physicalizations are physical artefacts whose geometry or material properties encode data. They bring digital datasets previously locked behind 2D computer screens out into the physical world, enabling exploration, manipulation, and understanding using our rich tactile senses. My work explores the design and construction of dynamic data physicalizations, where users can interact with physical datasets that dynamically update. I will describe our data physicalization vision and show our progress on designing, building, and evaluating physicalizations and discuss the many exciting challenges faced by this emerging field.

Speaker biography:  Jason is a Senior Lecturer in the School of Computing and Communications at Lancaster University. He has a BSc(Hons) and PhD in Computer Science from the University of Canterbury in New Zealand and was previously a post-doctoral researcher at the University of Bristol. His research is broadly in Human-Computer Interaction, with a particular interest in developing novel interactive systems to bridge the physical-digital divide. His recent work focuses on the development of shape-changing interfaces—surfaces that can dynamically change their geometry based on digital content—and their application to data physicalization. He also has interests in digital fabrication and novel haptic interaction techniques.

SACHI Seminar – Professor Anirudha Joshi: The story of Swarachakra – Cracking the puzzle of text input in Indian languages


Event details

  • When: 29th October 2018 15:00 - 16:00
  • Where: Cole 1.33a

Title: The story of Swarachakra – Cracking the puzzle of text input in Indian languages

Abstract: There was a time when text input in Indian languages was called a ‘puzzle’. People found it so difficult that it became a barrier, preventing them from using most other technology products and from doing common tasks such as searching the web or saving a contact. As a result, Indians typed very little in their own languages. The Roman script (in which we write English) is an Alphabet. In contrast, the large majority of Indian scripts are Abugidas – a different type of script. In our lab, we were convinced that we needed different solutions – what works for Alphabets may not work for Abugidas. Over the years we explored several designs. Our early solutions were for desktop computers. Later we developed concepts for feature phones. We tried several creative ideas and made prototypes. We got interesting results in the lab. But beyond publishing papers and case studies, we could not reach out and make a difference to end-users. Then smartphones arrived and quickly became popular, and it became relatively easier to develop and deploy keyboards. Again, we tried several ideas. One solution stood out in comparison with the others. We called it “Swarachakra”. Today, Swarachakra is available for 12 Indian languages and has been downloaded by about 4 million users. What was the problem, and how was it solved? And what challenges remain? Come to the talk to find out.

Speaker biography: Anirudha Joshi is a professor in the interaction design stream of the IDC School of Design, IIT Bombay, India, though he is currently on sabbatical, visiting universities in the UK. He specialises in the design of interactive products for emergent users in developing economies. He has worked in diverse domains including healthcare, literacy, Indian language text input, banking, education, industrial equipment, and FMCG packaging. Anirudha also works on integrating HCI activities with software engineering processes, and has developed process models, tools, and metrics to help HCI practitioners deliver a better user experience. Anirudha is active in HCI communities in India and abroad. He has served in various chairing roles at several conferences, including India HCI, INTERACT and CHI. Since 2007 he has represented India on IFIP TC13, and he has been the founding director of the HCI Professionals Association of India since 2013. Since 2015 he has been the Liaison for India on the ACM SIGCHI Asian Development Committee, and since 2016 the VP Finance of the ACM SIGCHI Executive Committee. Anirudha has a diverse background: a BTech (1989) in Electrical Engineering, an MDes (1992) in Visual Communication Design, and a PhD (2011) in Computer Science and Engineering, all from IIT Bombay.

SACHI Seminar – Professor Patrick Olivier – Digital Civics: Infrastructuring Participatory Citizenship


Event details

  • When: 18th October 2018 14:00 - 15:00
  • Where: Cole 1.33a

Title:  Digital Civics: Infrastructuring Participatory Citizenship

Abstract: First, this is not a technical talk; it is a talk about a research initiative in “Digital Civics” that Open Lab is undertaking primarily with partners in the North East of England, but also nationally and internationally. Digital Civics proposes the use of digital technologies in the provision of relational models of public services, that is, models that take as a starting point the potential of digital technologies to support citizen-focused sharing of knowledge, experience and resources. By framing government as more than simply the provider of uniform and mechanistic services, digital civics aims to leverage technology to foster environments in which local agents (e.g. charities, local businesses, citizens) are able to solve problems together. Digital Civics research is inherently cross-disciplinary, action-oriented and place-based, and this requires us (as academic researchers) to configure ourselves differently in relation to the communities with whom we conduct our research. In this talk I will describe examples of our digital civics research, from applications in community engagement and education to public health and social justice, as well as the trajectory and pragmatics of the overall endeavour.

Speaker biography:  Patrick Olivier is Professor of Human-Computer Interaction in the School of Computing, Newcastle University, UK. He founded and leads Open Lab, Newcastle University’s centre for cross-disciplinary research in digital technologies. His research interests span interaction design, social computing and ubiquitous computing, particularly in public service and civic application contexts (education, public health and social justice). He is director of the EPSRC Centre for Doctoral Training in Digital Civics (55 cross-disciplinary PhD students) and the EPSRC Digital Economy Research Centre (a multidisciplinary five-year project involving 25 postdocs).

Google scholar:

https://scholar.google.co.uk/citations?hl=en&user=CUu9heMAAAAJ

ORCID:

https://orcid.org/0000-0003-2841-7580

Open Lab:

https://openlab.ncl.ac.uk/

Digital Civics:

https://digitalcivics.io/

SACHI Seminar – Professor Alan Dix: Sufficient Reason


Event details

  • When: 16th October 2018 14:00 - 15:00
  • Where: Cole 1.33a

Title:  Sufficient Reason

Abstract: A job candidate has been pre-selected for shortlist by a neural net; an autonomous car has suddenly changed lanes, almost causing an accident; the intelligent fridge has ordered an extra pint of milk. From the life-changing or life-threatening to day-to-day living, decisions are made by computer systems on our behalf. If something goes wrong, or even when the decision appears correct, we may need to ask the question, “why?” In the case of failures we need to know whether it is the result of a bug in the software; a need for more data, sensors or training; or simply one of those things: a decision correct in the context that happened to turn out badly. Even if the decision appears acceptable, we may wish to understand it for our own curiosity, peace of mind, or legal compliance. In this talk I will pick up threads of research dating back to early work in the 1990s on gender and ethnic bias in black-box machine-learning systems, as well as more recent developments such as deep learning and concerns such as those that gave rise to the EPSRC human-like computing programme. In particular I will present nascent work on an AIX Toolkit (AI explainability): a structured collection of techniques designed to help developers of intelligent systems create more comprehensible representations of their reasoning. Crucial to the AIX Toolkit is the understanding that human-human explanations are rarely utterly precise or reproducible, but they are sufficient to inspire confidence and trust in a collaborative endeavour.

Speaker biography: Alan Dix is Director of the Computational Foundry at Swansea University. Previously he spent ten years in a mix of academic and commercial roles, most recently as Professor in the HCI Centre at the University of Birmingham and Senior Researcher at Talis. He has worked in human–computer interaction research since the mid 1980s, and is the author of one of the major international textbooks on HCI as well as over 450 research publications spanning formal methods to design creativity, including some of the earliest papers in the HCI literature on topics such as privacy, mobile interaction, and gender and ethnic bias in intelligent algorithms. Issues of space and time in user interaction have been a long-term interest, from his “Myth of the Infinitely Fast Machine” in 1987 to his co-authored book TouchIT, on physicality in a digital age, due to be published in 2018. Alan organises a twice-yearly workshop, Tiree Tech Wave, on the small Scottish island where he has lived for 10 years and where he has been engaged in a number of community research projects relating to heritage, communications, energy use and open data. In 2013 he walked the complete periphery of Wales, over a thousand miles. This was a personal journey, but also a research expedition exploring the technology needs of the walker and the people along the way. The data from this, including 19,000 images, about 150,000 words of geo-tagged text, and many gigabytes of bio-data, are available in the public domain as an ‘open science’ resource. Alan’s new role at the Computational Foundry has brought him back to his homeland. The Computational Foundry is a £30 million initiative to boost computational research in Wales with a strong focus on creating social and economic benefit. Digital technology is at a bifurcation point: it could simply reinforce existing structures of industry, government and health, or it could allow us to radically reimagine and transform society.
The Foundry is built on the belief that addressing human needs and human values requires and inspires the deepest forms of fundamental science.  http://alandix.com/

SACHI Seminar – Alyssa Goodman: Visualization and the Universe


Event details

  • When: 12th October 2018 12:00 - 13:00
  • Where: Cole 1.33b

Title: Visualization and the Universe: How and why astronomers, doctors, and you need to work together to understand the world around us

Abstract: Astronomy has long been a field reliant on visualization. First, it was literal visualization—looking at the Sky. Today, though, astronomers are faced with the daunting task of understanding gigantic digital images from across the electromagnetic spectrum and contextualizing them with hugely complex physics simulations, in order to make more sense of our Universe. In this talk, I will explain how new approaches to simultaneously exploring and explaining vast data sets allow astronomers—and other scientists—to make sense of what the data have to say, and to communicate what they learn to each other and to the public. In particular, I will talk about the evolution of the multi-dimensional linked-view data visualization environment known as glue (glueviz.org) and the Universe Information System called WorldWide Telescope (worldwidetelescope.org). I will explain how glue is being used in the medical and geographic information sciences, and I will discuss its future potential to expand into all fields where diverse, but related, multi-dimensional data sets can be profitably analyzed together. Toward the aim of bringing these insights to a broader audience, I will also introduce the new “10 Questions to Ask When Creating a Visualization” website, 10QViz.org.

Speaker biography: Professor Alyssa Goodman, Harvard University

Alyssa Goodman is the Robert Wheeler Willson Professor of Applied Astronomy at Harvard University, and a Research Associate of the Smithsonian Institution. Goodman’s research and teaching interests span astronomy, data visualization, and online systems for research and education. Goodman received her undergraduate degree in Physics from MIT in 1984 and a Ph.D. in Physics from Harvard in 1989. Goodman was awarded the Newton Lacy Pierce Prize from the American Astronomical Society in 1997, became a full professor at Harvard in 1999, was named a Fellow of the American Association for the Advancement of Science in 2009, and was chosen as Scientist of the Year by the Harvard Foundation in 2015. Goodman has served as Chair of the Astronomy Section of the American Association for the Advancement of Science and on the National Academy’s Board on Research Data and Information, and she currently serves on both the IAU and AAS Working Groups on Astroinformatics and Astrostatistics. Goodman’s personal research presently focuses primarily on new ways to visualize and analyze the tremendous data volumes created by large and/or diverse astronomical surveys, and on improving our understanding of the structure of the Milky Way Galaxy. She is working closely with colleagues at the American Astronomical Society, helping to expand the use of the WorldWide Telescope program in both research and education.

SACHI Seminar: Alessio Malizia – User Experience: a step towards Natural User Interfaces.


Event details

  • When: 7th June 2018 14:00 - 15:00
  • Where: Cole 1.33a

Title: User Experience: a step towards Natural User Interfaces.

Abstract: The road to natural interfaces is still long and we are now witnessing an artificial naturality. These interfaces are natural, in the sense they employ hand gestures, but they are also artificial, because the system designer imposes the set of gestures. In this lecture we will explore together the benefits and issues of Natural User Interfaces.

Speaker biography: Alessio Malizia is a Professor of UX Design at the University of Hertfordshire and a Distinguished Speaker of the ACM (the Association for Computing Machinery); he lives in London but is a “global soul” and has lived in Italy, Spain and the US. He is the son of a blacksmith, but thereafter all pretensions of manual skills end. Prof. Malizia began his career as a bearded computer scientist at Sapienza – University of Rome and then, after industrial experience at IBM and Silicon Graphics, moved on to a career in research. He was a visiting researcher at Xerox PARC, where he was appreciated for his skills in neural networks (multilayer perceptrons) and as an eater of peanut butter and chocolate biscuits. He worked as a Senior Lecturer at Brunel University London and as an Associate Professor (and Spanish tapas aficionado) at the University Carlos III of Madrid. Prof. Malizia’s research and teaching interests focus on Human-Centred Systems.

He is interested in the design of ubiquitous interactive systems, with a special focus on the End-User Development community. He is particularly interested in systems where the physical and digital become seamlessly intertwined, producing a new hybrid landscape, and in the problems that arise from designing such complex hybrid environments, which involve collaboration among various disciplines and stakeholders. In his role at the School of Creative Arts at the University of Hertfordshire, he is keen to develop novel approaches, and to attract funding, for improving methods to design almost invisible interfaces embedded in the physical environment and naturally exploited through users’ innate interaction modalities.

SACHI Seminar – Cecilia Mascolo (Cambridge): Systems, Models and Learning: From mobile devices to mobile data


Event details

  • When: 30th January 2018 14:00 - 15:00
  • Where: Cole 1.33a

Abstract: This talk concentrates on our efforts over the years to make the harvesting of relevant data from mobile devices accurate and efficient, to allow on-device data interpretation, and to produce models able to interpret the data so that it can be exploited for a wide range of applications. I will describe specific strands of our work, which range from fitting mobile sensing inference on devices, and efficiently exploiting heterogeneous local computation resources, to data analytics for mobile health and urban computing. I will discuss the challenges and opportunities of the field throughout the talk.

Speaker Bio: Cecilia Mascolo is a mother of a teenage daughter. She is also Full Professor of Mobile Systems in the Computer Laboratory, University of Cambridge, UK, a Fellow of Jesus College Cambridge and a Faculty Fellow at the Alan Turing Institute for Data Science in London. Prior to joining Cambridge in 2008, she was a faculty member in the Department of Computer Science at University College London. She holds a PhD from the University of Bologna. Her research interests are in human mobility modelling, mobile and sensor systems and networking, and spatio-temporal data analysis. She has published in a number of top-tier conferences and journals in the area, and her investigator experience spans projects funded by Research Councils and industry. She has received numerous best paper awards and in 2016 was listed among the “10 Women in Networking/Communications You Should Know”. She has served as a steering, organizing and programme committee member of mobile, sensor systems, networking and data science conferences and workshops, and has delivered a number of keynote talks in the areas of mobility, data science, pervasive computing and systems. She is Associate Editor-in-Chief of IEEE Pervasive Computing and sits on the editorial boards of IEEE Transactions on Mobile Computing, ACM Transactions on Sensor Networks and ACM Transactions on Interactive, Mobile, Wearable and Ubiquitous Technologies. More details at www.cl.cam.ac.uk/users/cm542.

War Stories: Building new tech products in an uncertain world


Event details

  • When: 19th April 2018 14:00 - 15:00
  • Where: Cole 1.33a

Steven Drost (CodeBase Chief Strategy Officer) and Jamie Coleman (CodeBase Co-Founder and Chair) will talk about topics that are rarely discussed in an academic environment: startups, product management, jobs to be done, and disruption. Touching on aspects of UX, HCI, AI and systems development, this is the stuff they wish every computer scientist and startup founder knew before trying to create an innovative new business.

What is CodeBase?

CodeBase is the UK’s largest startup incubator, home to around 100 technology companies in Edinburgh and Stirling. It brings together ambitious entrepreneurs, world-class technological talent and top investors, in a creative, collaborative environment designed for the new digital economy. We host a vibrant, open community of experts in a diverse range of fields, with hands-on mentorship, networking and world-class business support. http://www.thisiscodebase.com

Jamie and Steven are quite inspiring speakers, and if you are looking for project partners, collaborators, or simply want to learn how to develop your ideas commercially, this could be a good talk for you.

SACHI Seminar: Matjaž Kljun – Large scale studies of habit changing interface design


Event details

  • When: 12th April 2018 14:00 - 15:00
  • Where: Cole 1.33b

Title: Large scale studies of habit changing interface design

Speaker: Matjaž Kljun

Abstract:

Various technologies can be used to persuade people to change their habits, behaviours or attitudes. Such technologies are termed persuasive, and they are used in a variety of fields such as marketing, public health and education.

We are exposed daily to persuasion through different visualizations and triggers on all our devices. For example, a social networking application tries to persuade us to open the app with a push notification, and once the app is opened, other hooks are placed so that we spend more time in it. However, such applications are usually installed by us, and we are inclined to use them. But could we persuade highly busy professionals to complete a training course, or just about everybody to read terms of service? We will discuss these issues through large-scale studies that have been done in the wild.

Speaker biography: Matjaž Kljun is an assistant professor at the Faculty of Mathematics, Natural Sciences and Information Technologies at the University of Primorska, co-director of the HICUP lab (Humans Interacting with Computers at University of Primorska), and a research associate at the Faculty of Information Studies, Slovenia. He received his Ph.D. in computer science from Lancaster University, UK. His research interests span various fields related to Human-Computer Interaction, Personal Information Management and the use of technologies in teaching and learning.

SACHI Seminar: Klen Čopič Pucihar – The Missing Interface: Micro Gestures on Objects for Augmented Reality Interaction


Event details

  • When: 12th April 2018 14:00 - 15:00
  • Where: Cole 1.33b

Title: The Missing Interface: Micro Gestures on Objects for Augmented Reality Interaction

Speaker: Klen Čopič Pucihar

Abstract:

Augmented reality technology can introduce digital elements to arbitrary objects. However, these objects were never designed to incorporate a digital component and hence do not provide the necessary interface. To overcome this limitation, AR interaction systems add sensors to objects, use additional handheld hardware, or perform hand and body tracking. These methods are not optimal for direct interaction with physical objects because they:

  • require modification of existing objects,
  • require the user to hold a controller in their hand,
  • are based on synthesis of captured RGB or RGB-D data streams, imposing the following limitations: (i) gestures need to be performed within the view of the camera; (ii) the gestures involve reasonably large hand or finger movements (e.g. pinching the fingers, or the blooming gesture of opening the palm); (iii) the hand performing the gesture must not be occluded (e.g. gestures cannot be detected if performed whilst grasping an object).

In this talk, Klen will look at alternative methods that try to overcome these limitations and make inconspicuous, precise and flexible object-oriented interaction possible for both augmented and mediated reality applications.

Speaker Biography

Klen Čopič Pucihar is an assistant professor at the Faculty of Mathematics, Natural Sciences and Information Technologies at the University of Primorska. Klen’s research vision is to look for new ways in which one could augment, modify and mediate the rich sources of visual, auditory and tactile stimuli that fabricate our everyday life experiences. The goal is to augment human abilities with new ways of using computational resources. This matters because the interface is the bottleneck between us humans and the benefit that ever-increasing computational resources could bring to our everyday lives. This makes interfaces the core challenge for the future and the essence of Klen’s research, which currently concentrates mainly on augmented reality, mobile computing and human-computer interaction, focusing on the perceptual issues that arise whilst interacting with various computer systems and that lead to innovative user interface designs. Klen’s work has been published in highly ranked scientific venues and won him the best poster award at ISMAR 2014.