Doris_Pischedda
Invited Talk - 15th June 2018 - 11am, Chichester 3 Building, Room: 3R143
Talk by Doris Pischedda on “Neural representation of task sets: how the human brain learns task structures from regularities in the environment and represents rules.”

Abstract

In everyday life, humans perform various tasks that can be described in terms of the rules that specify how to perform them. In simple situations, a single rule may suffice to achieve our goal; more difficult tasks, however, require multiple rules organised in hierarchies. While an individual can carry out some action plans alone, more complex tasks require interaction with other people. Task rules are not intrinsic to the human brain; people learn rules constantly, by discovering regularities in their environment and identifying the course of action that leads to outcomes matching their goals. In this talk, I will present some of my studies investigating how task rules are represented in the human brain. I considered task sets with a hierarchical structure to identify neural representations of rules at different levels. Then, I explored task encoding when people interacted to achieve a common goal, to pinpoint neural representations of tasks performed by either the subject or their partner. Results showed that task rules were encoded within the frontoparietal control network, with no difference between rules from distinct hierarchical levels. Regions within this network also encoded information about the task assigned to either the subject or their partner. However, task information was encoded in different brain networks depending on whom the task was assigned to. Finally, I will present results from my recent work on rule acquisition, showing that the amount of information available in the environment affects learning and determines how outcome value is encoded in the brain.

Short Bio

Doris Pischedda is a Postdoctoral Research Fellow at the Center for Mind/Brain Sciences, University of Trento, where she is currently investigating how humans represent the value of choice outcomes. Beyond decision making, her research interests include cognitive control, reasoning, and strategic as well as non-strategic behaviour during social interaction. In 2014, she received a PhD in cognitive neuroscience from the University of Milano-Bicocca with a thesis on rule-guided behaviour, investigating where and how rules are represented and processed in the human brain. She carried out her postdoctoral research at the Bernstein Center for Computational Neuroscience Berlin under the supervision of Prof. John-Dylan Haynes, investigating neural representations of collaborative tasks. Then, within a project funded by Prof. Aldo Rustichini from the Department of Economics, University of Minnesota, she explored how game variables are encoded by the brain during strategic interactions.

Yunwen Tu
Invited Talk - 29th May 2018 - 11am, Chichester 3 Building, Room: 3R143
Talk by Yunwen Tu (Tutu) on “Design Prototyping for the Future Food”

Abstract

As a designer, how do I apply speculative future thinking to my research projects on emerging food culture and technology? How are designers inspired, and how can they test their ideas, through the process of quick-and-dirty prototyping? In this talk, I will share my future food design project "Protein Fantasy", describing how I explore design language through context design and food prototyping, and how I extend the possibilities of research and technology through design.

Short Bio

Yunwen Tu (Tutu) is a San Francisco-based experience designer whose passions are the future of food and education. Tutu seeks ways to push design boundaries through her work, envisioning how the food of the global diaspora will be impacted by environmental, socioeconomic, political, and technological trends. She has collaborated with the social-mission restaurants Perennial and Don Bugito on reducing the environmental impacts of the food we eat. Her work has been featured in design exhibitions at art and science museums in the USA. For more details, see tuyunwen.com

Keisuke Suzuki
Invited Talk - 18th May 2018 - 11am, Arundel Building, Room: 223
Talk by Keisuke Suzuki on “Investigating Embodied Self with Virtual Reality”

Abstract

One key aspect of self-consciousness is the experience of being or having a body. Bodily experience is often approached from the perspective of multisensory integration, as illustrated in the now-famous rubber hand illusion. At the Sackler Centre, I have been working on several experiments using state-of-the-art virtual reality technologies to investigate embodied experiences of selfhood, such as body ownership and the feeling of agency. In this talk, I will first discuss the cardiac rubber hand illusion experiment, in which we found that visual feedback of one's own heartbeat projected onto a virtual hand induced a sense of ownership over the hand. Next, I will talk about our recent experiment investigating the intentional binding effect, an implicit measure of the feeling of agency, with a virtual hand setup. We found that intentional binding occurs merely from observing the hand movement in realistic virtual environments, even in the absence of intentional action. Time permitting, I will also briefly introduce other VR setups I have developed.

Short Bio

Keisuke Suzuki obtained his PhD in Artificial Life from the University of Tokyo in 2007. He was then a research fellow at the RIKEN Brain Science Institute (2008-2011), working on human cognitive functions in virtual reality environments. There, with his colleagues, he developed a novel virtual reality system called Substitutional Reality, in which people believe they are experiencing real-world scenes even though they are only exposed to pre-recorded ones. In 2011 he joined the Sackler Centre for Consciousness Science at the University of Sussex as a postdoctoral research fellow. Keisuke's research focuses on the study of consciousness in terms of embodied cognition, investigating body ownership, the feeling of agency, and the sense of presence. His approach builds on state-of-the-art virtual reality setups for the study of conscious presence and the bodily self, complemented by theoretical modelling of embodied self-consciousness.

David Green
Invited Talk - 13th April 2018 - 11am, Chichester 3 Building, Room: 3R143
Talk by David Green on "Representing Realities in Virtual Reality".

Abstract

I will begin with a short introduction to the Bristol VR Lab. I will then share some insights from the early stages of the 'Virtual Realities: Immersive Documentary Encounters' project (October 2017), drawing on my previous research into interaction design for non-fiction (2011-2016). This will include observations from two exploratory activities: a database of non-fiction VR works (2012-2017) [w/ Chris Bevan] and a survey of award-winning VR producers [w/ Mandy Rose]. Given that we are still in the early stages of the 2.5-year project, I will focus on our emerging research questions, with an invitation to give feedback, ask questions, and discuss.

Short Bio

David Green is a Research Fellow at the University of the West of England, based at the Bristol VR (Virtual Reality) Lab. A documentary-maker and computer scientist, he completed his PhD at Open Lab, Newcastle University, focusing on interaction design for participation in interactive documentaries. After postdocs at Newcastle and Northumbria Universities, he is now working on the EPSRC-funded 'Virtual Realities: Immersive Documentary Encounters' project, which takes a highly interdisciplinary approach to exploring immersive forms of non-fiction (e.g. journalism and documentary).

Dr Oussama Metatla
Invited Talk - 24th November 2017 - 11am, Arundel Building, Room: 1A
Talk by Dr Oussama Metatla on "Designing multisensory technology with and for people living with visual impairments".

Abstract

Involving people in the process of designing technology that affects them is now a well-established component of HCI research and practice. However, as with many forms of participation in decision-making in society, people living with visual impairments have had more limited opportunities to influence technology design across a variety of domains. A number of factors contribute to this. For example, many participatory design methods rely on visual techniques, such as post-it notes and low-fi paper prototyping, to facilitate the expression and communication of design ideas; and while using visual means to express ideas is appropriate for designing graphical interfaces, it is harder to use them to articulate the design of, say, sonic or haptic artefacts, which are typical alternative modalities of interaction for people living with visual impairments. In this talk, I will outline our experience of engaging with people living with visual impairments and people with mixed visual abilities, where we adapted participatory design methods in order to jointly create meaningful technology, and describe some of the resulting research investigations that this engagement opened up in the areas of multisensory and crossmodal interaction design.

Short Bio

Oussama Metatla is an EPSRC Research Fellow at the Department of Computer Science, University of Bristol, where he currently leads a project researching inclusive educational technology for children with mixed visual abilities in mainstream schools. His research interests include investigating multisensory user experiences with interactive technology and designing with and for people living with visual impairments. He received his PhD in 2011 from Queen Mary University of London for a thesis exploring and characterising the use of sound to support non-visual interaction. Following this, he was a Researcher Co-Investigator on two EPSRC projects, Crossmodal Collaborative Interfaces and Design Patterns for Inclusive Collaboration, also at QMUL, and an Associate Lecturer at Oxford Brookes University, before being awarded an EPSRC Early Career Fellowship hosted at the University of Bristol.

About the SCHI Lab

The SCHI Lab's research lies in the area of Human-Computer Interaction (HCI), an area in which research on multisensory experiences makes a difference to how we will design and interact with technology in the future. The interdisciplinary team explores tactile, gustatory, and olfactory experiences as novel interaction modalities.

Contact

Sussex Computer Human Interaction Lab

Creative Technology Research Group

School of Engineering and Informatics

University of Sussex, Chichester 1

Brighton BN1 9QJ, UK

Phone: +44 (0)1273 877837

Mail: m.obrist [at] sussex.ac.uk
