|Affiliation||University of Oxford|
|Research area code||(C8) Psychology|
|Fellowship Inauguration Year||2016|
Computational modelling within cognitive (neuro)science aims to understand the nature of, and relationships between, the computational level (the what), the algorithmic level (the how), and the implementational level (the physical substrate) of the brain. Using computational models, we can ask and answer detailed computational- and algorithmic/representational-level questions. Framing questions about the brain in computational terms enables robust theory-testing as well as clear predictions and explanations for cognitive and computational phenomena. For these reasons, computational modelling plays a vital role in cognitive science and psychology.
My general research focus is cognitive computational modelling, with specific interests in creating well-documented, replicable code, in replicating important and useful models, in promoting the teaching of programming to psychology under- and postgraduates, and in creating novel computational models of cognitive processes.
My BSc was in Computer Science, with a focus on Artificial Intelligence and Machine Learning, before I moved to a master's degree in Cognitive and Decision Sciences and then specialised further with a PhD on cognitive models of semantic memory in both healthy participants and patients with frontotemporal neurodegeneration. I am an expert in C and Python. Moreover, I can program in about 20 other languages and can learn the basics of a new programming language in a matter of days. Although it takes years to fully appreciate the idiosyncrasies of a language and to learn how certain algorithms can be implemented in it most efficiently and readably, I nonetheless enjoy the process, e.g., drawing connections between different languages' conceptual foundations. I especially like to share my enthusiasm with others in order to motivate them to discover that programming is an invaluable and enjoyable skillset.
I am highly motivated both to teach coding and to keep learning more about it myself. I have been programming since I was 16 and I would like to share this passion with others. I have been a role model to students, especially women, over most of my academic career, something I had not realised was a default position for women in computational subfields. It had not occurred to me that, merely by virtue of being a woman, other women would often see me as a role model. This became obvious to me through discussions with my students over the course of teaching during my PhD. I hope my role-model status can be used to inspire more women in cognitive science to take up computational modelling.
For the current academic year, I have been put in charge of running a computational modelling course focussing on neural networks, although I will also have to teach students the very basics of programming, as there is no general programming course for undergraduates in Experimental Psychology. For this course I developed my own pedagogically oriented neural network simulator, which the students will be required to read, extend with their own code, and then run with a GUI. In other words, I would like to bootstrap their learning so that they can continue without my guidance after the course has ended. Importantly, undergraduate psychology students are mostly women, and an important, yet underrepresented and underutilised, talent pool exists therein. Their perspectives, as women but also as undergraduate students, will likely be a refreshing contribution, since a lot of modelling work is dominated by postgraduate students who often come from outside psychology and cognitive science.
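To give a flavour of the kind of minimal, readable simulator I have in mind (the course simulator itself is more elaborate and comes with a GUI; everything below is an illustrative sketch, not the course code), a single-unit perceptron learning the logical AND function fits in a few dozen lines of plain Python:

```python
# A tiny perceptron learning logical AND -- the sort of minimal, readable
# network students can inspect and extend. Illustrative sketch only.

def step(z):
    """Threshold activation: the unit fires (1) if its weighted input sum is positive."""
    return 1 if z > 0 else 0

def train_perceptron(samples, lr=1, epochs=20):
    """Perceptron learning rule: nudge weights towards the correct outputs.
    An integer learning rate keeps the arithmetic exact for this toy example."""
    w = [0.0, 0.0]   # one weight per input feature
    b = 0.0          # bias term
    for _ in range(epochs):
        for x, target in samples:
            pred = step(w[0] * x[0] + w[1] * x[1] + b)
            error = target - pred
            w[0] += lr * error * x[0]
            w[1] += lr * error * x[1]
            b += lr * error
    return w, b

# Logical AND: output 1 only when both inputs are 1 (linearly separable,
# so the perceptron is guaranteed to converge).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
for x, target in data:
    print(x, "->", step(w[0] * x[0] + w[1] * x[1] + b))
```

Students who can read and modify something like this (swap the activation, add a feature, log the weight trajectory) have the scaffolding to move on to multi-layer networks.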
Ideally, I would like to see programming courses taught to under- and postgraduate students across the board in Psychology. Coding is not only important for modelling: even basic behavioural experiments require programming skills, let alone performing state-of-the-art statistical analyses in R, sharing code for transparency and replicability, or designing computational models to explore one's theories. Programming, in other words, opens doors to further theoretical and empirical work that non-coders would otherwise not have access to.
My main current work as a postdoctoral researcher focuses on the computational modelling (i.e., writing code that runs efficient and scientifically useful simulations) of infants' categorisation abilities, with the specific aim of examining what effect labels (i.e., words that denote category membership, e.g., “dog”) have on their performance. Infants and adults can often categorise stimuli effortlessly using perceptual information and linguistic labels as input. For example, an infant can learn that the label “dog” denotes animals with fur that bark, can fetch sticks, have four legs, a tail, etc., by repeated exposure to various different types of dog. Thus they generalise the meaning of “dog” to every animal that is a dog, but not to, for example, cats. In other words, babies learn that certain labels apply to certain classes of things but not others. However, there are outstanding research questions as to what roles labels play in categorisation. Does the role of labels change over development from being just another feature to controlling categorisation in a top-down manner, or do labels carry some innate special status? These questions cannot be answered without computational modelling, because questions about the types of learning that are possible given specific known inputs and outputs are inherently computational in nature.
My fellow researchers on this US National Science Foundation grant (Kim Plunkett in Experimental Psychology at the University of Oxford, and Samuel Rivera, Keith S. Apfelbaum, and Vladimir Sloutsky at the Ohio State University) and I propose that labels start off no different from any sensory (e.g., visual, tactile, etc.) feature, and then gain a more important status because they are a dependable cue for category membership. I am currently tackling this research question using self-organising maps (SOMs), a type of learning algorithm that shares certain properties with the way the brain is topologically organised. I am extending their abilities using a type of multi-layered SOM I have developed with Kim Plunkett. In addition, I have jointly developed, with my collaborators at the Ohio State University, a different model, also based on SOMs, that uses a decision rule to categorise input stimuli (e.g., analogues of dogs, cars, and so on are classified using a basic rule that interacts with what the network has learned). These two models are not mutually exclusive, and in theory they can be merged into a multi-level SOM that uses this type of decision rule. At present, however, one is a qualitative account while the other provides a rough explanation of, and predictions for, adult and infant data.
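For readers unfamiliar with SOMs, the core algorithm is compact: each input is compared to a grid of weight vectors, the best-matching unit and its grid neighbours are nudged towards the input, and over time similar inputs come to activate nearby regions of the map. The sketch below is a generic toy SOM in Python; the data, grid size, and decay schedules are illustrative choices, not those of the models described above:

```python
# A minimal self-organising map (SOM). Units live on a 2-D grid; each unit
# holds a weight vector that is pulled towards the inputs, with grid
# neighbours of the winner updated together so that similar inputs end up
# mapped to nearby grid positions. Toy sketch, not the research models.
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, grid=(5, 5), epochs=50, lr0=0.5, sigma0=2.0):
    n_units = grid[0] * grid[1]
    # Grid coordinates of each unit, used by the neighbourhood function.
    coords = np.array([(i, j) for i in range(grid[0])
                       for j in range(grid[1])], dtype=float)
    W = rng.random((n_units, data.shape[1]))  # random initial weights
    n_steps = epochs * len(data)
    t = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            # Decay the learning rate and neighbourhood radius over time.
            frac = t / n_steps
            lr = lr0 * (1 - frac)
            sigma = sigma0 * (1 - frac) + 0.5
            # Best-matching unit: the closest weight vector to the input.
            bmu = np.argmin(((W - x) ** 2).sum(axis=1))
            # Gaussian neighbourhood around the BMU on the grid.
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
            h = np.exp(-d2 / (2 * sigma ** 2))
            W += lr * h[:, None] * (x - W)
            t += 1
    return W, coords

# Toy stimuli: two perceptual clusters (think "dog-like" vs "cat-like").
cluster_a = rng.normal(loc=0.2, scale=0.05, size=(30, 3))
cluster_b = rng.normal(loc=0.8, scale=0.05, size=(30, 3))
data = np.vstack([cluster_a, cluster_b])

W, coords = train_som(data)

def bmu_of(x):
    return np.argmin(((W - x) ** 2).sum(axis=1))

# After training, the two clusters should occupy different map regions.
print(bmu_of(cluster_a.mean(axis=0)), bmu_of(cluster_b.mean(axis=0)))
```

In a labels-as-features setup, a label would simply be appended to the input vector as extra dimensions; the empirical question is whether that treatment, or a mechanism granting labels a privileged role, better captures infant behaviour.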
In addition, I am developing a model of the development of tool use in infants and birds, specifically corvids, with the long-term goal of extending it to other problem-solving species, e.g., cockatoos, apes, etc. Such cross-species comparisons are invaluable for understanding what is and is not universal in terms of cognition/brains and what is unique to humans. Comparing babies to birds, for example, allows us to ask what it is that babies are actually doing: are they following certain preset learning steps in order to understand the world, are they learning how to learn in the same way crows do, and when does the learning in crows versus babies diverge? Babies and crows have some very specific differences; for example, crows will never learn complex natural language, but they will surpass babies (although obviously not adult humans) in certain basic tasks that require the use of tools (e.g., using a rake or opening a box to obtain food/a toy). Cross-species comparisons are also invaluable ways to test our higher-level theories of cognition. For example, if a theory proposes that complex mechanisms are at play in human cognition, e.g., that language is a requirement for a specific task, and it is then discovered that a non-linguistic animal can accomplish the task just as well, the theory would need to be amended accordingly, or completely discarded in favour of a simpler account. This work is being carried out with Lauriane Rat-Fischer, a fellow postdoctoral researcher based jointly in Kim Plunkett's lab and in Alejandro Kacelnik's lab in the Department of Zoology at the University of Oxford. She has developed some impressive experiments in which she tests the tool-using abilities of corvids and babies in near-identical tasks.
Together we plan to create a computational model for both the babies and the birds that can derive the affordances of tools and other objects as well as choose an appropriate sequence of actions to undertake in order to obtain the food/toy reward. This model will probably use topic modelling and decision-theoretic approaches to answer the two main research questions: how do agents learn affordances, and how do they choose a series of actions to take given an environment?
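As a hedged illustration of the decision-theoretic half of that question, the sketch below runs value iteration on a tiny, hypothetical "rake in the food" environment: the agent must first move to the tool and grab it, and only then does the raking action at the food location yield reward. The states, actions, and rewards are toy stand-ins, not the planned model:

```python
# Value iteration on a toy deterministic "tool use" task: fetch the rake
# at position 0, carry it to position 3, then rake in the food.
# All states, actions, and rewards here are hypothetical illustrations.

positions = range(4)
actions = ["left", "right", "grab", "rake"]
gamma = 0.9  # discount factor: future reward is worth slightly less

def step(state, action):
    """Deterministic transition: returns (next_state, reward).
    A state is (position, has_tool)."""
    pos, has_tool = state
    if action == "left":
        return ((max(pos - 1, 0), has_tool), 0.0)
    if action == "right":
        return ((min(pos + 1, 3), has_tool), 0.0)
    if action == "grab" and pos == 0:
        return ((pos, True), 0.0)          # pick up the rake
    if action == "rake" and pos == 3 and has_tool:
        return (state, 10.0)               # food obtained
    return (state, 0.0)                    # everything else is a no-op

states = [(p, t) for p in positions for t in (False, True)]

def q(state, action, V):
    """One-step lookahead value of taking `action` in `state`."""
    nxt, reward = step(state, action)
    return reward + gamma * V[nxt]

# Value iteration: repeatedly back up the best one-step value.
V = {s: 0.0 for s in states}
for _ in range(100):
    V = {s: max(q(s, a, V) for a in actions) for s in states}

def greedy_plan(state, n_steps=8):
    """Roll out the greedy policy implied by the converged values."""
    plan = []
    for _ in range(n_steps):
        a = max(actions, key=lambda act: q(state, act, V))
        plan.append(a)
        state, _ = step(state, a)
    return plan

print(greedy_plan((1, False)))
```

Starting at position 1 without the tool, the greedy policy detours left to grab the rake before heading to the food, which is exactly the kind of multi-step, tool-mediated action sequence the babies and corvids must discover. Learning the affordances themselves (i.e., learning `step` rather than being given it) is the harder, open half of the problem.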
As part of another side project, I have recently finished jointly designing a set of experiments with Charles Spence and Alejandro Salgado-Montejo, in the Experimental Psychology department at the University of Oxford, for which I am programming the presentation of cross-modal stimuli in a novel way. This experiment examines whether or not cross-modal (i.e., across different senses) visual and gustatory experiences are mediated via semantic memory processing. This question is interesting because much of the cross-modal literature is devoid of any appeal to semantic processing (i.e., the contribution of potentially abstract conceptual knowledge, e.g., the concept “apple”) in the association between visual and taste experiences. Instead, researchers often appeal to direct correlations between sensory experiences, without any higher-level mediation coming from, e.g., language.
Moreover, I have experience submitting data to the Open Science Framework (OSF) for a side project I worked on with Bradley Love and Łukasz Kopeć (paper: http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0137685; OSF repository: http://osf.io/3xrfa). We looked at how optimism bias manifests in a zero-sum game (i.e., one in which, if one team wins, the other must lose), specifically the National Football League in the USA. We used Amazon's Mechanical Turk, another very useful tool requiring some coding skills that younger generations of psychologists should be able to use to their research's benefit.
|Title||Start date||End date|
|Open Data Science Conference UK 2016||Tuesday, 17 October 2017||Thursday, 19 October 2017|
|PyData London 2016||Monday, 16 October 2017|
|How do you teach Sustainable Software Practices 101?||Monday, 16 October 2017|
|PyCon UK 2016||Friday, 09 June 2017|