Friday 12 April 2024

Cosyne 2024

Here's a much-delayed post on my experience at Cosyne 2024!


Big themes at Cosyne:

1. Compositionality (in task representation, perception and planning)

2. Neural manifolds (low-dimensional structure in neural population data recorded during a task; see the sketch after this list)

3. Using RNNs for standard neuro/psych tasks
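
To make theme 2 concrete, here's a minimal toy sketch (my own example, not from any particular talk) of what "finding a neural manifold" usually amounts to: simulate high-dimensional population activity driven by a few latent signals, then check that a handful of principal components capture almost all of the variance.

```python
# A toy demonstration of the neural-manifold idea (my own example, not from
# any particular talk): population activity of 100 neurons driven by just 3
# latent signals is effectively low-dimensional.
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_timepoints = 100, 500

# Simulate 3 slow latent signals...
t = np.linspace(0, 10, n_timepoints)
latents = np.stack([np.sin(t), np.cos(0.5 * t), np.sin(0.3 * t)])  # (3, T)

# ...embedded across many neurons via a random readout, plus noise.
readout = rng.normal(size=(n_neurons, 3))
activity = readout @ latents + 0.1 * rng.normal(size=(n_neurons, n_timepoints))

# PCA: eigendecompose the neuron-by-neuron covariance.
centered = activity - activity.mean(axis=1, keepdims=True)
eigvals = np.linalg.eigvalsh(centered @ centered.T / n_timepoints)[::-1]

print(f"Top 3 PCs explain {eigvals[:3].sum() / eigvals.sum():.0%} of variance")
```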

 

But that's not all; there were many other interesting talks, posters, and workshop sessions that I can't go through in this blog post, but they are all compiled in these slides I created for a lab presentation.


Themes I think could be up-and-coming at future Cosynes:

1. Compositionality (again) and structure learning

2. Continual learning (especially in motor repertoires and other skills)

3. Solving complex tasks with local learning rules

4. Homeostatic RL

5. Neuroscience-specific RL gym environments, as opposed to OpenAI's standard Gym environments


Until next time :)


Monday 28 August 2023

CCN 2023

CCN 2023 was held in our very own Oxford, and it was an enjoyable and insightful experience. Here I share some of the works I found interesting at the conference, along with some of my thoughts:

 

This one is quite mathematically involved, but it tackles the very important problem of learning not only the causal structure but also the necessary hyperparameters using variational inference. The amazing bit is that it can explain several human behaviours (often viewed as sub-optimal) through errors in estimating these hyperparameters, or through having a preference and gathering evidence to test a hypothesis! I'd be very curious to see how this works out when you include action selection based on the learned causal model, and its links to our work on the safety-efficiency trade-off.

 
 
It's very exciting to see that something quite similar to the QMT we have developed (https://www.youtube.com/watch?v=Gx_jnq4hvSY) can be used for early detection/tracking of Parkinson's disease as well!

I've been following Jack's work since his very interesting paper explaining action prediction errors in a normative fashion. This one uses a clever trick that would certainly work well for smaller problems (not very deep RL) and makes very interesting predictions about the role of cortical dopamine.

The next work shows that an auxiliary learning signal can explain several findings about the hippocampus. I'm curious to see its role in social learning, as a similar auxiliary signal has been shown to be useful in deciding whom to learn from in MARL: https://proceedings.mlr.press/v139/ndousse21a.html

Thanks to this poster, I finally got a grip on rate-distortion theory and how it can be used with RL, which leaves me thinking about how it could be used in neuroscience and mapped onto neural substrates. It is indeed a very neat idea, with several applications in capacity-constrained RL (working memory, energy, etc.); a small sketch of the idea follows below.
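
For the curious, here's a minimal sketch of the capacity-constrained idea (my own toy, using the standard Blahut-Arimoto scheme rather than whatever the poster actually used): trade expected value against the mutual information I(S; A) the policy has to carry.

```python
# Capacity-constrained action selection via rate distortion (my toy example,
# using the standard Blahut-Arimoto iteration; not necessarily the poster's method).
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 8, 4
Q = rng.normal(size=(n_states, n_actions))   # action values (the "distortion" side)
rho = np.full(n_states, 1 / n_states)        # state distribution
beta = 2.0                                   # capacity parameter: higher = more bits allowed

pi = np.full((n_states, n_actions), 1 / n_actions)
for _ in range(200):                         # Blahut-Arimoto iterations
    marginal = rho @ pi                      # p(a) = sum_s rho(s) pi(a|s)
    logits = np.log(marginal) + beta * Q     # pi(a|s) ∝ p(a) exp(beta Q(s,a))
    pi = np.exp(logits - logits.max(axis=1, keepdims=True))
    pi /= pi.sum(axis=1, keepdims=True)

rate = np.sum(rho[:, None] * pi * np.log(pi / (rho @ pi)))   # I(S;A) in nats
value = np.sum(rho[:, None] * pi * Q)
print(f"policy rate: {rate:.3f} nats, expected value: {value:.3f}")
```

Low beta gives a cheap, near-state-independent policy; high beta buys a more state-specific (and more informative, hence costlier) one.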

The next one is a bit reminiscent of Bayesian SLAM, but also of a very old idea of the hippocampus as a Kalman filter (https://pubmed.ncbi.nlm.nih.gov/9697220/), now extended to LQG.
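
For anyone unfamiliar with the original idea, here's a minimal 1-D Kalman filter sketch (my own, not the poster's model): track a drifting latent position from noisy observations by alternating predict and update steps. The LQG extension simply adds an optimal linear controller on top of these filtered estimates.

```python
# A 1-D Kalman filter (my own toy, not the poster's model): track a drifting
# latent position from noisy observations.
import numpy as np

rng = np.random.default_rng(0)
A, C = 1.0, 1.0           # latent dynamics and observation gains
q, r = 0.01, 0.25         # process and observation noise variances

x_true, x_hat, P = 0.0, 0.0, 1.0
for step in range(100):
    # World: the latent position drifts and is observed noisily.
    x_true = A * x_true + rng.normal(scale=np.sqrt(q))
    y = C * x_true + rng.normal(scale=np.sqrt(r))

    # Predict: propagate the estimate and its uncertainty through the dynamics.
    x_hat, P = A * x_hat, A * P * A + q

    # Update: correct by the Kalman-gain-weighted prediction error.
    K = P * C / (C * P * C + r)
    x_hat, P = x_hat + K * (y - C * x_hat), (1 - K * C) * P

print(f"final error: {abs(x_true - x_hat):.3f}, posterior variance: {P:.3f}")
```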

Mufeng's keynote on temporal Predictive Coding Networks was magnificent. This work by Gaspard overcomes the limitations of the MAP estimate in PCNs by using Monte Carlo methods, thereby making a distinction between (and predictions for) spontaneous and evoked activity.
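
A minimal sketch of the distinction, in a linear one-layer toy model of my own (not Gaspard's actual network): MAP inference descends the prediction-error energy to a single point, while a Langevin/Monte Carlo variant adds noise to the same descent so the latents sample the posterior rather than collapsing to its mode.

```python
# A linear one-layer toy (my own, not Gaspard's actual model) contrasting MAP
# inference in predictive coding with a Langevin (Monte Carlo) variant.
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_latent = 10, 3
W = rng.normal(size=(n_obs, n_latent))            # generative weights: y ≈ W z
y = W @ rng.normal(size=n_latent) + rng.normal(size=n_obs)
sigma2, lr = 1.0, 0.02                            # observation noise variance, step size

def energy_grad(z):
    eps = y - W @ z                               # prediction error
    return -W.T @ eps / sigma2 + z                # grad of error energy + unit Gaussian prior

z_map, z_mc, samples = np.zeros(n_latent), np.zeros(n_latent), []
for step in range(3000):
    z_map -= lr * energy_grad(z_map)              # MAP: plain gradient descent
    z_mc += -lr * energy_grad(z_mc) + np.sqrt(2 * lr) * rng.normal(size=n_latent)
    if step > 1000:                               # unadjusted Langevin: descent + noise
        samples.append(z_mc.copy())

print("MAP estimate:   ", np.round(z_map, 2))
print("posterior mean: ", np.round(np.mean(samples, axis=0), 2))
print("posterior std:  ", np.round(np.std(samples, axis=0), 2))   # the MAP point has none
```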

I was quite excited to see this work, which shows a grid-like encoding in value space. A recent work has also found grid-like coding in valence space (https://www.biorxiv.org/content/biorxiv/early/2023/08/12/2023.08.10.552884.full.pdf), which supports the hypothesis I've been developing on a potential model-based state-space representation of injury state (beyond a single inferred variable).

The tutorial by DeepMind was great! (https://github.com/kstach01/CogModelingRNNsTutorial) I'm excited to try out their Disentangled RNNs and Hybrid RNNs on Pavlovian-instrumental tasks (hit me up if you are interested as well). I do feel these tools can also help us create better experiments, to ensure the tasks elicit the behaviour we want to capture.
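
For readers who haven't seen these tools, here's the basic recipe they build on, in a generic PyTorch sketch of my own (not the tutorial's actual API): fit a tiny RNN to predict an agent's next choice from its history of choices and rewards, then inspect what the hidden states have learned.

```python
# A generic recipe (my own sketch, not the tutorial's actual API): fit a tiny
# GRU to predict the next choice from past choices and rewards. The random
# stand-in data below would be replaced by real behavioural sessions.
import torch
import torch.nn as nn

torch.manual_seed(0)
n_trials, n_actions, hidden = 500, 2, 8

# Stand-in data: one-hot past choice + scalar past reward as inputs.
choices = torch.randint(n_actions, (n_trials,))
rewards = torch.randint(2, (n_trials,)).float()
inputs = torch.cat([nn.functional.one_hot(choices, n_actions).float(),
                    rewards.unsqueeze(-1)], dim=-1).unsqueeze(0)   # (1, T, 3)

rnn = nn.GRU(n_actions + 1, hidden, batch_first=True)
readout = nn.Linear(hidden, n_actions)
opt = torch.optim.Adam(list(rnn.parameters()) + list(readout.parameters()), lr=1e-2)

for epoch in range(200):
    h, _ = rnn(inputs[:, :-1])                    # history up to trial t-1 ...
    logits = readout(h).reshape(-1, n_actions)
    loss = nn.functional.cross_entropy(logits, choices[1:])  # ... predicts choice t
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"negative log-likelihood per trial: {loss.item():.3f}")
```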

This work was quite a surprise: simply from trajectories of usual robot behaviour, they show that we can uncover an (oscillatory) latent space. This has interesting implications for out-of-distribution detection/injury detection if viewed from a homeostatic RL point of view, except that the drive/surprise function is entirely data-driven. Maybe we should start referring to homeostasis as homeodynamics :) I'd be interested to see this approached with energy-based models instead of a VAE.

This poster really got me excited (from a computational psychiatry lens), as they apply homeostatic/multidimensional RL to anhedonia and show the surprising result that anticipatory-anhedonia groups tend to take actions that increase multidimensional rewards rather than unidimensional rewards. I'd be curious to follow future results, which might help us get a better picture of anhedonia beyond reward insensitivity. I'm also curious about model-based multidimensional RL, where an agent might feel a particular state is unattainable, which could potentially be targeted by several approaches from CBT. A minimal sketch of the homeostatic RL reward follows below.
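
Since homeostatic RL comes up a few times in this post, here's a minimal sketch of its core reward definition, drive reduction, using a toy quadratic drive function and made-up internal-state dimensions of my own choosing:

```python
# Drive reduction as reward (toy quadratic drive; the internal-state
# dimensions are made up for illustration).
import numpy as np

setpoint = np.array([70.0, 50.0])         # desired internal state, e.g. (energy, hydration)

def drive(h):
    """Squared distance of internal state h from the homeostatic setpoint."""
    return np.sum((setpoint - h) ** 2)

def reward(h_before, h_after):
    """An outcome is rewarding iff it moves the internal state toward the setpoint."""
    return drive(h_before) - drive(h_after)

h = np.array([55.0, 48.0])                # depleted energy
eat = np.array([12.0, 0.0])               # effect of an 'eat' action on the internal state
print(reward(h, h + eat))                 #  216.0: restores energy, so rewarding
print(reward(h + eat, h + 2 * eat))       #  -72.0: overshooting the setpoint is punishing
```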

   
This was a quite interesting study from our collaborator at KAIST (Prof. SangWan) highlighting the role of GPe astrocytes and GPe-STN neurons in signalling uncertainty to modulate exploration.
I absolutely loved this work showing confirmation bias in pain perception, thus capturing aspects which I believe vanilla RL or Bayesian inference models won't capture!
I would just like to give a quick shout-out to Charlie from our lab, who presented his work on world and body models of pain :)


All of the above abstracts are available on CCN website here: https://2023.ccneuro.org/search.php
 

I'd also like to share the presentation I made from my COSYNE 2022 visit, which never got its own blog post: https://docs.google.com/presentation/d/1F74ogMggwB-SIJN5RqrnX7F9B4f1YnRl52kRoiWnCoU/edit?usp=sharing





Wednesday 2 August 2023

Hello :)

Hi all,

I am Pranav Mahajan. I'm currently pursuing my DPhil at the University of Oxford with Prof. Ben Seymour and Dr Ioannis Havoutis. I've been wanting to write a blog about:

1. My prior, current and future research

2. Thoughts on other people's research that I find interesting

3. Ideas that are far too nascent to be published

4. Thoughts on how our science can have a real world impact


I'd love to hear back from readers, as this blog is one of the ways I want to bounce ideas around beyond my research group. I am also very open to collaborations within and especially outside the academic sphere; if you believe we can create something together that can have a positive impact on the world, please do reach out.