Monday 28 August 2023

CCN 2023

CCN 2023 was in our very own Oxford, and it was an enjoyable and insightful experience. Here I share some of the works that I found interesting at the conference, along with some of my thoughts:

 

This one is quite mathematically involved but tackles a very important problem: causal learning of not only the causal structure but also the necessary hyperparameters, using variational inference. The amazing bit is that it can explain several human behaviours (which might often be viewed as sub-optimal) through errors in estimating these hyperparameters, or through having a preference and gathering evidence to test a hypothesis! I'd be very curious to see how this works out when you include action selection based on the learned causal model, and to explore its links to our work on the safety-efficiency trade-off.

 
 
It's very exciting to see that something quite similar to the QMT we have developed (https://www.youtube.com/watch?v=Gx_jnq4hvSY) can be used for early detection/tracking of Parkinson's disease as well!

I've been following Jack's work since his very interesting paper on explaining action prediction errors in a normative fashion. This one uses a clever trick that would certainly work well for smaller problems (not very deep RL), and makes very interesting predictions about the role of cortical dopamine.

The next work shows that an auxiliary learning signal can explain several findings about the hippocampus. I'm curious to see its role in social learning, as a similar auxiliary signal has been shown to be useful in deciding who to learn from in MARL: https://proceedings.mlr.press/v139/ndousse21a.html

Thanks to this poster, I finally got a grip on Rate-Distortion theory and how it can be used with RL, which leaves me thinking about how this could be used for neuroscience and mapped onto neural substrates. It is indeed a very neat idea, with several applications to capacity-constrained RL (working memory, energy, etc.).
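To make this concrete for myself, here's a minimal toy sketch (entirely my own, not from the poster) of capacity-constrained action selection: a Blahut-Arimoto-style iteration that trades expected reward against the mutual information I(S;A) between states and actions (the "rate"), with a hypothetical trade-off parameter beta.

```python
import numpy as np

# Toy sketch, assuming a one-step decision problem with a random reward table.
rng = np.random.default_rng(0)
n_states, n_actions = 4, 3
R = rng.normal(size=(n_states, n_actions))     # reward R(s, a)
p_s = np.full(n_states, 1.0 / n_states)        # state distribution p(s)
beta = 2.0                                     # higher beta -> more reward, more "bits" spent

pi = np.full((n_states, n_actions), 1.0 / n_actions)  # policy p(a|s)
for _ in range(200):
    p_a = p_s @ pi                             # marginal action distribution p(a)
    pi = p_a * np.exp(beta * R)                # p(a|s) proportional to p(a) * exp(beta * R(s, a))
    pi /= pi.sum(axis=1, keepdims=True)

rate = np.sum(p_s[:, None] * pi * np.log(pi / (p_s @ pi)))  # I(S;A) in nats
value = np.sum(p_s[:, None] * pi * R)                       # expected reward
print(f"rate = {rate:.3f} nats, expected reward = {value:.3f}")
```

Sweeping beta traces out the trade-off curve: a low-capacity agent (small beta) falls back on a state-independent habit, which is where the working-memory/energy interpretation comes in.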

The next one is a bit reminiscent of Bayesian SLAM, but also of the much older idea of the hippocampus as a Kalman filter - https://pubmed.ncbi.nlm.nih.gov/9697220/ (but now extended to LQG).
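For reference, the estimation half of that idea is just the textbook Kalman filter; here's a minimal sketch (my own toy example, not the poster's model) of tracking a 1-D latent position from noisy observations.

```python
import numpy as np

# Toy "hippocampus as a Kalman filter" sketch: estimate a latent position x
# from noisy observations y, combining prediction and prediction-error correction.
A, C = 1.0, 1.0           # latent dynamics and observation model
Q, Rn = 0.01, 0.5         # process and observation noise variances

rng = np.random.default_rng(1)
x, mu, P = 0.0, 0.0, 1.0  # true state, posterior mean, posterior variance
for t in range(50):
    x = A * x + rng.normal(scale=np.sqrt(Q))      # latent state evolves
    y = C * x + rng.normal(scale=np.sqrt(Rn))     # noisy observation
    mu_pred, P_pred = A * mu, A * P * A + Q       # predict
    K = P_pred * C / (C * P_pred * C + Rn)        # Kalman gain
    mu = mu_pred + K * (y - C * mu_pred)          # correct with prediction error
    P = (1.0 - K * C) * P_pred
print(f"estimate {mu:.3f} vs true {x:.3f} (posterior var {P:.3f})")
```

The LQG extension pairs this estimator with an optimal linear controller acting on the filtered state, which is what makes the link to behaviour rather than just tracking.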

Mufeng's keynote on temporal Predictive Coding Networks was magnificent. This work by Gaspard overcomes the limitations of the MAP estimate in PCNs by using MC methods, thereby making distinct predictions for spontaneous and evoked activity.
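As a rough picture of the MAP-vs-MC distinction (a hedged toy of my own, not Gaspard's actual model): in a one-layer Gaussian PCN, MAP inference is gradient descent on the prediction-error energy, while a Langevin/MC version adds noise to the very same updates so the latent samples the posterior instead of collapsing to its mode.

```python
import numpy as np

# One observed value y generated as y = W * x + noise, with a Gaussian prior on x.
rng = np.random.default_rng(2)
W, sigma_x, sigma_y = 1.5, 1.0, 0.5
y = 2.0

def grad_energy(x):
    # dE/dx for E = (y - W x)^2 / (2 sigma_y^2) + x^2 / (2 sigma_x^2)
    return -(W / sigma_y**2) * (y - W * x) + x / sigma_x**2

eta, x_map, x_mc, samples = 0.05, 0.0, 0.0, []
for t in range(2000):
    x_map -= eta * grad_energy(x_map)                                   # MAP: deterministic descent
    x_mc -= eta * grad_energy(x_mc) - np.sqrt(2 * eta) * rng.normal()   # Langevin step: descent + noise
    samples.append(x_mc)

print(f"MAP estimate: {x_map:.3f}")
print(f"MC posterior mean +/- sd: {np.mean(samples[500:]):.3f} +/- {np.std(samples[500:]):.3f}")
```

In this Gaussian toy the MAP and the MC mean coincide, but only the sampled version carries posterior variability, which is the kind of ongoing fluctuation you would want when modelling spontaneous versus evoked activity.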

I was quite excited to see this work, which shows a grid-like encoding in value space. A recent work has also found grid-like coding in valence space (https://www.biorxiv.org/content/biorxiv/early/2023/08/12/2023.08.10.552884.full.pdf), which supports the hypotheses that I've been developing on a potential model-based state-space representation of injury state (beyond a single inferred variable).

The tutorial by DeepMind was great! (https://github.com/kstach01/CogModelingRNNsTutorial) I'm excited to try out their Disentangled RNNs and Hybrid RNNs on Pavlovian-Instrumental tasks (hit me up if you are interested as well). I do feel these tools can also help us design better experiments, ensuring that the tasks elicit the behaviour we want to capture.

This work was quite a surprise: simply from trajectories of usual robot behaviour, they show that we can uncover an (oscillatory) latent space. This has interesting implications for out-of-distribution detection/injury detection if viewed from a homeostatic RL point of view, except that the drive/surprise function is entirely data-driven. Maybe we should start referring to homeostasis as homeodynamics :) I'd be interested to see this approached with energy-based models instead of a VAE.

This poster really got me excited (from a computational psychiatry lens): they apply homeostatic/multidimensional RL to anhedonia and show the surprising result that anticipatory anhedonia groups tend to take actions that increase multidimensional rewards rather than unidimensional rewards. I'd be curious to follow future results, which might help us get a better picture of anhedonia beyond reward insensitivities. I'm also curious about model-based multi-dimensional RL, where an agent might feel a particular state is unattainable - which could potentially be addressed with several approaches to CBT.

   
This was a quite interesting study from our collaborator at KAIST (Prof. SangWan), highlighting the role of GPe astrocytes and GPe-STN neurons in signalling uncertainty to modulate exploration.
Absolutely loved this work showing confirmation bias in pain perception, capturing aspects that I believe vanilla RL or Bayesian inference models won't capture!
I would just like to give a quick shout-out to Charlie from our lab, who presented his work on world and body models of pain :)


All of the above abstracts are available on the CCN website here: https://2023.ccneuro.org/search.php
 

I'd also like to share a link to the presentation I made from my COSYNE 2022 visit but never turned into a blog post: https://docs.google.com/presentation/d/1F74ogMggwB-SIJN5RqrnX7F9B4f1YnRl52kRoiWnCoU/edit?usp=sharing




