About
Hello! 👋
I’m Kola, an ML Researcher based in London. I work at the UK AI Safety Institute on Agency, Capability Evaluation, and Elicitation. This is my technical blog about Machine Learning.
My current main research interests are:
- Mechanistic Interpretability
  - I’m particularly interested in universal representation learning (and its philosophical implications), compositional representations, and principled approaches to feature disentanglement.
  - I’m also interested in how we can understand the inherent modularity of neural networks in the wild, and the associated lessons about multi-task learning and generalisation from modular architectures.
- Theories of Agency
  - I’m particularly thinking about Active Inference and Bayesian Mechanics as candidates for unified theories of agency, which we can use as models of agents at different levels of abstraction.
  - I’m especially interested in how we can use these theories to understand multi-agent systems.
  - I’m also interested in evaluations for dangerous capabilities in agentic systems, in particular in the context of cybersecurity, one of the first applications where these theories may apply.
- Philosophy of AI
  - AI both has implications for, and can take lessons from, many parts of the philosophy literature.
  - I’m particularly interested in the nascent Philosophy of Interpretability and its relationship with Cognitive Science, Neuroscience, Linguistic Theory, Philosophy of Mind, and the Philosophy of Science more broadly.
- Adaptive Neural Computation
  - I’m especially interested in approaches that allow networks to spend more compute on difficult tokens, via early-exit mechanisms, Mixture of Experts (MoE), and related techniques.
  - I maintain an annotated collection of research papers on Adaptive Computation for the community.
Previous research interests have included the Linguistic properties of Mathematics, ML applied to Musicology, and Logic.
Find me on Substack for other writing or on GitHub for code. You can find my publications and pre-prints on Google Scholar.
I occasionally mentor and supervise projects through various AI Safety programs.
Please reach out if you’re interested in collaborating on any of the above topics.
You can contact me by email, give me anonymous feedback here, or schedule a chat about these topics here.
I am grateful to the Foresight Institute and Machine Learning Alignment & Theory Scholars (MATS) for supporting this research.