Current Practices for Building LLM-Powered Reasoning Tools Are Ad Hoc—and We Can Do Better
There is growing excitement about building software verifiers, synthesizers, and other Automated Reasoning (AR) tools by combining traditional symbolic algorithms with Large Language Models (LLMs). Unfortunately, the current practice for constructing such neurosymbolic AR systems is an ad hoc programming model that offers neither the strong guarantees of traditional symbolic algorithms nor a deep enough synchronization of neural networks and symbolic reasoning to unlock the full potential of LLM-powered reasoning. I propose Neurosymbolic Transition Systems as a principled computational model that can underlie infrastructure for building neurosymbolic AR tools. In this model, symbolic state is paired with intuition, and state transitions operate over symbols and intuition in parallel. I argue that this new paradigm can scale logical reasoning beyond current capabilities while retaining the strong guarantees of symbolic algorithms, and I sketch how the proposed computational model can be reified in a logic programming language.
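To make the model concrete, here is a minimal sketch of how a Neurosymbolic Transition System might be structured, written in Python rather than the logic programming language the abstract envisions. All names in it (State, nts_search, reintuit, rank) are hypothetical illustrations, not an existing API: states pair a symbolic component with an intuition component, the symbolic rules alone determine which transitions are legal, and the intuition decides only which legal transition to explore next.

```python
# A minimal sketch of a Neurosymbolic Transition System over a toy
# fact-derivation search. Everything here is a hypothetical illustration,
# not the author's actual formalism.
from dataclasses import dataclass
from typing import Callable, FrozenSet, Iterable, List, Optional, Tuple
import heapq

Symbolic = FrozenSet[str]        # symbolic state: a set of derived facts
Intuition = Tuple[float, ...]    # intuition: e.g., an embedding or score vector


@dataclass(frozen=True)
class State:
    """A neurosymbolic state pairs symbols with intuition."""
    symbolic: Symbolic
    intuition: Intuition


def nts_search(
    init: State,
    goal: Callable[[Symbolic], bool],
    rules: Callable[[Symbolic], Iterable[Symbolic]],  # sound symbolic transitions
    reintuit: Callable[[Symbolic], Intuition],        # neural guess (e.g., an LLM scorer)
    rank: Callable[[Intuition], float],               # lower score = more promising
) -> Optional[State]:
    """Best-first search: symbols guarantee soundness, intuition orders exploration."""
    frontier: List[Tuple[float, int, State]] = [(rank(init.intuition), 0, init)]
    seen = {init.symbolic}
    tie = 0  # tie-breaker so the heap never compares State values directly
    while frontier:
        _, _, state = heapq.heappop(frontier)
        if goal(state.symbolic):
            return state  # reachable only via sound symbolic transitions
        for succ in rules(state.symbolic):
            if succ not in seen:
                seen.add(succ)
                tie += 1
                intuition = reintuit(succ)  # neural side updated alongside the symbols
                heapq.heappush(frontier, (rank(intuition), tie, State(succ, intuition)))
    return None
```

Because the neural component influences only the order of exploration, any state the search returns is reachable purely through sound symbolic transitions; this is one way such a model can retain symbolic guarantees while using learned intuition to scale.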
I am a computer science postdoc at the University of Melbourne, supervised by Toby Murray. I work in the areas of programming languages and automated formal methods, with a current focus on automatically proving security properties of binary code.
I earned a PhD in computer science at Harvard University, where I was advised by Stephen Chong. My PhD research focused on combining logic programming and constraint solving, with applications to program analysis and synthesis.