Publications
Probing the quantitative-qualitative divide in probabilistic reasoning (joint work with Thomas Icard, Duligur Ibeling and Milan Mossé), Annals of Pure and Applied Logic [journal link][arXiv preprint]. This paper explores the space of (propositional) probabilistic logical languages, ranging from a purely ‘qualitative’ comparative language to a highly ‘quantitative’ language involving arbitrary polynomials over probability terms. We identify a robust distinction in the space of probability logics between merely additive and multiplicative systems, and show that this distinction tracks a divide in computational complexity. We also address axiomatic questions, offering several new completeness results as well as a proof of the non-finite-axiomatizability of comparative probability.
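To give a rough sense of the two ends of this spectrum (the notation below is illustrative rather than the paper's official syntax): a purely comparative language can only order probabilities, an additive language admits linear constraints over probability terms, and a polynomial language also admits multiplicative constraints such as independence statements, e.g.
\[ \varphi \succsim \psi, \qquad \mathbf{P}(\varphi) + \mathbf{P}(\psi) \geq \mathbf{P}(\chi), \qquad \mathbf{P}(\varphi \wedge \psi) = \mathbf{P}(\varphi)\cdot\mathbf{P}(\psi). \]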
Probabilistic stability, AGM revision operators and maximum entropy, The Review of Symbolic Logic, 15(3), 553-590 [journal link]. In this paper, I show that AGM revision operators emerge from the stability rule for acceptance and the principle of maximum entropy.
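For background, here is a standard formulation of Leitgeb’s notion (included only as a pointer, not as a summary of the paper): a proposition H is P-stable when it stays more likely than not under conditioning on any evidence consistent with it,
\[ P(H \mid E) > \tfrac{1}{2} \quad \text{for all } E \text{ with } E \cap H \neq \emptyset \text{ and } P(E) > 0, \]
and the stability rule then counts as accepted the supersets of a designated P-stable proposition, e.g. the strongest one.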
When random graphs are safe for travel: a note on the sabotage game, forthcoming in: van Benthem, J. and Liu, F. (eds.), Graph Games and Logic Design: Recent Developments and Future Directions, Springer Nature. In this note, I study the sabotage game played on random (multi)graphs as given by the Gilbert–Erdős–Rényi model. I show that Traveller almost surely has a winning strategy in the model with fixed edge probability, and give asymptotic growth bounds on the threshold function for sure-win graphs (graphs on which Traveller has a winning strategy for any choice of start and goal vertices) in the model with a decreasing edge probability. I conclude with a few remarks on the logical characterisability of winning strategies, and show that the existence of a winning strategy for Traveller is not first-order characterisable over finite models.
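For orientation, in its simplest (simple-graph) form the Gilbert–Erdős–Rényi model G(n, p) places a distribution on graphs over n labelled vertices by including each possible edge independently with probability p,
\[ \Pr\big[\{u,v\} \in E\big] = p \quad \text{independently for each pair } u \neq v; \]
the note itself works with the multigraph variant of this model.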
The Modal Logic of Stepwise Removal (joint work with Johan van Benthem and Francesca Zaffora Blando), The Review of Symbolic Logic, 15(1), 36-63 [journal link][arXiv preprint]. We study a modal logic of stepwise removal of objects, giving a completeness theorem and analysing its satisfiability and model-checking complexity, in an effort to explain the jump in complexity between dynamic epistemic logics of model transformations and logics of freely chosen graph changes that get registered in a growing memory.
The Modal Logics of the Poison Game (joint work with Carlos Areces and Francesca Zaffora Blando, published version here), in: Liu F., Ono H., Yu J. (eds.), Knowledge, Proof and Dynamics (pp. 3-23), Logic in Asia: Studia Logica Library, Springer: Singapore. We study dynamic logics designed to model the poison game, a sequential two-player game played on graphs: we give results on their decidability, expressive power, and model-checking complexity.
In preparation
Bayesianism with a Humean Face: Between Symmetries and Best Systems (joint work with Francesca Zaffora Blando) [slides v1][slides v2]. Humeans about chance often endorse the Best-Systems account, according to which chances are but summaries of the patterns found in the totality of local matters of fact. More thoroughgoing subjectivists like de Finetti, on the other hand, rationalize chance-talk by appealing to certain symmetries in an agent’s credences. We explore the extent to which these two approaches can inform each other. De Finetti’s framework admits a Humean interpretation on which the underlying symmetries in the agent’s prior can be seen as implicitly providing a Best-Systems theory of chance. This perspective gives rise to hybrid views of chance lying on the spectrum between traditional Best-Systems conceptions and de Finetti’s ‘radical’ subjectivism, with each position on the spectrum determined by the degree to which one allows Best-Systems probabilities to be sensitive to the agent’s inductive assumptions. The interactions between the Best-Systems view and de Finetti’s account of chance are especially fruitful in the context of computationally bounded Bayesian agents, i.e., Bayesian agents with computable priors. We discuss a recent criticism of Humean conceptions of chance due to Gordon Belot, who shows that there are several chance-credence principles in the spirit of Lewis’ Principal Principle that computationally bounded Bayesian agents cannot obey. On the other hand, we explain that, on de Finetti’s conception of chance, ergodic decomposition theorems supply a general version of the Principal Principle which applies to computable and uncomputable priors alike. De Finetti’s conception of chance is thus largely unaffected by Belot’s impossibility results: computationally bounded Bayesian agents automatically obey a version of the Principal Principle. So, with a healthy dose of subjectivism, Humeans about chance needn’t be too worried about these results. Our discussion raises a number of questions about the right formulation of the Principal Principle, the extent to which computationally bounded Bayesian agents can entertain, and defer to, uncomputable chances, the right constraints to impose on theories of chance when the underlying agents are computationally bounded, and the role of algorithmic randomness in Humean conceptions of chance.
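As background on the kind of symmetry at issue (a textbook statement, included only for orientation): for an exchangeable sequence of binary variables, de Finetti’s representation theorem expresses the agent’s prior as a mixture of i.i.d. Bernoulli chance hypotheses,
\[ P(X_1 = x_1, \ldots, X_n = x_n) \;=\; \int_0^1 \theta^{k}(1-\theta)^{\,n-k}\, d\mu(\theta), \qquad k = x_1 + \cdots + x_n, \]
and, on the Humean reading sketched above, the mixing measure \( \mu \) plays the role of the agent’s implicit theory of chance.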
Probabilistically stable revision and comparative probability: a representation theorem and applications [arXiv preprint]. The stability rule for belief, advocated by Leitgeb [Annals of Pure and Applied Logic 164, 2013], is a rule for rational acceptance that captures categorical belief in terms of probabilistically stable propositions: propositions to which the agent assigns resiliently high credence. The stability rule generates a class of probabilistically stable belief revision operators, which capture the dynamics of belief that result from an agent updating their credences through Bayesian conditioning while complying with the stability rule for their all-or-nothing beliefs. In this paper, I prove a representation theorem that yields a complete characterisation of such probabilistically stable revision operators and provides a ‘qualitative’ selection function semantics for the (non-monotonic) logic of probabilistically stable belief revision. Drawing on the theory of comparative probability orders, this result gives necessary and sufficient conditions for a selection function to be representable as a strongest-stable-set operator on a finite probability space. The resulting logic of probabilistically stable belief revision exhibits strong monotonicity properties while failing the AGM belief revision postulates and satisfying only very weak forms of case reasoning. In proving the main theorem, I establish two results of independent interest for the theory of comparative probability: the first gives necessary and sufficient conditions for the joint representation of a pair of (respectively, strict and non-strict) comparative probability orders; the second provides a method for axiomatising the logic of ratio comparisons of the form “event A is at least k times more likely than event B”. In addition to these measurement-theoretic applications, I point out two further applications of the main result, to the theory of simple voting games and to revealed preference theory.
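As a gloss on the ratio-comparison fragment (the notation is mine, not the paper's): the comparisons in question can be written as
\[ A \succsim_{k} B \quad \Longleftrightarrow \quad P(A) \geq k \cdot P(B), \]
with ordinary comparative probability recovered as the special case k = 1.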
Logics of closures and kernels [slides from the Online Logic Seminar]. This paper is a study of bimodal logics of lower and upper approximation operators on Boolean algebras, where an element of a Boolean algebra is approximated from above by a closure operator and from below by a kernel operator, each generated by a substructure of the algebra. Each class of substructures generates a different class of algebraic approximation operators. For each class of substructures I consider, I provide a representation theorem which characterises the corresponding class of approximation operators, and prove a completeness theorem identifying the corresponding bimodal logic. These approximation semantics make it possible to recover a natural family of modal systems: the fusions S4 + S4 and EMNT4 + EMNT4, the temporal logic S4t, as well as S4 and S5, are all shown to be logics of approximation operators. The completeness proofs involve an investigation of Tarski-like dualities for closure and kernel operators, and of the relation between approximation operators, closure-kernel algebras and bitopologies, yielding as a corollary the result that S4 + S4 is the complete logic of pairwise zero-dimensional bitopological spaces.
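A minimal instance of the intended semantics (an illustrative special case, not the paper's general definition): if S is a finite subalgebra of a Boolean algebra B, each element a of B is approximated from above and below by
\[ C_S(a) = \bigwedge \{\, s \in S : a \leq s \,\}, \qquad K_S(a) = \bigvee \{\, s \in S : s \leq a \,\}, \]
so that \( K_S(a) \leq a \leq C_S(a) \), with \( C_S \) a closure operator and \( K_S \) a kernel operator on B.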
Probabilistic stability and statistical learning. In this paper, I study the behaviour of Leitgeb’s stability rule for acceptance on Bayesian statistical models. I show that, for a very wide class of probabilistic learning problems, Leitgeb’s rule yields a notion of acceptance that either fails to be conjunctive (accepted hypotheses are not closed under finite conjunctions) or is trivial (only hypotheses with probability one are accepted). An analogous result also affects refined notions of stability that take into account the evidence structure of the learning problem at hand. An analysis of the counterexamples reveals a serious tension: in statistical learning, certain properties of priors that are conducive to inductive learning (open-mindedness, as well as certain symmetries in the agent’s probability assignments) act against conjunctive belief.
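The familiar finite Lottery Paradox illustrates the kind of failure of conjunctivity at stake (though the paper's counterexamples arise in statistical models rather than lotteries): in a fair n-ticket lottery,
\[ P(\text{ticket } i \text{ loses}) = \tfrac{n-1}{n} \ \text{ for each } i, \qquad P\Big(\bigwedge_{i=1}^{n} \text{ticket } i \text{ loses}\Big) = 0, \]
so each individual proposition is highly probable while their conjunction is certainly false.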
Acceptance and invariance on atomless spaces. A theorem by Smith generalises a form of the Lottery Paradox to atomless probability spaces, revealing an important conflict between certain invariance properties of rational acceptance and logical closure requirements on accepted hypotheses. I strengthen Smith’s result and then consider a simple strategy for evading both impossibility theorems. The strategy consists in using refined acceptance rules that depend on the learning problem at hand, where acceptance is sensitive to the learning protocol being modelled (for instance, the acceptance of a hypothesis can depend not only on the agent's probability measure, but also on the structure of the observable data in the given experimental setup). While these rules escape Smith’s result, I show that they are subject to another limiting theorem in this learning-theoretic setting. I prove that, on learning problems based on standard Borel probability spaces, sufficiently invariant acceptance rules are either (1) not conjunctive, (2) inconsistent, or (3) trivial in a learning-theoretic (topological) sense. These results raise a number of questions about the context-sensitivity of acceptance, the aggregation of uncertain information in highly symmetric probability models, and the relationship between acceptance rules and statistical testing procedures.
Theses
Acceptance and Approximation: studies on probabilistic stability, acceptance rules on atomless spaces, and algebraic logics of approximation. My PhD dissertation, written under the supervision of Thomas Icard at Stanford University. Dissertation committee: Thomas Icard, Johan van Benthem, Brian Skyrms, and Persi Diaconis.
Probabilistic stability: dynamics, nonmonotonic logics, and stable revision. My MSc thesis, written under the supervision of Alexandru Baltag. (MSc Logic, Mathematics track, Institute for Logic, Language and Computation, University of Amsterdam).
Expository
Spooky Sets and Non-Measurable Monsters. CMU Philosophy Department Halloween Lecture, October 2024.
Measurable Cardinals and Scott’s Theorem. Slides from a presentation at the Stanford Mathematical Logic Seminar, March 2015.
Ultraproducts in model theory. Slides from a presentation at the Mathematical Structures in Logic seminar at the University of Utrecht, May 2014.