Dr. Timo Freiesleben

Research

While practical applications of machine learning are ubiquitous, the theoretical and philosophical foundations of machine learning lag behind. Core concepts such as interpretability, contestability, generalization, or benchmarks are often only vaguely defined. Without precise definitions and conceptual foundations, machine learning risks remaining mere engineering rather than developing into a mature science or a reliable tool for scientific and industrial applications.

The goal of my research is to address these fundamental challenges. My approach combines philosophical conceptual analysis with rigorous mathematical modeling. I consider myself a mediator between disciplines: I am a philosopher close to machine learning practice and a philosophically inclined AI researcher (but faculty lines are so 20th century anyway). You can find my complete publication record on my Google Scholar profile.

Philosophy of Machine Learning

My work in the philosophy of machine learning examines the interplay between machine learning and the philosophy of science. On the one hand, I aim to provide epistemological grounding for machine learning practices such as benchmarking, robustness, and interpretability. On the other hand, I study the philosophical implications of applying machine learning tools in scientific inquiry, particularly with respect to scientific goals such as prediction and explanation.

Ethics of AI

My work in AI ethics focuses on the challenges faced by individuals who receive unfavorable outcomes from automated decision-making systems. In particular, I study the normative implications of deploying machine learning systems in social contexts.

  • Algorithmic Contestability.
    Work in progress.
  • Performative Validity of Recourse Explanations.
    We analyze the performative effects that recourse explanations have on their own validity. We find that recourse recommendations focusing on causes of the target largely remain valid under their own performative effects.
    NeurIPS conference.
    Joint work with Gunnar König and others.
  • Improvement-Focused Causal Recourse.
    We argue that many counterfactual explanations enable users to game systems rather than improve their qualifications. Consequently, we show how causal knowledge can support meaningful recourse.
    AAAI conference.
    Joint work with Gunnar König and Moritz Grosse-Wentrup.

Explainable AI

My work on explainable AI examines the conceptual foundations of explanation techniques. I provide critiques of common misconceptions as well as philosophically motivated technical contributions.