Michele Mignani

I am a mathematician, with a particular interest in mathematical logic and its applications to real-world scenarios. My range of interests is quite broad and includes philosophy, literature, and politics. More recently, I have been focusing on AI safety, which I believe to be one of the most important research topics in contemporary science.

About Me

I'm a PhD student in Computer Science at the University of Udine, Italy. My research interests are in the field of AI safety and alignment, with a particular focus on the use of Large Language Models to create formalized models of aspects of the world. The goal of this research is to understand how such models can be used to provide safety and reliability guarantees for current and future AI systems.

I studied Mathematics at the University of Udine, where I obtained both my Bachelor's and Master's degrees. My shift toward AI safety was driven by my interest in having a positive impact on the world and by the considerable time I have spent engaging with discussions on platforms such as LessWrong and the Alignment Forum.

Outside of academic activities, I am particularly interested in philosophy, literature, politics, and board games.

Experience

PhD in Computer Science and AI (2025 – Present)

Autoformalization is the task of translating natural-language text into a given formalism. Success in this task could represent a key step toward improving the reliability and safety of AI systems, by building a bridge between sub-symbolic AI approaches and formal methods.

Supervisor: Prof. Angelo Montanari – Chairholder of the Formal Methods Chair.

Extra Activities

Here are some of the extracurricular activities I have pursued recently:

  • ARENA, Alignment Research Engineer Accelerator (2025)

    I attended the ARENA bootcamp, a five-week program focused on AI safety and alignment.

  • AGI Safety Discussion Group (2025)

    I lead a discussion group on AGI-related topics for researchers and advanced students at the University of Udine.

  • AI Act Summer School (2024)

    I participated in the AI Act Summer School at the University of Udine, a three-day program focused on AI ethics, AI policy, and AI governance.

  • AI Safety Fundamentals, BlueDot (2024)

    I attended this introductory course on AI alignment, covering both technical approaches and governance-related strategies for mitigating risks posed by future AI systems.

Publications