Michele Mignani

I'm a mathematician, especially interested in mathematical logic and its applications to real-world scenarios. My interests are wide-ranging: philosophy, literature, politics, and, recently, AI safety, which I believe is one of the most important research topics in science today.

About Me

I'm a PhD student in Computer Science at the University of Udine, Italy. My research interests lie in AI safety and alignment, with a particular focus on using Large Language Models to create formalized models of parts of the world, in order to understand how these can provide safety and reliability guarantees for current and future AI systems.

I studied mathematics at the University of Udine, where I earned my Bachelor's and Master's degrees. The shift towards AI safety grew out of my desire to have a positive impact on the world and out of time spent reading posts on "LessWrong" and the "Alignment Forum".

Outside academia, I'm very fond of philosophy, literature, politics, and board games.

Education

Experience

PhD in Computer Science and AI (2025 - Now)

Autoformalization is the task of translating natural-language text into a given formalism. Succeeding at this task could be a key step toward advancing the reliability and safety of AI systems, building a bridge between sub-symbolic AIs and formal methods. Supervisor: Prof. Angelo Montanari - Chairholder of the Formal Methods Chair.
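As a toy illustration of what autoformalization means (a sketch of my own, assuming Lean 4 with Mathlib; not taken from my research), the natural-language claim "the sum of two even numbers is even" could be translated into a formal, machine-checkable statement like this:

```lean
-- Informal statement: "The sum of two even numbers is even."
-- One possible formalization in Lean 4 (Mathlib's `Even n` means ∃ r, n = r + r):
theorem even_add_even (m n : ℕ) (hm : Even m) (hn : Even n) :
    Even (m + n) := by
  obtain ⟨a, ha⟩ := hm   -- m = a + a
  obtain ⟨b, hb⟩ := hn   -- n = b + b
  exact ⟨a + b, by omega⟩ -- m + n = (a + b) + (a + b)
```

An autoformalization system would be expected to produce the formal statement (and ideally the proof) automatically from the English sentence, so that formal methods can then verify it.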

Extra Activities

Here are some of my recent extracurricular activities:

  • ARENA, Alignment Research Engineer Accelerator (2025)

    I've attended the ARENA bootcamp, a 5-week program on AI safety and alignment.

    Website

  • AGI Safety Discussion Group (2025)

    Leading a discussion group on AGI topics at the University of Udine for researchers and master's students.
  • AI Act Summer School (2024)

    I participated in the AI Act Summer School at the University of Udine, a 3-day program on AI ethics, AI policy, and AI governance.

    Website

  • AI Safety Fundamentals, BlueDot (2024)

    I attended this introductory course on AI alignment and on technical and governance approaches to mitigating risks from future AI.

    Website

Thoughts