Saturday, April 12th, 2025

12:00–17:00 JST / Doors Open 11:30

Tokyo Midtown Tower (Akasaka)

Technical AI Safety Conference 2025

TAIS 2025 is a one-day workshop on AI Safety, to be held on Saturday, April 12th, 2025, from 12:00 to 17:00 at Tokyo Midtown Tower. Doors open at 11:30.

For our 2025 conference, we hope to welcome back all the attendees who joined us in 2024, plus any other domestic and international researchers and professionals interested in discussing AI Safety.

We welcome attendees from all backgrounds, regardless of prior research experience. The event is free, so please come and join us!

Last year we ran TAIS 2024, a small non-archival open academic conference structured as a lecture series. It ran over two days, April 5th–6th, 2024, at the International Conference Hall of the Plaza Heisei in Odaiba, Tokyo.

Recordings from the conference are hosted on our YouTube channel.

Invited Speakers

  • Ryota Kanai

    Founder & CEO: ARAYA

    Co-Founder: ALIGN

    Workshop 1: “Global Workspace: A Key to AI Consciousness, Brain-to-Brain Interfaces, and the Future of Humanity.”

  • Adam Gleave

    CEO & Co-Founder: FAR.AI

    Board Member: Safe AI Forum, London Initiative for Safe AI, and METR

    Workshop 2: “Securing Advanced AI: Capabilities, Risks and Solutions.”

Schedule of Events & Workshops

Time            Event

11:30 - 12:00   Doors Open / Registration Check

12:00 - 12:30   Opening Ceremonies: Noeon Research, Ashgro, ALIGN

Workshops

12:30 - 14:15   Workshop 1: Ryota Kanai (Araya / ALIGN): “Global Workspace: A Key to AI Consciousness, Brain-to-Brain Interfaces, and the Future of Humanity.”

14:15 - 15:00   Networking + Lunch Break

15:00 - 16:45   Workshop 2: Adam Gleave (FAR.AI): “Securing Advanced AI: Capabilities, Risks and Solutions.”

16:45 - 17:00   Farewells, Breakaway for Evening Social

17:30 - 20:30   Evening Social (80 people, Separate Registration Required)

Concurrent Poster Sessions

12:30 - 16:45   Poster Sessions

Call for Papers

We are inviting submissions of short papers (maximum 8 pages) outlining new research, with a deadline of February 1, 2025. We welcome papers on any of the following topics, or on any other topic where the authors convincingly argue that the work advances the field of AI safety in Japan.

  • Mechanistic Interpretability: the detailed study of particular neural networks. Interpretability researchers often take a circuits-style approach following the foundational work of Chris Olah, or use causal linear probes to understand directions in latent space (a minimal probe sketch appears after this list). Work that investigates how and why structure emerges in neural networks during training can also fall under this category.

  • Developmental Interpretability: understanding the process by which neural networks learn. By applying the tools of Watanabe's singular learning theory to modern neural network architectures, developmental interpretability researchers aim to detect phase transitions during training and so catch new capabilities as they emerge.

  • Agent Foundations: the name given to a number of parallel research efforts, each of which aims to design AI that is provably safe. Problems in agent foundations include embedded agency and cognition, corrigibility and the shutdown problem, natural abstractions, and game theory. With sufficient rigour, agent foundations research hopes to build AGI that is safe by construction.

  • Scalable Oversight: a prosaic alignment approach that aims to make advanced AI systems amenable to human oversight. Often this involves training less powerful AIs to oversee more powerful AIs in a hierarchical way with a human at the root, as in Paul Christiano's Iterated Amplification (a structural sketch appears after this list). Other scalable oversight approaches aim to make even the largest AIs directly overseeable.

  • ALIFE: a broad theme grouping approaches that seek to understand artificial life as it relates to natural life. Often, as in active inference or collective intelligence, this involves replicating natural systems in silico so that we can understand and improve on them. Other ALIFE approaches overlap with the agent foundations research agenda, as in the study of emergent agentic phenomena in cellular automata (a toy automaton sketch appears after this list).

  • Artificial Consciousness and Posthuman Ethics: thinking seriously about whether machines can be conscious or worthy of moral patienthood, including work that assesses whether current systems are conscious and work on the intentional construction of conscious artificial systems. Works that address the ethical treatment of machines by humans, and of humans by machines, also fall into this category.

  • AI Governance: research into how frontier AI systems can be deployed effectively to benefit Japan and the rest of the world, and into what regulation is necessary or sufficient to protect the public from both ordinary and existential risks.
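
To make the Mechanistic Interpretability item concrete, a linear probe can be as simple as a logistic regression fitted to hidden activations, whose weight vector is a candidate concept direction in latent space. The sketch below is a minimal illustration on random stand-in data; the array names and the toy concept are hypothetical and not taken from any particular model.

```python
# Minimal linear-probe sketch, assuming you already have a matrix of hidden
# activations (one row per example) and binary labels for a concept of interest.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in data: in practice these would be activations extracted from one
# layer of the model under study, plus labels for the concept being probed.
activations = rng.normal(size=(1000, 512))                  # (examples, hidden_dim)
labels = (activations[:, :8].sum(axis=1) > 0).astype(int)   # toy concept

X_train, X_test, y_train, y_test = train_test_split(
    activations, labels, test_size=0.2, random_state=0
)

# Fit the probe; held-out accuracy indicates how linearly decodable the concept is.
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("probe accuracy:", probe.score(X_test, y_test))

# The normalised weight vector is the direction the probe found in latent space.
direction = probe.coef_[0] / np.linalg.norm(probe.coef_[0])
```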
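
The hierarchical structure mentioned under Scalable Oversight can likewise be sketched as a recursion: a question is decomposed into subquestions, weaker trusted systems answer the leaves, and the answers are aggregated back up toward the root. The functions below are hypothetical stand-ins for models and overseers, shown only to illustrate the shape of the protocol, not anyone's actual implementation.

```python
# Structural sketch of hierarchical oversight via recursive decomposition.
# decompose() and weak_answer() are hypothetical stand-ins.
from typing import List

def decompose(question: str) -> List[str]:
    # Stand-in: a real system would ask a model to split the question.
    return [f"{question} / subquestion {i}" for i in range(2)]

def weak_answer(question: str) -> str:
    # Stand-in for a less capable but trusted overseer model.
    return f"answer({question})"

def amplified_answer(question: str, depth: int = 2) -> str:
    """Recurse until depth 0, then aggregate the weak answers back up."""
    if depth == 0:
        return weak_answer(question)
    sub_answers = [amplified_answer(q, depth - 1) for q in decompose(question)]
    return "aggregate(" + "; ".join(sub_answers) + ")"

print(amplified_answer("Is this model's proposed plan safe?"))
```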
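
Finally, the cellular-automaton setting mentioned under ALIFE can be as small as Conway's Game of Life, where a glider is the textbook example of a coherent, self-propagating structure studied as a toy model of emergent agency. The snippet below is an illustrative toy only.

```python
# Conway's Game of Life on a toroidal grid, seeded with a glider.
import numpy as np

def step(grid: np.ndarray) -> np.ndarray:
    """One synchronous update of the Game of Life with wrap-around edges."""
    neighbours = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(int)

# Seed a glider: five live cells that propagate diagonally forever on a torus.
grid = np.zeros((20, 20), dtype=int)
grid[1, 2] = grid[2, 3] = grid[3, 1] = grid[3, 2] = grid[3, 3] = 1

for _ in range(40):
    grid = step(grid)
print(grid.sum(), "live cells after 40 steps")  # the glider persists (5 cells)
```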

Sponsorship Tiers

  • Company Name and Logo Displayed on TAIS Website

    Acknowledgement in Opening Ceremony

    Distribution of your Company Flyer among participants

  • Company Name and Logo Displayed on TAIS Website

    5-minute Company Presentation to Attendees

    Optional presentation or demonstration (limited to 10 minutes) during a workshop

  • Company Name and Logo Displayed on TAIS Website

    5-minute Company Presentation to Attendees

    Lead a workshop on a topic relevant to the conference

    Access to Attendee List

*Japanese Consumption Tax (10%) will be added to the sponsorship invoice.

If you are interested in becoming a sponsor, please contact someone@aisafety.tokyo.

Interested in attending TAIS 2025?

Please sign up on Luma.