Ilya Sutskever is a pioneering figure in artificial intelligence (AI) and machine learning, renowned for his transformative contributions to deep learning. A co‑founder and former Chief Scientist of OpenAI, he now leads Safe Superintelligence Inc. (SSI), a company dedicated to the safe development of advanced AI. His work has shaped the modern AI landscape, and he remains central to the quest for artificial general intelligence (AGI) while championing safety and ethical innovation.
Born in the former Soviet Union and raised in Israel and Canada, Sutskever studied at the University of Toronto, where mentorship from deep‑learning pioneer Geoffrey Hinton guided him toward breakthrough research in neural networks. His Ph.D. work, which included co‑authoring the 2012 AlexNet paper that helped ignite the modern deep‑learning era in computer vision, laid the groundwork for many subsequent advances in the field.
After his Ph.D., Sutskever joined Google Brain, collaborating with Hinton and Jeff Dean on large‑scale image recognition and language systems. His 2014 work on sequence‑to‑sequence learning helped establish the neural approach to machine translation, and these advances are now integral to everyday AI applications.
In 2015, Sutskever co‑founded OpenAI. As Chief Scientist he oversaw pivotal projects, including the GPT series of language models and DALL·E, and later co‑led the company’s alignment research effort.
After a brief board conflict in November 2023, Sutskever left OpenAI in May 2024, expressing a renewed commitment to the safe development of AGI.
In June 2024 Sutskever co‑founded SSI with Daniel Gross and Daniel Levy, establishing the world’s first “straight‑shot” lab with one product: a safe superintelligence.
Sutskever views AGI as a tool to address grand challenges but insists safety and ethics remain paramount: “As we develop more powerful AI systems, it’s crucial that we prioritize safety and ensure these technologies are aligned with human values.”
Sutskever’s work underpins today’s natural‑language‑processing, computer‑vision, and reinforcement‑learning systems. Geoffrey Hinton calls him “a brilliant researcher and visionary leader balancing innovation with ethical considerations.”
At SSI, Sutskever now directs a concentrated effort to address the long‑term risks and opportunities of AGI. His leadership and safety‑first philosophy remain pivotal in shaping a future where superintelligence is developed responsibly and benefits all of humanity.