At Safe Superintelligence (SSI), our team is our greatest asset. We are a diverse group of experts in AI research, engineering, and ethics, united by a shared mission: to ensure the safe and ethical development of superintelligent AI. Our members bring a wealth of experience from various disciplines, including computer science, cognitive psychology, and policy development, all of which are essential in addressing the multifaceted challenges of AI. Each team member's passion and expertise fuel our commitment to transparency, innovation, and safety as we pioneer the future of artificial intelligence. We believe that a collaborative, multidisciplinary approach is key to building AI that is not only powerful but also safe and aligned with human values.
Our culture at SSI is built on openness, collaboration, and a deep respect for the societal impact of AI technologies. We are committed to pushing the boundaries of what is possible while ensuring that ethical considerations are always at the forefront. By bringing together experts from diverse fields, we foster an environment where creativity and rigorous scientific inquiry can thrive. We understand that the development of superintelligent AI will have profound implications for society, and we are dedicated to making sure these technologies serve the best interests of humanity.
Co-Founder and Chief Scientist
Ilya Sutskever is a visionary leader in artificial intelligence and a co-founder of OpenAI. He has been instrumental in advancing deep learning research, leading efforts behind transformative technologies such as the GPT series, which have revolutionized natural language processing. Ilya holds a Ph.D. in Computer Science from the University of Toronto, where he studied under the renowned AI pioneer Geoffrey Hinton. His research has been pivotal in shaping the AI landscape, including co-inventing AlexNet, a foundational deep learning model that significantly advanced the field of computer vision. At Google Brain, he also contributed to TensorFlow, an open-source library that has become a cornerstone of machine learning development.
Ilya's numerous accolades include being elected a Fellow of the Royal Society in 2022, recognizing his outstanding contributions to science. At SSI, Ilya's leadership continues to shape the field of safe AI research, focusing on creating superintelligent systems that prioritize ethical considerations. His work not only pushes the boundaries of AI capabilities but also emphasizes the importance of building AI that can coexist safely with humanity. Under his guidance, SSI aims to lead the way in developing AI technologies that are both groundbreaking and inherently safe.
Co-Founder and CEO
Daniel Gross is an accomplished entrepreneur and investor with a strong track record in AI innovation. At the age of 19, Daniel founded Cue, an AI-powered search engine acquired by Apple, where he subsequently led AI and search initiatives. He later served as a partner at Y Combinator, and went on to support promising AI startups and researchers through initiatives such as Pioneer and AI Grant. Daniel's experience in nurturing startups has given him a unique perspective on the challenges and opportunities in AI development, particularly in ensuring that emerging technologies are aligned with societal needs.
As a co-founder of SSI, Daniel provides strategic direction to advance the development of safe, aligned AI technologies. His focus on responsible AI growth ensures that our technologies are developed with long-term societal benefits in mind. He is passionate about bridging the gap between cutting-edge AI research and practical, real-world applications that can enhance human well-being. Daniel's leadership at SSI is driven by a vision of creating AI systems that are not only intelligent but also beneficial, safe, and transparent. His strategic insights are vital to our mission of shaping a future where AI serves humanity in a positive and constructive manner.
Co-Founder and Principal Scientist
Daniel Levy is an expert in AI optimization, previously leading the Optimization Team at OpenAI. He has an extensive background in designing and deploying advanced AI systems, ensuring they are both effective and secure. Daniel's expertise lies in optimizing complex machine learning models to maximize their efficiency while maintaining safety and ethical standards. He has played a critical role in developing techniques that allow AI systems to operate reliably, even under challenging conditions.
As Principal Scientist at SSI, Daniel oversees the efficiency and robustness of our AI solutions, driving our mission to create AI systems that are beneficial and aligned with human values. His work focuses on balancing performance with safety, ensuring that our AI technologies are optimized not only for computational efficiency but also for ethical integrity. Daniel's commitment to ethical AI development is reflected in his approach to building systems that are transparent, explainable, and aligned with the broader goals of society. His expertise allows us to build AI systems that people can trust and rely on.
We are always seeking talented, passionate individuals to join us in shaping the future of AI. At SSI, we value creativity, collaboration, and a deep commitment to ethical AI development. If you are driven by a desire to contribute to the development of safe superintelligent AI, please visit our careers page for current opportunities. We offer a dynamic work environment where you will work alongside some of the brightest minds in AI research, engineering, and ethics. Together, we can make a difference in how AI impacts the world, ensuring that it is developed responsibly and for the benefit of all.
A straight shot to safe superintelligence.
For more information about our team and our work, visit our website. Learn more about our projects, our research, and our commitment to ethical AI. We are actively engaged in collaborations with academic institutions, industry partners, and policymakers to ensure that our work aligns with the best practices and standards in AI safety and ethics.
Together, we are pioneering a future where AI serves humanity's best interests, ensuring a safer, smarter world for everyone. Our mission is not just about creating advanced AI technologies but also about setting new standards for safety, transparency, and ethical responsibility in the AI community. By working together, we can harness the power of superintelligent AI to solve some of the world's most pressing challenges, making a positive impact on society and future generations.