Johnian magazine issue 51, autumn 2023
Jamie Bernardi: since St John’s
Jamie Bernardi (2015) is Co-Founder and AI Safety Lead at BlueDot Impact, where he works to help educate the next generation of machine learning engineers, policymakers and students about the risks posed by advanced Artificial Intelligence systems. In this article he explains what led him to found the company and what he expects for the future of the field.
While reading Physical Natural Sciences at St John’s, I was lucky enough to find friends who shared an interest in pondering emerging, yet overlooked, societal challenges. Artificial intelligence had blasted through image-recognition benchmarks in 2011, and its capabilities continued to improve exponentially throughout my undergraduate study, which began in 2015. Yet AI still harboured multiple open problems: practitioners were (and still are) concerned about malicious use of advanced systems, value misalignment, and even losing control of a powerful ‘rogue AI’. It felt clear that – like any powerful technology – AI would bring risks as well as opportunities in our lifetime, and I was keen to use my career to help steer it.
After graduating with an MSci in Physics in 2019, I spent two years learning more about the technology as a machine learning engineer. I joined Cambridge-based tech start-up Audio Analytic, where I created sound recognition algorithms. Our mission was to give machines a sense of hearing. To pay the bills, we categorised sounds such as glass breaking or dogs barking, for applications like additional home security.
During the pandemic I began to pursue my ambition to directly develop safer AI algorithms by starting a part-time collaboration with researchers at the University of Oxford’s Future of Humanity Institute. We designed a ‘pessimistic reinforcement learning’ algorithm that expects the worst and asks a human for help if it thinks the outcome of its action might cause harm.
I applied for a grant to continue that work full-time and took the plunge by leaving Audio Analytic, with ambitions to complete the research and progress to PhD study in safe AI. (Out of interest, Audio Analytic was acquired by Meta in 2022, soon after I left.)
Still in Cambridge, and now working as an independent researcher, I met my now co-founders – one of whom is a fellow Johnian, Will Saunter (2017). Our other co-founder had developed prototype courses introducing some of the world’s most pressing problems and needed a point person to develop his AI safety course. Having spent four long years getting to the forefront of the field myself, I saw that a faster on-ramp was sorely needed.
I therefore started running ‘AI Safety Fundamentals’ courses in 2021. I had planned to continue with my research simultaneously, but the courses quickly took over as my primary endeavour when we saw demand grow sharply. It was off the back of that demand that we decided to found BlueDot Impact in August 2022, receiving a seed grant from a philanthropic foundation. Our mission is to empower driven individuals to solve the world’s most pressing but neglected problems.
My primary responsibility at BlueDot Impact continues to be working with experts to keep our AI safety strategy and course content up to date with the field. We receive input from researchers at organisations like OpenAI, The Centre for the Governance of AI, RAND and Cambridge’s Machine Learning Group.
This all happened before AI safety alarmingly hit the headlines, so what’s changed for us since then?
I admit I expected AI safety to remain a niche field well into the 2020s, but the release of ChatGPT and faster-than-expected AI progress have quickly brought the topic onto the world stage.
I’m particularly excited that the UK will be hosting the world’s first AI Safety Summit in November 2023, at Bletchley Park. The summit will bring together world leaders to discuss international standards for AI safety. While the US is home to most of the world’s leading AI companies, I believe the US and the rest of the world will be watching what happens here this November.
Discussions are expected to cover, among other topics, establishing pre-deployment evaluations of AI models to detect any new, advanced capabilities and harmful behaviour, and what new international institutions are needed to co-operate on these rules.
Having been in the right place at the right time, BlueDot Impact’s latest project has been to provide education to the UK policymakers who have been tasked with producing world-leading regulation on AI.
I expect we’ll now see a decade of back and forth between industry and governments to strike the right balance of applying the brakes and the accelerator to this potentially world-altering technology. Beyond the technological progress itself, it is difficult to foresee what societal changes lie on the other side. Assuming we avoid catastrophes, some imagine a world where work is more meaningful, while others imagine a world without work. Furthermore, regulation could lead to the continued concentration of capital in the hands of the very few companies and governments that currently develop and deploy advanced AI systems first.
In the meantime, BlueDot Impact aims to equip upcoming talent with the knowledge and tools they need to steer AI in a positive direction, and I’m proud to have been at the forefront of the challenge at such a pivotal moment.
You can hear more about Jamie’s work in a recent podcast episode of the Artificial General Intelligence (AGI) Show with Soroush Pour, ‘Getting started in AI safety & alignment’.