A disclaimer: these are my views and my views only. They are not the views of my employer (Google DeepMind) nor my academic institution (University of Cambridge). I change my views frequently, so expect the perspectives you read here to evolve over time.

You may ask yourself: well, how did we get here?

There's a (quite natural) tendency for people hearing about, working on, or interested in AI to think 'nothing like this has ever been done before!' — a response I sympathise with given the impressive capabilities of today's AI systems. Often, though, we have been here before (or at least somewhere similar).*

Machine learning is the dominant paradigm underpinning modern AI. It can best be summed up in three words: Learning From Examples. That very simple idea, looking at the past to predict the future, is the motivation behind this project. I’m interested in thinking about how best to maximise the potential of AI whilst minimising its risk, focusing on questions related to policy, governance, ethics, history, and more. I’ll unpack these issues in this newsletter in a few different ways:

  • Histories of AI (usually focused on either one particular moment in time or a longer trend over a number of years);

  • Reviews of new research and reports covering AI ethics or governance issues;

  • Non-AI focused histories on relevant issues, episodes, or institutions (such as the IAEA or NIST);

  • Longer essays focused on AI policy and governance;

  • Interviews with influential thinkers interested in AI governance, ethics, policy or history (or with experts in another relevant field we can learn from).

Of course, this isn’t exhaustive and will probably change over time. Please also let me know if there’s anything else I should cover!

About Harry Law

A rare up-to-date picture of me from 2024

I work on ethics and policy issues at Google DeepMind. When I’m not there, I spend my time reading and writing my way through a PhD at the Department of History and Philosophy of Science at the University of Cambridge. I’m also a postgraduate fellow at the Leverhulme Centre for the Future of Intelligence. My academic research examines the social, material, and political circumstances surrounding the development of artificial neural network technology. I write papers, reviews, and the occasional op-ed.

You can also catch me on LinkedIn and Twitter (especially the latter, where I spend far too much time). My personal website is here and my academic page is here.

*Clearly, I don’t mean to say that there have been technologies with identical or even particularly similar social, economic or political loci (especially with respect to the speculated forms that AI might take in the future). What I mean to say is that for anyone interested in ethics and governance issues, our past is replete with lessons to be learned.
