AI Reading List
When ChatGPT was publicly released, I was initially quite scared about keeping a job for the next 5 years. Then I used it and was no longer afraid, but I vowed to learn more, keep an eye on the state of LLMs (i.e. use them and anything similar), and do my best not to let it catch me unaware. Part of that self-reassurance was reading what other folks have written. Here are some of the books I've read on this topic that I found helpful in different ways.
- A World Without Work, by Daniel Susskind: This is a policy-oriented view that looks to the future and asks how to handle a world where more and more labor is done by machines (robots, AI, etc.). It covers social and economic considerations and puts forth policies we might want to adopt so that the majority of people can have improved lives, instead of careening toward some dystopia where there are a few island kings and everyone else is impoverished and struggling. It is well written and optimistic, and a valuable lens for considering the world around the technology. I appreciate that the author doesn't say "it is upon us", but rather talks about a near-ish future where we might eventually have to reckon with superhuman agents.
- Nexus, by Yuval Noah Harari: This is a history-oriented view that looks to the past to learn how we might handle an AI future. Also extremely well written and engaging; the author talks about "information networks" generally and discusses humanity's use of various technologies that facilitate network effects (e.g. books, the printing press, the radio, the internet). I don't agree with all the future-looking things he says, but I respect how well he argues them and roots his arguments in history. My main critique is the "AI is upon us and these changes are now" angle, which I heartily disagree with, but the arguments are still really good to consider for whenever his assumptions do come to pass. I also like how the author emphasizes that things are not inevitable: we make choices, we can decide to use or not use certain technologies in certain ways, and the context in which a technology exists matters a lot (e.g. how something akin to AGI could destabilize or empower a democracy versus a totalitarian regime in completely different ways).
- Superintelligence, by Nick Bostrom: Often pointed to as a theoretical, philosophical, foundational tome on the subject, it discusses a lot of ideas around AI. The author defines superintelligence (i.e. how superhuman can mean doing something faster, better, or both) and its implications, the "alignment problem", interesting thoughts on how we might perceive and be perceived by an intelligence "smarter" than us, and lots more. It is really interesting, but a lot of it feels so out there and theoretical that it's often hard to digest. I think that's part of why the paperclip example is one of the most quoted sections of the book, since it is so easy to understand (it's the idea that if some super-powerful AI were directed to produce as many paperclips as possible, it would kill all of humanity and eat the entire universe in pursuit of its goal). I'll be honest: I stopped reading about 2/3 of the way through because it was a dry slog.
- Genius Makers, by Cade Metz: A brief encapsulation of the recent history of AI and the people at the top of it, written in a more storytelling, journalistic style. It was interesting to learn about some of the prominent figures in the current LLM evolution, and it's also interesting to see the patterns and (in my opinion) delusions that have been present for as long as folks have been working on this stuff: it seems like AGI has always been 10-20 years away. Probably the least important of what I've read here, but useful for understanding the people and the industry.