I have subscribed to MIT Technology Review for almost a decade…it remains one of my best sources for what is emerging in technology. In a recent article, Bill Gates noted his ten favorite books about technology…and first on the list was Max Tegmark’s (2017) Life 3.0: Being Human in the Age of Artificial Intelligence. Apparently I am behind the times, as former President Barack Obama also listed this book as a favorite he read in 2018.
Tegmark and his wife formed the Future of Life Institute in 2014 and invited scholars, economists, and AI researchers to consider the directions the evolution of technology was taking. Tegmark noted that technology “…is giving life the potential to flourish like never before – or to self-destruct.”
Obviously, their hope is on the former…not the latter!
Tegmark defines Life 3.0 as the coming technological age of humans (we are not there yet), with Life 1.0 being purely biological in nature…abilities hard-wired in DNA, and Life 2.0 being cultural, i.e., the ability to learn. Tegmark purposely uses definitions that could apply to both human beings and intelligent machines, and for the most part, the book is meant to spark discussion on what it would mean if or when machines reach human-level intelligence.
The book consists of a number of chapters…some fairly fact-based and others fairly speculative.
Tegmark noted that his audience includes Luddites, those who are skeptical that we will ever reach human-level intelligence, those who foresee a utopian society after AI exceeds human ability, and the majority who hope to shape AI as a beneficial movement. Tegmark’s main argument is that the future risks of AI come not from Terminator-like machines or a consciousness that sees us as ants…but rather from the misalignment of the goals of AI with those of humans.
Big companies like Google, Facebook, and IBM, along with hundreds of start-ups, are at the forefront of artificial intelligence development. How to pursue this in ways that align with the goals of humanity (and who decides those goals) remains elusive. Just this past week, Google announced it was going back to the drawing board on its AI ethics panel after some complained that the membership was anti-LGBT. So figuring out how to shape AI as a beneficial movement is still just beginning.
In some ways, Tegmark’s push for shaping AI as a beneficial movement aligns with Joseph Aoun’s book (2017) Robot-Proof, which suggested that the fear of AI is misplaced, and that we need to realign higher education toward more of a lifelong learning environment so that we humans can continue to evolve as AI evolves. Aoun noted that technology has impacted humans ever since we figured out that flint was sharper than fingernails. He is optimistic that our future involves partnering with artificial intelligence.
Or as Kelly (2016) noted in The Inevitable, “This is not a race against the machines. This is a race with the machines.”
Tegmark suggested in his opening that shaping the future of AI is the most important conversation of our time. Maybe the second most important is reflecting on the impact racing with machines has for higher education and leadership in general.