Breakthroughs in technology are not new. They are, moreover, predictable: not what they will consist of, nor the changes to which they will lead, but that such breakthroughs are inevitable, and that when they occur, being caught flat-footed is ill-advised. What we similarly know is that the leaders we now have – including our political leaders, business leaders, leaders in education and religion, in the military and the media – generally are ill-equipped to deal with the latest technological breakthrough: artificial intelligence, AI.
Previous technological breakthroughs – for example, the printing press, and those of the industrial and information revolutions – led to upheavals that ultimately were difficult or impossible to control. These upheavals touched every area: ideas and information; politics and economics; modes of production and distribution; arts and culture; power and influence at both the national and international levels.
Now comes another breakthrough in technology – AI. While it is not new, the response by experts and laypersons alike to ChatGPT, the artificial intelligence chatbot developed by OpenAI and released just six months ago, suggests that AI has crossed a threshold: from an early and still obscure technology to one that seems, almost overnight, to present a personal, professional, and maybe even existential threat. Of course, the level of uncertainty remains extremely high. Not only do we have no idea how many jobs will be automated by AI; we have no idea what this automation implies. Elimination of tens of millions of jobs? Or transformation of tens of millions of jobs?
Leaders have a bad track record controlling or even managing the technologies that in time tend to outrun them. Though there has been much handwringing about the harm done by, for example, social media, there is no sign that leaders are able or willing to harness the beast they let loose.
Not good – especially if you believe that the risks of AI are best expressed by those who presumably know best. Specifically, by some 350 industry leaders who recently wrote in an open letter that “mitigating the risk of extinction from A.I. should be a global priority alongside other society-scale risks such as pandemics and nuclear war.”
However well-intentioned the executives, researchers, and engineers who cooperated to write the letter, there is not a chance they will be able, or even willing, to stop a train that has already left the station. Quite the opposite: they are all competing to see who can go faster and further, who can outstrip the competition to raise AI to greater and potentially more dangerous heights.
If leaders in business and science are unlikely watchdogs, what about leaders in politics? The prospects are not promising. The average age of members of Congress is 65 – itself a major problem, a generational problem. This is a cohort of political leaders still struggling to manage the information revolution, to say nothing of the revolution in artificial intelligence. Most cannot even grasp what they are supposed to control. Moreover, given their track record of helplessness and haplessness, particularly as it pertains to tech, how can we possibly be confident that America’s elected officials will better manage breakthrough technologies in the future?
Europe’s leaders have done a better job addressing the problem than their American counterparts. Professor James Heskett credits the European Union’s Artificial Intelligence Act with achieving the world’s “most extensive approach” to relevant regulation. But while the Act does exercise a measure of control, this could be a case of the genie already being out of the bottle. It remains unclear whether it will be possible to regulate AI in a way that is substantial, that ultimately is meaningful.
What then is to be done? Will, can, anything or anyone have a positive impact? Some ideas have already been put forth. For instance, the CEO of OpenAI, Sam Altman, suggested enacting federal regulations, such as requiring licenses to ensure that AI models are thoroughly tested before being made available. But it all seems a bit like putting a finger in a dike – or, to mix metaphors, like shutting the barn door after the horse has already bolted.
How then to maximize Leader Intelligence in the age of Artificial Intelligence? Given that leaders – all leaders, including political leaders and corporate leaders – cannot possibly conquer or even control technology, the question becomes: what is leader intelligence in the age of artificial intelligence? What should leaders be, and what should leaders know, when technologies threaten to outrun them at every turn? Threaten to out-know them, to out-pace them, to out-perform them?
Three attributes stand out.
- The first is for leaders to become more human. To become more humane. To return to being generalists as opposed to specialists.
- The second is for leaders to become less national and more international. To become less parochial and more ecumenical.
- The third is for leaders to become more contextually aware. To become less focused on themselves and more focused on their followers – and on the multiple contexts within which they and their followers are located.
How can these attributes be cultivated, inculcated? How should leaders learn in the age of AI? My book on how leaders should learn – Professionalizing Leadership – was published in 2018, before AI was front and center.* Still, the sequential, three-step process that I advocated then is even more relevant now, in the age of AI, than it was then. Step 1: Leaders should be educated. Step 2: Leaders should be trained. And Step 3: Leaders should be developed. The process is not quick; it is slow. It takes years to learn how to lead – a lifetime of learning, a lifetime of accommodating and adapting to change. Change such as AI entering our bloodstream – our mainstream.
I’ll conclude this post by focusing on Step 1, on educating leaders. (Subsequent posts will focus on Steps 2 and 3.) How should leaders be educated in the age of AI? How can they become more human? And more humane? And more generalists than specialists? The answer is clear, and it is not arcane. In fact, in a June 10 Wall Street Journal article titled “Great Books Can Heal Our Divided Campuses,” Professor Andrew Delbanco applied the same logic to a different, though related, problem: how to repair the nation’s fractured colleges and universities.
Delbanco properly notes that the great moral and historical questions belong to the humanities (history, literature, philosophy, and the arts) and to social sciences such as political science and sociology, and, I would add, psychology. He writes about the virtues of a core curriculum that represents the best of a “general education that assigns or attracts students to classes explicitly focused on broad human themes, with common reading lists and with peers whose origins, interest and ambitions differ from their own.” Delbanco concludes his piece by arguing that “at our centrifugal moment, we have an opportunity and an obligation to rethink general education” – which is precisely what I am arguing applies equally to leadership education.
If humankind is to have a future, it must educate leaders to become humanists, globalists, and ecumenicists. Without exception, the greatest dangers to planet Earth – pandemics, nuclear wars, climate catastrophes, and the risks posed by artificial intelligence – require deep thinking, broad thinking, and open thinking.
It’s why leaders should be educated before they are trained, and it’s why their education should consist of shared experiences in the humanities and social sciences. These experiences should include reading, analyzing, and discussing a wide range of works by, for example, Confucius and Plato, Shakespeare and Tagore, Elizabeth Cady Stanton and Nelson Mandela, Dostoyevsky and Freud. Are thinkers, writers, activists, poets, and philosophers such as these relevant to managing artificial intelligence? Yes.
———————————-
*Oxford University Press, 2018.
