I recently read Ethan Mollick’s book Co-Intelligence: Living and Working with AI. The book is well written and an easy, enjoyable read. The author explores the potential of conversational AI (LLMs) in personal and professional realms.
As the book makes clear, LLMs are adept at creative work and at identifying unanticipated correlations, yet they can falter at elementary intellectual tasks. An LLM is a statistical model of language that generates a plausible imitation of human responses by calculating the probability of clusters of words. Its responses are devoid of cognitive processes, knowledge, or logical validation; they are the result of calculations about the optimal sequence of words. A response therefore need not be accurate or true.
Mollick’s book presents four guiding principles for the practical and ethical use of LLMs:
[1] Use LLMs to investigate and delineate the boundaries of their functionalities.
[2] Engage in active monitoring of the responses provided, ensuring that they are accurate, true, and free from hallucinations.
[3] Treat conversational AI as a person. Critique its answers, clarify where necessary, reformulate questions, and offer feedback.
[4] Treat every LLM as if it is the least sophisticated model you will ever use, recognizing that new iterations or models may offer improvements.
What struck me is the banality of conversational AI. It will become impossible to know whether words were written by a conscious mind with ‘feeling’ or by an AI. Mollick calls this a ‘crisis of meaning’, but a crisis of language seems more accurate.
Mollick emphasizes the concept of ‘co-intelligence’: humans and AI working together to enhance productivity, creativity, and decision-making. He proposes a symbiotic relationship in which AI tools augment human capabilities. The book is full of anecdotes, examples, and texts that seek to demonstrate this proposition, but it offers no theoretical basis to support it.
By contrast, an article in Nature Human Behaviour in October 2024 found that, on average, human–AI combinations performed significantly worse than the best of humans or AI alone. See: M. Vaccaro, A. Almaatouq, and T. Malone (2024). ‘When combinations of humans and AI are useful: A systematic review and meta-analysis’, Nature Human Behaviour. https://doi.org/10.1038/s41562-024-02024-1.
The book’s implications for the use of conversational AI in an educational context are interesting. They draw on Mollick’s teaching experience at the Wharton School of Business, where he requires his students to use such tools. According to Mollick, these tools have the potential to significantly enhance the quality of learning outcomes.
Precisely because of its anecdotal nature and lack of academic depth, the book is an excellent introduction for those with limited knowledge of AI. I can recommend reading it, especially to students.
E. Mollick (2024). Co-Intelligence: Living and Working with AI, Penguin Portfolio, 256 pp.
Published as a LinkedIn post here.