Generating knowledge with AI: Epistemic partnership?
A review of Paolo Granata’s “Generative Knowledge: Think, Learn, Create with AI” (2025).
A hope for the “Lee Sedol Effect”
Paolo Granata opens his Generative Knowledge with a recap of the 2016 Go match between Lee Sedol, the 9-dan master, and the computer program AlphaGo. In the second game, AlphaGo made the now-legendary Move 37, so unorthodox that it stunned professionals and instantly reshaped Go theory. After another loss in the third game and several days of strategic study, Lee Sedol answered in the fourth game with his own audacious Move 78, a stroke of tactical invention that secured his only win in the five-game match.
For Granata, this story shows that “generative knowledge thrives when artificial intelligence and human creativity push each other forward.” This becomes the leitmotif of the book. “The Lee Sedol Effect,” says Granata, “underscores that AI’s enduring value lies in amplifying human ingenuity rather than supplanting it.”
Amid the rising tsunami of literature on AI, Granata’s book stands out for two distinctive characteristics. It focuses strictly on the pragmatic aspects of generating knowledge, which allows Granata to avoid the alarmist and often pessimistic debates about AI taking over different areas of human life. At the same time, his pragmatism is not reduced to advice or manuals on how to make a picture or write a dissertation using AI. It is not even about using AI per se; the book is rather a deep study of how to think, learn, and create with AI, enlisting AI as an “epistemic partner” of humans.
Paolo Granata is an Associate Professor of Book & Media Studies at St. Michael’s College in the University of Toronto, where Marshall McLuhan explored and taught media from 1946 until his passing in 1980. Granata himself was a Marshall McLuhan Centenary Fellow in the early 2010s at the McLuhan Centre for Culture and Technology (the Coach House Institute). Lately, Granata has become known as an international media scholar and educator, lecturing on every continent except Antarctica. A key proponent of the Toronto School of Communication and one of its leading figures today, he also hosts the McLuhan Salons, preserving McLuhan’s spirit and the legacy of the Toronto School.
In line with McLuhan’s traditions, Granata introduces new terms, metaphors, and taxonomies, which give his book structural elegance and an almost artistic style while maintaining a solid academic foundation. Some of the terms, such as epistemic wellness, epistemic vigilance (both applied to our interaction with AI), or Turing Galaxy, certainly deserve entry into academic and expert usage. The same applies to several taxonomies Granata proposes while exploring the epistemic affordances of human–AI collaboration.
Granata’s Six Principles of Generative Knowledge
One such taxonomy, which I found particularly compelling, is Granata’s Six Principles of Generative Knowledge.
1. Iterative Principle. Generative knowledge grows out of existing knowledge: “It takes knowledge to generate new knowledge.” Yet not all existing knowledge, or every way of using knowledge, can generate new knowledge; mere accumulation does not produce it.
One corollary is especially important: if a user has no prior knowledge of the topic, the AI’s answers, delivered with linguistically refined “confidence,” can often be misleading. This is where epistemic vigilance is required.
2. Instrumental Principle. Historically, we have improved our cognitive capacity by “using external inorganic apparatuses—seamless adjuncts to our intellectual capacities.” The instrumental use and application of knowledge drove both the maintenance of existing knowledge and the creation of new knowledge. Or, as the saying goes, necessity is the mother of invention.
Building on this instrumental character of knowledge use, Granata traces the emergence of “intellectual technologies (Bell 1973), technologies of the intellect (Goody 1977), psychological tools (Vygotsky 1978), technologies of thought (Ong 1982), tools of the intellect (Olson 1985), tools for thought (Rheingold 1985),” and suggests the umbrella term “epistemic technologies.” “These technologies act as epistemic enhancers, organizing and extending the means of storage, retrieval, and production of knowledge by individuals and societies, while facilitating human knowledge processes and extending human intellectual ability,” writes Granata. AI is a medium that clearly fits this progression: AI is (or can be) an epistemic technology.
3. Social Principle. “Generative knowledge, while relying on existing knowledge (iterative principle), requires both epistemic technologies (instrumental principle) and collective engagement (social principle),” says Granata. The generative power of knowledge is accessed through a “self-organizing epistemic process that is fluid, participatory, and flexible.”
4. Inquiry Principle: generative knowledge is driven by epistemic curiosity. Granata invokes behavioral psychology, which posits that epistemic curiosity is fueled by interest and deprivation. People not only seek immediate utility but also experience “informational deprivation,” a kind of “aversiveness of not knowing,” when no satisfying knowledge is available—often well ahead of any utilitarian need for it. Granata refers to “cognitive appetite,” a term coined by Alberto Manguel. I would say that the inquiry principle even precedes the instrumental principle of cognition “by tools.” Curiosity is the fairy godmother of invention.
5. Learnability Principle: “Generative knowledge requires a willingness to learn, unlearn, and relearn.” Granata also emphasizes that the capacity for revision is essential for generative knowledge. I have found the idea of the importance of unlearning particularly insightful. “Machines can forget, cancel, or delete. Humans, who probably cannot achieve complete forgetting or deletion, treat unlearning as a moment of intellectual growth and epistemic renewal. Understood as an epistemic virtue, unlearning is never purely negative,” writes Granata.
Paradoxically, not learning but unlearning and relearning without forgetting may be distinctively human epistemic features. There is “machine learning”; “machine unlearning” exists too, but only as the technical deletion of training data’s traces, not as epistemic renewal (though this is perhaps because machine behavioral psychology has not yet been sufficiently developed).
6. Creativity Principle. One of the emphases of Granata’s Generative Knowledge is the thesis that creativity is not only about the arts but also about intelligence: “creativity transcends the domain of the arts or the scope of self-expression. Scientific discoveries, mathematical abstractions, philosophical breakthroughs, and technological revolutions all rely on creativity.” Intellectual creativity may function as art and may certainly serve self-expression, but it also enables the creation of new forms of knowledge and even intelligence itself. “Generative knowledge does not just play within boundaries—it plays with boundaries,” says Granata.
With such a well-thought-out framework, clear reasoning, deep research, and metaphorical style, Granata builds a new epistemic theory in which AI is not just a tool or replacement but an epistemic partner for humans in our joint venture of acquiring new knowledge.
My notes and questions regarding the human–AI epistemic partnership
While reading, I took some notes in the margins of Granata’s work. They do not question the points Granata makes; rather, they address points he did not engage with but that are of great interest to me.
1. Will our epistemic partnership with AI last? Will it still work five years from now? Fifteen? With the emergence of AI, change has become so rapid that 2045, the year Raymond Kurzweil assigned as the date of the Singularity, will certainly present a different human–machine dynamic. And that is just 20 years from now.
The metaphor of Lee Sedol, inspired by AlphaGo, making his Move 78 and winning the game—the “Lee Sedol Effect” identified by Granata—is brilliant. But nine years have passed since a Go contest between AI and a human champion was a thing. We may still need AI to rediscover the games of chess and Go, but does AI really need us in a symmetrical way? Having long surpassed the human level, AI now learns from the rules of the game itself, reaching heights and depths of tactics and strategy that are incomprehensible to humans. And this is not just about game-playing AI; generative AI, represented for now by large language models, learns not from humans but from language itself. We humans just diligently supply what we must: speech. Generative knowledge can certainly emerge from this relationship, but how can we ensure that this dynamic really is a partnership if the symmetry is steadily vanishing?
AI has only just emerged and is developing at an incredible pace. We humans, by contrast, are a long-evolved species that has likely reached its full cognitive potential. AI begins where we have arrived (unless, of course, we consider AI the next stage of our cognitive evolution). In any case, considering temporal dynamics in our relationship with AI is essential. Tomorrow will almost certainly be different from today. How? I have some ideas. They are not as optimistic as Paolo’s approach, but I am sure he has his own projections, perhaps for next time.
2. Do we need knowledge or knowing? Thinking “by tools”—and by or with AI—makes sense in the paradigm promoted by Granata. But instrumental, immersive thinking also evokes the dichotomy of orality vs. literacy as modes of perception and cognition, following the ideas of Eric Havelock, Walter Ong, and Marshall McLuhan. “Knowing is doing” indeed; it is immersive—unlike knowledge, which requires environmental detachment and the “inward turn” (Ong) provided by literacy. Knowing is an affordance of orality, and knowledge is an effect of literacy.
Digital media and AI are certainly immersive; they do not provide the affordances of the inward turn, cognitive delay, and detachment that literacy created. They reverse the cognitive effects of literacy and retrieve the immersive, impulsive, conversational, interactional, spherical environmental perception typical of orality. This digital reversal, I would argue, erodes our capacities for abstraction and conceptual thought. The topic likely lies outside a book on how to think, learn, and create with AI, but the immersive/detached quality of knowledge/knowing might merit consideration when we discuss our interaction with AI.
3. What will happen when AI produces so much content that the human share of it becomes insignificant? At some point, AI-generated content will exceed the amount of content produced by humankind in our entire history. Not only will the human share decrease, but the AI slop aimed exclusively at grabbing attention will also increase, polluting the joint AI–human “knowledge.” New technologies of epistemic vigilance will certainly be needed.
4. Is what we humans do with AI really knowledge acquisition? The overwhelming bulk of AI use suggests it is not. AI is more likely a smart assistant, an entertainer, or literally a chatbot in what, 70 or 80 percent of overall use? But that is where Granata’s proactive stance applies: he explores and promotes the kind of collaboration with AI that does lead to generating knowledge. This approach does not cover all human interactions with AI, of course.
A textbook on human epistemology
These notes and questions come from outside the topics Granata develops. This is what a good book does—it has its own Lee Sedol Effect, provoking thoughts and ideas.
Our collaboration with AI gave Granata the grounds for exploring how humans think, learn, and create, and how these processes can be enhanced by AI. So it is actually not just a book about AI—it is a book on human epistemology in the era of AI. As such, Generative Knowledge deserves to become a textbook in epistemology, as it combines deep academic exploration of epistemology and media ecology with the practical tasks of generating knowledge with AI.
Moreover, the book opens up space for discussing what AI’s own epistemology might be. What are the epistemic principles of AI-generated knowledge verification? We know how the epistemic authority of science, or even of Wikipedia, was formed. How will AI shape its epistemic authority, especially after the human share in it decreases? How do we build “epistemic trust,” as Granata calls it, with AI? This is the issue humankind will face very soon.