Opinion: The Cooperative Illusion: AI, Language, and the Craft of Co-Creation

There is a danger that we might over-rely on AI’s many superhuman-like capabilities. But AI and LLMs are nothing more than a mirror of the data they have consumed, and to get the best out of them, human intervention and collaboration will be required for them to shine, writes Louis Keegan

One of the most fascinating aspects of using AI, or more specifically Large Language Models (LLMs), in research is how the interaction mimics (I say mimics because, in actuality, it is entirely based on probability) the natural rhythms of human conversation.

Interacting with AI in research evokes the cadence and fluidity of human dialogue. While the responses are generated through complex statistical modelling rather than consciousness, intention, or even deductive reasoning, the experience of engaging with an AI often mirrors the act of speaking out loud with another person.
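To make that statistical point concrete, here is a minimal sketch of next-word prediction over a toy, hand-written probability table; the vocabulary and the probabilities are invented for illustration and are not drawn from any real model:

    import random

    # A toy next-word table. A real LLM learns distributions like this
    # over tens of thousands of tokens from vast text corpora.
    next_word_probs = {
        ("the", "story"): {"was": 0.5, "began": 0.3, "changed": 0.2},
        ("story", "was"): {"told": 0.7, "retold": 0.3},
    }

    def sample_next(context):
        """Sample the next word in proportion to its learned probability."""
        candidates = next_word_probs[context]
        words, weights = zip(*candidates.items())
        return random.choices(words, weights=weights)[0]

    print(sample_next(("the", "story")))  # e.g. "was": probability, not intent

Everything that feels like dialogue emerges from steps like this one, chained together one word at a time.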

This dynamic strongly parallels the oral traditions that have long shaped the way we share, preserve, and evolve knowledge; folklore and storytelling are prime examples. Before the codification of ideas into written texts, oral storytelling and dialogue were primary vehicles for teaching, remembering, and innovating. Knowledge wasn’t simply recited; it was performed and co-created in real time. Wisdom passed between generations and individuals through conversations, debates, stories, and proverbs, evolving with each retelling and enriched by the context of each interaction. The seanchaí is our very own Irish example of this.

Similarly, prompting an AI model is not a one-sided transaction. It’s not about retrieving a static answer, but about shaping a dialogue: refining the prompt, interpreting the response, and asking again with new insight. Each interaction becomes a moment of potential intellectual co-creation.
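As a loose illustration of that refinement loop (the ask helper below is purely hypothetical, standing in for whichever model API a researcher actually uses):

    def ask(prompt: str) -> str:
        """Hypothetical stand-in for a real model call; any API would do here."""
        return f"(model reply to: {prompt[:50]}...)"

    prompt = "Summarise the key debates about oral tradition in folklore studies."
    for _ in range(3):
        response = ask(prompt)
        print(response)
        # The researcher interprets the reply, then reframes the question
        # with new insight; the loop, not any single answer, is the craft.
        prompt = f"Your last answer was: {response}. What did it overlook?"

The value sits in the reframing step between calls, which no model performs for you.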

This poses an interesting question: am I having a conversation?

Paul Grice, a British philosopher of language, viewed conversations as cooperative endeavours where participants recognise a shared purpose and contribute meaningfully to that exchange.

His “Cooperative Principle” suggests that effective dialogue relies on making contributions appropriate to the stage and purpose of the conversation. At face value, this principle might seem highly relevant to AI-assisted research.

When engaging with AI models, researchers guide the conversation while the AI responds. This cooperative dynamic potentially enhances the research process, allowing AI to act as an informed conversational partner rather than just a passive data retrieval tool.

Yet, while the interaction may feel conversational, even at times startlingly human, the AI is not a person. It does not understand in the way we do, nor does it possess consciousness, intention, or a sense of shared purpose. In that respect it directly contradicts Grice’s principle, which assumes conversation is underpinned by a mutual will to understand one another.

What AI produces is the result of complex pattern recognition: predictive modelling of likely next words based on textual data. It mimics behaviour that we associate with intelligent thought, but it does so without will, awareness, or comprehension.

This distinction is crucial. The apparent fluency and coherence of AI-generated language can easily lead to anthropomorphism, the tendency to attribute human traits to non-human entities.

A paper released last year by American scientists Kyle Mahowald and Anna Ivanova attempted to address this by suggesting that we can confuse or conflate language with thought, and so, because LLMs use language quite well (most of the time), we can be made to feel as though they “think”. When we forget that the AI is a tool trained on patterns rather than a mind engaged in reflection, we risk over-relying on it or misinterpreting its outputs as definitive or intentional. It is, in essence, a mirror of the data it has consumed, reflecting back to us an intricate tapestry of human language, ideas, and biases.

While AI can act as a powerful collaborator in the research process, offering new angles and synthesising information at extraordinary speeds, the responsibility for interpretation, critical thinking, and ethical judgment remains entirely with us.

It challenges us to critically assess its responses, and it takes skill to tease out the insights it might present. AI is far from perfect. Its role is not to replace the researcher, but to augment the researcher’s ability to explore, test, and imagine: to serve as a kind of externalised cognition that supports, but does not substitute for, our intellectual agency.

This is why the often-discussed pilot and co-pilot relationship is so crucial. This blend of technological efficiency and human expertise is how AI, as a tool, can shine.

Louis Keegan is an account manager with Pluto the Agency and part of its strategy team
