At WISE 12, Experts and Delegates Question Who Controls Knowledge in the AI Era

NU-Q faculty lead discussions about how artificial intelligence is reshaping knowledge, authority, and learning

A Google search now often returns an AI-generated summary alongside the familiar list of hyperlinks. But how often do users click those links and follow the trail? When users accept these summaries as truth, who gets to define what knowledge is? And whose perspectives get encoded into the systems that shape the way billions of people learn?

These are questions that took center stage at the 12th edition of the World Innovation Summit for Education (WISE), held under the theme “Humanity.io: Human Values at the Heart of Education.” The event featured over 60 sessions, from podcasts to panel discussions and masterclasses.

Within this diverse lineup of conversations was a panel featuring Marc Owen Jones, associate professor in residence at Northwestern University in Qatar (NU-Q) and an expert in digital authoritarianism and disinformation. The panel, titled “Truth, Trust, and Technology: The Future of Knowledge,” examined the role of AI in shaping public trust and academic integrity.

“Truth, Trust, and Technology: The Future of Knowledge” panel at WISE (Alexander Binay)

“An important takeaway here is the distinction between truth and facts,” Jones said. “Facts might be considered objective elements that we tend to agree on. But truth is often seen as a thing that people agree on, especially in the digital age.”

According to Jones, AI is dramatically accelerating this blurring of fact and truth.

“AI is allowing bad actors or even good actors to actually influence the information space, and I’m seeing this in my research now. [Anyone can] create thousands of bots online or fake accounts, create millions of lines of generative content in the click of a button, and then use that to make things go viral,” he added.

Jones at the Panel (Alexander Binay)

The implications of AI-mediated knowledge are already visible in user behavior. Research from the Pew Research Center analyzed browsing activity in March 2025 and found that users who encountered an AI summary clicked on a traditional search result link in just 8 percent of all visits. Even more striking, users rarely clicked on cited sources within AI summaries, doing so in just 1 percent of visits. Users were also more likely to end their browsing session entirely after visiting a search page with an AI summary.

AI-generated content has become a terminal point for knowledge rather than a gateway to deeper investigation, raising questions about context, nuance, and the ability to evaluate information quality independently.

Another perspective came from Khaled Harras, senior associate dean for faculty and director of the Hamad Bin Jassim Center for Computer Science Education at Carnegie Mellon University in Qatar (CMU-Q). He referenced Mo Gawdat’s opening plenary concept of AI as an “intelligence booster,” which argues that AI could add points to anyone’s IQ.

“We should not lose track of the base,” Harras said. “We should not jump on the bandwagon of the tool and excitement while losing track of what we think in our humanity and what we know has worked with the development of human thinking and mind. If the base goes down, and I feel very sad to say, I’m starting to see glimpses of that.”

He argued that if this human foundation erodes, then adding an AI “IQ booster” on top of it results in a mediocre net gain. The concern for the “base” makes Harras’s emphasis on value-based education even more critical in an AI-amplified era.

By eliminating entry-level positions, Harras suggested, AI prevents young people from developing the skills it cannot replicate: the human judgment that comes from experience, failure, and mentorship.

For Wajdi Zaghouani, NU-Q associate professor in residence, these challenges carry particular urgency for Arabic and other low-resource languages.

Zaghouani at WISE (Aiganym Akhmetova)

“If we just try to import this technology and then translate this technology to our local language, it’s not enough,” Zaghouani said in the panel “Can AI be Bilingual in Both Code and Culture? Lessons from Global Classrooms.” He advocated for regional investment in “small language models” that are “less costly and more tailored to the local needs and the local context of the educational sector.”

“Can AI be Bilingual in Both Code and Culture? Lessons from Global Classrooms” panel at WISE (Alexander Binay)

Aaron Jelley, who leads learning support and digital learning at Al Khor International School, confronts these theoretical concerns daily in K-12 education. He emphasized the need for digital citizenship education tailored to children that goes far beyond tool usage.

“Students need to understand how to use these tools safely,” Jelley said. “They need to understand what they actually do, how they work, and that they are biased and built from the information that’s out there already online.”

Teaching that level of critical literacy requires time, expertise, and resources that many schools lack. It also requires addressing ChatGPT’s “yes-man” problem, a tendency to lean toward agreement and affirmation that poses particular risks for young users still developing critical thinking skills.

“How much of your personal information have you given away?” Jelley asked. “Maybe right now it’s safe, but what comes next?”

Despite these concerns, Jelley sees no alternative. “There’s no denying that it’s here to stay; it’s not going away because of the amount of momentum that it has and the fact that it is a revolutionary technology,” he said.

After listening to these discussions at the summit, Fan Wu, a communications senior at NU-Q, found particular value in the framing of AI as an assistant rather than a replacement.

“I think everyone talked about how AI should be a sort of an assistant,” Wu said. “As humans, we can’t think of it as something that would replace human beings, but instead we have to use it wisely as an assistant.”

Wu at WISE [right] (Aiganym Akhmetova)

Furthermore, Wu highlighted how frequently she heard the term “AI literacy” at the summit. “Before, as a communication student, I always heard about media literacy, but now, because of the development of AI, people have to gain more knowledge about AI literacy,” she said.

Beyond the theory, Wu saw this put into practice at a panel hosted by Chinese scholars on encouraging students to integrate AI into their education. They emphasized project-based assignments rather than traditional essays, an approach Wu recognized from her own classes.

This consensus on AI as an assistant, however, depends on the human capacity to use it wisely. For Jones, this returns to the discussion of agency and resistance, but with a recognition that digital literacy must be age-appropriate, context-specific, and proactive.

“There are different levels of digital literacy and information literacy that are necessary and changing all the time,” he said. “Maybe we should be teaching people to challenge.”

Teaching students to challenge AI systems is one thing when those systems are tools. It becomes far more complex when those systems increasingly control access to information, employment, and opportunities, especially when the companies building them operate with minimal accountability.

Diana Forsythe, one of the leading anthropologists in the Science and Technology Studies (STS) field, said in 1993 that “the ability to decide what will count as knowledge in a particular case is a form of power.” Today, with AI systems influencing everything from hiring to policing to healthcare, that power has only intensified.

WISE 12’s theme incorporated “.io” to serve as a bridge between technology and humanity, reminding its participants that innovation in education must remain deeply human-centered. However, when companies appropriate knowledge at scale, operate without accountability, and increasingly control the infrastructure of truth itself, do educators, policymakers, and citizens truly have the agency to push back?

Or, as Harras noted, has the base of human critical thinking, persistence, and judgment already begun to erode?

This question may determine not only the future of education but the future of knowledge itself: whose knowledge counts, who decides what truth is, and whether the next generation will be equipped to question the answers their AI assistants provide or simply accept them as the last word.
