As the Web Summit Qatar unfolded like an epic battleground of innovation, attendees found themselves caught in a whirlwind of futuristic booth displays and ambitious Artificial Intelligence (AI) startups. The scene—encapsulated within an event space of 29,035m² at the Doha Exhibition & Convention Center (DECC)—was reminiscent of a high-stakes competition.
Each startup strove to capture the attention of investors and onlookers, and their interactive demos and promises of unparalleled capabilities not only provoked curiosity but also sparked a desire among many to understand and discuss the ethical implications of these innovations.
In moments like these, where the sky is the limit for AI’s potential, it is crucial to address concerns regarding data protection, algorithmic bias, and academic dependency.
Drawing from her interest in AI and her experience as a senior learning engineer within the IT department of Northwestern University in Qatar (NU-Q), Sahar Mari delivered a Sound Byte at the Web Summit about AI’s ethical implications in higher learning. As one of the speakers selected by the university, she explored the impact of generative AI in educational settings under the title ‘GenAI in Education: Equalizer or Divider? Navigating Equity, Bias, and Ethics,’ focusing on AI’s potential to promote equity while addressing concerns about bias and ethical considerations.
“One of the main takeaways I hope attendees took from my session is critical AI literacy,” she said. “Right now, it is very unclear how AI models are being trained or what biases are going into them because we don’t see behind the system.”
Yet, among these challenges, there are also positive sides to this integration for students.
“Generative AI is empowering students to take charge of their own learning. Our campus is diverse, and for a lot of people, including myself, English isn’t their first language,” she said, adding that the technology has provided language support by lowering barriers to learning content for many students.
Mari’s familiarity with AI tools was reflected in her role at NU-Q, where she introduced the university’s partnership with Microsoft’s AI chatbot, Copilot, and led in-class demonstrations throughout the Fall semester of 2024. Copilot is designed to champion responsible AI, prioritizing user privacy and data security to ultimately ensure a safe and reliable experience for its users.
Amid the numerous projects due late in the semester, professors encouraged the use of this data-encrypted companion. Even so, many students continued to turn to other AI models, especially ChatGPT, despite the risks of relying on a tool tied to bias and privacy issues.
Further comparing the two AI-powered chatbots, an external discussion by Qatar Datamation Systems (QDS) on ‘Responsible and Secure AI’ at the Web Summit highlighted how Copilot still lags behind ChatGPT in the sophistication of its responses. Because the former does not retain and use user data for ongoing learning, it remains limited to its initial training.
However, QDS looked at the bright side, stating: “The data of ChatGPT currently is huge right now as billions of people use it on a daily basis. But if more people start using Copilot, it will get far ahead of other AI models, as it becomes your personal assistant that brings you outputs that are secure.”
Without a doubt, students are increasingly turning to the myriad of accessible AI tools to assist with their schoolwork, with 86% of students worldwide reporting such use. For instance, Jana Al-Outoum, a journalism student who was also selected by NU-Q to perform a Sound Byte and share her experiences with AI at the Web Summit, exchanged views on ‘Engaging with Generative AI in Journalism: A Critical Reflection on Its Potential and Pitfalls’ with Eddy Borges-Rey, Assistant Professor in Residence at NU-Q.
“I rely on Generative AI when class readings are dense and difficult to digest, like in history or political science, where understanding the bigger picture requires context,” Al-Outoum said. She clarified, however, that because of the inherent biases in these tools, she tends to revisit the material to verify the accuracy of their responses.
“Sometimes, this makes the process longer rather than more efficient,” she admitted.
With a wide variety of AI tools available today, Al-Outoum believes it is critical to understand how each one can affect results.
“Students should be aware that these tools carry biases that reflect the preconceived ideas of their programmers,” Al-Outoum advised fellow students. She found over-reliance on AI to be “especially concerning for journalism and media students, where credibility is crucial, along with human and personal elements in writing and storytelling.”
A more demanding contest now emerges, as individuals are challenged to confront the advent of AI with a more critical approach. In recognizing these challenges, from technology dependence to data protection, a hopeful vision for the future endures: one in which technological advancement is more seamlessly aligned with responsible and ethical values. Indeed, innovation thrives best when paired with strong ethics.