I grew up between Pakistan and the United Arab Emirates, moving through cities, classrooms, and communities. In each place, I was a reflection of the other – something inherently ‘different’, never one of them. In Pakistan, I was “the girl from abroad.” In Dubai, I was “the Pakistani student.” I learned early on that you never exist in a vacuum: your identity is a reflection of the society you live in, mirroring your experiences, interactions, and aspirations. Having learned this about humans from a young age, I was shocked to realise that the same systemic bias exists in AI and algorithms.
My earliest memory of this realisation is when I turned to AI for a project, asking for a representation of a “South Asian woman.” The result was laughable at best and concerning at worst. I could not recognise the image; it was like a stranger staring at me through a mirror. It felt like a slight against my sense of self and identity. If an algorithm that purports to replace human intelligence cannot recognise human identities, is letting AI seep into the very fabric of our society a wise idea?
Growing up with one foot in Pakistan and one in the UAE, I felt the sting of being misrepresented and ignored. My identity was always lost in translation: too Pakistani, too foreign, too much. However, my issue with AI bias was more than personal. The implications of algorithmic bias extend far beyond individuals: a single flaw or misrepresentation can shape millions of decisions, from corporate hiring to law and healthcare. The stakes are far higher than a mislabeled photo.
This is what led to my AI Literacy Initiative. I wanted to spread awareness of the potentially disastrous implications of AI bias, something I felt was largely overlooked. Over the course of a year, I led workshops for more than 700 students and faculty members, designing and sharing easily digestible material. I drew examples from my own life, telling the story of how I felt the system overlooked my identity, slipping through the gaps of the algorithm.
My work aimed not only to spread this information but also to rectify the issue to the greatest extent I could. To that end, I worked with teachers and other faculty members to design AI Ethical-Use guidelines for a more equitable experience with artificial intelligence. These guidelines were eventually adopted school-wide across multiple departments.
During this project, I learned how deeply inhuman AI actually is. It fails to regard my context, identity, and culture – everything that makes me human. Through this work, I aimed to push for a more equitable artificial intelligence, because AI has seeped into every sphere of our lives, whether we welcome it or not.