sandcup x ai JUL 25, 2025
Artificial intelligence (AI) is rapidly shaping how we think, create, and act, regardless of what we do. It has proven to be a great help, saving time and cutting costs on mundane tasks and offering a fresh outlook when we feel stuck. Beyond that, it lends clarity and validation to our inputs, helping us reach well-reasoned solutions and decisions that would otherwise take considerable time and revision.
It has broadened perspectives across the globe, making the virtual world open and accessible to anyone seeking a glimpse into a particular idea. With that reach, however, comes the risk of over-generalizing concepts and flattening the distinctive details that make something stand out, much as humans constantly do. Experts suggest that with the right programming, built on inclusion and transparency, AI can be less biased than humans, and can even enhance human judgement in collaborative settings.
But before we get there, we need to ask: how is AI biased, and what factors drive that bias?
Let's go back a few months, when we were all obsessed with getting our Ghibli-style portraits made and it felt as if the whole internet had turned into a Miyazaki film. Apart from the debate over whether it was morally right for AI to imitate a meticulously crafted art form, there was also the question of some portraits being racially inaccurate.
Stable Diffusion, a popular image-generation tool, has been used to produce visuals for ads, social media campaigns, and digital storytelling. The more these visuals circulate, the more they reinforce stereotypes. Examples of bias in AI image generators abound: when BuzzFeed shared 195 AI-generated images of Barbies from countries around the world, the results were disturbing. The German Barbie donned a Nazi outfit, the Lebanese Barbie stood in front of rubble, the South Sudanese Barbie held a gun (changed later after backlash), and South Asian Barbies were whitewashed as blondes. Rather than appreciating culturally diverse identities, AI amplified cultural appropriation through inaccurate depictions.
Similarly, ask an AI to generate an image of a CEO and it will mostly present white men; a Mexican man will often be shown wearing a sombrero. This is not purely a technical flaw. The truth is that amplification only happens where bias already exists. The system learns from us, and then we unknowingly learn back from it.
So how does one decide who holds the prejudice in this process? Is it the bias in people's perception, or the bias in AI's functioning?
AI systems are trained on massive datasets from the past, shaped by the notions of the dominant voices that program them rather than by the full range of human perspectives. A study conducted by Microsoft categorizes five main types of AI bias; among them is cognitive bias.
Cognitive bias is fundamental to human behavior: a byproduct of our tendency to generalize from limited viewpoints. When crafted into data or design, it becomes systemic in AI, which learns from how we interact with it. The system interprets and adapts based on human responses and queries, and so a feedback loop forms as AI evolves from its code as well as from human validation.
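To see how such a loop can compound, here is a minimal, purely illustrative Python sketch. Everything in it is invented for illustration: a toy generator samples outputs in proportion to its training mix, users tend to validate whatever looks most familiar, and validated outputs feed back into the training data.

import random

# Assumed initial training mix: a mild 60/40 skew (invented numbers).
counts = {"group_a": 60, "group_b": 40}

def generate():
    """Sample an output in proportion to the current training mix."""
    return random.choices(list(counts), weights=list(counts.values()))[0]

def user_validates(output):
    """Users are slightly more likely to accept what they see most often."""
    familiarity = counts[output] / sum(counts.values())
    return random.random() < familiarity

for _ in range(1000):
    output = generate()
    if user_validates(output):
        counts[output] += 1  # validated outputs feed back into training data

share = {g: round(c / sum(counts.values()), 2) for g, c in counts.items()}
print(share)  # the 60/40 starting skew typically widens, e.g. toward 70/30

Even with this crude model of "validation", a mild starting skew steadily widens, which is the worrying property of feedback loops: nobody has to intend the bias for it to grow.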
Bias in AI is a challenge to designing with integrity, as it affects aesthetics, fairness, inclusion, and public perception. In design, AI tools are often promoted as accelerators, but if they push flawed decisions, they do long-term harm rather than good. If an AI tool is used to generate hiring, branding, or advertorial visuals, it must avoid discrimination and the reinforcement of stereotypes. Design thinking has real potential to reduce and mitigate AI bias: by restructuring how information is displayed and prioritized, a tool can shape more thoughtful decisions. Bias, in other words, can be addressed through information architecture, and ethical design practices like these can be embedded into AI-generated content.
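As a hedged illustration of that information-architecture idea, consider this small Python sketch. The image names and group labels are hypothetical inputs, not something the code infers; the point is that without touching the model at all, re-ranking generated candidates round-robin across attribute groups keeps any single depiction from dominating the first results a user sees.

from collections import defaultdict
from itertools import zip_longest

def diversify(candidates):
    """Re-rank (item, group) pairs so attribute groups alternate."""
    buckets = defaultdict(list)
    for item, group in candidates:
        buckets[group].append(item)
    reranked = []
    for row in zip_longest(*buckets.values()):
        reranked.extend(item for item in row if item is not None)
    return reranked

# Hypothetical "CEO" image results whose raw ranking favors one group.
raw = [("img1", "group_a"), ("img2", "group_a"), ("img3", "group_a"),
       ("img4", "group_b"), ("img5", "group_a"), ("img6", "group_b")]
print(diversify(raw))  # ['img1', 'img4', 'img2', 'img6', 'img3', 'img5']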
AI simply stems from what we teach it to be. Apart from responsible design thinking, other efforts should include regularly auditing and refining AI models for fairness, diversifying training data to represent marginalized communities, and making outputs aware, inclusive, and respectful.
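A routine fairness check could start as simply as the following toy Python audit. The data, group names, and threshold are placeholders rather than a real methodology; it compares selection rates across groups and flags outputs whose ratio falls below the commonly cited "four-fifths" rule.

def selection_rates(decisions):
    """decisions: (group, selected) pairs -> selection rate per group."""
    totals, hits = {}, {}
    for group, selected in decisions:
        totals[group] = totals.get(group, 0) + 1
        hits[group] = hits.get(group, 0) + int(selected)
    return {g: hits[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of lowest to highest selection rate (1.0 means parity)."""
    return min(rates.values()) / max(rates.values())

# Invented audit sample for an AI-assisted screening tool's outputs.
audit = ([("group_a", True)] * 48 + [("group_a", False)] * 52
         + [("group_b", True)] * 30 + [("group_b", False)] * 70)
rates = selection_rates(audit)
print(rates)  # {'group_a': 0.48, 'group_b': 0.3}
ratio = disparate_impact(rates)
print(round(ratio, 2))  # 0.62 -- below the "four-fifths" (80%) threshold
if ratio < 0.8:
    print("flag for review: outputs skew toward one group")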
Users, designers, developers, and decision-makers all share the responsibility of moral judgement and oversight. Rather than treating AI's output as the final word, we should seek its help to refine and grow our original thought.
This is a call to all those who build screens, as well as those who seek answers from them: let's create better AI and employ it for a more inclusive purpose, solving problems around the world.
References
Bringolf, Jane. Artificial Intelligence and Universal Design.
Demartini, Gianluca. "Bias in Humans and AI - What to Do about It?" Companion Proceedings of the ACM on Web Conference 2025, ACM, May 2025, pp. 2473-74.
Russ Smith, Jessica, and Michelle D. Lazarus. "Bias in AI: Building the Machine to Support All Life." The AI (R)Evolution, Monash University Publishing, 2024, pp. 48-78.