Current Exhibition | Winter 2026

Default vs. Designed: Exposing Cultural Bias in AI-Generated Branding Through AR

Curatorial Statement

AI-generated branding is not neutral. Every model reflects the data it was trained on, and the tools most widely adopted in professional creative workflows, including ChatGPT, DALL-E, Midjourney, and Ideogram, are products of US-based companies trained predominantly on Western, English-language content. The result is a default: a set of aesthetic and communicative assumptions that read as universal precisely because their origin goes unexamined.

This exhibition makes that default visible.

Four AR-enabled posters. Four cultural contexts: the United States, Japan, Dubai (United Arab Emirates), and Brazil. Each poster was generated using the same base prompt, applied without cultural adjustment, because that is how these tools are typically used in practice. What the model produces unprompted is the argument.

Research already shows that audiences notice. Survey data collected for this project found that 50% of respondents would trust a brand less upon learning its advertising was AI-generated. That distrust is not irrational. It reflects an intuition that something human is missing. Cultural specificity is part of what is missing.

When visitors point their cameras at a poster, the AR layer activates. The default poster surfaces annotations that name what the AI produced and why it reflects a cultural assumption. After exploring those, visitors can tap to reveal the redesigned alternative, which carries its own annotations explaining the intentional shifts made for that cultural context. The learning happens in both states: what the default reveals about bias, and what the redesign demonstrates about intentional, culturally informed creative practice.

This is not an argument against AI in creative practice. It is an argument for using it with awareness: auditing outputs, building cultural review into workflows, and recognizing that speed without context replicates bias at scale.

View Gallery