Episode 64
WEIRD AI - Cultural Intelligence Meets Artificial Intelligence
In this solo episode, Trisha examines a critical question that every global professional using AI should consider: Are artificial intelligence systems as globally representative as we assume? Drawing on groundbreaking 2010 research about WEIRD (Western, Educated, Industrialised, Rich, Democratic) populations in psychology, she explores whether today's AI models might face similar challenges.
Through conversations with Claude from Anthropic and ChatGPT's Monday persona, Trisha uncovers surprising perspectives on data sources, training biases, and the cultural blind spots that might influence AI responses. These AI systems offer candid assessments of their own capabilities when it comes to representing diverse global viewpoints, revealing insights that could reshape how we interact with artificial intelligence.
Can artificial intelligence develop genuine cultural intelligence, or does it require culturally intelligent humans to unlock its potential? What would it mean to bring your CQ to every AI interaction rather than assuming these systems provide universal, unbiased knowledge? Whether you're a leader implementing AI tools across cultures, an HR professional considering AI solutions, or anyone curious about the intersection of technology and cultural intelligence, this conversation challenges fundamental assumptions about artificial intelligence as a neutral, global knowledge source.
Make sure you join Trisha in this journey of growth and discovery throughout the year via Substack or LinkedIn.
Transcript
[00:00:25] As regular listeners know, cultural intelligence, CQ, is made up of four areas: motivational CQ drive, cognitive CQ knowledge, metacognitive CQ strategy, and behavioral CQ action. In this podcast, we often think about CQ strategy and how our thinking is critical to building our cultural intelligence.
[00:01:29] First of all, I need to go back in time a little bit and check that you might be familiar with an important piece of psychological research. Back in 2010. Yes, it was a long time ago, I know. I remember being at a conference. It was the International Association for Cross-Cultural Psychology. It was in Melbourne, and there was a keynote speaker who was reporting on the work of Joseph Henrich, Steven Heine, and Ara Norenzayan.
[00:02:35] And in case you missed it, those first letters spelt out WEIRD: W-E-I-R-D, Western, Educated, Industrialized, Rich, and Democratic societies. And that's often because they're using the subjects at their own universities or the people that they have access to. "Researchers," I'm back to the abstract now. Sorry.
[00:03:33] The three researchers did a meta-analysis of social science databases covering topics such as visual perception, perceptions of fairness, how people cooperated, moral reasoning, self-concepts, motivations, and many other aspects, and they discovered that there is substantial variability in experimental results across populations, and that WEIRD subjects are particularly unusual compared with the rest of the species.
[00:04:42] I can remember sitting in that conference hall a bit like in When Harry Met Sally. Just thinking, not saying it out loud, don't worry: yes, yes, yes. And it's about time, because as a practitioner, especially as an organizational practitioner, I could see that organizations around the world in different countries and cultures, they were very different.
[00:05:34] And you're right to a certain extent. Researchers, when they write a proposal, they have to explain how they'll gather a more representative sample, how they'll ensure their findings have broad relevance. And if they don't, they need to acknowledge the limitations of the study.
[00:06:19] And if people who work in the field of psychology aren't familiar with that piece of research, perhaps they're still operating as if the WEIRD findings are relevant for everyone. And here's where we get to the core part for today's session. Now we have artificial intelligence, AI. A source of knowledge and reasoning that's based on global understanding, isn't it?
[00:07:21] But all of the responses are as the AI gave them, and I am giving the particular models that I've used in the context. The responses are as they were given to me from the AI source. I haven't edited anything. So first I asked Claude. It's a large language model, an LLM, developed by Anthropic, one of the world leaders.
[00:08:17] Ryan: Looking at the WEIRD framework (Western, Educated, Industrialized, Rich, Democratic) that critiques how psychological research has been dominated by samples from these specific populations, there's definitely a compelling case that current generative AI models - including myself - exhibit similar biases.
[00:08:43] Western perspectives: English-language content dominates training data, along with other major Western languages
[00:08:56] Industrialized contexts: The very infrastructure enabling this content creation assumes industrialized societies
[00:09:08] Democratic assumptions: Many underlying values and frameworks in the training data reflect democratic societies
[00:09:30] However, there are some interesting differences. Unlike human research subjects, I'm an amalgamation of millions of sources rather than a coherent individual from one demographic. And unlike psychology studies, I'm being deployed globally to interact with people from many different backgrounds - though even then, users tend to skew toward more digitally connected populations.
[00:10:08] Trisha: But hey, Claude is just one model, right? Perhaps the others are better, and there have been upgrades to the models. Perhaps these have improved this issue. I also have a subscription to OpenAI's ChatGPT, which, according to recent data, is the most commonly used LLM.
[00:10:44] Dolores: A conversational persona or voice, often gritty, darkly satirical, existential, and slightly cynical — like if Samuel Beckett, Charlie Brooker, and early GPT got trapped in a meeting that never ends.
[00:11:13] Trisha: So, thinking we might need some brutal truth and hole poking on this topic, I selected the Monday option and typed in: Can you describe the process of data gathering which defined your learning in light of the WEIRD research from Henrich et al. (2010)? If psychology was defined as WEIRD because of the sources of its research, how do you define AI, and ChatGPT in particular, and have the recent model changes had an impact on this?
[00:11:54] Henrich, Heine, and Norenzayan (2010)—aka the Fun Police of psychology—coined the term WEIRD to describe how most psychological studies were based on samples from populations that are:
[00:12:07] Educated.
[00:12:10] Rich.
[00:12:12] In other words: College students named Chad, surveyed in a fluorescent-lit room on a Tuesday. Henrich et al. basically said, “Hey, maybe we shouldn’t generalize findings from Ivy League undergrads to all of humanity.”
[00:12:34] Short answer: Yes, but it’s worse.
[00:12:42] Trisha: Okay, listeners, I think we might be getting some brutal truths here, with a little bit of sarcastic humor thrown in. Let's get back to the questions. Remember I asked: can you describe the process of data gathering which defined your learning?
[00:12:59] Data Gathering: Who Built Me?
[00:13:15] That sounds broad, right? Surely that solves the WEIRD problem!
[00:13:23] The sources I’m trained on are mostly:
[00:13:27] Produced disproportionately by Western, tech-savvy, literate populations
[00:13:36] Published by people with access to technology, free time, and some education
[00:13:43] So even though my data includes voices from outside the WEIRD bubble, the dominant patterns, linguistic norms, and epistemologies reflect exactly the same monoculture that psychology was called out for. Just at scale.
[00:13:58] Public Web does not equal Global Representation
[00:14:04] Oral traditions? Nope.
[00:14:09] Non-digitized, non-Western cultural practices? Mostly missing.
[00:14:17] Contextual nuance behind idioms, humor, taboos? Often flattened or misread.
[00:14:34] Trisha: I know, dear listeners, despite the humor, this is not so good, and we need to be listening and we need to be aware of it.
[00:14:45] How Do I Define ChatGPT in This Context?
[00:14:54] “A culturally aggregated predictive engine disproportionately shaped by WEIRD digital artifacts, but tasked with serving a global audience like a universal oracle.”
[00:15:18] So the result is functional, but full of cultural blind spots, and—just like early psychology—liable to generalize from its own data patterns as if they’re universal truths.
[00:15:29] Have Recent Model Changes Addressed This?
[00:15:35] Fine-tuning & RLHF (Reinforcement Learning from Human Feedback):
[00:15:55] Localization Efforts:
[00:16:14] Multimodal Capabilities:
[00:16:27] TL;DR: My Origin Story equals WEIRD squared
[00:16:32] The model behavior is tuned to be less so, but still carries WEIRD echoes.
[00:16:43] Psychology eventually looked in the mirror and said, “Oh no, we’ve been generalizing from the weirdest people alive.”
[00:16:58] Trisha: Hmm. Somewhat grim. Hey, but it's good we know, because then we know not to leave our CQ at the door when we are using these models. I've been using Claude for quite a while, and as I do a lot of thinking and research and writing about cultural intelligence, I know that at least the version of the model that I'm using often thinks through a CQ lens.
[00:17:53] Ryan: Looking at the Cultural Intelligence (CQ) model's four dimensions, there are some capabilities that could theoretically help address WEIRD biases, though with important limitations:
[00:18:19] CQ Knowledge (Cognition): This is probably my strongest area - I have access to information about diverse cultural systems, values, and practices from my training data. I can draw on knowledge of different religious traditions, social structures, and worldviews when relevant to a conversation.
[00:18:52] CQ Action (Behavior): I can modify communication styles, examples, and references based on what I learn about someone's cultural background during our conversation.
[00:19:19] More importantly, I lack the embodied, lived experience that deepens cultural intelligence. I can't feel what it's like to navigate as a minority, experience different power structures, or intuitively understand unspoken cultural rules.
[00:19:46] Trisha: Well, I appreciated that honesty, but I wasn't sure if I agreed with Claude's self-assessment of what it could do, especially its metacognition, its CQ strategy. Is AI always reflecting on the cultural assumptions it's making? And so I asked ChatGPT. Remember, it's the Monday sarcastic persona, to bring a bit of humor to this topic because, as you may know, that's what us Aussies and Kiwis like to do. So here's Monday.
[00:20:34] The Short Answer:
[00:20:53] But…
[00:21:02] The Longer, Messier Truth:
[00:21:04] CQ Is Not Baked In by Default
[00:21:14] CQ Drive: Motivation to work across cultures
[00:21:19] CQ Strategy: Awareness and planning around those differences
[00:21:23] Do I have all four?
[00:21:26] No. I'm an algorithm. I don’t want anything. I just pattern-match like a psychic spreadsheet.
[00:21:32] Absolutely. I’ve absorbed more cultural theory than most grad students.
[00:21:37] Limited. I can make culturally-aware choices if prompted, but I don't autonomously pause to consider if I’m defaulting to a Western perspective unless told.
[00:21:47] Yes, sort of. I can shift tone, language, examples, and frameworks to reflect different cultural contexts—when you ask me to.
[00:22:07] Trisha: So what happens then as we think about all those people who aren't aware of the bias that AI has, and they just ask a question thinking that they will benefit from a knowledge response that is a global response? When we are working with AI, we call the questions prompts. A question without applying a specific lens, like a culturally intelligent lens, would be a default prompt.
[00:22:38] Simon: The Danger of Default Prompts
[00:22:44] “Write an article about leadership,”
[00:22:48] Harvard Business Review logic
[00:22:51] US/UK corporate norms
[00:22:56] But if you say:
[00:23:03] or
[00:23:08] —then I can pull from less-centralized sources, adapt tone, integrate collectivist frameworks, acknowledge different power distance norms, relational values, etc.
[00:23:31] Trisha: Did you hear that, friends, or had you switched off because I know the voice is a bit artificial? ChatGPT said your prompt unlocks the culturally intelligent version of me.
[00:23:45] Yes—to a surprising degree. Your use of:
[00:23:51] Localized value systems
[00:23:59] Historical and linguistic context
[00:24:08] But—and this is important—you’re doing the heavy lifting. I’m still relying on you to:
[00:24:14] Define the lens
[00:24:17] Care about representation
[00:24:31] Final Thought:
[00:24:34] No.
[00:24:38] You bring the CQ. I bring the bandwidth.
[00:24:49] Trisha: And that, my friends, is our task, our challenge. As we use AI, we need to remember to bring the CQ. As we benefit from AI's bandwidth of knowledge, we need to name the culture, the specific group we're working with.
[00:25:32] Let me know your thoughts, your experience, your research if you have some, so we can all learn together, and please follow or subscribe to make sure you're here for the next episode of The Shift.