Executive Summary
Today’s podcast features a unique convergence of perspectives on AI bias, arriving at a pivotal moment: the EU AI Act’s bias provisions became applicable on 2 August 2025, and the landmark Mobley v. Workday case was certified as a nationwide collective action in May 2025.
Your 40+ years in human-computer interface design positions you perfectly to address how design choices shape algorithmic fairness, whilst your diverse panel brings legal, technical, and ethical perspectives to this critical conversation.
How Algorithms Become Biased: Technical Mechanisms
Data-Level Mechanisms
Historical Bias
Training data reflects past societal inequalities. Amazon’s infamous recruiting tool penalised CVs containing the word “women’s” (as in “women’s chess club captain”) because it learnt from 10 years of male-dominated hiring data.
Representation Bias
Systematic underrepresentation of certain groups. Over 50% of AI training databases primarily reflect US and Chinese populations.
Measurement Bias
Systematic errors introduced through proxy variables, such as postcodes serving as racial proxies in credit scoring.
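A small illustration of this, under wholly synthetic assumptions: even when the protected attribute is excluded from training, a correlated proxy (labelled “postcode” here purely for illustration) lets the model reproduce historically biased outcomes.

```python
# Illustrative sketch, wholly synthetic: the protected attribute is
# excluded from training, but a correlated proxy ("postcode") lets the
# model reproduce historically biased outcomes anyway.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
race = rng.integers(0, 2, size=n)               # protected attribute
postcode = (race + (rng.random(n) < 0.1)) % 2   # ~90% correlated proxy
income = rng.normal(55.0, 5.0, size=n)          # identical across groups

# Historical labels carry bias: qualified group-0 applicants were
# sometimes denied regardless of income
qualified = income > 55
biased_denial = (race == 0) & (rng.random(n) < 0.3)
label = (qualified & ~biased_denial).astype(int)

# Train without race; the proxy carries the bias into predictions
X = np.column_stack([income, postcode])
clf = LogisticRegression().fit(X, label)
for g in (0, 1):
    rate = clf.predict(X[race == g]).mean()
    print(f"group {g} predicted approval rate: {rate:.3f}")
```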
Algorithmic Level
- Optimisation functions that aggregate errors without considering subgroup membership
- Cross-entropy loss functions inherently favour majority classes (see the sketch after this list)
- Regularisation techniques inadvertently suppress minority group signals
- Neural network architectures contain inductive biases favouring certain patterns
- Word embeddings encode societal stereotypes (“man is to computer programmer as woman is to homemaker”)
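A minimal sketch of the majority-class point, using synthetic data and illustrative parameters: a classifier trained with a standard cross-entropy objective on a 95/5 class imbalance all but ignores the minority class, and simple reweighting recovers much of the lost recall.

```python
# Minimal sketch (synthetic data, illustrative parameters): a plain
# cross-entropy objective on a 95/5 class imbalance all but ignores
# the minority class; reweighting recovers much of the lost recall.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_major, n_minor = 9500, 500
X = np.vstack([rng.normal(0.0, 1.0, (n_major, 5)),
               rng.normal(0.5, 1.0, (n_minor, 5))])
y = np.array([0] * n_major + [1] * n_minor)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for weighting in (None, "balanced"):
    clf = LogisticRegression(class_weight=weighting, max_iter=1000)
    clf.fit(X_tr, y_tr)
    # Recall on the minority class shows who the optimiser sacrificed
    print(f"class_weight={weighting}: minority recall = "
          f"{recall_score(y_te, clf.predict(X_te)):.2f}")
```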
Feedback Loops
These amplify biases over time through:
- Predictive policing creating self-reinforcing cycles
- Recommendation systems creating filter bubbles
- Data drift where population distributions change but training data doesn’t
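A hypothetical toy simulation makes the predictive-policing loop concrete (all numbers invented): both districts have identical true incident rates, but incidents are only recorded where officers patrol and patrols are reallocated according to recorded incidents, so the initial skew persists rather than self-correcting.

```python
# Hypothetical toy simulation (all numbers invented): incidents are
# only recorded where officers patrol, and patrols are reallocated in
# proportion to recorded incidents, so an initial skew persists even
# though both districts have identical true incident rates.
import numpy as np

rng = np.random.default_rng(42)
true_rate = np.array([10.0, 10.0])    # identical underlying rates
patrol_share = np.array([0.6, 0.4])   # small initial allocation skew

for step in range(6):
    observed = rng.poisson(true_rate * patrol_share)  # seen only where patrolled
    total = observed.sum()
    if total > 0:                      # guard the degenerate no-data case
        patrol_share = observed / total
    print(f"step {step}: patrol share = {patrol_share.round(2)}")
```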
Current State of AI Bias in 2025
Regulatory Landscape
EU AI Act: Entered into force on 1 August 2024, with bias provisions active from 2 August 2025. Requires “appropriate measures to detect, prevent and mitigate possible biases”, with penalties of up to €35 million or 7% of global revenue.
US Developments: Senator Markey introduced AI Civil Rights Act (September 2024). Colorado’s comprehensive AI Act delayed to 2026.
Recent Research & Cases
University of Washington Study (October 2024)
Major language models showed intersectional bias in CV ranking: CVs bearing names associated with Black men were never preferred over those associated with white men.
MIT Breakthrough (2024)
A TRAK-based debiasing technique identifies the specific training examples that drive biased failures, removing roughly 20,000 fewer samples than conventional data balancing whilst maintaining accuracy.
Mobley v. Workday Landmark Case
Derek Mobley (one of your panellists) saw his case certified as a nationwide collective action challenging AI-driven discrimination in hiring tools (May 2025).
Solutions and Mitigation Strategies
Technical Approaches
Industry-Standard Tools
IBM AI Fairness 360
70+ fairness metrics and 9 bias mitigation algorithms
Microsoft Fairlearn
Constraint-based optimisation with scikit-learn integration (see the sketch below)
Google What-If Tool
Interactive exploration with counterfactual analysis
TensorFlow Model Remediation
MinDiff for balancing error distributions
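As a concrete taste of the constraint-based approach, the sketch below uses Fairlearn’s ExponentiatedGradient reduction to train a scikit-learn classifier under a demographic-parity constraint; the synthetic data, group labels, and threshold are illustrative assumptions.

```python
# Hedged sketch of constraint-based mitigation with Fairlearn: train a
# scikit-learn classifier under a demographic-parity constraint. The
# synthetic data, group labels, and threshold are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from fairlearn.reductions import ExponentiatedGradient, DemographicParity
from fairlearn.metrics import demographic_parity_difference

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 4))
group = rng.integers(0, 2, size=n)  # sensitive attribute
y = (X[:, 0] + 0.8 * group + rng.normal(scale=0.5, size=n) > 0.4).astype(int)

mitigator = ExponentiatedGradient(LogisticRegression(),
                                  constraints=DemographicParity())
mitigator.fit(X, y, sensitive_features=group)
y_pred = mitigator.predict(X)

gap = demographic_parity_difference(y, y_pred, sensitive_features=group)
print(f"demographic parity difference after mitigation: {gap:.3f}")
```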
The Critical Role of Diversity
Current Statistics
- 22% of AI professionals worldwide are women
- 10.2% of US AI specialists are Black
- At Google: 56.1% White, 34.7% Asian, 3.7% Black, 6.5% Hispanic
- Companies in top 25% for diversity are 35% more likely to have above-average profitability
Measurable Impact
Facial Recognition: error rates of up to 34% for darker-skinned women vs 0.8% for lighter-skinned men
Image Generation: DALL-E 2 and Stable Diffusion generate 75-100% male images for STEM professions
Success Stories
- Joy Buolamwini: Algorithmic Justice League challenging bias through advocacy
- Timnit Gebru: DAIR Institute advancing ethical AI research
- Pelonomi Moiloa: Lelapa AI with “by Africans, for Africans” solutions
- Kate Kallot: Amini Corp secured $2M for environmental data solutions
Democratisation’s Impact on AI Bias
Open Source Revolution
- Meta’s LLaMA: 1.2 million downloads in one week
- Hugging Face: 100,000+ models, 400+ million downloads in 2024
- BigScience: 1000+ researchers from 70 countries creating BLOOM
Infrastructure Barriers
- Only 25% of sub-Saharan Africa has reliable internet
- Data centres consume roughly 415 TWh globally (about 1.5% of world electricity), and training large models is a fast-growing share of that demand
- 29% gender gap in mobile phone usage in sub-Saharan Africa
Real-World Impacts: Quantified Consequences
Healthcare Bias
A widely used care-management algorithm affected 200+ million people. Because it used healthcare spending as a proxy for need, Black patients assigned the same risk score were significantly sicker than white patients, more than halving the number of Black patients identified for extra care.
Criminal Justice
Black defendants were 77% more likely to be flagged as high risk by COMPAS, with false positive rates of 45% for Black defendants versus 23% for white defendants.
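The disparity behind those numbers is a per-group false positive rate; here is a minimal sketch of the computation, with tiny made-up arrays standing in for real COMPAS records.

```python
# Minimal sketch of the per-group false positive rate, the disparity
# metric behind the COMPAS findings. The arrays are tiny made-up
# stand-ins, not real COMPAS records (1 = flagged / reoffended).
import numpy as np

def false_positive_rate(y_true, y_pred):
    negatives = y_true == 0
    return float(np.mean(y_pred[negatives] == 1)) if negatives.any() else 0.0

y_true = np.array([0, 0, 0, 1, 0, 0, 0, 1, 1, 0])
y_pred = np.array([1, 1, 0, 1, 0, 0, 1, 0, 1, 0])
group = np.array(list("AAAAABBBBB"))

for g in ("A", "B"):
    mask = group == g
    print(f"group {g}: FPR = {false_positive_rate(y_true[mask], y_pred[mask]):.2f}")
```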
Employment
Amazon’s recruiting tool penalised women’s CVs; HireVue’s video assessments showed bias against candidates with disabilities; Derek Mobley was rejected from 100+ jobs over seven years.
Financial Services
Apple Card: women reportedly received credit limits as low as one-twentieth of those offered to men with identical finances. 50+ million Americans remain underserved for credit.
HCI Design as a Solution Pathway
Transparency Through Design
- Clear explanations of algorithmic decisions
- Visual indicators when AI is used
- Accessible language rather than technical jargon
- Progressive disclosure showing basic rationale first (see the sketch below)
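One possible shape for progressive disclosure in code, offered purely as an illustrative sketch (the class and field names are assumptions, not an established API): a decision explanation that surfaces a plain-language rationale first, with factor-level detail only on request.

```python
# Illustrative sketch only: the class and field names are assumptions,
# not an established API. A decision explanation exposes a one-line
# rationale first; factor-level detail appears only on request.
from dataclasses import dataclass, field

@dataclass
class DecisionExplanation:
    outcome: str
    summary: str                                   # plain-language, shown first
    factors: dict[str, float] = field(default_factory=dict)

    def disclose(self, expanded: bool = False) -> str:
        basic = f"{self.outcome}: {self.summary}"
        if not expanded:
            return basic
        detail = ", ".join(f"{k} ({v:+.2f})" for k, v in self.factors.items())
        return f"{basic}. Key factors: {detail}"

exp = DecisionExplanation(
    outcome="Application declined",
    summary="income relative to the requested limit was the main factor",
    factors={"income": -0.42, "credit history length": -0.18, "debt ratio": -0.11},
)
print(exp.disclose())               # basic rationale first
print(exp.disclose(expanded=True))  # fuller detail when the user asks
```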
Human-Centred AI Framework
- Diverse expertise involvement
- End user participation throughout development
- Understandable decision-making
- Continuous human oversight
Effective Feedback Mechanisms
- Easy-to-access reporting tools for flagging bias
- Clear channels for disputing automated decisions
- Iterative design incorporating fairness feedback
- Regular testing with diverse demographic groups
Why “Machine Learning” Beats “AI” Terminology
The Problem with “AI”
- 40% of Europe’s 2,830 “AI” startups show no evidence of actually using AI
- Creates unrealistic expectations of human-like reasoning
- Leads to over-trust in automated systems
- Makes bias seem inevitable
Benefits of “Machine Learning”
- Technically accurate description
- No anthropomorphic implications
- Clear that humans designed algorithms
- Emphasises data and design choices
- Systems can be audited and improved
Your Panellist Backgrounds
Derek Mobley
Lead plaintiff in Mobley v. Workday, the most significant AI hiring-bias lawsuit to date. He applied to 100+ jobs through Workday-powered systems over seven years and was rejected every time, allegedly due to biased screening. The case achieved nationwide collective action certification in May 2025.
Kenn Jordan
40+ years evaluating emerging technologies. Founded Elmspark Consultants (2005). Converts technologies into user-friendly applications. Clients include Diageo, Levi’s, PlayStation, Disney.
David Paskin
“Torah Tech Guy” combining spiritual leadership with technology expertise. Teaches clergy nationwide on technology integration. Hosts “The Rundown” exploring technology’s world impact.
Critical Statistics for Discussion
- 200+ million people affected by biased healthcare algorithms
- Up to 34% facial recognition error rates on darker-skinned women vs 0.8% for lighter-skinned men
- 77% higher likelihood of Black defendants being flagged high-risk by COMPAS
- Only 22% of AI professionals worldwide are women
- 40% of “AI” startups don’t actually use AI technology
- €35 million or 7% of global revenue maximum EU AI Act penalties
- 84% reduction in healthcare algorithm bias after intervention
- 45% higher likelihood of women leaving tech within a year compared with men
Forward-Looking Discussion Points
The convergence of technical capability and regulatory pressure creates a unique moment for addressing AI bias. Your HCI expertise positions you to discuss how interface design can make algorithmic bias visible and actionable for everyday users.
Key Questions to Explore
- How do 40 years of HCI evolution inform current bias challenges?
- What specific design patterns help users understand AI decisions?
- How do Caribbean perspectives on technology development differ from Silicon Valley approaches?
- Does democratisation ultimately help or harm bias mitigation efforts?
- What role should regulation play versus industry self-governance?