Question the Algorithm

Why "data-driven" doesn't mean unbiased, and what you can do about it

Your loan application was denied by an algorithm. The credit card company can't tell you exactly why. Should you trust that decision?

We've been taught to trust numbers and data. When an algorithm makes a decision, whether it's approving your loan, predicting crime risk, or deciding what news you see, it feels objective. Scientific. Unbiased.

But here's the problem: algorithms aren't neutral. They're designed by people, trained on imperfect data, and optimized for specific goals that may have nothing to do with fairness or accuracy. The technical complexity just makes these human choices invisible.

This matters because algorithmic systems now affect major parts of your life. And most people have no idea how they actually work.

Where the Bias Hides


When people say a decision is "data-driven," they make it sound automatic, like the computer just crunched numbers and spit out truth. But look closer at what actually happens:

1. Data Collection. Human choice: What gets measured? Who's included? What's left out?
2. Feature Selection. Human choice: Which variables matter? How are they weighted?
3. Model Design. Human choice: What counts as "success"? What gets optimized for?
4. Output. Looks objective, but reflects all the assumptions baked in above.

At every single stage, people are making judgment calls. The algorithm just executes those judgments at scale, making them look like facts.
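To make those judgment calls concrete, here's a minimal sketch of a toy scoring pipeline. Every number, feature name, and definition of "success" in it is invented for illustration; the point is that each stage from the list above shows up as an ordinary line of code that a person had to write.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1. Data collection (human choice): we only see people who applied through
#    one channel; everyone else is invisible to the model. All numbers here
#    are fabricated for illustration.
applicants = rng.normal(size=(1000, 3))   # columns: income, zip_density, age
repaid = (applicants[:, 0] + rng.normal(size=1000) > 0).astype(float)

# 2. Feature selection (human choice): keep income and zip density, drop age.
#    Zip density may quietly proxy for race or wealth.
X = applicants[:, :2]

# 3. Model design (human choice): define "success" as repayment and fit the
#    simplest linear score that predicts it.
weights, *_ = np.linalg.lstsq(X, repaid, rcond=None)

# 4. Output: a single tidy number per applicant that looks objective but
#    inherits every choice above.
scores = X @ weights
print("first five scores:", np.round(scores[:5], 2))
```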

Real-World Impact


Criminal Justice Algorithms

ProPublica Investigation (2016): Analyzed COMPAS, a risk-assessment tool used in courts nationwide, and found that Black defendants were nearly twice as likely as white defendants to be incorrectly labeled high risk.
The errors ran the other way for white defendants: those labeled low risk went on to reoffend far more often than Black defendants given the same label.
Only 20% of people predicted to commit violent crimes actually did.
Source: ProPublica, "Machine Bias" (2016)

Credit Scoring Systems

Stanford/U Chicago Study (2021): Credit scores are 5-10% less accurate for minority and low-income borrowers due to "thin" credit files.
2021 Data: Median credit score for Black consumers: 639. For white consumers: 730 (nearly 100 points higher).
Urban Institute (2024): Black and Brown borrowers are more than twice as likely to be denied loans as white borrowers.
Sources: Stanford HAI, National Consumer Law Center, Urban Institute

Social Media Algorithms

YouTube: The recommendation algorithm drives 700 million hours of daily watch time, about 70% of all viewing on the platform.
Twitter/X Study (2024): The algorithm, optimized for engagement rather than accuracy, amplifies emotionally charged, hostile content.
TikTok Research (2022): Watching just 20 videos that question election results is enough to retrain the algorithm to push more conspiracy theories and extremist content.
Sources: Mozilla, EPJ Data Science, Tech Policy Press
Credit Score Disparity by Race
Median credit scores in the United States (2021 data): white consumers 730, Black consumers 639. That 91-point gap affects loan approvals, interest rates, and financial opportunities.

COMPAS Algorithm False Positive Rates
Share of defendants who did not reoffend but were labeled high-risk (ProPublica, 2016): white defendants 23.5%, Black defendants 44.9%. Black defendants were nearly twice as likely to be mislabeled as high-risk.

Why This Happens

Take the criminal justice example. COMPAS was trained on historical arrest data that reflects decades of racially biased policing. More arrests in Black neighborhoods don't necessarily mean more crime; they often mean more police presence. But the algorithm learned to treat arrest patterns as objective truth.
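A small simulation makes the mechanism visible. These are my own toy numbers, not ProPublica's data: give two groups identical offense rates but uneven police coverage, train on arrests, and the heavily policed group comes out looking "higher risk."

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Two groups with the SAME underlying offense rate (an assumption of
# this sketch, chosen to isolate the effect of policing).
group = rng.integers(0, 2, n)       # 0 = group A, 1 = group B
offense = rng.random(n) < 0.10      # 10% offense rate in both groups

# Uneven enforcement: an offense in group B is twice as likely to lead
# to an arrest, because that's where the patrols are.
arrest_prob = np.where(group == 1, 0.60, 0.30)
arrested = offense & (rng.random(n) < arrest_prob)

# A "risk model" trained on arrest records effectively learns each
# group's arrest rate -- it cannot tell crime apart from policing.
for g in (0, 1):
    print(f"group {'AB'[g]}: offense rate {offense[group == g].mean():.3f}, "
          f"learned 'risk' {arrested[group == g].mean():.3f}")
```

Run it and group B's learned "risk" is roughly double group A's, even though both groups behave identically by construction.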

Or consider credit scoring: minority borrowers often have less data in their credit files because they use alternative financial services or have shorter credit histories. With less data, the algorithm makes noisier predictions, but those predictions still determine who gets loans.
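The "thin file" problem is just statistics: estimate the same quantity from fewer data points and the estimate swings more. Here's a quick sketch with an invented repayment rate and an invented approval cutoff, purely to show the shape of the effect:

```python
import numpy as np

rng = np.random.default_rng(7)
true_repay_rate = 0.90   # assume both borrowers are equally reliable
trials = 10_000

# Estimate each borrower's reliability from their credit history.
thick_file = rng.binomial(50, true_repay_rate, trials) / 50  # 50 past accounts
thin_file = rng.binomial(5, true_repay_rate, trials) / 5     # 5 past accounts

print(f"thick-file score spread (std): {thick_file.std():.3f}")
print(f"thin-file  score spread (std): {thin_file.std():.3f}")

# The thin-file estimate is several times noisier, so a hard approval
# cutoff (say, an estimated rate of at least 0.85) rejects far more
# applicants who are, in truth, just as reliable.
print(f"thick-file rejected below 0.85: {(thick_file < 0.85).mean():.1%}")
print(f"thin-file  rejected below 0.85: {(thin_file < 0.85).mean():.1%}")
```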

Social media is different but equally problematic. Platforms optimize for engagement: clicks, shares, time spent. Research shows this consistently amplifies emotional, divisive, and low-quality content because that's what keeps people scrolling. The algorithm isn't trying to inform you; it's trying to keep you on the platform.
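You can see the incentive in a few lines. Below is a toy feed ranker, with all posts and weights invented for illustration, that scores content the way an engagement-driven platform might. Notice that nothing in the objective rewards being accurate, so accuracy cannot influence the ranking.

```python
# Toy feed ranker with a hypothetical engagement objective.
posts = [
    {"title": "Careful policy explainer", "accuracy": 0.95, "outrage": 0.1},
    {"title": "Nuanced research summary", "accuracy": 0.90, "outrage": 0.2},
    {"title": "Rage-bait hot take",       "accuracy": 0.30, "outrage": 0.9},
    {"title": "Misleading viral claim",   "accuracy": 0.10, "outrage": 0.8},
]

def engagement_score(post):
    # Outrage drives clicks, shares, and replies; accuracy isn't in the
    # objective at all, so the ranking never sees it.
    return 2.0 * post["outrage"]

for post in sorted(posts, key=engagement_score, reverse=True):
    print(f'{engagement_score(post):.2f}  {post["title"]}')
```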

None of this is a bug. It's what happens when you treat "data-driven" as a synonym for "objective."

What You Can Do

The goal isn't to reject technology or go back to "gut feelings." It's to stop treating algorithms like they're infallible and start treating them like what they are: tools built by humans with specific goals and limitations.

Ask Questions

Next time you encounter an algorithmic decision, ask: What data was this trained on? What is it optimizing for? Who built it, and what assumptions did they make?


Demand Transparency

Support laws requiring companies and governments to disclose how their algorithmic systems work, especially for high-stakes decisions like loans, sentencing, and content moderation.


Build Data Literacy

Learn basic concepts: correlation vs. causation, what training data means, how models can be biased. You don't need to code; you just need to understand the fundamentals.
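For instance, "correlation is not causation" becomes obvious once you simulate a confounder. In this sketch (fabricated numbers), ice cream sales and drownings correlate strongly even though neither causes the other; hot weather drives both.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 365

# The confounder: daily temperature drives BOTH variables.
temperature = rng.normal(20, 8, n)
ice_cream_sales = 5 * temperature + rng.normal(0, 10, n)
drownings = 0.3 * temperature + rng.normal(0, 2, n)

# Strong correlation, zero causal link between the two.
r = np.corrcoef(ice_cream_sales, drownings)[0, 1]
print(f"correlation between ice cream sales and drownings: {r:.2f}")
```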


Stay Skeptical

When someone says a system is "data-driven" or "objective," treat it as a claim that needs evidence, not as proof of neutrality. Check the sources and question the metrics.

The Bottom Line

Algorithms are powerful tools, but they're not truth machines. They're shaped by human choices about data, design, and goals, and those choices often stay hidden behind technical complexity.

Next time someone tells you a decision is "data-driven," ask them: Whose data? Measuring what? Optimized for which goal?