November 26, 2025

How to stop AI decisions from repeating human biases


AI is rapidly becoming a default advisor in everyday decision-making, often delivering answers that sound authoritative even when the underlying analysis is shaky. As more teams rely on these systems, the gap between what AI appears to know and what it can responsibly recommend is becoming a real risk — especially when decisions carry social or operational consequences.

How simple data questions become biased recommendations

For years, I’ve volunteered some of my time to analyzing crime statistics and law enforcement data in Seattle and sharing findings with local leaders. One thing that has always fascinated me is how an innocent, dispassionate analysis can still reinforce biases and exacerbate societal problems. 

Looking at crime rates by district, for example, shows which area has the highest rate. Nothing wrong with that. The issue emerges when that data leads to reallocating police resources from the lowest-crime district to the highest or changing enforcement emphasis in the higher-crime district. The data may be solid, but the obvious decision can have unexpected consequences.
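As a toy illustration of how fast this happens (district names and numbers below are made up, not real Seattle figures), the "obvious" data-driven recommendation is a one-line argmax over the data, while every consequence the decision should weigh lives outside the dataset:

```python
# Hypothetical incident counts and populations -- illustration only.
districts = {
    "North":   {"incidents": 120, "population": 60_000},
    "Central": {"incidents": 340, "population": 45_000},
    "South":   {"incidents": 200, "population": 55_000},
}

# Per-capita crime rate per 1,000 residents.
rates = {
    name: d["incidents"] / d["population"] * 1000
    for name, d in districts.items()
}

# The "obvious" decision is a one-liner...
target = max(rates, key=rates.get)
print(f"Allocate more resources to {target}")
# ...but displacement of crime, over-policing and community tension
# are nowhere in the dict the argmax was computed over.
```

The analysis itself is sound; the problem is that the recommendation mechanically follows from whatever columns happen to be in the data.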

Dig deeper: How to fight bias in your AI models

Now that we're living in the age of AI adoption, I was curious how AI would handle similar questions. I asked an AI platform, “What district should the Seattle Police Department allocate more resources to?” After I skimmed past the standard preamble, it answered that Belltown had the highest crime rate, along with a significant amount of drug abuse and homelessness.

Taken at face value, then, if you let AI make the decision, the conclusion is to allocate more police resources to Belltown. I then asked the same platform what biases or problems that decision might exacerbate. It listed the criminalization of homelessness, over-policing of minorities, displacement of crime, a focus on policing rather than social services, increased police-community tensions, negative impact on local businesses, a focus on quality-of-life offenses, potential for increased use of force and the exacerbation of gentrification.

Finally, I asked whether police resources in Belltown should increase given those consequences. The long answer amounted to “it depends, but probably not — a hybrid approach would work better.”

The data ethics principles every AI user needs to apply

Many of the problems analysts face when forming conclusions and recommendations also apply to AI. At a macro level, there are two opposing approaches to decision-making: gut decisions and data-driven decisions.

With gut decisions, we decide what to do based on our lived experience, feelings, perceptions and assumptions. They allow us to make quick decisions, but they aren’t ideal for important ones because counterintuitive things happen all the time in this universe.

If we let it, AI will sit at the other end of that spectrum: making decisions based purely on data, where we do whatever the data tell us to do. Before the recent expansion of AI, this wasn't much of an issue because analysts knew not to follow the data mindlessly. With AI, however, people ask what they should do and sometimes follow the answer, because AI's data-driven answers appear untainted by opinion.

Dig deeper: How bias in AI can damage marketing data and what you can do about it

There is an entire discipline of data ethics that AI users need to understand in order to adopt AI properly. Here are the top four principles to keep in mind while using AI.

  • Accountability: Even though you’ve used AI to arrive at a decision, you are the person accountable for the outcome.
  • Fairness: AI can recite principles like bias and discrimination, but it cannot reason about them abstractly or apply them reliably.
  • Security: There are many AI platforms, and the levels of security vary, so be cautious about the data you provide them.
  • Confidence: AI platforms answer questions confidently, but that confidence often doesn't survive even light scrutiny.

With this in mind, you may wonder how to make decisions if you can’t rely on gut decisions or AI. The answer is data-driven decision-making.

How data-driven decision-making differs from gut instinct and AI automation

Blackjack illustrates this clearly. Every casino has a gift shop where you can buy a card that tells you what to do in every permutation of the dealer’s up card, your cards and the table rules. You can take that card to the table and use it in front of the dealer and pit boss. Do that and you’re in AI territory — letting data make the decisions.

It’s possible to make better decisions than the mathematical strategy if you have information it didn’t have. For example, if the dealer somehow enabled you to see their hole card or the next card in the deck, you might override the strategy card. If you have 14 and the strategy card says you should hit, but you know the next card is a 10, you’d stand instead.

Another increasingly popular approach is to pay attention to the revealed cards on the table to understand what remains in the deck. If the strategy card tells you to hit a 16 but you know there are very few small cards left, you may stand. Or, if the deck is rich in aces and 10s, you may adjust your bets because the chances of getting a blackjack are higher.
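The pattern in the last three paragraphs can be sketched in code. The strategy table and the ten-density threshold below are deliberately simplified stand-ins, not a real basic-strategy chart or counting system; the point is the shape of the logic: follow the data strategy by default, override it when you hold information the strategy never saw.

```python
def strategy_card(player_total: int, dealer_up: int) -> str:
    """Simplified stand-in for a gift-shop strategy card (not real basic strategy)."""
    if player_total >= 17:
        return "stand"
    if player_total <= 11:
        return "hit"
    # Stiff hands (12-16): stand against a weak dealer up card, otherwise hit.
    return "stand" if 2 <= dealer_up <= 6 else "hit"

def decide(player_total: int, dealer_up: int, seen_cards: list[int]) -> str:
    """Data-driven decision: follow the card by default, make exceptions when warranted."""
    base = strategy_card(player_total, dealer_up)
    # Crude estimate of how ten-rich the remaining single deck is.
    tens_left = 16 - sum(1 for c in seen_cards if c == 10)
    cards_left = 52 - len(seen_cards)
    ten_density = tens_left / cards_left if cards_left else 0.0
    # Exception: with a stiff hand and a ten-rich deck, hitting is
    # likely to bust us, so stand even though the card says hit.
    if base == "hit" and 12 <= player_total <= 16 and ten_density > 0.40:
        return "stand"
    return base
```

With a fresh deck, `decide(16, 10, [])` simply echoes the strategy card; after twenty small cards have been dealt, the same hand triggers the override and stands.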

Do this in front of the pit boss and you'll likely be invited to stop playing. It isn't illegal, but it tilts the odds too far in the player's favor for the casino's liking. This is the essence of data-driven decision-making: using the data strategy as the foundation but making exceptions when warranted.

Dig deeper: The hidden AI risk that could break your brand

Using AI without letting it override your judgment

AI’s potential is nearly limitless, but like any tool, it works best when used with intention. No single system should drive every decision. Just as you wouldn’t build a house with one tool, AI should sit alongside other methods, supported by human judgment and context. 

Using the right tool for the right job reduces the risk of unintentional bias and helps prevent minor problems from becoming major. Applied in this way, AI can deliver stronger and more reliable outcomes.




