
Frontiers of Algorithmic Discrimination

Last Updated on October 5, 2023

Dirt in Your Digital Footprint

Finding dirt in the digital footprint of individuals is quickly becoming the new standard of risk management. It’s happening.

Your digital footprint can potentially be used by any industry or authority. Historically, a bankruptcy in your credit history would limit your access to credit; riskier customers are charged exorbitant interest rates. It’s perfectly legal: you pay a higher price to justify the risk the bank is taking on you.

Insurance companies can get access to your health data and driving patterns and price their policies accordingly. Social media profiles are analyzed by HR departments, and anything questionable could make you unemployable. Sexting as a teenager? No job interview for you – 20 years later. Anyone can Google your name and pass final judgment on your character based on what they see – even if it isn’t true. And unless you live in Europe, Google is not obligated to remove reputation-damaging posts.

We have no right to be forgotten, and no chance to be forgiven.

Now let’s expand this notion to every possible human endeavor.

  • Could data about a teen’s cyberbullying incident be used to price their future loans and insurance policies? Why not. Their digital footprint looks dirty – they should pay extra for their “bad character”. It’s the law of economics.
  • Being flagged for extra airport security?
  • Having their relationship options limited on dating apps?
  • Being denied access to certain professions?

There is no limit to algorithmic discrimination.

Algorithmic Bias

Behavioral profiling already allows advertisers to price products differently for different individuals. Algorithmic discrimination can be intentional or unintentional: widespread algorithmic bias creates unfairness in the marketplace, the job market, education, healthcare, politics, and the justice system.

It is against the law to charge you a higher rate on a loan or deny you a job because you are black. But what if a model flags you as risky because your favorite color is black, or because of some other random variable that cannot be traced back to the protected categories of race, gender, and sexual identity?

There could be millions of such variables lurking in our digital footprints, and it’s not illegal to use them for risk mitigation. We might think we live a clean and righteous life, but machine learning would find our hidden sins and deem us unworthy of a job, college admission, mortgage, or a date.
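To see how this happens mechanically, here is a minimal sketch, assuming entirely synthetic data: a classifier is never shown the protected attribute, yet picks it up through an innocent-looking correlated feature. Every variable and number below is invented for illustration.

```python
# Toy illustration: the model never sees the protected attribute,
# yet learns it through a correlated "innocent" proxy feature.
# All data is synthetic; nothing here comes from a real system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

protected = rng.integers(0, 2, n)            # hidden group membership (never given to the model)
proxy = protected + rng.normal(0, 0.3, n)    # e.g. zip code or "favorite color" correlated with the group
noise = rng.normal(0, 1.0, n)                # a genuinely irrelevant feature

# An unfair world: the outcome depends on the hidden group
y = (protected + rng.normal(0, 0.5, n) > 0.5).astype(int)

X = np.column_stack([proxy, noise])          # the protected attribute is deliberately excluded
model = LogisticRegression().fit(X, y)

# The proxy carries nearly all the weight: "blind" does not mean "fair"
print(dict(zip(["proxy", "noise"], model.coef_[0].round(2))))
```

Excluding a protected attribute from the inputs does not remove it from the predictions – the model simply reconstructs the forbidden signal from whatever correlates with it.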

A proprietary “black box” of AI is not accountable to anyone.

The book Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, by Cathy O’Neil, features many stories that show the dark side of big data. The decisions that affect our lives – where we go to school, whether we can get a job or a loan, how much we pay for health insurance – are being made not by humans but by machines. And when the algorithms are wrong, lives are ruined – and there is no way to defend oneself.

And nothing is private anymore. Facts one would be ashamed to share with a therapist can be algorithmically revealed by social media history.

A 2013 study of the predictive power of Facebook Likes (Kosinski, Stillwell, and Graepel, PNAS) found that Likes alone are enough to accurately predict extremely sensitive personal information that users might prefer to keep to themselves: sexual orientation, ethnicity, religious and political views, personality traits, intelligence, happiness, addiction, parental separation, age, and gender.
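For a sense of the mechanics, here is a minimal sketch of the kind of pipeline such studies describe – a user-by-Like matrix reduced to a few dimensions, then fed to a linear classifier. The data below is randomly generated, not drawn from any real study.

```python
# Minimal sketch of a Likes-based prediction pipeline:
# user-by-Like matrix -> dimensionality reduction -> linear classifier.
# The "users", "Likes", and hidden trait are all randomly generated.
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_users, n_likes = 2_000, 500

likes = (rng.random((n_users, n_likes)) < 0.05).astype(float)   # sparse 0/1 Like matrix
# Pretend a private trait subtly shapes which pages a user Likes
trait = (likes[:, :20].sum(axis=1) + rng.normal(0, 1, n_users) > 1).astype(int)

components = TruncatedSVD(n_components=50, random_state=0).fit_transform(likes)
X_train, X_test, y_train, y_test = train_test_split(components, trait, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"accuracy on held-out users: {clf.score(X_test, y_test):.2f}")
```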

The researchers warned that it would be highly unethical to use such deep psychological insights – obtained so easily, from millions of users, without their knowledge or consent – for any form of influence on users’ behavior.

Yet, such influence is the business model of social media.

Unlike your doctor or lawyer, social media companies are not limited by ethical codes of conduct.

  • Using data about childhood trauma and abuse to sell antidepressants with dangerous side effects? By all means.
  • Using data about hidden sexual dysfunction to sell Viagra and direct users to porn sites? Sure.
  • Using deep psychological insecurities dating back to middle school to sell overpriced brands? Why not.
  • Using body image issues to sell expensive diets and plastic surgeries? Be my guest.

How about using a digital history of suicidal ideation and self-harm to exploit a depressed teen instead of saving their life? Well, it depends – the algorithm will direct the user to the solution that PAYS MORE – to the owners of the platform. It’s efficient. It is optimized for the outcome – which is profit, not human wellbeing.

Is compromising human happiness legal? Unfortunately, yes. Is it ethical? Far from it.

Genetic Discrimination

Your digital footprint is something you can at least partially control. Not so your DNA.

Can you envision the limitless potential for genetic discrimination if your insurance company knew exactly which diseases you are likely to develop over the course of your life – and priced your policy accordingly, or denied it altogether? Life insurance refused to those found genetically likely to die young?

There is some legislation to prevent this from happening. In the US, the HIPAA Privacy Rule protects individuals’ medical records and other individually identifiable health information, and the Genetic Information Nondiscrimination Act (GINA) bars health insurers and employers from using genetic data – though notably, GINA does not cover life, disability, or long-term-care insurance.

And genetic discrimination could be a highly profitable violation of justice and fairness. Those with a compelling business interest might find a loophole to use DNA data anyway – or coerce customers into sharing it, the way we already share our Fitbit records.

I personally refuse to obtain DNA information for myself or my children, even if it could potentially be useful for our health care. If genetic information falls into the hands of data brokers – who already have our entire digital footprint – it could become a matter of life and death.

It gets creepier. One does not even need medical records or DNA data to predict health outcomes with extreme precision. 

Our Twitter feed is our health data. 

A study by researchers at the University of Pennsylvania found that Twitter language predicts county-level heart disease mortality better than a model combining traditional risk factors (smoking, obesity, diabetes, and so on) – largely by tracking the frequency of “negative emotional language” in the feeds.
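To make this concrete, here is a toy sketch of the core feature such a study relies on: the rate of negative-emotion words in a feed. The word list, tweets, and function below are invented for illustration; real research uses validated psychological lexicons.

```python
# Crude sketch: measure the rate of negative-emotion words in a feed.
# The lexicon and sample tweets are invented; real studies use
# validated dictionaries rather than this toy word list.
import re

NEGATIVE_WORDS = {"hate", "angry", "tired", "awful", "sick", "worst"}  # toy lexicon

def negative_emotion_rate(tweets: list[str]) -> float:
    """Fraction of tokens that belong to the negative-emotion lexicon."""
    tokens = [t for tweet in tweets for t in re.findall(r"[a-z']+", tweet.lower())]
    if not tokens:
        return 0.0
    return sum(t in NEGATIVE_WORDS for t in tokens) / len(tokens)

feed = ["I hate mondays, so tired", "worst traffic ever", "lovely sunset tonight"]
print(f"negative-emotion rate: {negative_emotion_rate(feed):.3f}")
```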

Last time I checked, social media was not covered by medical privacy laws. 

Which means you could get the following letter in the mail from your insurance company:

“Dear Mr. X, due to the frequency of certain 4-letter words in your Twitter feed, our system has identified you as high risk for an early heart attack”.

In reality, they would not disclose any of this. They would just say something nebulous like: “Due to a change in our policies, your insurance premium is now higher.” Without algorithmic transparency, the insurer is not obligated to disclose the reasons behind its decisions. Sorry – proprietary information.

Natural language processing algorithms can comb through your social media and diagnose physical and mental health problems. A data broker could combine this information with your location history (you have lunch at McDonald’s every day) and your Fitbit data (you are not moving much) and assemble an excellent picture of your health risk for sale to third parties.
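A hypothetical sketch of such a fusion – every field, weight, and threshold below is made up for illustration:

```python
# Hypothetical sketch of how a data broker might fuse unrelated signals
# into a single resellable health-risk score. All weights, fields, and
# thresholds are invented assumptions, not any real scoring model.
from dataclasses import dataclass

@dataclass
class Profile:
    negative_language_rate: float   # from social media text analysis
    fast_food_visits_per_week: int  # from location history
    daily_steps: int                # from a fitness tracker

def health_risk_score(p: Profile) -> float:
    """Weighted sum rescaled to 0..1; the weights are arbitrary."""
    score = (
        4.0 * p.negative_language_rate
        + 0.05 * p.fast_food_visits_per_week
        + 0.8 * max(0.0, 1 - p.daily_steps / 8_000)
    )
    return min(score / 2.0, 1.0)

# A score an insurer never has to explain
print(f"{health_risk_score(Profile(0.04, 7, 2_500)):.2f}")
```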

This data can and will be used against you.

Hostage to Data

Being held hostage to your data is a modern form of slavery. In ancient times, if you were enslaved, you could still maintain your inner freedom: one could combine outward compliance with inward defiance. Enslavement by big data is different. It knows your every move and every thought.

The user’s digital reputation becomes a weapon of control. Whoever owns your data owns you.

The famous behavioral psychologist B.F. Skinner, on whose work addictive technology is largely based, labeled individual freedom an anomaly in his controversial 1971 book Beyond Freedom and Dignity. Beyond indeed. The notion of programmable human behavior that provoked shock and indignation just a few decades ago is now a reality, made possible by machine intelligence and the incentives to use it for mind control.

Every outcome for every member of society becomes predictable and programmable – at the cost of individual free will, eliminated as a dangerous anomaly by AI weapons of social engineering. That’s our present and our future. The tech industry programs our behavior to make money, but there is another entity extremely interested in algorithmic control – the government.

Government + Big Data = End of Democracy.

Enemy of the State

China’s social credit score is a reality for 1.4 billion Chinese people – and a preview of the dystopian future that awaits the rest of us. Social credit is an algorithmic tool for population control: the authorities can flip a switch and magically generate the desired behavior in millions of citizens.

Of course, all in the name of maintaining law and order.

How does it work? In short, it is a mechanism for blacklisting individual citizens based on their “bad” online and real-world behavior. You did not pick up after your dog, smoked in the wrong place, put your recycling in the garbage, attended a religious gathering, or – wait for it – wrote a politically incorrect social media post.

Your social credit score drops. Your friends notice and abandon you, fearing that their own scores will suffer by association with an “enemy of the state”. Your score plummets further, going into a death spiral. Your options for freedom and dignity disappear:

  • You cannot buy a plane or a train ticket
  • Your access to credit is limited
  • You cannot rent an apartment
  • You are denied employment
  • You cannot book a hotel
  • Your children cannot attend private schools and are banned from universities
  • You are publicly shamed, and your blacklist status is displayed for all to see on WeChat, a messaging app used by most Chinese citizens.

People deemed “untrustworthy” by the government face severe economic and legal punishments; their lives become a living hell.
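As a toy illustration of the scoring loop described above – every point value and threshold below is invented:

```python
# Toy model of the blacklist mechanism described above: infractions
# subtract points, and crossing a threshold triggers cascading
# restrictions. All numbers are invented for illustration.
INFRACTION_COSTS = {
    "jaywalking": 5,
    "late_bill": 10,
    "critical_social_media_post": 50,
}
BLACKLIST_THRESHOLD = 600

def apply_infractions(score: int, infractions: list[str]) -> int:
    for infraction in infractions:
        score -= INFRACTION_COSTS.get(infraction, 0)
    return score

score = apply_infractions(700, ["late_bill"] + ["critical_social_media_post"] * 2)
if score < BLACKLIST_THRESHOLD:
    print(f"score={score}: travel, credit, housing, and employment restricted")
```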

On the other hand, praising the government on social media can get you on a whitelist with access to privileges and discounts. So can reporting the misdeeds of your fellow citizens – creating a culture of fear. The same social technique sent millions of Soviet citizens to perish in the Gulag labor camps of Stalin’s Soviet Union: neighbors reporting on neighbors, wives reporting on husbands, children reporting on parents.

The social credit system is made possible by massive surveillance and AI, implemented by the government in collaboration with big business – banks, online marketplaces, social media platforms – all obligated to report data on individual users for fear of economic sanctions. Facial recognition technology makes it impossible for anyone to hide.

Reward and punishment. Carrot and stick. Dehumanizing people to live in fear and obedience to the government is a totalitarian dream. George Orwell’s dystopian novel 1984 becomes a reality with Big Data supplying incriminating facts to the police state that Hitler, Stalin, and Mao could only dream of.

Big Brother is always watching. Better keep your digital footprint clean.

