Sometimes, it seems as though the refrain of the 21st century is that algorithms know too much about us, and we all need to manage our data more carefully.
Some people find this easier to take seriously than others. Maybe you don’t mind being shown certain ads or content recommendations, even if they are eerily similar to something that came up in a recent conversation. The ‘Big Brother’ vibe can be shrugged off when the spying seems mostly harmless.
But what if you were denied credit due to algorithmic bias? Entrepreneurs may be able to turn to private money lenders for fast access to funding, but individuals often have no such fallback. In the realms of finance, employment, and even law enforcement, one person’s outcomes may be vastly different from another’s, all because a machine decided it had to be so.
Algorithms have quickly become influential beyond the average person’s comprehension, and it’s time we took steps to change that.
The mundane versus the pervasive
Most of us might not like the idea of anonymous third-party entities effectively spying on us and then feeding that information into a machine.
Yet on the internet, that’s simply business as usual. And even as we’re becoming more aware of such privacy issues, we’re also coming to accept them as just the way things are done.
It’s one thing to have targeted ads show up in your feed, or to receive friend, music, or video recommendations. That can save time and effort on mundane queries, or lead to serendipitous discoveries and the occasional laugh.
But we’ve also seen considerable backlash towards the likes of Facebook and YouTube for enabling the spread of misinformation and engendering societal divisions. This algorithm-driven side effect is pervasive, insidious, and not at all amusing.
Worse, nobody really expects Facebook or Google (YouTube’s parent company) to stop collecting data about their users, which is the underlying mechanism that enables such adverse, far-reaching effects.
We know that the potential for misuse is there. But we like to think that our data will ultimately serve legitimate purposes, that abuses will be caught and corrected, and that better regulations will be implemented over time.
Issues of hidden bias
Yet we’ve no real reason to believe that such assumptions have any basis in truth.
A considerable amount of investment goes into developing an effective algorithm. This includes the developers’ sweat equity, the time it takes to train the algorithm, and the company’s financial commitment to paying people with such a rare and valuable skill set.
This creates an incentive to keep the workings of the algorithm secret while also putting it in charge of important decisions.
Thus, algorithms are shrouded in opacity, often likened to a black box. They may be mechanical, but they are designed by humans and trained using human data, making them prone to bias yet immune to critique.
And they often influence decisions far more consequential than what shows up in your email or social media feed.
For instance, a ProPublica investigation exposed racial bias in a risk-assessment algorithm widely used in criminal sentencing decisions. Similar biases work against individuals with no credit scores, women or minorities seeking employment, and those deemed a high medical risk.
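The mechanism behind such outcomes can be sketched in a few lines. In this toy illustration (all names and data are hypothetical), a model is “trained” on historically biased lending decisions. It never sees a protected attribute directly, yet it reproduces the bias through a correlated feature like zip code:

```python
# Toy illustration: a model trained on biased historical decisions
# reproduces that bias, even without seeing a protected attribute.
# All data and names here are hypothetical.

# Historical loan records: (zip_code, income_in_thousands, approved).
# Suppose zip code 2 correlates with a group historically denied credit.
history = [
    (1, 60, True), (1, 40, True), (1, 55, True),
    (2, 60, False), (2, 70, False), (2, 55, False),
]

def train(records):
    """'Learn' the majority outcome per zip code -- a stand-in for
    what a real model infers from features correlated with group."""
    by_zip = {}
    for zip_code, _, approved in records:
        by_zip.setdefault(zip_code, []).append(approved)
    return {z: sum(v) > len(v) / 2 for z, v in by_zip.items()}

model = train(history)

# Two applicants with identical incomes get different outcomes,
# purely because of where they live.
print(model[1])  # True  -- approved
print(model[2])  # False -- denied
```

A real lending model is vastly more complex, but the failure mode is the same: if the training data encodes past discrimination, the model faithfully carries it forward, and the black box makes that hard to detect from the outside.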
The companies using them, and the developers in charge, find it all too easy to hide behind nebulous technicalities and vague commitments to improve. They keep the algorithms out of sight while depriving countless individuals of agency in potentially life-changing matters.
Calling for algorithmic literacy
The problem with algorithms today is actually a universal one. We’ve given an entity too much power without any means of holding it accountable.
Even without the black box problem, how many people who aren’t professional developers actually know how an algorithm works or how to make one, let alone fix it? The tech-savvy are few, and policymakers lack the knowledge to penetrate the layers of arcana surrounding these systems.
Those in control of the algorithm can act with impunity. The rest of us are at the mercy of their willingness to be transparent, compliant, and put the good of society and their users ahead of their self-interest.
Everyone can take steps to mitigate the impact of algorithms on their lives. On an individual level, we can do a better job of managing our digital footprints. That won’t hide your data completely, but combined with critical thinking, it can reduce the chances that machine bias shapes your decisions unnoticed.
But the real solution is to update our definition of literacy to include algorithms. Call for this to become a mandatory part of education and train a new generation of laypersons and lawmakers to know exactly what they’re dealing with.
Until that happens, our society will remain divided into the elite who control algorithms and everyone else who’s controlled by them.