
  • Written by Anjana Susarla, Associate Professor of Information Systems, Michigan State University

Every aspect of life can be guided by artificial intelligence algorithms – from choosing what route to take for your morning commute, to deciding whom to take on a date, to complex legal and judicial matters such as predictive policing.

Big tech companies like Google and Facebook use AI to obtain insights on their gargantuan trove of detailed customer data. This allows them to monetize users’ collective preferences through practices such as micro-targeting, a strategy used by advertisers to narrowly target specific sets of users.

In parallel, many people now trust platforms and algorithms more than their own governments and civil society. An October 2018 study suggested that people demonstrate “algorithm appreciation[1],” to the extent that they would rely on advice more when they think it is from an algorithm than from a human.

In the past, technology experts have worried about a “digital divide”[2] between those who could access computers and the internet and those who could not. Households with less access to digital technologies are at a disadvantage in their ability to earn money and accumulate skills[3].

But, as digital devices proliferate, the divide is no longer just about access. How do people deal with information overload and the plethora of algorithmic decisions that permeate every aspect of their lives?

Savvier users are navigating away from devices and becoming aware of how algorithms affect their lives. Meanwhile, consumers who have less information are relying even more on algorithms to guide their decisions.

Should you stay connected – or unplug? pryzmat/shutterstock.com[4]

The secret sauce behind artificial intelligence

The main reason for the new digital divide, in my opinion as someone who studies information systems, is that so few people understand how algorithms work[5]. For a majority of users, algorithms are seen as a black box.

AI algorithms take in data, fit them to a mathematical model and put out a prediction, ranging from what songs you might enjoy[6] to how many years someone should spend in jail[7]. These models are developed and tweaked based on past data and the success of previous models. Most people – even, at times, the algorithm designers themselves – do not really know what goes on inside the model.
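
To make that fit-and-predict loop concrete, here is a minimal sketch, assuming Python with scikit-learn; the toy “listening history” data and feature names are invented for illustration, not drawn from any real recommender:

```python
# A toy illustration of the "black box" pattern: past data goes in,
# a fitted model comes out, and predictions follow -- without the
# user ever seeing the model's internal weights or reasoning.
from sklearn.linear_model import LogisticRegression

# Hypothetical past data: each row describes a listener by two made-up
# features (minutes of pop vs. jazz played last week); the label says
# whether they enjoyed a recommended song.
past_features = [[120, 5], [90, 10], [15, 200], [30, 150], [100, 20], [10, 180]]
past_labels = [1, 1, 0, 0, 1, 0]  # 1 = enjoyed, 0 = did not

model = LogisticRegression()
model.fit(past_features, past_labels)      # "fit them to a mathematical model"

new_listener = [[25, 170]]
print(model.predict(new_listener))         # "put out a prediction" for an unseen user
print(model.predict_proba(new_listener))   # confidence scores the user never sees
```

The user sees only the final recommendation; the fitted coefficients inside the model stay invisible unless someone deliberately exposes them.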

Researchers have long been concerned[8] about algorithmic fairness. For instance, Amazon’s AI-based recruiting tool turned out to penalize female candidates[9]. Amazon’s system was selectively extracting implicitly gendered words[10] – words that men are more likely to use in everyday speech, such as “executed” and “captured.”
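
A small sketch can show how this kind of bias creeps in without gender ever being an explicit input. The example below is hypothetical and assumes scikit-learn; it is not Amazon’s system, and the four toy “resumes” and hiring labels are invented:

```python
# Hypothetical sketch: a resume screener trained on biased past
# decisions learns word weights that act as a proxy for gender.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "executed product launch and captured new markets",    # historically hired
    "executed migration plan and captured key accounts",   # historically hired
    "organized women's chess club and led outreach",       # historically rejected
    "led women's engineering society, mentored students",  # historically rejected
]
past_hired = [1, 1, 0, 0]  # labels reflect past (biased) human decisions

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, past_hired)

# Words correlated with past hires get positive weights; a token like
# "women" becomes a negative signal even though gender was never an input.
for word, weight in zip(vectorizer.get_feature_names_out(), model.coef_[0]):
    print(f"{word:12s} {weight:+.2f}")
```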

Other studies[11] have shown that judicial algorithms are racially biased, recommending longer sentences for poor black defendants than for others.

As part of the recently approved General Data Protection Regulation in the European Union, people have “a right to explanation”[12] of the criteria that algorithms use in their decisions. This legislation treats the process of algorithmic decision-making like a recipe book. The thinking goes that if you understand the recipe, you can understand how the algorithm affects your life.

Meanwhile, some AI researchers have pushed for algorithms that are fair, accountable and transparent[13], as well as interpretable[14], meaning that they should arrive at their decisions through processes that humans can understand and trust.
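
For a sense of what “interpretable” can look like in practice, consider a shallow decision tree, whose learned rules can be printed and read. This is an illustrative sketch assuming scikit-learn, with invented loan data – not a method from the cited researchers:

```python
# Illustration of interpretability: unlike an opaque model, a shallow
# decision tree exposes human-readable rules for its decisions.
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical loan data: [income_k, debt_k] -> approved (1) or not (0)
X = [[80, 10], [95, 5], [30, 40], [25, 35], [60, 15], [20, 50]]
y = [1, 1, 0, 0, 1, 0]

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)

# The fitted rules print as if-then thresholds -- a "recipe" a person
# can actually read, rather than a black box.
print(export_text(tree, feature_names=["income_k", "debt_k"]))
```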

What effect will transparency have? In one study[15], students were graded by an algorithm and offered different levels of explanation about how their peers’ scores were adjusted to arrive at a final grade. The students who received more transparent explanations actually trusted the algorithm less. This, again, suggests a digital divide: Algorithmic awareness does not lead to more confidence in the system.

But transparency is not a panacea. Even when an algorithm’s overall process is sketched out, the details may still be too complex[16] for users to comprehend. Transparency will help only users who are sophisticated enough to grasp the intricacies of algorithms.

For example, in 2014, Ben Bernanke, the former chair of the Federal Reserve, was initially denied a mortgage refinance by an automated system[17]. Most people applying for such a refinance would not understand how algorithms might determine their creditworthiness.

What does the algorithm say to do today? Maria Savenko/shutterstock.com[18]

Opting out of the new information ecosystem

While algorithms influence so much of people’s lives, only a tiny fraction of people are sophisticated enough to fully engage with how algorithms affect their lives[19].

There are not many statistics on how many people are algorithm-aware. Studies have found evidence of algorithmic anxiety[20], which contributes to a deep imbalance of power between the platforms that deploy algorithms and the users who depend on them[21].

A study of Facebook usage[22] found that when participants were made aware of Facebook’s algorithm for curating news feeds, about 83% of participants modified their behavior to try to take advantage of the algorithm, while around 10% decreased their usage of Facebook.

A November 2018 report from the Pew Research Center[23] found that a broad majority of the public had significant concerns about particular uses of algorithms. It found that 66% thought it would not be fair for algorithms to calculate personal finance scores, while 57% said the same about automated resume screening.

A small fraction of individuals exercise some control over how algorithms use their personal data. For example, the Hu-Manity platform gives users an option to control how much of their data is collected[24]. Online encyclopedia Everipedia[25] offers users the ability to be a stakeholder in the process of curation, which means that users can also control how information is aggregated and presented to them.

However, the vast majority of platforms provide neither such flexibility to their end users nor the right to choose how the algorithm uses their preferences in curating their news feed or in recommending them content. If there are options, users may not know about them. About 74% of Facebook’s users said in a survey that they were not aware of how the platform characterizes their personal interests[26].

In my view, the new digital literacy is not using a computer or being on the internet, but understanding and evaluating the consequences of an always-plugged-in lifestyle.

This lifestyle has a meaningful impact on how people interact with others[27]; on their ability to pay attention to new information[28]; and on the complexity of their decision-making processes[29].

Increasing algorithmic anxiety may also be mirrored by parallel shifts in the economy. A small group of individuals are capturing the gains from automation[30], while many workers are in a precarious position[31].

Opting out from algorithmic curation is a luxury – and could one day be a symbol of affluence available to only a select few. The question is then what the measurable harms will be for those on the wrong side of the digital divide.

References

  1. ^ algorithm appreciation (hbr.org)
  2. ^ “digital divide” (www.pewresearch.org)
  3. ^ earn money and accumulate skills (mitpress.mit.edu)
  4. ^ pryzmat/shutterstock.com (www.shutterstock.com)
  5. ^ so few people understand how algorithms work (www.acm.org)
  6. ^ what songs you might enjoy (qz.com)
  7. ^ how many years someone should spend in jail (www.law.nyu.edu)
  8. ^ have long been concerned (epic.org)
  9. ^ dismiss female candidates (www.reuters.com)
  10. ^ implicitly gendered words (www.reuters.com)
  11. ^ Other studies (doi.org)
  12. ^ “a right to explanation” (doi.org)
  13. ^ fair, accountable and transparent (www.fatml.org)
  14. ^ interpretable (arxiv.org)
  15. ^ one study (rene.kizilcec.com)
  16. ^ the details may still be too complex (hbr.org)
  17. ^ denied a mortgage refinance by an automated system (cdt.org)
  18. ^ Maria Savenko/shutterstock.com (www.shutterstock.com)
  19. ^ how algorithms affect their life (www.pewinternet.org)
  20. ^ algorithmic anxiety (papers.ssrn.com)
  21. ^ the users who depend on them (columbialawreview.org)
  22. ^ A study of Facebook usage (www.kevinhamilton.org)
  23. ^ A November 2018 report from the Pew Research Center (www.pewinternet.org)
  24. ^ an option to control how much of their data is collected (hu-manity.co)
  25. ^ Everipedia (everipedia.org)
  26. ^ not aware of how the platform characterizes their personal interests (www.pewinternet.org)
  27. ^ how people interact with others (www.nytimes.com)
  28. ^ pay attention to new information (compeap.com)
  29. ^ the complexity of their decision-making processes (www.fastcompany.com)
  30. ^ capturing the gains from automation (www.nytimes.com)
  31. ^ precarious position (www.brookings.edu)


Read more http://theconversation.com/the-new-digital-divide-is-between-people-who-opt-out-of-algorithms-and-people-who-dont-114719

Metropolitan republishes selected articles from The Conversation USA with permission
