How much is your data worth to tech companies? Lawmakers want to tell you, but it's not that easy to calculate
- Written by Samuel Lengen, Research Associate at the Data Science Institute, University of Virginia
New proposed legislation[1] from U.S. Sens. Mark R. Warner and Josh Hawley seeks to protect privacy by forcing tech companies to disclose to users the “true value[2]” of their data.
Specifically, companies with more than 100 million users would have to provide each user with an assessment of the financial value of their data, as well as reveal revenue generated by “obtaining, collecting, processing, selling, using or sharing user data[3].” In addition, the DASHBOARD Act would give users the right to delete their data[4] from companies’ databases.
As a researcher[5] exploring the ethical and political implications of digital platforms and big data, I’m sympathetic to the bill’s ambition of increasing transparency and empowering users. However, estimating the value of user data isn’t simple and won’t, I believe, solve privacy issues.
Data collectors
The data collected by tech companies consists not just of traditional identifying information such as name, age and gender. Rather, as Harvard historian Rebecca Lemov has noted, it includes “Tweets, Facebook likes, Twitches, Google searches, online comments, one-click purchases, even viewing-but-skipping-over a photograph in your feed[6].”
In other words, big data contains the mundane yet intimate moments of people’s lives. And, if Facebook captures your interactions with friends and family, Google your late night searches, and Alexa your living room commands, wouldn’t you want to know, as the bill suggests, what your “data is worth and to whom it is sold[7]”?
However, calculating the value of user data isn’t that simple. Estimates of what user data is worth vary widely, from less than a dollar for an average person’s data[8] to a slightly more generous US$100 for a Facebook user. One user sold his data for $2,733 on Kickstarter[9], but to fetch that price he had to share keystrokes, mouse movements and frequent screenshots.
Sadly, the DASHBOARD Act doesn’t specify how it would estimate the value of user data. Instead, it explains that the Securities and Exchange Commission, an independent federal government agency, “shall develop a method or methods for calculating the value of user data[10].” The commission, I believe, will quickly realize that estimating the value of user data is a challenging undertaking.
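To see what the commission is up against, consider the most obvious method it might start from: dividing a company’s data-driven revenue evenly across its users. The Python sketch below runs that arithmetic with purely hypothetical figures; neither the numbers nor the method come from the bill, and the uniform split is exactly the assumption that breaks down, since no two users’ data is equally valuable.

```python
# A naive back-of-envelope valuation: split data-driven revenue evenly
# across all users. Figures are hypothetical, not real company data.
def naive_value_per_user(annual_data_revenue_usd: float, user_count: int) -> float:
    """Average yearly revenue attributed to each user's data."""
    return annual_data_revenue_usd / user_count

# Hypothetical platform: $40 billion in data-driven ad revenue, 2 billion users.
print(f"${naive_value_per_user(40e9, 2_000_000_000):.2f} per user per year")
# Prints: $20.00 per user per year
```

An average like this says nothing about whose data actually drove the revenue, which is one reason published estimates range from under a dollar to thousands of dollars per person.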
nevodka/Shutterstock.com[11]
More than personal
The proposed legislation aims to provide users with more transparency. However, privacy is no longer solely a matter of personal data. Data shared by a few[12] can provide insights into the lives of many.
Facebook likes, for example, can help predict a user’s sexual orientation[13] with a high degree of accuracy. Target has used its purchase data to predict which customers are pregnant. The case garnered widespread attention after the retailer figured out a teen girl was pregnant before her father did[14].
Such predictive ability means that private information isn’t just contained in user data. Companies can also infer private information about you from statistical correlations in the data of many other users. How can the value of such data be reduced to a dollar figure per person? It is more than the sum of its parts.
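To make that concrete, here is a minimal sketch, assuming numpy and scikit-learn, of how weak signals scattered across many users add up to a confident inference. The “likes,” the hidden attribute and the model are all synthetic stand-ins for the kind of page-like data used in the study linked above, not a reconstruction of it.

```python
# Minimal sketch of attribute inference from behavioral data (assumes
# numpy and scikit-learn). All data here is synthetic: each simulated
# page-like is only weakly correlated with a hidden binary attribute.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users, n_pages = 5000, 100

# A private attribute the platform never asked about.
attribute = rng.integers(0, 2, size=n_users)

# Each page's like-probability shifts slightly with the attribute:
# individually weak signals that are jointly predictive.
base = rng.uniform(0.1, 0.4, size=n_pages)
shift = rng.uniform(-0.1, 0.1, size=n_pages)
like_prob = base + np.outer(attribute, shift)        # (n_users, n_pages)
likes = rng.random((n_users, n_pages)) < like_prob   # binary like matrix

X_train, X_test, y_train, y_test = train_test_split(
    likes, attribute, test_size=0.2, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Inferred hidden attribute with {model.score(X_test, y_test):.0%} accuracy")
```

Note that the model’s accuracy comes from correlations across thousands of users, not from any one person’s row of likes.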
What’s more, this ability to use statistical analysis to identify people as belonging to a group category can have far-reaching privacy implications. If service providers can use predictive analytics[15] to guess a user’s sexual orientation, race, gender and religious belief, what is to stop them from discriminating on that basis?
Once let loose, predictive technologies continue to work even if individual users delete their share of the data that helped create them.
Control through data
The sensitivity of data depends not just on what it contains, but on how governments and companies can use it to exert influence.
This is evident in my current research[16] on China’s planned social credit system[17]. The Chinese government plans to use national databases and “trustworthiness ratings” to regulate the behavior of Chinese citizens.
Google’s, Amazon’s and Facebook’s “surveillance capitalism[18],” as author Shoshana Zuboff has argued, also uses predictive data to “tune and herd our behaviour towards the most profitable outcomes[19].”
In 2014, revelations about how Facebook experimented with its feed to influence the emotional state of users[20] triggered a public outcry. Yet the episode only made visible what digital platforms do routinely: use data to keep users engaged and, in the process, generate more data.
Data privacy is as much about big tech’s ability to shape your personal life as about what it knows about you.
pcruciatti/Shutterstock.com[21]
Who is harmed
The truth is that datafication, with all its privacy implications, does not affect everyone equally.
Big data’s hidden biases[22] and networked discrimination[23] continue to reproduce inequalities around gender, race and class. Women, minorities and the financially poor are most strongly affected. UCLA professor Safiya Umoja Noble, for example, has shown how Google search rankings reinforce negative stereotypes about women of color[24].
In light of such inequality, how could a single numerical value ever capture the “true” value of user data?
The proposed legislation’s lack of specificity is disconcerting. Even more troubling, though, is its insistence that data transparency can be achieved by revealing monetary value alone. Numeric assessments of financial worth don’t reflect data’s power to predict our actions or guide our decisions.
The DASHBOARD Act aims to make the business of data more transparent and to empower users. However, I believe it will fail to fulfill this promise. If lawmakers want to tackle data privacy, they need to regulate not just data monetization but, more broadly, the value and cost of data in people’s lives.
References
- ^ New proposed legislation (www.scribd.com)
- ^ true value (www.scribd.com)
- ^ obtaining, collecting, processing, selling, using or sharing user data (www.scribd.com)
- ^ delete their data (www.scribd.com)
- ^ As a researcher (datascience.virginia.edu)
- ^ Tweets, Facebook likes, Twitches, Google searches, online comments, one-click purchases, even viewing-but-skipping-over a photograph in your feed (aeon.co)
- ^ data is worth and to whom it is sold (www.warner.senate.gov)
- ^ less than a dollar for an average person’s data (ig.ft.com)
- ^ One user sold his data for $2,733 on Kickstarter (www.theguardian.com)
- ^ shall develop a method or methods for calculating the value of user data (www.scribd.com)
- ^ nevodka/Shutterstock.com (www.shutterstock.com)
- ^ Data shared by a few (doi.org)
- ^ predict a user’s sexual orientation (www.pnas.org)
- ^ figured out a teen girl was pregnant before her father did (www.forbes.com)
- ^ use predictive analytics (doi.org)
- ^ current research (www.samuellengen.net)
- ^ social credit system (www.npr.org)
- ^ surveillance capitalism (www.faz.net)
- ^ tune and herd our behaviour towards the most profitable outcomes (www.theguardian.com)
- ^ Facebook experimented with its feed to influence the emotional state of users (dx.doi.org)
- ^ pcruciatti/Shutterstock.com (www.shutterstock.com)
- ^ hidden biases (hbr.org)
- ^ networked discrimination (static.newamerica.org)
- ^ Google search rankings reinforce negative stereotypes about women of color (safiyaunoble.com)