Biases in algorithms hurt those looking for information on health
- Written by Anjana Susarla, Professor of Information Systems, Michigan State University
YouTube hosts millions of videos related to health care.
The Health Information National Trends Survey reports that 75% of Americans go to the internet[1] first when looking for information about health or medical topics. YouTube is one of the most popular online platforms[2], with billions of views every day, and has emerged as a significant source of health information[3].
Several public health agencies, such as state health departments[4], have invested resources in YouTube as a channel for health communication. Patients with chronic health conditions especially rely on social media, including YouTube videos, to learn more about how to manage their conditions.
But video recommendations on such sites could exacerbate preexisting disparities in health.
A significant fraction of the U.S. population is estimated to have limited health literacy[5], that is, a reduced capacity to obtain, process and understand basic health information, such as the ability to read and comprehend prescription bottles, appointment slips or discharge instructions from health clinics.
Studies of health literacy[6], such as the National Assessment of Adult Literacy conducted in 2003, estimated that only 12% of adults had proficient health literacy skills, a finding corroborated in subsequent studies[7].
I’m a professor of information systems[8], and my own research[9] has examined how social media platforms such as YouTube widen such health literacy disparities by steering users toward questionable content.
On YouTube
I extracted thousands of videos purporting to be about diabetes and verified whether the information they presented conformed to valid medical guidelines.
I found that the most popular and engaging videos[10] are significantly less likely to have medically valid information.
Users typically encounter videos on health conditions through keyword searches on YouTube. YouTube then surfaces links to authenticated medical information among the top-ranked results, several of which are produced by reputable health organizations.
Recently, YouTube has adjusted how search results are displayed, allowing results to be ranked by “relevance[12]” and providing links to verified medical information[13].
However, when I recruited physicians[14] to watch the videos and rate whether they were valid and understandable from a patient education perspective, the physicians rated YouTube’s recommendations poorly.
I found that the most popular videos tend to contain easily understandable information that is not always medically valid. A study of the most popular videos on COVID-19 likewise found that a quarter of them did not contain medically valid information[15].
The health literacy divide
A key reason is that the algorithms underlying recommendations on social media platforms are biased toward engagement and popularity.
Given how digital platforms rank the information they return for search queries, a user with greater health literacy is more likely to discover usable medical advice from a reputable health care provider, such as the Mayo Clinic. The same algorithm will steer a less literate user toward fake cures or misleading medical advice.
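To make the mechanism concrete, here is a minimal Python sketch of an engagement-biased ranker alongside a validity-aware alternative. The video titles, scores and penalty weight are invented for illustration; this is not YouTube’s actual algorithm or data.

```python
# Hypothetical illustration of engagement-biased ranking.
# All video data and weights below are invented for this sketch.

videos = [
    {"title": "Reverse diabetes with this one trick!", "engagement": 0.92, "medically_valid": False},
    {"title": "Managing type 2 diabetes (clinic explainer)", "engagement": 0.35, "medically_valid": True},
    {"title": "Miracle cure doctors won't tell you", "engagement": 0.88, "medically_valid": False},
    {"title": "Insulin basics for new patients", "engagement": 0.41, "medically_valid": True},
]

# Ranking purely by engagement: popular but unvalidated videos rise to the top.
by_engagement = sorted(videos, key=lambda v: v["engagement"], reverse=True)

# A validity-aware alternative: penalize content that fails a medical review.
# The 0.6 penalty weight is an arbitrary illustrative choice.
def adjusted_score(v):
    return v["engagement"] - (0.0 if v["medically_valid"] else 0.6)

by_validity = sorted(videos, key=adjusted_score, reverse=True)

print("Engagement-only:", [v["title"] for v in by_engagement[:2]])
print("Validity-aware: ", [v["title"] for v in by_validity[:2]])
```

Under the engagement-only objective, the two unvalidated videos take the top slots; once a validity penalty is applied, the medically reviewed videos surface instead.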
This could be especially harmful for minority groups. Studies of health literacy in the United States have found that limited health literacy disproportionately impacts minorities[16].
We do not have enough studies on the state of health literacy among minority populations, especially in urban areas[17]. That makes it challenging to design health communication aimed at minorities, or interventions to improve their use of existing health care resources.
There can also be cultural barriers regarding health care in minority populations[18] that exacerbate the literacy barriers. Insufficient education and lack of self-management of chronic care have also been highlighted as challenges for minorities[19].
Algorithmic biases
Correcting algorithmic biases and providing better information to users of technology platforms would go a long way in promoting equity.
For example, a pioneering study by the Gender Shades project[20] examined how accurately commercial facial recognition software from different companies classified gender across skin types. It concluded that companies were able to make progress in reducing these disparities[21] once the issues were pointed out.
According to some estimates, Google receives[22] over a billion health questions every day. People with low health literacy face a substantial risk of encountering medically unsubstantiated information, such as popular myths[23] or active conspiracy theories[24] that are not based on scientific evidence.
The World Economic Forum has dubbed health-related misinformation an “infodemic[25].” The openness of digital platforms, where anyone can engage, also makes them vulnerable[26] to misinformation, accentuating disparities in health literacy, as my own work shows.
Social media and search companies have partnered with health organizations such as the Mayo Clinic to provide validated information and reduce the spread of misinformation[27]. To make health information on YouTube more equitable, those who design recommendation algorithms[28] would have to incorporate feedback from clinicians and patients as well as end users[29].
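As a purely illustrative sketch of that suggestion, and not a description of any deployed system, a recommender’s scoring function could blend the platform’s engagement signal with clinician ratings of medical validity and patient ratings of understandability, the two dimensions the physicians in my study assessed. The weights and field names below are hypothetical.

```python
# Hypothetical blended ranking score: engagement plus human feedback.
# Weights and field names are illustrative assumptions, not a real system.

def blended_score(video, w_engagement=0.2, w_validity=0.5, w_clarity=0.3):
    """Combine three signals, each assumed to be scaled to [0, 1]:
    engagement (platform metric), clinician_validity (physician rating)
    and patient_clarity (patient understandability rating)."""
    return (w_engagement * video["engagement"]
            + w_validity * video["clinician_validity"]
            + w_clarity * video["patient_clarity"])

candidates = [
    {"title": "Viral 'miracle cure'", "engagement": 0.9, "clinician_validity": 0.1, "patient_clarity": 0.8},
    {"title": "Clinic explainer", "engagement": 0.4, "clinician_validity": 0.9, "patient_clarity": 0.7},
]

ranked = sorted(candidates, key=blended_score, reverse=True)
print([c["title"] for c in ranked])  # the clinic explainer now outranks the viral video
```

Choosing those weights is itself a policy decision, which is why incorporating feedback from clinicians, patients and end users matters.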
References
- ^ 75% of Americans go to the internet (hints.cancer.gov)
- ^ popular online platforms (www.pewresearch.org)
- ^ significant source of health information (pubmed.ncbi.nlm.nih.gov)
- ^ public health agencies, such as state health departments (www.thepermanentejournal.org)
- ^ limited health literacy (www.ncbi.nlm.nih.gov)
- ^ Studies of health literacy, such as (www.canr.msu.edu)
- ^ corroborated in subsequent studies (www.canr.msu.edu)
- ^ professor of information systems (scholar.google.com)
- ^ my own research (papers.ssrn.com)
- ^ the most popular and engaging videos (papers.ssrn.com)
- ^ relevance (www.inf.unibz.it)
- ^ links to verified medical information (www.nature.com)
- ^ I recruited physicians (2019.dsconf.cc)
- ^ medically valid information (gh.bmj.com)
- ^ disproportionately impacts minorities (health.gov)
- ^ minority populations, especially in urban areas (www.ncbi.nlm.nih.gov)
- ^ health care in minority populations (www.hrsa.gov)
- ^ highlighted as challenges for minorities (www.healthline.com)
- ^ Gender Shades project (gendershades.org)
- ^ make progress in reducing these disparities (www.media.mit.edu)
- ^ Google receives (www.beckershospitalreview.com)
- ^ popular myths (www.hsph.harvard.edu)
- ^ active conspiracy theories (www.nature.com)
- ^ infodemic (www.weforum.org)
- ^ where anyone can engage also make them vulnerable (theconversation.com)
- ^ provide validated information and reduce the spread of misinformation (www.bloombergquint.com)
- ^ design recommendation algorithms (pubmed.ncbi.nlm.nih.gov)
- ^ as well as end users (www.fastcompany.com)