Artificial intelligence in medicine raises legal and ethical concerns
- Written by Sharona Hoffman, Professor of Health Law and Bioethics, Case Western Reserve University
The use of artificial intelligence in medicine is generating great excitement and hope for treatment advances.
AI generally refers to[1] computers’ ability to mimic human intelligence and to learn. For example, by using machine learning, scientists[2] are working to develop algorithms[3] that will help them make decisions about cancer treatment. They hope that computers will be able to analyze radiological images and discern which cancerous tumors will respond well to chemotherapy and which will not[4].
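To make this concrete, here is a minimal sketch of the kind of model described above: a classifier trained on image-derived tumor features to predict chemotherapy response. The features, data and model choice are illustrative assumptions, not the actual systems researchers are developing.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical image-derived features (e.g., tumor volume, texture
# score, contrast uptake). Label: 1 = tumor responded to chemotherapy.
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```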
But AI in medicine also raises significant legal and ethical challenges, among them concerns about privacy, discrimination, psychological harm and the physician-patient relationship. In a forthcoming[5] article, I argue that policymakers should establish a number of safeguards around AI, much as they did when genetic testing became commonplace.
Potential for discrimination
AI involves the analysis[6] of very large amounts of data to discern patterns, which are then used to predict the likelihood of future occurrences. In medicine, the data sets can come from electronic health records and health insurance claims but also from several surprising sources. AI can draw upon purchasing records[7], income[8] data, criminal records[9] and even social media[10] for information about an individual’s health.
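As a rough illustration of how such disparate sources might be combined, the hypothetical sketch below merges purchasing, income and social media fields into a single record of the kind a prediction model could score. Every field name and value here is invented for illustration.

```python
from typing import Dict

def build_profile(purchases: Dict[str, float],
                  income: float,
                  late_night_posts: int) -> Dict[str, float]:
    """Flatten several unrelated data sources into one feature record."""
    return {
        "monthly_tobacco_spend": purchases.get("tobacco", 0.0),
        "monthly_fast_food_spend": purchases.get("fast_food", 0.0),
        "annual_income": income,
        "late_night_posts_per_week": float(late_night_posts),
    }

# A record like this could be scored by a risk model, even though none
# of the inputs came from a doctor's office.
print(build_profile({"tobacco": 45.0, "fast_food": 120.0}, 38000.0, 9))
```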
AP Photo/David Goldman[11]
Researchers are already using AI to predict a multitude of medical conditions. These include heart disease[12], stroke[13], diabetes[14], cognitive decline[15], future opioid abuse[16] and even suicide[17]. As one example, Facebook employs an algorithm that makes suicide predictions[18] based on posts with phrases such as “Are you okay?” paired with “Goodbye” and “Please don’t do this.”
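The sketch below illustrates the basic idea with a naive, rule-based stand-in. Facebook’s actual system is a proprietary machine-learning model, so this is an assumption-laden simplification, not its implementation; only the example phrases come from the article.

```python
# Phrases drawn from the article; the pairing rule itself is invented.
CONCERN_PHRASES = ("are you okay", "please don't do this")
FAREWELL_PHRASES = ("goodbye",)

def flag_thread(post: str, replies: list) -> bool:
    """Flag a post whose farewell language draws concerned replies."""
    post_l = post.lower()
    replies_l = " ".join(replies).lower()
    return (any(p in post_l for p in FAREWELL_PHRASES)
            and any(p in replies_l for p in CONCERN_PHRASES))

print(flag_thread("Goodbye, everyone.", ["Are you okay?"]))  # True
```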
This predictive capability of AI raises significant ethical concerns in health care. If AI generates predictions about your health, I believe that information could one day be included in your electronic health records.
Anyone with access to your health records could then see predictions about cognitive decline or opioid abuse. Patients’ medical records are seen by dozens or even hundreds[19] of clinicians and administrators in the course of medical treatment. Additionally, patients themselves often authorize others to access their records: for example, when they apply for employment or life insurance.
Data broker industry giants such as LexisNexis and Acxiom[20] are also mining personal data and engaging in AI activities. They could then sell medical predictions to any interested third parties, including marketers, employers, lenders, life insurers and others. Because these businesses are not health care providers or insurers, the HIPAA Privacy Rule[21] does not apply to them. Therefore, they do not have to ask patients for permission to obtain their information and can freely disclose it.
Such disclosures can lead to discrimination. Employers, for instance, are interested in workers who will be healthy and productive, with few absences and low medical costs. If they believe certain applicants will develop diseases in the future, they will likely reject them. Lenders, landlords, life insurers and others might likewise make adverse decisions about individuals based on AI predictions.
Lack of protections
The Americans with Disabilities Act[22] does not prohibit discrimination based on future medical problems. It applies only to current and past ailments. In response to genetic testing, Congress enacted the Genetic Information Nondiscrimination Act[23]. This law prohibits employers and health insurers from considering genetic information and making decisions based on related assumptions about people’s future health conditions. No law imposes a similar prohibition with respect to nongenetic predictive data.
Reuters/Bryan Woolston[24]
AI health prediction can also lead to psychological harm. For example, many people could be traumatized if they learn that they will likely suffer cognitive decline later in life. It is even possible that individuals will obtain health forecasts directly from commercial entities that bought their data. Imagine obtaining the news that you are at risk of dementia through an electronic advertisement urging you to buy memory-enhancing products.
When it comes to genetic testing, patients are advised to seek genetic counseling so that they can thoughtfully decide whether to be tested and better understand test results. By contrast, we do not have AI counselors who provide similar services to patients.
Yet another concern relates to the doctor-patient relationship. Will AI diminish the role of doctors? Will computers be the ones to make predictions, diagnoses and treatment suggestions, so that doctors simply implement the computers’ instructions? How will patients feel about their doctors if computers have a greater say in making medical determinations?
These concerns are exacerbated by the fact that AI predictions are far from infallible. Many factors can contribute to errors. If the data used to develop[25] an algorithm are flawed – for instance, if they use medical records that contain errors – the algorithm’s output will be incorrect. Therefore, patients may suffer discrimination or psychological harm when in fact they are not at risk of the predicted ailments.
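A small synthetic demonstration of this point: when the labels used to train a model are corrupted, standing in for erroneous medical records, its accuracy on clean test data drops. The data and error rates are entirely made up.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 5))
y = (X[:, 0] - X[:, 1] > 0).astype(int)  # the true underlying pattern

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

for noise in (0.0, 0.2, 0.4):
    flip = rng.random(len(y_tr)) < noise      # simulate record errors
    y_noisy = np.where(flip, 1 - y_tr, y_tr)  # flip the affected labels
    acc = LogisticRegression().fit(X_tr, y_noisy).score(X_te, y_te)
    print(f"label error rate {noise:.0%}: test accuracy {acc:.2f}")
```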
A call for caution
What can be done to protect the American public? I have argued in past work[26] for the expansion of the HIPAA Privacy Rule so that it covers anyone who handles health information for business purposes. Privacy protections should apply not only to health care providers and insurers, but also to commercial enterprises. I have also argued[27] that Congress should amend the Americans with Disabilities Act to prohibit discrimination based on forecasts of future diseases.
Physicians who provide patients with AI predictions should ensure that those patients are thoroughly educated about the pros and cons of such forecasts. Experts should counsel patients about AI just as trained professionals do about genetic testing.
The prospect of AI can overawe people. Yet, to ensure that AI truly promotes patient welfare, physicians, researchers and policymakers must recognize its risks and proceed with caution.
References
- ^ refers to (www.merriam-webster.com)
- ^ scientists (www.sciencedaily.com)
- ^ algorithms (www.pewresearch.org)
- ^ will not (www.ncbi.nlm.nih.gov)
- ^ forthcoming (papers.ssrn.com)
- ^ the analysis (www.healthcatalyst.com)
- ^ purchasing records (www.charlotteobserver.com)
- ^ income (www.charlotteobserver.com)
- ^ criminal records (www.politico.com)
- ^ social media (www.npr.org)
- ^ AP Photo/David Goldman (www.apimages.com)
- ^ heart disease (www.theverge.com)
- ^ stroke (www.itnonline.com)
- ^ diabetes (www.politico.com)
- ^ cognitive decline (www.ncbi.nlm.nih.gov)
- ^ opioid abuse (www.politico.com)
- ^ suicide (www.npr.org)
- ^ makes suicide predictions (www.washingtonpost.com)
- ^ dozens or even hundreds (bok.ahima.org)
- ^ LexisNexis and Acxiom (www.politico.com)
- ^ HIPAA Privacy Rule (www.hhs.gov)
- ^ Americans with Disabilities Act (adata.org)
- ^ Genetic Information Nondiscrimination Act (www.genome.gov)
- ^ Reuters/Bryan Woolston (pictures.reuters.com)
- ^ develop (onlinelibrary.wiley.com)
- ^ work (papers.ssrn.com)
- ^ argued (papers.ssrn.com)
- ^ Sign up for The Conversation’s daily newsletter (theconversation.com)