AI in Healthcare: Integrating Advanced Technologies with Traditional Practices for Enhanced Patient Care
Keywords:
Data privacy, algorithmic bias, transparency, accountability, ethical considerations, regulatory frameworks, personalized medicine, predictive analytics, preventive care, drug discovery, drug development
Abstract
Artificial intelligence (AI) is rapidly transforming healthcare, offering unprecedented opportunities to improve patient care, accelerate medical research, and enhance healthcare delivery as a whole. This review examines the many facets of AI's influence on healthcare, with particular attention to drug discovery and development, personalized medicine, predictive analytics and preventive care, and ethical and legal considerations. In personalized medicine, AI has made substantial advances, enabling treatment regimens tailored to a patient's individual genetic, environmental, and behavioral characteristics. AI-powered tools refine treatment plans and facilitate the identification of genetic markers, improving the accuracy and efficacy of medical therapies. AI's capacity to analyze large datasets has transformed predictive analytics and preventive care, enabling precise health-risk projections and early detection of potential problems; by supporting continuous monitoring and individualized preventive care, this proactive approach improves both health outcomes and operational efficiency. AI-driven innovations have also made drug discovery and development more efficient through improved target identification, optimized compound screening, and better clinical trial management. These advances shorten development timelines, reduce costs, and increase the likelihood that a candidate drug will prove beneficial, accelerating the delivery of novel medicines. Integrating AI into healthcare nevertheless raises significant ethical and legal concerns: data privacy, algorithmic bias, transparency, accountability, and evolving regulatory frameworks must all be addressed to ensure the responsible and equitable use of AI technologies. Robust data protection, bias mitigation, and a commitment to transparency are essential to sustaining patient trust and achieving successful outcomes.
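To ground the predictive-analytics and algorithmic-bias points above, the following is a minimal sketch of the kind of risk model the review describes: it fits a logistic regression to synthetic patient data with scikit-learn and then audits sensitivity across a demographic subgroup. The dataset, feature names, and decision threshold are illustrative assumptions, not a clinical pipeline or any method endorsed by the works reviewed here.

```python
# Minimal sketch: disease-risk prediction with a subgroup bias audit.
# All data is synthetic; the feature names, the default 0.5 decision
# threshold, and the binary "group" attribute are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)
n = 5_000

# Synthetic tabular features: age, systolic blood pressure, BMI, and a
# binary demographic attribute used only for the fairness audit below.
age = rng.normal(55, 12, n)
sbp = rng.normal(130, 18, n)
bmi = rng.normal(27, 5, n)
group = rng.integers(0, 2, n)

# Synthetic outcome: risk rises with age, blood pressure, and BMI.
logit = 0.04 * (age - 55) + 0.03 * (sbp - 130) + 0.05 * (bmi - 27) - 0.5
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([age, sbp, bmi])
X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
pred = model.predict(X_te)

# Bias audit: compare sensitivity (recall) across demographic groups.
# A large gap would flag the kind of algorithmic bias discussed above.
for g in (0, 1):
    mask = g_te == g
    print(f"group {g}: recall = {recall_score(y_te[mask], pred[mask]):.3f}")
```

In practice such a model would additionally require calibration, external validation, and the governance safeguards (data protection, transparency, accountability) that this review emphasizes before any clinical use.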