Recommendations and Rationale
1. ACP firmly believes that AI-enabled technologies should complement and not supplant the logic and decision making of physicians and other clinicians.
Artificial intelligence– and ML-enabled tools hold great promise for informing, guiding, and improving patient care. In some instances, AI- and ML-enabled tools may even outperform clinicians. However, ACP believes that a physician’s training and observations must remain the central tenet of patient care. Technology, especially AI, can be misguided through inappropriate, incomplete, or flawed data inputs, leading to flawed outputs, which can mislead physicians into ill-informed decision making with dangerous consequences for patients. Therefore, physicians must be careful not to place too much faith or decision-making power in AI- and ML-enabled tools.
Historically, however, humans have been prone to overreliance on automated systems (44), including those enabled by AI and ML. “Automation complacency” (45) and “automation bias,” or the tendency to “trust algorithms without sufficient skepticism,” are not uncommon among users of such systems (44). Furthermore, experts have been just as likely as beginners to overly rely on autonomous systems, and this overreliance is mostly unaffected by training (46). Even though skepticism is warranted, being overly skeptical of AI can have ramifications, too: It could mean that patients and physicians will not experience a benefit that AI could offer. A balanced approach to AI technologies is in order.
The language that physicians, technology developers, and society use to describe the qualities and features of AI tools has substantial implications for how society thinks about and uses AI. Some have cautioned against using anthropomorphic language in discussing AI and its applications, suggesting that the psychological effects of doing so can create human-centric expectations and assumptions and increase trust and reliance on systems (28, 47–49). As one way to mitigate the risks of overreliance and automation bias, ACP believes that everyone, especially persons in the fields of AI academia and health care, has a responsibility to avoid anthropomorphizing AI models and their functionality, including using appropriate AI terminology that does not ascribe human characteristics to AI models.
Anthropomorphic language can include terms such as “think,” “learn,” and “hallucinate.” Some have argued that describing certain AI outputs as “hallucinations,” in addition to being inaccurate and anthropomorphic, is not ideal because of stigmatization (50). “Confabulation” may be a better term, although it may raise similar concerns (50, 51).
2. ACP believes that the development, testing, and use of AI in health care must be aligned with principles of medical ethics, serving to enhance patient care, clinical decision making, the patient–physician relationship, and health care equity and justice.
The use of AI in health care must align with principles of medical ethics (52). Physicians as individuals and medicine as a profession have a duty to put patient interests first. Artificial intelligence technology, when appropriately used, should serve to enhance patient care; decision making based in patient values, interests, and preferences; and the patient–physician relationship. Physician duties articulated by the College in the ACP Ethics Manual are based on the ethical principles of beneficence, “the duty to promote good and act in the best interest of the patient”; nonmaleficence, “the duty to do no harm to the patient”; respect for patient autonomy, “the duty to protect and foster a patient’s free, uncoerced choices,” and the related duty of truth-telling; and justice, the equitable distribution of “the life-enhancing opportunities afforded by health care” (52).
Maintaining the patient–physician relationship requires care. Technology, including EHRs, can be used in ways that support or detract from this relationship (53). Artificial intelligence should be implemented in ways that do not harm or interfere with this relationship but instead enhance and promote the therapeutic alliance between patient and physician. Ambient clinical documentation (that is, audio technologies that transform natural patient–physician communication into a draft clinical note format), for example, could promote better communication and connection during a visit, freeing a physician from the distraction of staring at a computer screen. Other technologies, such as conversational agents or chatbots, have the potential to support chronic care management and patient access; however, if their use by patients becomes required or burdensome, they could interfere with patient–physician relationships (54).
Physicians should advocate for AI tools that support putting the patient–physician relationship first and, as a matter of respect for patient uniqueness and dignity, support informed decision making according to the patient’s needs, values, and preferences. The promise of AI is that it can summarize data and information more quickly and comprehensively to inform decision making; the peril of AI is that it could exert too much influence on clinical decision making and be subject to hidden or unconscious biases or agendas, such as those of developers or deployers (for example, if AI outputs exclude a clinically indicated option because of cost). At the individual patient level, this means ensuring AI tools do not replace or interfere with patient–physician decisions. At a system level, this means ensuring patients, physicians, and other clinicians are involved in the design, development, implementation, and testing of AI tools and systems throughout the entire product cycle (Appendix Figure 1), to advocate for solutions to the problems they face and to ensure that the tools do not exacerbate health disparities.
3. ACP reaffirms its call for transparency in the development, testing, and use of AI for patient care to promote trust in the patient–physician relationship. ACP recommends that patients, physicians, and other clinicians be made aware, when possible, that AI tools are likely being used in medical treatment and decision making.
Artificial intelligence transparency is important for patients as well as physicians and other clinicians. In a July 2021 position paper, “Health Information Privacy, Protection, and Use in the Expanding Digital Health Ecosystem,” the College emphasized the need for transparency in AI use (55). Specifically, ACP called for increased transparency around the collection and use of personal health information (for example, “methods of deidentification, timelines, allowable disclosures, and when consent is needed”) in AI development (55). The ACP has also advocated that clinical entities should develop clear policies that relate to the aggregation of data and their use and release for purposes other than direct care of the patient (for example, performance aggregation and reporting and research). We reaffirm these views, especially for the use of patient data to train AI models.
Even if patients are not, at present, explicitly informed of all the ways technology is involved in their care (for example, they may or may not be told about computer-assisted electrocardiogram or mammography interpretation), the newness of AI and its potential for clinically significant effects on care suggests that honesty and transparency about its use are paramount. At least 1 study has indicated that patients would prefer to have options when it comes to the use of AI, including the choice of refusing certain uses of AI in their care (56). Focus groups conducted in November 2019 that inquired about patient choice and autonomy found that “the preservation of choice” contributed to participants’ comfort with the use of AI in their care, and that participants believed that “patients should have the right to choose to have an AI tool used in their care and be able to opt-out of AI involvement if they felt strongly” (56).
Other studies have found that patients have mixed views about the use of AI in their care. A December 2019 survey of patient perspectives of AI found that an overwhelming majority of respondents considered it important for them to be told when an AI program has played a big role in their diagnosis or treatment (887 [95.7%]; n = 926) (57). A smaller but still substantial majority of respondents also considered it important for them to be told when an AI program has played a small role in their diagnosis or treatment (801 [86.5%]) (57). The authors also found that “most respondents were very concerned or somewhat concerned about AI’s unintended consequences, including misdiagnosis (91.5%), privacy breaches (70.8%), less time with clinicians (69.6%), and higher health care costs (68.4%). A higher proportion of respondents who self-identified as being members of racial and ethnic minority groups indicated being very concerned about these issues, compared with White respondents” (57).
Although we respect patient autonomy and encourage transparency with patients when AI is being used in their care, most physicians do not have a clear understanding of where AI and ML are already in use or integrated in their own health care systems or practices. Knowing where these technologies are being implemented in existing workflows and EHR systems can help physicians determine when it may be wise to question software output and better gauge the risks of using AI-enabled systems on an individual patient. Recognizing these needs, a group of practicing physicians collaborated in creating “The Physicians’ Charter for Responsible AI” to guide the development, testing, and use of AI tools in clinical practice (58). The Charter “stems from a growing concern […] about the rapid pace of AI and how it will be implemented in healthcare.” The College agrees with the authors that the views of practicing physicians are necessary in guiding the incorporation of AI into health care.
To promote patient safety and transparency, ACP calls for transparency, clarity, and education for physicians and other clinicians about where AI-generated information enters workflows, and for ensuring that physician knowledge and discretion can supersede AI-generated “defaults” when appropriate. In addition, there must be readily available mechanisms for clinicians to call attention to, and for AI developers to correct, errors in AI-generated output or decision making. Physicians must seek to be aware of AI uses in their practice and exercise clinical and professional judgment in making appropriate disclosures, just as they routinely do in other aspects of care.
We recognize that the feasibility of transparency with patients regarding AI use and availability of opt-out mechanisms can be variable and dependent on the nature of the AI tools themselves; the systems in which they may be integrated; and perhaps most significantly, the level of disclosure or transparency, if any, with physicians. For instance, transparency with patients about the integration of AI into certain devices, such as a glucometer or insulin pump, and the possibility of opting out of using such a device may be reasonably feasible. However, transparency with patients about AI integration into EHR systems and other common sources of information for physicians (for example, search engines) may not be as feasible, especially given that physicians are often not made aware of the integration. Questions also remain about whether disclosure or the ability to opt out is necessary or feasible when AI tools are used to assist with or complete tasks commonly associated with administrative burden. Recognizing that some AI tools may operate outside the patient–physician relationship, health systems may need to notify patients directly about those uses (for example, as part of consent processes). New approaches will likely be necessary to address whether and how to disclose AI use and obtain consent in specific circumstances.
In addition, to appropriately empower physicians with the tools they need to deliver health care safely and effectively, the College believes that the FDA and Office of the National Coordinator for Health Information Technology should engage patients, physicians, and others in health care in developing standard, easy-to-understand model labels and standardized reporting checklists that include indicators for model quality and performance, use criteria (for example, appropriate patient populations), and other important safety and effectiveness information. These labels must be updated as models are changed and updated. Training and testing data sources and attributes should be transparent and accessible to physicians, regulators, and auditors. Algorithmic impact assessments, audits, and postmarket monitoring are also recommended.
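As a purely illustrative sketch of what such a standardized model label might contain, the structure below captures the kinds of quality, performance, and intended-use information discussed above; every field name and value is hypothetical, not an FDA or ONC specification.

```python
# Purely illustrative sketch of a standardized AI model label; all field names
# and values are hypothetical, not an FDA or ONC specification.
model_label = {
    "model_name": "example-deterioration-risk-model",   # placeholder identifier
    "version": "2.1.0",
    "intended_use": "early warning of clinical deterioration in hospitalized adults",
    "intended_population": "adults admitted to general medical wards",
    "out_of_scope_uses": ["pediatric patients", "outpatient triage"],
    "training_data": {
        "sources": "EHR data from three academic medical centers (hypothetical)",
        "date_range": "2018-2022",
        "demographics_reported": True,
    },
    "performance": {                                     # illustrative values only
        "auroc": 0.81,
        "sensitivity": 0.74,
        "specificity": 0.79,
        "subgroup_results_reported": True,
    },
    "last_updated": "2025-01-15",
    "postmarket_monitoring": "quarterly drift and subgroup audits",
    "known_limitations": "not validated for intensive care settings",
}
```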
4. ACP reaffirms that AI developers, implementers, and researchers should prioritize the privacy and confidentiality of patient and clinician data collected and used for AI model development and deployment.
The ACP Ethics Manual (seventh edition) defines privacy as “freedom from unauthorized intrusion” and confidentiality as “a matter of respecting the privacy of patients, encouraging them to seek medical care and discuss their problems candidly, and preventing discrimination on the basis of their medical conditions” (52). In its July 2021 position paper, “Health Information Privacy, Protection, and Use in the Expanding Digital Health Ecosystem,” the College emphasized the need for transparency in AI. The ACP also recommended federated learning as a potential solution to privacy issues involved with AI (55). Federated learning enables AI tools to be exposed to very large, geographically diverse data sets without the need for sharing sensitive clinical data (55). We reaffirm the critical need for privacy, confidentiality, and transparency in the use of patient and clinician data (for example, physician practice data, quality and safety metrics, and so forth) in all phases of AI cycles. If patient, physician, or other clinician data must be used for the development of AI models, the data should first be deidentified and aggregated. We note, however, that deidentification of data, particularly if the data are unstructured, can be a substantial challenge. We also renew our call for comprehensive federal privacy legislation, with special provisions regarding privacy protections for AI data sets included in such legislation.
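To make the federated learning concept concrete, the minimal sketch below (synthetic data and a simple logistic model standing in for a real clinical model; site names and sizes are made up) shows the core mechanic: each site trains locally, and only model parameters, never patient records, are sent to a central aggregator.

```python
import numpy as np

# Minimal federated-averaging sketch with synthetic data; a simple logistic
# model stands in for a real clinical model. Sites and sizes are hypothetical.

def local_update(weights, X, y, lr=0.1, epochs=20):
    """Gradient steps on one site's data; raw records never leave the site."""
    w = weights.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))          # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)          # logistic-loss gradient step
    return w

def federated_average(local_weights, sizes):
    """Server-side aggregation: average site models weighted by sample count."""
    return np.average(np.stack(local_weights), axis=0, weights=np.asarray(sizes, float))

rng = np.random.default_rng(1)
# Three synthetic "hospitals" with different sample sizes.
sites = [(rng.normal(size=(n, 3)), rng.integers(0, 2, size=n)) for n in (200, 500, 300)]

global_w = np.zeros(3)
for _ in range(10):                               # communication rounds
    local_ws = [local_update(global_w, X, y) for X, y in sites]
    global_w = federated_average(local_ws, [len(y) for _, y in sites])

print("aggregated model weights:", global_w)
```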
5. ACP recommends that clinical safety and effectiveness, as well as health equity, must be a top priority for developers, implementers, researchers, and regulators of AI-enabled medical technology and that the use of AI in the provision of health care should be approached by using a continuous improvement process that includes a feedback mechanism. This necessarily includes end-user testing in diverse real-world clinical contexts, using real patient demographics, and peer-reviewed research. Special attention must be given to known and evolving risks that are associated with the use of AI in medicine.
Successful integration of AI in medicine will require the involvement of end-user physicians and clinicians, and when appropriate, patients, in the development and testing of AI tools and systems to ensure clinically appropriate AI uses and applications and to maximize the feasibility of AI adoption. Vendors, model developers, and health care systems should ensure that models are trained, maintained, and updated using the latest clinical practice guidelines. In addition, the safety, utility, and applicability of AI models are dependent on the quality and attributes of the data used for their development (59) (Appendix Figure 2). Flawed data can contribute to false-positive and false-negative results with clinically significant health and safety implications for patients (59).
Since the 1980s, ACP has advocated that efforts to assess new and emerging technologies should ensure that they are safe and effective before they become a part of common medical practice. The ACP has further recommended that health IT systems be tailored to emphasize patient safety improvement (62). More recently, ACP noted the following:

The movement of automated, AI-based systems into [diagnosis, therapy selection, and population health management] is a cause for concern by many physicians and others—specifically when considering care decisions regarding diagnosis and therapy selection [citation omitted]. There is justifiable concern that what may be initially presented as an assistant could easily become a risk to physician autonomy and a risk to patient safety. […] These concerns must be addressed satisfactorily before these technologies are permitted to enter the clinical workflows, and more research on the potential effects of the use of AI, as well as any emerging technology, in clinical workflows is needed (63).
Furthermore, context- and system-specific data, model design, use, and testing are necessary to ensure clinical safety and effectiveness (64). Pragmatic randomized controlled trials are recommended to test AI models in real-world settings. In addition, ACP recommends that the use of AI in the provision of health care should be approached by using a continuous improvement process that includes a feedback mechanism or other feedback process, such as a Plan-Do-Study-Act cycle (65). Continuous improvement processes should be implemented with the understanding that they can contribute to model deterioration when model-informed decisions are introduced into the model (66). Caution is necessary to ensure that these improvement processes do not give rise to more risk than they are intended to mitigate.
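The model-deterioration risk noted above can be made concrete with a toy simulation (all numbers and the single risk factor are assumptions for illustration): once clinicians act on a model's predictions, the retrained model sees outcomes already altered by those actions, and the learned risk association weakens.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy illustration of feedback-loop deterioration: interventions triggered by a
# model's own predictions alter subsequent outcomes, so retraining on those
# outcomes attenuates the learned risk association. All parameters are assumed.
rng = np.random.default_rng(0)

def simulate_cohort(n, model=None):
    x = rng.normal(size=(n, 1))                    # single hypothetical risk factor
    p = 1.0 / (1.0 + np.exp(-2.0 * x[:, 0]))       # true deterioration risk
    if model is not None:
        flagged = model.predict_proba(x)[:, 1] > 0.5
        p = np.where(flagged, 0.5 * p, p)          # clinicians intervene, halving risk
    y = rng.binomial(1, p)
    return x, y

# Round 1: train on outcomes unaffected by the model.
x0, y0 = simulate_cohort(5000)
m1 = LogisticRegression().fit(x0, y0)

# Round 2: retrain on outcomes already altered by acting on m1's predictions.
x1, y1 = simulate_cohort(5000, model=m1)
m2 = LogisticRegression().fit(x1, y1)

print("risk-factor coefficient before feedback:", round(m1.coef_[0][0], 2))
print("risk-factor coefficient after feedback: ", round(m2.coef_[0][0], 2))  # attenuated
```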
Given the rapid evolution of AI-enabled tools, there will be unknown risks that will need to be managed as the field changes. Beyond intrinsic AI problems that may arise from the development stages of the AI cycle, there is also a range of postdeployment risks upon implementation due to human factors, including biases (67).
An analysis of adverse events involving FDA-approved ML devices that were submitted to the FDA’s Manufacturer and User Facility Device Experience identified 266 events involving 25 devices (2). Consequences of events were categorized as “harm,” “near miss events with potential to harm if not for intervention to prevent it,” “hazards with potential to cause harm,” “consequences for health care delivery without specific patient harm,” “no consequences for health care delivery,” and “complaints which generally describe the users [sic] experience but do not indicate harm, hazard, or systemic problems qualifying for other categories” (2). Although 14 of these events were voluntarily reported, almost all 266 events qualified as 1 of 3 kinds of events subject to mandatory adverse event reporting: malfunctions (n = 238), injuries (n = 25), or death (n = 1) (2) (Appendix Figure 3).
6. ACP reaffirms that the use of AI and other emerging technologies in health care should reduce rather than exacerbate disparities in health and health care. To facilitate this effort:
a. ACP calls for AI model development data to include data from diverse populations for which resulting models may be used.
b. ACP calls on Congress, HHS, and other key entities to support and invest in research and analysis of data in AI systems to identify any disparate or discriminatory effects.
c. ACP recommends that multisector collaborations occur between the federal government, industry, nonprofit organizations, academia, and others that prioritize research and development of ways to mitigate biases in any established or future algorithmic technology.
As a matter of both equity and safety, data used to train AI models should be carefully selected and assessed for suitability for the intended populations, locations, and uses of the resulting AI model (Appendix Figure 2). Diverse populations should include those that are underrepresented, socially marginalized, and disadvantaged. The ACP previously advocated for efforts to ensure that use of new technologies like AI does not increase health care disparities, noting that “AI, ML, and other algorithmic technology, if not implemented with caution and appropriate regulations, can embed implicit biases into health care decision-making systems, which can in turn threaten patient health and quality of care” (68). The College’s concerns have been warranted given the findings of studies confirming that biases have been perpetuated by clinical algorithms to the detriment of Black patients and other underserved or underrepresented populations (for example, female patients, Hispanic patients, and patients with Medicaid insurance) (1, 69). For example, studies have indicated that certain dermatology-related algorithms perform worse on darker skin tones than lighter ones (70, 71). As Thadaney-Israni and Verghese (72) have pointed out, “Flawed or incomplete data sets that are not inclusive can automate inequality.” Similarly, a December 2023 comparative effectiveness review from the Agency for Healthcare Research and Quality found that, of 17 studies evaluating the effect of 18 algorithms on racial and ethnic disparities in health and health care, 11 studies identified 13 algorithms that may perpetuate or exacerbate racial and ethnic disparities, 5 studies identified 4 algorithms that may lessen disparities, and 1 study found that the algorithm studied likely had no effect on disparities (73). Therefore, research evaluating the effect of AI technology on the practice of medicine, patient access to care, and the quality and effectiveness of patient care, including assessments of whether AI use in medicine contributes to or drives biased or discriminatory health practices or inequitable health outcomes, is necessary.
Bias in AI is not only a data problem; it is also an ethical problem. How analyses are done (that is, which variables are chosen) is important. In a notable example, health care costs were used as a proxy for health care needs (1); such an assumption can result in bias when less money is spent on certain patient populations, despite their needs. In addition, biases can arise regarding when and how algorithms are deployed (that is, if they are used for certain patient groups and not others) (74, 75).
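A small numerical sketch (all quantities hypothetical) illustrates how the choice of proxy alone can produce the bias described above: if two groups have identical underlying need but one group historically incurs lower costs, a score trained on cost will refer that group for care less often.

```python
import numpy as np

# Hypothetical illustration of label-choice bias: cost used as a proxy for need.
rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, size=n)                  # 0 = group A, 1 = group B
need = rng.gamma(shape=2.0, scale=1.0, size=n)      # true need, identical by group

# Assume group B historically incurs ~40% lower costs for the same need
# (for example, because of access barriers) -- an illustrative assumption.
cost = need * np.where(group == 1, 0.6, 1.0)

# A "perfect" predictor of cost used as the risk score for care management.
risk_score = cost

# Refer the top 20% of scores to extra care resources.
referred = risk_score >= np.quantile(risk_score, 0.80)

for g, label in [(0, "group A"), (1, "group B")]:
    sel = group == g
    print(f"{label}: mean need {need[sel].mean():.2f}, "
          f"referral rate {referred[sel].mean():.1%}")
# Equal need, but the cost-based score refers group B far less often because
# the label encodes historical underspending rather than clinical need.
```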
7. ACP recommends that developers of AI must be accountable for the performance of their models. There should be a coordinated federal AI strategy, built upon a unified governance framework. This strategy should involve governmental and nongovernmental regulatory entities to ensure:
a. the oversight of the development, deployment, and use of AI-enabled medical tools;
b. the enforcement of existing and future AI-related policies and guidance; and
c. mechanisms to enable and ensure the reporting of adverse events resulting from the use of AI.
Independent, nongovernmental regulatory bodies, such as the Joint Commission, are necessary for oversight of AI models procured, developed, or deployed by governmental bodies. However, governmental bodies, such as the HHS Office of Inspector General, should also be involved in the oversight of AI-related technology used by governmental and nongovernmental entities. The College also supports recent calls for outcome-focused regulations (76) and for a public-private partnership to establish a “nationwide network of health AI assurance laboratories” (77). Regulatory oversight and performance monitoring are necessary throughout the entire tenure of the AI-enabled technology. If the use of AI-enabled technology is suspected to have contributed to an adverse event, the event and relevant details should be reported to the appropriate regulatory bodies, such as the FDA, and logged in public databases (for example, the FDA Manufacturer and User Facility Device Experience database). The College also supports proposals for HHS Office for Civil Rights enforcement against violations of section 1557 of the Affordable Care Act (which prohibits discrimination by covered health programs and activities based on race, sex, color, national origin, age, or disability) for the use of clinical algorithms in discriminatory ways (78, 79).
8. ACP recommends that in all stages of development and use, AI tools should be designed to reduce physician and other clinician burden in support of patient care.
Reducing unnecessary time, administrative, cognitive, and other burdens should be priorities in the design and development of AI-enabled devices to allow physicians to better care for patients. Artificial intelligence tools can be used to reduce administrative burden by performing patient intake, scheduling, and prior authorization functions, for example, and can decrease cognitive burden, for instance, by helping physicians get to the right diagnoses and treatments faster. Artificial intelligence should support more time for direct patient care by physicians and other clinicians. Artificial intelligence products should be sufficiently tested (that is, used by actual end users under real-world circumstances and all intended use contexts) before deployment to ensure usability and to identify and address problems and technologic burdens that may arise for clinicians and other members of the care team. Any mechanisms for clinicians to provide feedback on the performance of or any issues with the AI tool should not be burdensome to the clinician. The effects of AI-enabled burden reduction tools on burnout should be assessed.
New payment initiatives, especially those for value-based care, must support the use of AI technology as a mechanism to reduce burden and ideally improve quality. The ACP has previously advocated that “payment policies should create incentives for physicians and other health professionals to use health information technologies that have the functions and capabilities needed to improve clinical decision-making at the point of care, including functions designed to support care consistent with evidence-based guidelines, care coordination, and preventive and patient-centered care” (80). The ACP has also called for all involved parties to “support the development, adoption and use of innovative technologies that seamlessly enable enhanced and coordinated patient-centered care” (63). Artificial intelligence–enabled tools have the potential to improve the quality of patient care and reduce health care costs, thereby promoting value-based care, and it is critical that they take on this role rather than contribute to clinician burden.
9. ACP recommends that training be provided at all levels of medical education to ensure that physicians have the knowledge and understanding necessary to practice in AI-enabled health care systems.
To enable safe deployment, comprehensive educational training programs and resources are needed at the undergraduate medical education, graduate medical education, and attending physician levels to address the knowledge gaps of current health care professionals. Education and training regarding the foundational concepts of AI; the ethics; clinically effective and appropriate uses of AI in medicine; and the risks and unintended consequences of AI use, including its effect on health disparities, should be incorporated into educational materials.
Physicians are far less likely to use AI tools if they do not understand, or trust, the output of AI systems. Therefore, to increase and improve AI use and usefulness, the creation and dissemination of clear and comprehensive educational materials to clinicians and other end users of AI is crucial. Training should ensure that physicians remain able to make appropriate clinical decisions independently should AI decision support become unavailable and, more importantly, that they remain vigilant for errors in AI-generated or AI-guided decisions to protect patient safety.
10. ACP recommends that the environmental impacts of AI and their mitigation should be studied and considered throughout the AI cycle.
Environmental health is defined as the health effects associated with environmental factors, such as air pollution, water contamination, and climate change. The College has called for the health sector to adopt environmentally sustainable and energy-efficient practices to aggressively reduce its greenhouse gas emissions (81), particularly given that it accounts for 8.5% of U.S. greenhouse gas emissions overall (82). Although estimates of health care technology’s toll on greenhouse gas emissions are limited, we know that substantial computational resources involving immense amounts of data and energy consumption are required for the development of AI models (83). Well before the widespread proliferation of generative AI, it was reported that training just 1 AI model for NLP can emit 284,019.13 kilograms of carbon dioxide equivalent, nearly 5 times the lifetime emissions of the average American car (84). The carbon footprint of generative AI tools like LLMs is likely much higher (85, 86).
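As a rough check of the “nearly 5 times” comparison, and assuming the roughly 57,150-kg lifetime figure (fuel included) for an average American car used in the cited analysis:

$$
\frac{284\,019\ \text{kg CO}_2\text{e (training one NLP model)}}{\approx 57\,150\ \text{kg CO}_2\text{e (average U.S. car, lifetime including fuel)}} \approx 4.97
$$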
However, the utility of AI for climate change mitigation and the extent of the technology’s inherent contribution to climate change are still being explored and may be up for debate (87). Given the divergent commentary and findings of existing studies about the relationship between AI and greenhouse gas emissions (87, 88), we believe that efforts to quantify the effect of AI on climate are necessary but also that the dearth of standardized measures impedes our ability to address the potentially negative climate effects of AI (83, 89, 90).
Balancing Innovation, Privacy and Ethics: The Role of AI in Healthcare
ACP's Policy Position Paper "Artificial Intelligence in the Provision of Health Care" (1) has many valuable insights on the transformative potential of Artificial Intelligence (AI) in healthcare. However, it is crucial to fully acknowledge the ethical and privacy considerations that accompany these advancements. David Elliott and Eldon Soifer's analysis (2) of AI's implications for patient privacy and data security offers an overview of the challenges that AI integration into healthcare settings can pose. Their distinction between security interests and privacy concerns illuminates the intricate relationship between technological innovation and patient autonomy. As AI systems become more integrated into daily medical practice, processing and analyzing sensitive medical information, ensuring patient confidentiality becomes essential.
Consider implementing AI-driven predictive analytics in clinical decision-making. While these tools promise early disease detection and personalized treatment recommendations, they also raise questions about data transparency, algorithmic bias, patient privacy, and informed consent. Without robust safeguards, AI technologies may exacerbate disparities in healthcare access and erode the sacred doctor-patient relationship.
The American Medical Association's recent report, "The Emerging Landscape of Augmented Intelligence in Health Care" (3), is also commendable in shaping the responsible use of AI in medicine. The AMA's advocacy for AI innovation to align with patient-centered values and professional ethics, development of AI principles for healthcare, and promotion of regulatory oversight and collaboration between healthcare and technology stakeholders is a significant step towards ensuring ethical AI use in healthcare.
As we embark on this journey, we must empower healthcare providers to navigate AI integration and its ethical and patient privacy implications. By equipping physicians with the knowledge and tools to evaluate AI technologies critically, we can uphold the integrity of the patient-doctor relationship and protect patient rights in an increasingly digital healthcare landscape. This is pivotal in ensuring AI's responsible and ethical use in healthcare.
References
1. Daneshvar N, Pandita D, Erickson S, Sulmasy LS, DeCamp M; ACP Medical Informatics Committee and the Ethics, Professionalism and Human Rights Committee. Artificial intelligence in the provision of health care: an American College of Physicians policy position paper. Ann Intern Med. 2024. doi:10.7326/M24-0146
2. Elliott D, Soifer E. AI technologies, privacy, and security. Front Artif Intell. 2022;5:826737. doi:10.3389/frai.2022.826737
3. American Medical Association. AMA Future of Health: The Emerging Landscape of Augmented Intelligence in Health Care. 26 February 2024.
In Reply
We thank Drs. Mason and Cote and Mr. Richards for their thoughtful comments on this American College of Physicians position paper. We agree there is a need “to fully acknowledge the ethical and privacy considerations that accompany these advancements,” which is why privacy, along with confidentiality and transparency, is discussed throughout the paper. In addition, the College’s ethics committee plans to expand on this initial paper with more detailed treatment of the ethical implications of artificial intelligence (AI), including privacy. The commentators reference an important element of privacy: the interest people have in privacy, including data privacy, as a matter of dignity and respect for autonomy, irrespective of how information may or may not be used (or “privacy per se” (1)). This calls to mind the concept of a “wrong without harm” (2): an action can be unethical or illegal even if an individual is not injured (a dignitary harm). Practically, this means a need for much greater awareness and ethical scrutiny of exactly what patient data are collected, even when collection does not result in physical or other injury to patients. At the same time, it must be acknowledged that health care entities have long used anonymized and deidentified data for analytic and algorithmic purposes, even before the widespread introduction of AI, and, of course, the limits of deidentification must also be recognized. The current ACP statement (3) and prior ones (4) emphasize strong, patient-centered privacy and confidentiality protections and call for “increased transparency and public understanding and improved models of consent about the collection, exchange, and use of personal health information” (4). While transparency helps address privacy concerns by informing patients about data collection, disclosures, and uses, we thank the commentators for encouraging everyone to think more deeply about this rich, fundamental concept of privacy.
References
1. Elliott D, Soifer E. AI technologies, privacy, and security. Front Artif Intell. 2022;5:826737.
2. Feinberg J. Harm to Others. New York: Oxford University Press; 1984.
3. Daneshvar N, Pandita D, Erickson S, Snyder Sulmasy L, DeCamp M; ACP Medical Informatics Committee and the Ethics, Professionalism and Human Rights Committee. Artificial intelligence in the provision of health care: An American College of Physicians Policy Position Paper. Ann Intern Med. 2024;177(7):964-967. doi:10.7326/M24-0146
4. Rockwern B, Johnson D, Snyder Sulmasy L; Medical Informatics Committee and Ethics, Professionalism and Human Rights Committee of the American College of Physicians. Health information privacy, protection, and use in the expanding digital health ecosystem: a position paper of the American College of Physicians. Ann Intern Med. 2021;174:994-998. [PMID: 33900797] doi: 10.7326/M20-7639.
Disclosures:
MD serves as a consultant to the American College of Physicians on ethics policy issues and is a member of the Society of General Internal Medicine Ethics Committee. LSS is Director of the Center for Ethics and Professionalism at the American College of Physicians and a member of the Society of General Internal Medicine Ethics Committee.