Investigation of AI Training by Australian Radiology Provider Provides Important Reminder for US Healthcare Providers | Jackson Lewis P.C. | JD Supra

If there is one thing artificial intelligence (AI) systems need, it is data, and lots of it, as training is essential for an AI system to succeed at a given use case. A recent investigation by Australia's privacy regulator into the country's largest medical imaging provider, I-MED Radiology Network, illustrates concerns about the provision of medical data to AI systems. The investigation may offer important insights for healthcare providers in the US that are also trying to leverage the benefits of AI while grappling with where those applications intersect with privacy and data security laws, including the Health Insurance Portability and Accountability Act (HIPAA).

The Office of the Australian Information Commissioner (OAIC) has initiated an inquiry into allegations that I-MED Radiology Network shared patient chest x-rays with Harrison.ai, a health technology company, to train AI models without first obtaining patient consent. According to reports, a leaked email indicates that Harrison.ai distanced itself from responsibility for patient consent, asserting that compliance with privacy regulations was I-MED's obligation. Harrison.ai has since stated that the data used was de-identified and that it complied with all legal obligations.

Under Australian privacy law, particularly the Australian Privacy Principles (APPs), personal information can only be disclosed for its intended purpose or for a secondary use that the patient would reasonably expect. It remains unclear whether training AI on medical data qualifies as a reasonably expected secondary use.

The OAIC's preliminary inquiries into I-MED Radiology may ultimately clarify how medical data can be used in AI contexts under Australian law and may offer insights for healthcare providers across borders, including those in the United States.

The investigation of I-MED raises significant issues that US healthcare providers subject to HIPAA should consider, especially given the growing adoption of AI tools in medical diagnostics and treatment. To date, the US Department of Health
and Human Services (HHS) has not provided any specific guidance for HIPAA covered entities or business associates concerning AI. In April 2024, HHS publicly shared its plan for promoting responsible use of artificial intelligence (AI) in automated and algorithmic systems by state, local, tribal, and territorial governments in the administration of public benefits. In October 2023, HHS and the Health Sector Cybersecurity Coordination Center (HC3) published a white paper entitled "AI-Augmented Phishing and the Threat to the Health Sector." More is expected.

HIPAA regulates the privacy and security of protected health information (PHI), generally requiring covered entities to obtain patient consent or authorization before using or disclosing PHI for purposes outside of certain exceptions, such as treatment, payment, or healthcare operations (TPO).

In the context of AI, the use of de-identified data for research or development purposes, such as training AI systems, can generally proceed without specific patient authorization where the data meets HIPAA's strict de-identification standards. HIPAA generally defines de-identified information as data from which all identifiable information has been removed in such a way that it cannot be linked back to the individual.

However, US healthcare providers must ensure that de-identification is properly executed, particularly when AI is involved, as the re-identification risks in AI models can be heightened due to the vast amounts of data processed and the sophisticated methods used to analyze it. Therefore, even when de-identified data is used, entities should carefully evaluate the robustness of their de-identification methods and consider whether additional safeguards are needed to mitigate any risks of re-identification.

While HIPAA does not currently impose specific obligations on AI use beyond general privacy and security requirements, the I-MED case highlights how AI-driven data practices can attract regulatory attention. US healthcare providers should be
prepared for similar scrutiny from federal and state regulators as AI becomes more integrated into healthcare systems.

In addition, there is increasing pressure on policymakers to update healthcare privacy laws, including HIPAA, to address the unique challenges posed by AI and machine learning. Providers should stay informed about potential regulatory changes and proactively implement AI governance frameworks that ensure compliance with both current and emerging legal standards.

The ongoing investigation into I-MED Radiology's alleged misuse of medical data for AI training underscores the importance of ensuring legal compliance, patient transparency, and robust data governance in AI applications. For US healthcare providers subject to HIPAA, the case offers several key takeaways.

By staying proactive, US healthcare providers can harness the power of AI while maintaining compliance with privacy laws and safeguarding patient trust.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.
Attorney Advertising