Latest Posts
Practical Guidance to Enable Health Care Compliance Programs to Assess and Monitor AI

Overview
Artificial intelligence is transforming how healthcare organizations deliver care, manage operations, and ensure compliance. But with opportunity comes risk, and oversight is no longer optional. In this episode of Speaking of Health Law, Andrew Mahler, Vice President of Privacy and Compliance Services at Clearwater, is joined by Kathleen Healy, Partner at Robinson Cole, and Robert Martin, Senior Legal Counsel at Mass General Brigham. Together, they break down the legal and operational steps compliance teams can take to assess and monitor AI systems effectively. Their discussion draws from their presentation at AHLA's 2025 Complexities of AI in Health Care Conference and offers strategic, real-world insights for building responsible AI oversight programs.

What You'll Learn
How to design a scalable, risk-based AI governance framework
Key roles in multidisciplinary governance committees
Strategies to assess bias, fairness, and transparency in AI models
How HIPAA and the 21st Century Cures Act intersect with AI deployments
What HHS, FTC, and other regulators are signaling about AI enforcement
Best practices for auditing AI tools post-implementation

Featured Experts
Andrew Mahler, VP, Privacy and Compliance Services, Clearwater
Kathleen Healy, Partner, Robinson Cole
Robert Martin, Senior Legal Counsel, Mass General Brigham

Why It Matters
As AI becomes embedded across healthcare workflows, from clinical decision support to ambient documentation, compliance teams need to stay ahead of rapidly evolving legal and regulatory landscapes. This episode provides practical guidance to help organizations move from reactive oversight to proactive governance.

Take the Next Step
Want to assess your organization's readiness to manage AI-related risks? Connect with Clearwater to learn how our privacy, compliance, and cybersecurity experts can support your AI governance strategy. Schedule a consultation.

Clearwater helps healthcare organizations implement:
Comprehensive, OCR-aligned risk analysis and risk response
Purpose-built incident response plans and tabletop exercises
24/7 managed detection and response with IRM|Pro® analytics

Contact us to learn more.

Podcast hosted and originally published by AHLA.
Ahead of Intelligent Health (13-14 September 2023, Basel, Switzerland), we asked Yurii Kryvoborodov, Head of AI & Data Consulting at Unicsoft, for his thoughts on the future of AI in healthcare. Do you think the increased usage of Generative AI and LLMs will have a dramatic impact on the healthcare industry and, if so, how? Generative AI is just one part of the disruptive impact of AI technology on the healthcare industry. It allows organizations to dramatically reduce time, effort, costs, and the chance of mistakes. Generative AI and LLMs are being applied to automating clinical documentation, drug discovery, tailoring treatment plans to individual patients, real-time clinical decision support and health monitoring, extracting valuable insights from unstructured clinical records, streamlining administrative tasks like billing and claims processing, and providing instant access to comprehensive medical knowledge. And the list continues.
We sat down with Benjamin von Deschwanden, Co-Founder and CPO at Acodis AG, to ask him his thoughts on the future of AI in healthcare. Do you think the increased usage of Generative AI and LLMs will have a dramatic impact on the healthcare industry and, if so, how? I think that the strength of Generative AI lies in making huge amounts of information accessible without needing to manually sift through the source material. Being able to quickly answer any question is going to be transformative for everyone working with increasingly large data sets. The challenge will be to ensure that the information we get by means of Generative AI is correct and complete, especially in healthcare, as the consequences of wrong data can be fatal. We at Acodis are actively working on practical applications of Generative AI inside our Intelligent Document Processing (IDP) Platform for Life Science and Pharma clients to drive efficiency and accelerate time to market, whilst controlling the risks.
Intelligent Health 2024 returns to Basel, Switzerland on 11th–12th September. We’ve got prominent speakers. An extensive programme. Groundbreaking advancements in #HealthTech. And much, much more. Our incredible 2024 programme will dive deeper than ever before. From sharing the latest innovation insights to exploring use cases of AI application in clinical settings from around the world. All through our industry-renowned talks, limitless networking opportunities, and much-loved, hands-on workshops. Read on to discover what themes await at the world’s largest AI and healthcare summit.
We sat down with Margrietha H. (Greet) Vink, Erasmus MC’s Director of Research Development Office and Smart Health Tech Center, to ask her for her thoughts on the future of AI in healthcare. Do you think the increased usage of Generative AI and LLMs will have a dramatic impact on the healthcare industry and, if so, how? The integration of Generative AI and LLMs into the healthcare industry holds the potential to revolutionise various aspects of patient care, from diagnostics and treatment to administrative tasks and drug development. However, this transformation will require careful consideration of ethical, legal, and practical challenges to ensure that the benefits are realised in a responsible and equitable manner.