Written by Brian C. Moon - Feb. 24, 2025

Keynote Address: The Ethical Imperative of AI in Public Health

Panel Discussion: Real-World Ethical Considerations in AI
A highlight of the forum was the panel discussion, which brought together experts in medicine, public health, AI research, and community engagement. Moderator Dr. Stacy Lloyd, Assistant Professor, Department of Pathobiology, Tuskegee University, was joined by panelists Dr. Ryan Melvin, Associate Professor and Endowed Faculty Scholar for Data Science, Artificial Intelligence, and Machine Learning, UAB School of Medicine, Department of Anesthesiology and Perioperative Medicine; Dr. Carol Agomo, Program Director for Community Outreach and Engagement, Division of General Internal Medicine & Population Science/Forge AHEAD Center, University of Alabama at Birmingham; Mr. Chris Williams, community member and Forge AHEAD CAB member; and Dr. Candy Tate, Museum Curator, Tuskegee University. Together, they engaged in a thought-provoking discussion of the ethical challenges of AI applications in healthcare, including:
- AI-driven insurance claim denials: A recent class-action lawsuit underscored how AI-based decision-making in Medicare Advantage claims may harm rural communities. Panelists debated the need for regulatory oversight to prevent unethical use of AI in insurance and healthcare access.
- Bias in AI-driven decision-making: AI systems, often trained on historically biased datasets, risk perpetuating discrimination in medical diagnoses and treatment recommendations. Panelists emphasized the importance of human oversight, transparency, and equitable training data.
- The role of community engagement: Building trust in AI requires inclusive conversations with the populations most impacted. Speakers emphasized collaborative decision-making that includes patients, healthcare providers, and ethicists in AI development.
Lightning Talks: AI Developers in Action
- Open Knowledge Networks: Researchers from the Alabama Center for Advancement of AI presented a system designed to aggregate federally available healthcare datasets, making complex medical data more accessible and usable for public health and clinical applications.
- AI for clinical research and diagnostics: Dr. Ryan Melvin demonstrated AI-powered tools that assist with literature searches, grant writing, and patient risk assessment—showcasing how AI can streamline healthcare decision-making for medical professionals.
- Ethical considerations in AI-assisted medical diagnosis: Presenters discussed how AI models can enhance efficiency while introducing risks, emphasizing the need for human oversight, accountability, and regulatory safeguards in AI-driven healthcare.
Key Takeaways: Ethical AI for an Equitable Future
Throughout the forum, a recurring theme was how to balance innovation with ethical responsibility. Attendees left with key insights on:
- The necessity of transparency and explainability in AI-driven healthcare solutions.
- The importance of regulatory policies that protect patients from AI-driven harm.
- The role of interdisciplinary collaboration between computer scientists, clinicians, ethicists, and community members.
Moving Forward: The Ongoing Conversation on AI in Healthcare

The conversation does not end here: the CCTS invites all stakeholders to continue engaging in future discussions, policy development, and research initiatives to shape an AI-driven healthcare landscape that is both innovative and just. For those unable to attend, or who wish to revisit the conversations, recordings are available on the CCTS Video channel. Subscribe to the weekly CCTS Digest and follow the CCTS on LinkedIn to stay informed of upcoming CCTS events.