Molly Wasko, Ph.D., MBA, UAB University Professor and associate dean for Research, Innovation and Faculty Success in the Collat School of Business, has partnered with the Marnix E. Heersink Institute for Biomedical Innovation to instruct its new graduate certificate course, Leadership & Ethics of AI in Medicine.
The institute's AI in Medicine Graduate Certificate provides current and future health care leaders with important foundations in understanding and applying artificial intelligence (AI), as well as the safety, security, and ethics of using AI to improve the health and lives of patients. The Leadership & Ethics of AI in Medicine course introduces students to leadership, ethical, and strategic skills; responsible AI; AI strategy; people and organizations; and the implementation of AI in medicine.
Wasko has over 15 years of leadership experience in the areas of high-technology start-ups, strategic planning, and organizational innovation. She believes that innovation and new knowledge creation come from collaborating across organizational boundaries. This belief drives her primary focus: building collaborative academic, research, and commercialization relationships across UAB.
The Heersink communications team met with Dr. Wasko to discuss the ethics of AI and what students can expect from this new program at UAB.
Q: How did you become interested in AI?
When I was 16, a sophomore in high school in 1983, I took a computer programming course. I remember my first assignment, where I had to write a program that would calculate the length of the third side of a right triangle based on a user's input of the first two sides. I was totally blown away thinking about the potential for putting computers to work in ways that help people. I knew then that computers were going to fundamentally change how, where, and when work gets done, and I've been studying the impacts of computers on social systems and work ever since.
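For readers curious about the assignment Wasko describes, here is a minimal sketch in modern Python (her 1983 version would have been written in a language of that era); the prompts and function name are illustrative, not her original code. It applies the Pythagorean theorem to compute the hypotenuse from two user-supplied side lengths.

```python
import math

def hypotenuse(a: float, b: float) -> float:
    """Return the length of the hypotenuse of a right triangle
    whose legs have lengths a and b (Pythagorean theorem)."""
    return math.hypot(a, b)  # equivalent to sqrt(a**2 + b**2)

if __name__ == "__main__":
    # Read the two known side lengths from the user, then print the third side.
    a = float(input("Length of first side: "))
    b = float(input("Length of second side: "))
    print(f"Length of the third side: {hypotenuse(a, b):.4f}")
```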
Q: How did you get connected with UAB?
I joined UAB in 2010 as the department chair for Management, Information Systems & Quantitative Methods (MISQ) in the Collat School of Business. MISQ brings together management, information systems, quantitative methods, and entrepreneurship, a perfect match with all of my favorite disciplines.
Q: Why did you choose to teach the Leadership & Ethics of AI in Medicine course?
My area of research expertise is organizational innovation, where research has consistently found that new innovations come from ideas crossing boundaries. So I put myself out there to make interesting connections across disciplines and accelerate the sharing of ideas. One thing students may not realize is that we faculty learn just as much from them as they do from us. Getting to work with students from multiple disciplines and different professional experiences is my idea of great fun! This course gives me a chance to combine many of my passions, leadership, ethics, organizations, and technology, in the context of caring for the health and well-being of people.
Q: What ethical dilemmas arise when using AI in medical practice?
Generally, ethics refers to taking actions that enact a set of values and principles. The really tough ethical dilemmas arise not from choosing between right and wrong but from having to choose between two rights. Ethics also operates across multiple levels of action (individual, group, and organizational) and from different points of view: the technology developer, the health care leader, the front-line health care workers, patients, and caregivers. So, taking an action that seems ethical from a leader's point of view may come into direct conflict with another's perspective. In these situations, decisions must be made that balance idealism with pragmatism. My guiding principle is that technology should be designed and used in ways that always help humanity. If we consider actions that balance what is in the best interest of our patients, serve our health care workers, and are financially responsible, then I believe we will see amazing transformation through AI. Ultimately, with responsible and ethical leadership, I believe that AI has the potential to make health care more human.
Q: What makes a great leader in the innovation field?
Innovation is the implementation of an idea in the wild. People must adopt the new approach and actually change how they do things in order for it to be an innovation. A great leader of innovation is one who can motivate their organization to see the possibilities of a better future. Building this vision always starts with "why," communicating the inspiring purpose behind the change. There is so much real human fear surrounding AI at the moment. Just imagine if you're a top scientist in your field, like a radiologist, and you see headlines about how AI can read scans faster, with greater accuracy, and can work 24/7 without getting tired. That's a scary thought, and one with real consequences for some people. There is a real fear when you believe that your specialized knowledge and even your whole career could be made obsolete. Great leaders of innovation are able to paint a vision of the future they want to create in a way that is inclusive and brings people along. That vision should illustrate how AI will work collaboratively with people to make health care more human in how patients are cared for.
Q: What can students expect to learn from the course?
This course provides a comprehensive overview of how to lead ethical and responsible AI-driven innovation in health care organizations. Students will learn how to apply an ethical viewpoint to assess the external competitive environment and develop an internal strategic vision to guide the responsible implementation of AI solutions in medicine. As with all tough ethical dilemmas, there is no single right or wrong answer, so this course also provides a safe environment for students to present and challenge ideas in order to develop their own personal ethical perspective of AI.
Learn more about this program. Applications are due by Aug. 1 for the Fall 2023 semester.