Boon or burden? Artificial intelligence comes to paediatrics


As a paediatric bioethicist at Seattle Children’s Research Institute and a faculty scientist at the University of Washington School of Medicine, Kate MacDuffie has her finger on the pulse of emerging technologies and their applications in healthcare. Over the course of her career, she’s witnessed advances in the use of genome sequencing, gene therapies and brain organoids. None, however, has been as sudden or disruptive as the current burst of artificial intelligence (AI) technologies.

“It’s definitely everywhere in my day-to-day now,” MacDuffie says. “I never thought of myself as becoming an AI researcher, but it has become unavoidable.”

This July, MacDuffie joined more than 200 other paediatricians, ethicists, technologists and philosophers at the Pediatric Bioethics Conference, hosted for the past 19 years by Seattle Children’s Treuman Katz Center for Pediatric Bioethics and Palliative Care. The meeting, the first and largest of its kind, is a singular fixture on the paediatrics calendar, focusing on emerging issues facing researchers and clinicians today. This year, organizers tackled the arrival of AI tools and the ways in which they intersect with big data, and everyone arrived in Seattle with their own stories.

“We were the first centre to be created to solely focus on paediatric bioethics, and this meeting is our attempt to bring together experts to think through the relevant issues in our field,” says Douglas Diekema, a paediatric bioethicist at Seattle Children’s and director of education at the centre. “More than any other year, we are really addressing a cutting-edge topic that’s taking shape as we discuss it.”

Giving paediatrics its due

Artificial intelligence, and the machine learning (ML) underlying today’s AI technology, is hardly new, even in medicine. Researchers developed rudimentary forms of AI in the 1950s, and by the late 1980s they were using computers to study and diagnose more than 500 diseases. More recently, over just a few short years, an explosion of generative AI-based tools, from chatbots to advanced algorithms, has changed the academic and medical landscape entirely, promising to assist with everything from writing research papers to assessing patient symptoms and mining for new drugs.

Undoubtedly, these tools offer the opportunity to claw back some of the time practitioners must now dedicate to administrative minutiae, but determining how they might be applied most usefully and ethically remains an open question.

Paediatrics presents challenges beyond those of adult medicine, says Lainie Friedman Ross, chair of the Department of Health Humanities and Bioethics at the University of Rochester, and children cannot be thought of as ‘small adults’. Children are more diverse in terms of their growth and metabolism, meaning that health data collected on adult populations cannot be easily extrapolated to children. Children can also rarely give legal consent, so the usual patient-physician relationship becomes a triad, with a child’s parents contributing to decision-making.

“There’s definitely a debate in AI ethics of whether computers and machine learning are a fourth party in the paediatric setting, or just another tool,” she says.

Taking the first steps

The earliest iterations of AI in paediatrics have shown both the technology’s promise and pitfalls. While most use cases remain hypothetical, researchers and clinicians have leveraged chatbots to summarize notes from patient visits, simplify the language in informed consent1 and medical discharge2 documents, and quickly translate instructions into many languages3. Most strikingly, several companies are now offering mental health therapy to adolescents4 through chatbots that have not yet been vetted by the US Food and Drug Administration.

Proponents of these tools say they will democratize healthcare knowledge and resources and turn clinicians into better doctors. But others are wary of losing the nuance that comes with a face-to-face visit by offloading too much of their work to algorithms. Beyond the loss of what Ross calls the ‘art’ of bedside medicine, AI-based tools often raise concerns around data privacy (even anonymized data can sometimes be identifiable) and have been shown to perpetuate existing biases5 against vulnerable groups. There is also a further, far-reaching consideration: if poor decisions are made on the basis of biased AI-generated information, who is then held accountable?

“Because of these concerns, a lot of us are reticent to have AI replace the decision-making of humans,” Ross says. “In my mind, the best way for us to use AI is to remember that it’s just a tool that we’re responsible for. The closer it gets to autonomous medical decision-making, the more meticulous scrutiny it needs.”

The law weighs in

The law has only recently begun to apply that scrutiny to medical AI, and the legal landscape for liability remains “unsettled”, says I. Glenn Cohen, faculty director of the Petrie-Flom Center for Health Law Policy, Biotechnology and Bioethics at Harvard Law School. “If there’s going to be a robust case law on the subject, it hasn’t shown up yet.”

Cohen says that controversies rooted in data privacy and discrimination, rather than medical malpractice, will probably become more common. While electronic health records are tightly protected by laws like HIPAA that ensure the privacy and security of health information, they represent just the tip of the iceberg of our personal data. Because so much of our lives is broadcast online, it’s likely that “inferences about people’s health states — drawn from sources that are not shielded by HIPAA — are going to be much easier to make based on publicly available data when AI is in the picture”, Cohen explains.

Institutions and hospitals will need to take charge of assessing AI’s role in their systems and vet it as they would any other new tool, Cohen says. Responding to the need for a standardized approach to evaluating research proposals that include AI, Seattle Children’s recently established an Artificial Intelligence Review Board.

Douglas Opel, a faculty scientist at the University of Washington School of Medicine and director of the Treuman Katz Center at Seattle Children’s, says that most of the proposals they’ve seen so far intend to leverage AI to streamline medical care rather than to interact directly with patients.

“We acknowledge that AI is here to stay, and we need to understand how to integrate it into paediatric healthcare and research so that we’re not always reacting to new advances after the fact,” Opel says. “When we hold our conferences, we hope it’s not just people coming together, but also learning from each other, beginning collaborations, and setting in motion efforts that can move this field forward.”

The Pediatric Bioethics Conference, hosted by the Treuman Katz Center for Pediatric Bioethics and Palliative Care, is a summer event held annually in Seattle, Washington. Attendees gather to explore current ethical topics and questions in paediatrics, with a different focus each year. Learn more about the conference and research happening at the centre.
