This editorial examines the interplay between academic freedom, artificial intelligence (AI), and the phenomenon Professor Julian Savulescu calls the “blandification of ethics.” Drawing on past controversies in medical ethics and editorial decision-making, he argues that mounting pressure, both external and internal, is leading academics and journal editors to self-censor by simplifying or sanitising contentious arguments (i.e. “blandification”) in order to minimise backlash. The editorial warns that such self-censorship risks undermining the richness, diversity, and critical edge of ethical discourse. At the same time, Professor Savulescu explores how advances in AI, especially large language models (LLMs), pose profound challenges and opportunities for medical ethics: such systems may be used not only to assist in writing and scholarship, but also to represent patients or doctors, offer “artificial moral advice,” or even surpass human performance in moral reasoning. He reflects on the hazards and responsibilities this shift brings: should ethics be “blandified” to placate critics, or should it remain bold, provocative, and pluralistic even as AI tools are integrated into its practice? The piece calls for preserving academic freedom, resisting over-sanitisation, and engaging rigorously and courageously with AI’s transformative potential in ethics.