This programme will refine a novel methodology for deriving and applying wide public values to policy in contexts of value disagreement and controversy. 

Building on the work of John Rawls, Collective Reflective Equilibrium in Practice (CREP) brings theoretical considerations (ethical principles, theories, concepts, professional guidelines, policies) into coherent equilibrium with public values, which are screened for prejudice and inconsistency and checked for alignment with fundamental theories, principles and concepts, to inform value-based policy in the face of disagreement.

We will build methodological platforms using demographically representative community panels. We will establish proof of concept initially with a sample of the British and American public, and then through the Singapore Health Opinion Population Survey (HOPS) panel. 

We will share the resulting enhanced CREP methodology across the Southeast Asia Bioethics Network (SEABN) to provide opportunities to identify areas of radical disagreement across pluralistic value systems. 

To complement this, we will develop novel reward-learning algorithms, a form of artificial intelligence (AI), to address moral disagreements. Our algorithms will aggregate data from inconsistent sources, drawing on techniques for handling moral uncertainty and disagreement and using Bayesian inference to provide a mathematical framework for knowledge aggregation and understanding.
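As a rough sketch of what Bayesian aggregation across inconsistent sources can look like (the panels, counts, noise levels and model below are illustrative assumptions, not the project's actual algorithms), the following Python snippet combines endorsement counts from several hypothetical panels, each with a different reliability, into a single posterior over the underlying level of public endorsement:

```python
import numpy as np
from math import comb

# Hypothetical data: three panels report how many respondents endorsed a policy.
# Counts and noise levels are illustrative, made-up numbers.
sources = [
    {"endorse": 72, "total": 100, "noise": 0.05},  # low-noise panel
    {"endorse": 41, "total": 80,  "noise": 0.20},  # noisier, partly inconsistent panel
    {"endorse": 55, "total": 60,  "noise": 0.10},
]

# Latent quantity: theta, the underlying population-level degree of endorsement.
theta = np.linspace(0.001, 0.999, 999)
posterior = np.ones_like(theta)  # flat prior over theta
step = theta[1] - theta[0]

for s in sources:
    # A recorded answer flips with probability `noise`, so the observed
    # endorsement rate is a noisy image of the latent value theta.
    p_obs = theta * (1 - s["noise"]) + (1 - theta) * s["noise"]
    likelihood = (comb(s["total"], s["endorse"])
                  * p_obs ** s["endorse"]
                  * (1 - p_obs) ** (s["total"] - s["endorse"]))
    posterior *= likelihood

posterior /= posterior.sum() * step  # normalise to a density over theta

mean = np.sum(theta * posterior) * step
print(f"Posterior mean endorsement: {mean:.3f}")
```

In a model of this shape, noisier sources contribute flatter likelihoods, so disagreement between panels widens the posterior rather than being averaged away.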

This programme will deliver new methods that incorporate flexible models and capture the imprecision and variability in human expressions of value and objectives. The ethical issues raised by using AI to analyse people’s values will be interrogated explicitly, so that the project provides a transparent and accountable approach to value-based policy making in current and future social controversies.