The need to improve reasoning with uncertain evidence, and in the presence of differing values, is widely recognised as critical in key ESRC research topics (Environment – climate change; Health & Wellbeing – medical and social care; Public Services – regulating risk, e.g. food safety; Politics & Governance – law; Society – intelligence analysis and the forensic sciences). Yet the methods used by, and the psychological factors influencing, decision-makers vary drastically both between and within these domains. Moreover, when different decision-makers are presented with identical options and evidence, and even the same view of the world, their proposed outcomes will inevitably be incompatible if their personal or group-affiliated goals, priorities and incentives differ. Biases will lead each decision-maker to use the evidence to fit their own narrative and meet their own goals, while remaining unaligned with the goals of others.
There has been no systematic attempt to handle such incompatibilities in models or values. Instead, modellers have generally assumed that domain experts and decision-makers will take a rational 'scientific' view of the uncertain world, and the models used generally fail to capture the "reason for reasoning". Our view is that causal narratives and models are necessary to provide explanatory background to decisions and actions, yet are insufficient if they cannot generate successful predictions about, or improve, the future. Our chosen domains (medical, regulatory risk, law, intelligence analysis, and forensic sciences) are increasingly connected, yet each has identified and tried to solve its own problems and failed in different ways because it has taken a 'silo approach' to reasoning.
The aim of the Centre is to continually improve decision-making in the chosen critical domains by closing the gap between the perspectives used for modelling and the goals, incentives, and priorities that ultimately drive choices, decisions and conclusions. We will do this by developing a framework and supporting methods (implemented in a continually evolving 'toolkit') to model these meta-level challenges, to promote greater clarity, improve practice and expose value and perception differences. Potential errors in uncertain reasoning could also be diagnosed, and ultimately treated, using these methods. The framework and methods will provide a normative basis for how reasoning ought to be conducted in the presence of uncertainty and of differences in values and beliefs, at the level of the individual and the collective, across multiple domains.
To continually improve decision-making in critical problems that involve multiple stakeholders in different domains with different (potentially conflicting) values, goals, incentives, and priorities. The interconnected domains of interest are: medical, regulatory risk, law, intelligence analysis, and forensic sciences.
Expose and model the differences that ultimately drive choices, decisions and conclusions.
Exploit opportunities to align domains to achieve better outcomes, focusing especially on reasoning with uncertain evidence in key ESRC research topics.
Provide a normative basis for how reasoning ought to be conducted in the presence of uncertainty and of differences in values and beliefs, at the level of the individual and the collective, across multiple domains.
Develop a framework and supporting methods (implemented in a continually evolving 'toolkit') to model these meta-level challenges, to promote greater clarity, improve practice and expose value and perception differences (a minimal sketch of the kind of model involved follows this list).
Diagnose and ultimately treat potential errors in uncertain reasoning.
Study a set of domain-specific problems in which similarities and differences in how actors reason can be identified and generalised, informed by key stakeholders (practitioners, decision-makers, policy makers, regulators) managing actual social infrastructure problems.
Develop a formal, psychologically informed multivariate framework that can be subjected to empirical test using the most up-to-date methods, so that any recommended measures to represent, communicate, change and improve how actors collectively reason can be rigorously evaluated.
Communicate the challenges and how they can be practically surmounted, to ensure continual improvement, impact and uptake of the framework, and feed back gaps and flaws in the toolkit that require fresh scoping and re-evaluation.
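To make the intended style of model concrete, the fragment below is a minimal, purely illustrative sketch of the kind of causal probabilistic model (Bayesian network) that the toolkit is expected to build on. It assumes the open-source Python library pgmpy; the two nodes, their names and all probabilities are hypothetical and are not taken from the proposal.

```python
# Minimal illustrative Bayesian network: one hypothesis node H and one evidence node E.
# All numbers are hypothetical; in practice the prior would come from expert judgment
# and the conditional probabilities from statistical data.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

model = BayesianNetwork([("H", "E")])  # H causes (or explains) the evidence E

# Prior belief in the hypothesis: P(H = true) = 0.01
prior_h = TabularCPD("H", 2, [[0.99], [0.01]])

# How likely the evidence is under each state of H (hit rate 0.98, false-positive rate 0.05)
likelihood_e = TabularCPD(
    "E", 2,
    [[0.95, 0.02],   # P(E = false | H = false), P(E = false | H = true)
     [0.05, 0.98]],  # P(E = true  | H = false), P(E = true  | H = true)
    evidence=["H"], evidence_card=[2],
)

model.add_cpds(prior_h, likelihood_e)
assert model.check_model()

# Posterior belief in H once the evidence is observed to be true
posterior = VariableElimination(model).query(["H"], evidence={"E": 1})
print(posterior)
```

Even this toy example shows the pattern the Centre is concerned with: a prior encoding judgment, a likelihood informed by data, and a posterior that is often weaker than intuition suggests. A realistic model would contain many interrelated hypotheses and items of evidence.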
The social and economic costs of poor decision-making in critical areas are enormous and growing. In the Justice System, in 2014-15 the UK Criminal Cases Review Commission reported over 1,500 cases of appeal, with 95,000 successful appeals against criminal conviction in the last two decades. Between 2010 and 2016, 22% of cases at the Court of Appeal were deemed unsafe because of misleading criminal evidence in the original trial. A 2016 US report on criminal injustice, drawing on 700 cases, cited a cost of over 200,000 years of human life and $282m to the economy. In Health and Social Care, a 2014 Department of Health report claimed that £1-2.5bn is wasted by the NHS on preventable errors, with medication errors contributing to 22,000 deaths a year. Ofsted has recently identified that too many vulnerable children face a "clear and present risk of harm" due to failings in child protection departments.
The BRAVE Centre will have a long-term transformational impact on the way interdisciplinary reasoning is done in areas critical to the public interest. Its practical toolkit for evidence-based decision-making will ensure that misalignments of goals and values are minimised, so that more appropriate decisions for the public good are made. It will thereby deliver major social and economic benefits from the sounder decision-making that results. Impact from BRAVE's research will be instrumental in influencing the development of policy, practice and service provision, shaping legislation, and altering future workplace behaviour in decision-making.
Poor decision-making by large organisations, government and the justice system affects almost everybody. While the Centre's focus is on law, health, intelligence analysis, and forensics, the methods naturally extend to other critical domains (education, environment, etc.). So BRAVE has the long-term potential to benefit all of society. Moreover, BRAVE has created the mechanism to ensure such impact, as it has already obtained commitments from multiple influential stakeholders and decision-makers across all areas of public life (including MPs, MEPs, Lords, CEOs of major organisations, high court judges, senior Government advisors, surgeons and the world's leading scholars in reasoning, decision-making and risk) - all named in the proposal (BRAVE already has a website, www.mclachlandigital.com/brave, with a 'call to action' inviting new members to get involved). In the shorter term, because of the involvement of such stakeholders, the following will all benefit from the availability of the BRAVE toolkit to support decision-making:
Regulatory bodies: food and health (National Institute for Health and Care Excellence, Medicines and Healthcare products Regulatory Agency, European Medicines Agency)
Government: MPs, civil servants, fire services, military & police, transportation, urban planning, telecommunications, social welfare & the NHS, energy infrastructure.
Local authorities: education providers, roads & transport service providers, environment & waste services.
Justice system: The CPS, legal professionals, court officials, forensic scientists, expert witnesses.
Healthcare: healthcare practitioners, health service providers, professional services, quality improvement managers.
Social Care: care workers, education providers, special needs, mental health & disability services, elderly care, home care and care homes.
Academic researchers will also benefit from BRAVE's outputs, in particular those in: statistics (improved methods of hypothesis testing and analysis); psychology (improved methods of experimentation); law and medicine (improved methods of reasoning and decision-making with uncertain evidence and data); and computer science (improved methods and tools for modelling causal relationships and integrating data with expert judgement).
Norman Fenton is Professor of Risk Information Management at Queen Mary University of London and a Director of Agena, a company that specialises in risk management for critical systems. Norman, a mathematician by training, works on quantitative risk assessment. This typically involves analysing and predicting the probabilities of unknown events using Bayesian statistical methods, especially causal probabilistic models (Bayesian networks). This type of reasoning enables improved assessment by taking account of both statistical data and expert judgment.
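As a minimal, hypothetical illustration of that combination of expert judgment and data (plain Python, with numbers chosen purely for illustration and not drawn from Norman's work):

```python
def posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Bayes' rule for a binary hypothesis H given observed evidence E."""
    numerator = prior * p_evidence_given_h
    marginal = numerator + (1.0 - prior) * p_evidence_given_not_h
    return numerator / marginal

# Prior from expert judgment (H is rare: 2%); likelihoods from statistical data
# (the evidence is seen 90% of the time when H is true, 10% when it is false).
print(round(posterior(0.02, 0.9, 0.1), 3))  # ~0.155: belief in H rises from 2% to about 16%
```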
Magda Osman is a Reader and Researcher in Experimental Psychology at QMUL. Magda's main research interests concern understanding the underlying mechanisms involved in learning, decision-making, and problem solving in complex dynamic environments, e.g. biological (fitness), economic (stock market), ecological (rainforest), industrial (nuclear power plant), mechanical (automobile), management (company), and safety-critical (automated-pilot) systems.
Martin Neil is Professor of Computer Science and Statistics at Queen Mary University of London. Martin's research interests cover Bayesian modelling and risk quantification in diverse areas. Experience in applying Bayesian methods to real problems has convinced him that intelligent risk assessment and decision analysis requires knowledge and data, not just "Big Data". He is also a joint founder of Agena Ltd, which develops and distributes AgenaRisk, a software product for modelling risk and uncertainty. At Queen Mary he teaches decision and risk analysis and software engineering.
Amber Marks is a barrister and is Co-Director of the Criminal Justice Centre and Convenor for the Law of Evidence and Criminal Justice and Surveillance Technologies at QMUL. Amber lectures in the law of evidence, criminal law, criminal justice and surveillance. She is a co-founder of the multi-disciplinary network 'Bayes and the Law', a member of the Metropolitan Police Firearms and Taser Reference Group and of the ethical advisory board to NANOSMELL, and a trustee of RELEASE. Amber is fluent in Spanish and is a visiting lecturer at the University of Barcelona.
William Marsh is a Senior Lecturer and Researcher at Queen Mary University of London. William's research aims to develop better ways to build useful risk and decision-making techniques, using a combination of data and knowledge (or expertise). He mainly works with Bayesian networks and prefers to work with 'end users' who are making decisions. He is currently collaborating with several groups of clinicians to build decision support systems for medical decision problems.
Paul Curzon is Professor of Computer Science in the School of Electronic Engineering and Computer Science, QMUL. Working with Peter McOwan, Paul created and edits the EPSRC-funded magazine and webzine cs4fn: an initiative to bring computer science research to schools and promote the fun side of the subject. They have also created spin-offs from cs4fn for Electronic Engineering (ee4fn) and Audio Engineering (Audio!). Their series of magic books (teaching computer science through magic) is incredibly popular. With William Marsh, Paul created and is the Director of Teaching London Computing, supporting teachers across the UK to deliver the new computing curriculum from primary school upwards. The project is joint with King's College London and is funded by the Mayor of London and the Department for Education.
Ragnar Lofstedt is Professor of Risk Management at King's College London. Ragnar has conducted research in risk communication and management in areas such as renewable energy policy, transboundary environmental issues (acid rain and nuclear power), health and safety, telecommunications, biosafety, pharmaceuticals, and the siting and building of incinerators, fuel policy, nuclear waste installations and railways. He believes in building public trust in regulators and industry via proactive risk communication, and argues that high regulatory/industry trust is equivalent to low public perceived risk.
Ulrike Hahn is Professor of Psychology at Birkbeck, University of London. Ulrike's research typically involves both experiments and modelling, and is presently focused on argumentation, judgement and decision-making, similarity, concepts and concept acquisition, and language and language acquisition. She is presently a member of the Senior Editorial Board of Topics in Cognitive Science and an Associate Editor of Frontiers in Cognitive Science.
David Lagnado is Professor of Experimental Psychology at University College London. David's research focuses on the psychological processes that underlie human learning, reasoning and decision-making. A major theme is the central role played by causal models in cognition: he investigates how people learn causal models from uncertain data, and how they use these models to draw inferences and make decisions.
Mandeep Dhami is Professor of Decision Psychology at Middlesex University. Mandeep has also worked as a Principal Scientist at the Defence Science and Technology Laboratory (UK Ministry of Defence), and has work experience in two British prisons. Mandeep's research focuses on human judgment and decision-making, risk perception and risk taking, and understanding and communicating uncertainty. She has examined these issues extensively in the criminal justice sector, and more recently in the defence and security sectors.
Hana Chockler is a Senior Lecturer in the Department of Informatics at King's College London and Head of the Software Systems Group. Before joining King's College, Hana worked at IBM Research on formal verification of hardware and software designs. Hana's interests include formal methods for hardware and software, the search for better specifications, and causality and responsibility and their applications to a wide range of domains, including software engineering, law, and policy. Hana contributed to the research on actual causality, introducing, together with Joe Halpern, the notions of responsibility and blame, which quantify the measure of causality and express the agent's epistemic state in the calculation of responsibility. Her recent research results include a formal framework for combining experts' opinions to support policy-makers.
Ruth Morgan is Professor of Crime and Forensic Science at University College London. Ruth's research group focuses on the role of physical evidence in the detection of crime and on the interpretation of forensic evidence and intelligence. The research falls into two main areas: trace evidence dynamics and the interpretation of evidence.
Become an Associate. Associates will be able to:
Propose specific examples of decision problems suitable for investigation
Access the funded postdocs and investigators to work on agreed problems of mutual interest
Have expenses-paid visits/exchanges to achieve the above
Become an Advisory Board member.
Members will meet once a year in London, expenses paid.
Become a User Group member.
User Group members will get access to all public material from the Centre, including the toolkit, and regular Centre updates.