Dr Dale Cooper honoured to receive the Risk Engineer Achievement Award 2023

Dr Dale Cooper was honoured to receive the Risk Engineer Achievement Award 2023. The award was presented at the biennial Risk Engineering Society Conference, RISK 2023, held in Brisbane on 7-8 September 2023, by Pedram Danesh-Mand, National President of the Risk Engineering Society.

Acceptance speech

This is an extended version of Dale Cooper's acceptance speech at the Conference dinner.

Pedram Danesh-Mand asked me to reflect on how I started in risk engineering, and to provide some thoughts for those who are starting out in our profession.

My risk engineering journey began in the late 1970s at the University of Southampton. My colleague Chris Chapman (now Emeritus Professor of Management Science at the Southampton Business School) asked me to help him with a project in Canada, and in particular to help migrate software from BP in London and apply it in a different context.

We were working with Acres (now part of Hatch) in an office in Niagara Falls, Ontario, on a proposed LNG project, the Arctic Pilot Project, on Melville Island, high in the Canadian Arctic [1, 5]. The intent was to deliver large quantities of natural gas to a terminal in eastern Canada, year round, using ice-breaking LNG tankers. The specific focus of our work was the reliability of the LNG plant and the appropriate size of on-site LNG storage [4].

The circumstances for risk engineers at that time were very different from those we face today. The technology we use now for quantitative analysis did not exist. Portable computers were only beginning to emerge (and were the size of small suitcases), and there were no spreadsheets or accessible simulation software like @RISK. We used numerical integration to combine distributions, with controlled interval and memory (CIM) models that Chris had developed, implemented in Fortran on small mainframe computers [3].

Despite the technical differences, many aspects of what we were doing then have persisted in my current risk practice with Broadleaf. I’ll mention two of them tonight.

First, we spent significant time understanding the problem and the kind of solution that was needed to support decisions, and in tailoring a general operational research method for the specific circumstances. Chris drove most of this, with me very much in the back seat. For each component of the LNG facility (pipeline delivery system, LNG plant, utilities, storage system ...), we combined three distributions – a failure rate distribution, a distribution of the effect of the failure on production and a distribution of response time – to generate a distribution of lost production that could arise during one day of attempted operation. We combined these distributions across components to generate a distribution for the facility as a whole. My main contribution was to include a semi-Markov process, which seemed innovative at the time, for converting the distribution from one day of attempted operation into steady-state distributions of production over longer operating periods. (Chris had used semi-Markov models previously in a very different context, examining weather windows in strategic planning for offshore North Sea projects [2].)
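With today's tools, the per-component combination described above is easy to sketch by Monte Carlo simulation. The original work used numerical integration with CIM models rather than simulation, and every component name and distribution parameter below is invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # Monte Carlo samples (one sample = one day of attempted operation)

# Hypothetical facility components; all parameters are invented for
# illustration, not taken from the original study.
components = {
    # name: (failures per day, mean fraction of production lost, mean repair days)
    "pipeline": (0.010, 0.9, 2.0),
    "lng_plant": (0.020, 0.6, 1.5),
    "utilities": (0.015, 0.3, 0.5),
}

# For each component, combine three distributions -- failure occurrence,
# effect of the failure on production, and response (repair) time -- into
# a distribution of production lost from one day of attempted operation,
# then sum across components for the facility as a whole.
lost = np.zeros(N)
for name, (rate, effect, mean_repair) in components.items():
    fails = rng.random(N) < rate                     # did a failure occur today?
    impact = rng.beta(4, 4, N) * 2 * effect          # fraction of output lost (mean = effect)
    repair = rng.exponential(mean_repair, N)         # days until production is restored
    lost += fails * np.clip(impact, 0.0, 1.0) * repair  # production-days lost

# Facility-level distribution of lost production per attempted day
print(f"mean loss: {lost.mean():.4f} production-days")
print(f"95th percentile: {np.quantile(lost, 0.95):.4f}")
```

A distribution like `lost` is what would then feed a longer-horizon model (in the original work, a semi-Markov process) to obtain steady-state production over extended operating periods.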

Modern computing technology often makes it all too easy to jump straight to a model and a superficially impressive quantitative outcome, without thinking enough about how that will add value to the client’s business and support their decisions. Developing an understanding of the structure of uncertainty, and of what decision makers really need, is an essential precursor to delivering value through quantitative forecasts of a system’s uncertain behaviour. The lesson here is to spend time thinking, and scribbling on whiteboards and pieces of paper, before you open your laptop. Otherwise you may generate very precise forecasts that are misleading, technically incorrect, fatally flawed or useless for their intended purpose of supporting important decisions. ‘I would never do that!’ I hear you say, but I’ve seen examples of all of these in my professional life, and sometimes from people who should have known better.

Second, we were rigorous in documenting assumptions, sources of uncertainty and their relationships, effects of an outage on production, and plans for responding to failure – over 300 pages of structured information. This allowed us, and others, to review all the data and information underlying the quantitative outcomes. That was particularly important for this project, as there were only nine operating LNG plants in the world at the time, none in a remote Arctic location, and reliable information was hard to find and often commercially sensitive.

The comprehensive documentation we generated promoted communication, increased understanding and enhanced confidence, for us and for the senior managers who relied on the numbers we generated. It justified our initial triage of sources of uncertainty into things that were too small to include (minor risks, like a filter failure on a pump, that would have a negligible effect on production), things that were too big to include (project conditions, like terrorism or major fires, better dealt with outside a model) and the remainder that were addressed explicitly.

The lesson here is that the thinking and rationale behind the quantitative analysis must be readily available and transparent, so it can be examined and tested by others. This enhances confidence in the process and the outcomes, allows changes to assumptions to be examined, and lets quantitative outcomes be updated efficiently as a project progresses.

Finally, I developed three personal tips for young risk engineers, as Pedram requested, but on reflection I think they are relevant for all of us.

  1. Learn from one another. If you are less experienced, identify and learn from more experienced mentors, and if you have been a risk engineer for some time then provide support to those around you. I learned a huge amount from Chris Chapman when I was starting, and I hope I’ve returned the favour over my professional career by supporting others to become better at what they do. This will help you, and advance our profession.

  2. Never be afraid to ask very basic questions, what I call ‘constructively dumb’ questions – Why do you do it that way? What would happen if that assumption changed? – to explore the assumptions and boundaries of the problems you face. Some of these basic questions can be confronting because they’re not always easy to answer, but they lead to greater understanding.

  3. Be inquisitive. Find out all you can about the projects and organisations you’re involved with. It will help you ask the right questions, and you’ll learn a lot.

That brings me to my closing point. Risk engineering can be challenging, but it is great fun. You work with interesting people, in interesting parts of the world, and it makes you think widely and deeply about interesting things. Be passionate! Enjoy what you do, and help those around you to enjoy what they do too.

Enjoy the rest of your evening. Thank you.


  1. Bailey, RA (1983) The Arctic Pilot Project. Energy Exploration & Exploitation 2(1): 5-24.

  2. Chapman, CB (1979) Large engineering project risk analysis. IEEE Trans Engineering Management 26(3): 78-86.

  3. Chapman, CB and DF Cooper (1983) Risk engineering: basic controlled interval and memory models. J Operational Research Society 34(1): 51-60.

  4. Chapman, CB, DF Cooper and AB Cammaert (1984) Model and situation specific OR methods: risk engineering reliability analysis of an LNG facility. J Operational Research Society 35(1): 27-35. Note: This paper was awarded the President's Medal of the Operational Research Society.

  5. PetroCanada (nd) Arctic Pilot Project. PetroCanada, Alberta, Canada.