The unscientific method that caused worldwide panics and lockdowns

Whenever a mathematical model used to justify a proposed public policy produces predictions that vary wildly from what ultimately happens, that model deserves skeptical examination. That is particularly true of the Imperial College CCP virus outcome model, which predicted such an overwhelming surge in deaths that our politicians gave us the lockdown.  Yet not only did the model overestimate the lethality of the virus by 95%, but its modelers reportedly kept the code confidential, which meant it could not be independently verified.

Now a version of the code behind that model has been made public, and a former senior Google software engineer with 30 years in the field has reviewed it.  The review is long and technical, but here is one of the many problems the reviewer identified:

Non-deterministic outputs. Due to bugs, the code can produce very different results given identical inputs.  They [the modelers] routinely act as if this is unimportant.

This problem makes the code unusable for scientific purposes, because a key part of the scientific method is the ability to replicate results.  Without replication, the findings might not be real at all, as the field of psychology has been finding out to its cost.  Even if the original code were released, it's apparent that the same numbers as in Report 9 might not come out of it.
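The reviewer's point about determinism is easy to demonstrate. Below is a minimal Python sketch, not the Imperial code but an assumed toy stochastic model, showing how an unseeded random-number generator makes "identical inputs" produce different outputs from run to run, and how fixing the seed restores replicability:

```python
import random

def simulate(n, seed=None):
    """Toy stochastic 'model': returns the sum of n random draws.

    With seed=None, two calls with identical inputs give different
    outputs, so published numbers cannot be replicated.  With a fixed
    seed, identical inputs always give identical outputs.
    """
    rng = random.Random(seed)  # private RNG; seeded iff seed is given
    return sum(rng.random() for _ in range(n))

# Unseeded: two "identical" runs disagree
a = simulate(1000)
b = simulate(1000)

# Seeded: a third party can reproduce the numbers exactly
c = simulate(1000, seed=42)
d = simulate(1000, seed=42)
```

Here the non-determinism is deliberate and controllable; the review's complaint is that in the Imperial code it arose from bugs, which is far worse, because no choice of seed can make a buggy race reproducible.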

Here are the review's conclusions.

Conclusions. All papers based on this code should be retracted immediately. Imperial's modelling efforts should be reset with a new team that isn't under Professor Ferguson, and which has a commitment to replicable results with published code from day one.

On a personal level, I'd go further and suggest that all academic epidemiology be defunded.  This sort of work is best done by the insurance sector.  Insurers employ modellers and data scientists, but also employ managers whose job is to decide whether a model is accurate enough for real world usage and professional software engineers to ensure model software is properly tested, understandable and so on.  Academic efforts don't have these people, and the results speak for themselves.

After defending his decision to publish pseudonymously, the reviewer added a final comment.

This situation has come about due to rampant credentialism and I'm tired of it. As the widespread dismay by programmers demonstrates, if anyone in SAGE or the Government had shown the code to a working software engineer they happened to know, alarm bells would have been rung immediately. Instead, the Government is dominated by academics who apparently felt unable to question anything done by a fellow professor.  Meanwhile, average citizens like myself are told we should never question "expertise."  Although I've proven my Google employment to [the website's publisher], unquestioning acceptance of "expertise" is damaging and needs to end: please, evaluate the claims I've made for yourself, or ask a programmer you know and trust to evaluate them for you.

It is not news that governments and academic managers usually lack the commercial experience to produce workable models, or the discipline to employ competent "managers ... to ensure their product [in this case, modeling software] is properly tested, understandable and so on."  The problem has dogged Britain's "high tech" industries for a long time.  Nevil Shute, the late British aeronautical engineer and novelist, encountered it nearly one hundred years ago.  In his autobiography Slide Rule, he observed that civil servants are often not subject to commercial discipline (something usually true of academics, too).  Even when civil servants do spot problems with erroneous yet popular opinions, we cannot expect most of them to make much of a fuss, since doing so would put their families' livelihoods at risk.  Exhibit A in modern academia: nobody gave Professor Ferguson's code to a third party for review, or raised any questions about it, before the lockdown began.

Let's hope our politicians learn wisdom from this.

Rob Williamson is the pseudonym of a freelance writer with a longstanding interest in political decision-making.
