COVID-19 SCIENCE

Critique of the Burnet Institute’s Model used for Victoria’s Road Map out of the pandemic

By Dr Juergen Ude | October 5th 2021
Various Australian states, like many countries, have decided to use models as the basis for their roadmaps. One of these recent models is the Doherty Model, an agent-based model that was critically evaluated here - Doherty Modelling Report - Australia's Pandemic Strategy Based On Assumptions or Science?

Introduction

The Victorian Premier decided not to use the Doherty Model and instead based the Victorian road map on simulations performed by the Burnet Institute using the open source Covasim software. Covasim can be obtained from GitHub.

GitHub - InstituteforDiseaseModeling/covasim: COVID-19 Agent-based Simulator (Covasim): a model for exploring coronavirus dynamics and interventions

Documentation can be obtained from here.

Covasim: An agent-based model of COVID-19 dynamics and interventions (plos.org)

An agent-based model for Covid, in this instance, simply means that the developers have mathematically represented the way they believe people (agents) transmit the virus, incorporating demographics, transmission networks and other relevant factors. Data are then simulated to determine the effect on transmission and deaths of an extensive set of interventions, including testing, isolation, contact tracing and quarantine.
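To make the idea concrete, the sketch below is a heavily simplified, illustrative agent-based transmission loop in Python. It is not Covasim's implementation; the contact numbers, transmission probability and infectious period are arbitrary assumptions chosen only to show the mechanism of agents, contacts and probabilistic transmission.

```python
# Minimal illustrative sketch of an agent-based transmission model.
# NOT Covasim's implementation; all parameters are assumed values.
import random

def simulate(n_agents=1000, contacts_per_day=8, p_transmit=0.03,
             days_infectious=7, n_days=60, seed_infections=5):
    # 0 = susceptible, 1 = infectious, 2 = recovered
    state = [0] * n_agents
    days_left = [0] * n_agents
    for i in random.sample(range(n_agents), seed_infections):
        state[i], days_left[i] = 1, days_infectious

    daily_new_cases = []
    for _ in range(n_days):
        new_infections = []
        for i in range(n_agents):
            if state[i] != 1:
                continue
            # Each infectious agent meets a random set of contacts today.
            for j in random.sample(range(n_agents), contacts_per_day):
                if state[j] == 0 and random.random() < p_transmit:
                    new_infections.append(j)
        for j in new_infections:
            if state[j] == 0:              # the same agent may be hit twice
                state[j], days_left[j] = 1, days_infectious
        # Progress existing infections towards recovery.
        for i in range(n_agents):
            if state[i] == 1:
                days_left[i] -= 1
                if days_left[i] == 0:
                    state[i] = 2
        daily_new_cases.append(len(set(new_infections)))
    return daily_new_cases

if __name__ == "__main__":
    print(simulate())
```

Even in this toy version, the output depends entirely on the assumed contact rate, transmission probability and infectious period; a real model such as Covasim layers many more such assumptions on top.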

In an academic environment, models are intellectually stimulating and rewarding. Outside the academic environment, however, models can result in wrong decisions when health advisers and government leaders who are out of their depth place false confidence in them.

The rest of this snippet covers pitfalls that are impossible to avoid, not only with Covasim but with other models, and that therefore need to be understood by health advisers and leaders if they do not want to cause unnecessary hardship for people.

Historic Performance and Limitations of Models

Irrespective of our opinions and beliefs about the science behind models, models have failed considerably throughout this and other pandemics, with grossly exaggerated predictions. This begs the question: why are we not learning that managing pandemics requires more modern and smarter approaches?

But does it matter if a model cannot be relied on? No, it does not matter if we use a flexible strategy where we adapt as soon as there is evidence that the model is failing. Unfortunately, our leaders, driven by health advice, are seemingly not using a flexible strategy.

Government leaders are advised that models are not science. Using their output to dogmatically decide on triggers for a roadmap out of a pandemic will result, and has already started to result, in unnecessary anxiety and hurt, through interventions and coercion that have no scientific basis.

It is, however, acknowledged that models can be part of the scientific process, provided they are used intelligently, within their limitations and with flexibility. They can provide a feel for, or insight into, possibilities, but so can common sense. Common sense dictates that everyone should be vaccinated, but common sense also dictates that caution must be exercised because of immediate side effects and unknown future side effects. There are also dangers of rapid mass injections exposing recipients to faulty batches caused by manufacturing and logistics breakdowns.

There is no intent to discredit models to the point of abandoning them, only to show that they must be used within their limitations and not treated as if they provide absolute results.

Health advisers and an abundance of experts seem to be out of their depth regarding models and managing pandemics. Qualifications in epidemiology, statistics and data science do not automatically result in a thorough understanding of models.

One example demonstrating the problem of advisers and some experts not truly understanding models is the Burnet Institute's model used by the Victorian Chief Health Officer to convince the Victorian public that Victoria's Stage 3 Lockdown response to a resurgence of COVID-19 in 2020 had averted 9,000 to 37,000 cases.

Misleading modelling or did Victoria's stage three lockdown really avert up to 37,000 cases?

The modelers assumed an incorrect theoretical growth curve and failed to recognize the limitation of the test used to confirm that the growth curve was appropriate. The model hence provided no evidence.

The Victorian Chief Health Officer also used a 7-day moving average to convince the Victorian public that the Lockdown Stage 4 in 2020 worked. Moving averages are known to lag, and the lag resulted in the wrong conclusion. Cases were already coming down when Lockdown Stage 4 was mandated, and hence we can argue there was no need for Lockdown Stage 4. We should have been patient. (A simple illustration of this lag follows the link below.)

Misleading mathematics or did Victoria, Australia really need lockdown stage 4?
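The lag of a trailing moving average can be demonstrated in a few lines. The daily case counts below are synthetic and chosen only for illustration; the point is that the 7-day average keeps rising for several days after the raw series has already turned down, so a turning point can be missed at exactly the moment a decision is made.

```python
# Illustrative only: synthetic daily case counts that peak on day 10,
# showing that a trailing 7-day moving average peaks several days later.
daily_cases = [20, 40, 80, 140, 220, 320, 420, 500, 560, 600,     # rising
               580, 540, 490, 430, 370, 310, 260, 210, 170, 140]  # falling

def trailing_moving_average(series, window=7):
    return [sum(series[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(series))]

ma = trailing_moving_average(daily_cases)
peak_day_raw = daily_cases.index(max(daily_cases)) + 1
peak_day_ma = ma.index(max(ma)) + 7    # first average corresponds to day 7
print(f"Raw cases peak on day {peak_day_raw}, "
      f"7-day average peaks on day {peak_day_ma}")
# Output: Raw cases peak on day 10, 7-day average peaks on day 14
```

In this made-up series the smoothed curve is still climbing for four days after the true peak, which is exactly the kind of lag that can make an intervention appear to be the cause of a decline that had already begun.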

As with all models, Covasim is based on assumptions and oversimplifications of the real world. It relies on theoretical distributions that do not hold in practice, and on probability estimates that do not account for variation in variability. The probability of transmission cannot be assumed to be uniform. There are too many real-world factors to consider. One important omission is the human factor that contributes to hospital overwhelming and deaths.
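The difference between assuming a uniform transmission probability and allowing for real-world heterogeneity can be shown with a toy branching-process simulation. This is purely an illustrative sketch, not anything taken from the Burnet model; the reproduction number and dispersion values below are assumptions chosen only for demonstration.

```python
# Illustrative sketch: two branching processes with the SAME average
# reproduction number R, one with uniform individual infectiousness
# (Poisson offspring) and one with heterogeneous infectiousness
# (gamma-Poisson mixture, i.e. negative binomial). Assumed values only.
import numpy as np

rng = np.random.default_rng(1)

def outbreak_dies_out(draw_offspring, generations=20, max_size=10_000):
    """Run one branching-process outbreak; report whether it fades out."""
    infected = 1
    for _ in range(generations):
        if infected == 0:
            return True
        if infected > max_size:
            return False
        infected = int(draw_offspring(infected).sum())
    return infected == 0

R, k = 1.2, 0.2   # same mean, very different spread (assumed values)

uniform = lambda n: rng.poisson(R, n)                  # everyone equally infectious
heterogeneous = lambda n: rng.poisson(rng.gamma(k, R / k, n))  # a few superspreaders

trials = 2000
for name, dist in [("uniform infectiousness", uniform),
                   ("heterogeneous infectiousness", heterogeneous)]:
    extinct = sum(outbreak_dies_out(dist) for _ in range(trials))
    print(f"{name}: outbreak fades out in {extinct / trials:.0%} of runs")
```

With the same average reproduction number, the heterogeneous process fades out in a much larger share of runs, so a model calibrated on averages alone can behave very differently from the epidemic it is meant to represent.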

Another issue is a failure to understand that this is soft science: using data from other countries, states, provinces or different periods to calibrate a model will result in significant prediction errors, because the effects and impacts are unlikely to be applicable. Yet this was done in the simulations performed by the Burnet Institute. For example, according to the Burnet Institute -

“The impact of different policy changes associated with the roadmap were estimated from calibration to the epidemic wave in 2020 [2-4].”

The modelling report can be downloaded here for the reader to verify.

The developers of Covasim, the software used by the Burnet Institute, themselves acknowledge -

“Covasim is subject to the usual limitations of mathematical models, most notably constraints around the degree of realism that can be captured. For example, human contact patterns are intractably complex, and the algorithms that CovaSim uses to approximate these are necessarily quite simplified.

Like all models, the quality of the outputs depends on the quality of the inputs, and many of the parameters on which CovaSim relies are still subject to large uncertainties. Most critically, the proportion of asymptomatics and their relative transmission intensity, and the proportion of presymptomatic transmission, strongly affect the number of tests required to achieve workable COVID-19 suppression via testing-based interventions”

Although some serious limitations have been missed (to be discussed in the next section), the Burnet Institute has honestly set out its own list of limitations in its report, and hence there can be no finger pointing at the institute.

The problem is that advisers and leaders are ignoring these limitations and treating the output of the models as set in concrete.


Two Critical Flaws Invalidating the Burnet Institute's Victorian Model (and Other Models)

Case Reporting

The Burnet Institute’s model parameters were calibrated to data on cases in Melbourne over the period of July to September 2021.

Case reporting has been one of the most scientifically flawed practices used throughout the pandemic, by all nations, showing a disturbing degree of incompetence in the management of this pandemic. This incompetence must regretfully be directed at health advisers, who have shown throughout this pandemic that they are out of their depth, irrespective of qualifications, academic reputation and field experience. Writing peer-reviewed papers cited by hundreds demonstrates academic prowess but not real-world prowess.

Case reporting works in uninfluenced circumstances - for example, reporting car accidents or deaths. Case numbers, however, are influenced by test numbers, which are in turn influenced by government marketing. A recent survey, to be published on this site soon, showed that of 987 respondents from across Australia, 84% tested because they were influenced to do so, not because of symptoms.

There seems to be a failure by health advisers and other experts to comprehend that test numbers affect case numbers for a given prevalence. Test numbers must be factored in, and that is done with case positivity. If this is not done, then we are likely to draw wrong conclusions from our models.
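A hypothetical example makes the point. The figures below are invented purely for illustration (they are not the actual NSW or Victorian data), but they show how a jurisdiction that tests more can report more cases while having a lower positivity, and therefore a lower crude prevalence estimate.

```python
# Hypothetical figures only: same underlying situation can produce very
# different daily case counts simply because testing volumes differ.
jurisdictions = {
    "A": {"tests": 150_000, "cases": 1_050},   # heavy testing
    "B": {"tests": 40_000,  "cases": 700},     # light testing
}

for name, d in jurisdictions.items():
    positivity = d["cases"] / d["tests"]
    print(f"Jurisdiction {name}: {d['cases']} cases, "
          f"{d['tests']} tests, positivity = {positivity:.2%}")
# A reports more cases (1,050 vs 700) yet its positivity is 0.70%
# against 1.75% for B - the raw counts invert the comparison.
```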

The folly of using case numbers instead of case positivity is best demonstrated by comparing Victoria with NSW. NSW commenced its recent lockdown much later after cases first appeared than Victoria is known to do. Specifically, NSW commenced lockdown two months after cases appeared, at which time daily cases were around 400. These rapidly increased to over 1,000 per day. The Victorian Premier used this fact to justify his health-advice strategy.

The reality is that the CRUDE (there are caveats to be reported in a future article) prevalence estimate in NSW is almost ONE THIRD of Victoria's and falling, whereas Victoria's is still heading up, without any evidence that this is due to recent protests and Grand Final partying. Victoria, with all its harsh restrictions over a record period, has the unenviable reputation of having the worst prevalence estimate in all of Australia. There seems to be a negative correlation between the harshness of containment actions and results.

Victoria was blinded by case numbers, as were many other countries, states and provinces, and could not see that NSW's high case numbers were related to much higher test numbers than Victoria's. (If you look for trouble you will find trouble.)

Deaths

Deaths and hospital overwhelming are the underlying reasons for global paranoia. Hospital overwhelming, which is influenced by mismanagement of fear and panic, is not relevant here and will be discussed in a future snippet.

Models assume across-the-board death numbers for calibration. This can produce misleading output because the real world is unpredictably heterogeneous. Heterogeneity in deaths can bias averages and distributions and hence affect calibration. For Victoria, deaths in residential aged care have distorted the true overall death profile of the population, affecting the reliability of the model. We found no evidence that this was considered, but stand open to being corrected.

Australia, as of the 28th of September 2021, had a total of 1,256 deaths for 100,912 cases since 2020. The case fatality ratio is thus 1.2%. If we remove the Victorian nursing home deaths (residential aged care recipients) obtained from Coronavirus (COVID-19) at a glance for 28 September 2021 (health.gov.au), the CFR is only 0.54% for all of Australia.
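As a quick check, the crude case-fatality ratio (CFR) is simply deaths divided by cases. The short calculation below reproduces the 1.2% figure from the stated totals and back-calculates how many deaths the quoted 0.54% figure implies were excluded; the exact aged-care count should be verified against the health.gov.au report cited above.

```python
# Checking the arithmetic quoted above. The national totals are as stated
# in the text; the excluded aged-care count is back-calculated, not taken
# from the report itself.
total_deaths = 1256      # Australia, to 28 September 2021 (as stated)
total_cases = 100_912    # Australia, since 2020 (as stated)

cfr = total_deaths / total_cases
print(f"Crude CFR: {cfr:.1%}")        # ~1.2%, matching the text

# The quoted adjusted CFR of 0.54% implies roughly this many deaths
# were excluded from the numerator:
implied_excluded = total_deaths - 0.0054 * total_cases
print(f"Implied aged-care deaths excluded: ~{implied_excluded:.0f}")
```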

As of the 28th of September 2021, Victoria has the unenviable reputation of having contributed half of Australia's reported deaths since 2020, all from nursing homes.


But the situation is far worse. At the end of Victoria's second wave last year (end of October), nursing home deaths accounted for 72% of all of Australia's reported deaths and 80% of all of Victoria's deaths. How can we take the deadliness of the virus seriously the way it was presented, as if testing positive were an automatic death sentence?

Models are also invalidated by assuming that all recorded deaths were caused by Covid. Deaths were also caused by mismanagement, fear and panic. Assuming all recorded Covid deaths were caused by Covid is an unrealistic model assumption: recorded deaths are deaths with Covid, not necessarily from Covid.

There is now some anecdotal evidence that some, if not all, Victorian nursing homes administered morphine to residents just because they tested positive, not because they suffered symptoms.

According to one person “Apparently last year the grandmother of a friend rang from the nursing home saying she had the virus, but not one symptom. Yet the staff wanted to put her on morphine which her family stopped after finding out about it. However, two other patients, without symptoms were placed on Morphine and both died. The deaths were then recorded as Covid deaths.”

This has been confirmed by a survey of over 1,000 people, and by an aged care GP.

“This is the “pandemic plan” the nurse manager of a nearby nursing home told me about in late March/ early April last year. I think there is an undisclosed “pandemic plan” which operates the way xxx has described.”

This needs investigation, because this plan, if true, may have killed people, making the virus itself even less deadly. At this stage we make no allegation.

If we also consider the well-documented complications regarding death definitions outlined here:

How Deadly is Covid-19?

then the model is highly flawed, with assumptions about vaccine efficacy only making matters worse.

Concluding Comments

No one can be sure to what extent all the flaws in the model will affect its performance relative to reality. Time will tell, but so far pandemic models have failed, as can only be expected.

Models are not science but can be part of the scientific process if used intelligently as a guide only. Instead, models have been used like a fixed rail track throughout the pandemic.

Instead of using models to decide when to reduce restrictions and when to open international borders, those decisions should be based on data, not on theoretical, model-predicted vaccination targets. The models should serve only as a guide to what we may expect, and nothing more.

ABOUT THE AUTHOR

Dr Juergen Ude has a certificate in applied chemistry, a degree in applied science majoring in statistics and operations research as top student, a master's in economics with high distinctions in every subject, and a PhD in computer modelling and algorithms. He has lectured at Monash University on data analysis, computer modelling, and quality and reliability.

Prior to founding his own company (Qtech International Pty Ltd), Dr Ude worked as a statistician and operations researcher for 18 years in management roles, saving employers millions of dollars through his AI and ML algorithms. Through Qtech International, Dr Ude has developed data analysis solutions in over 40 countries for leading corporations such as Alcoa, Black and Decker, Coca-Cola Amatil, US Vision and many more. Additionally, he has developed campaign analysis software for politicians.

Help support our Covid-19 Data Research

Over the last 18 months, we have volunteered our time to the data analysis of the Covid-19 pandemic, publishing truthful, unbiased facts backed by real data evidence. We have also worked alongside doctors and lawyers, providing them with 'data evidence' and 'statistical insights' into the pandemic from a data perspective to help support their work.

For us to continue our data research, publish more articles, help support the doctors and lawyers, and lobby the federal/state governments with 'evidence' behind us, WE NEED YOUR HELP. If you can provide a small donation to our work that will be greatly appreciated!