
Science or Assumptions? Critique Of The Doherty Modelling Report For Australia

By Dr Juergen Ude | August 11th 2021
The Doherty modelling report, prepared for the Australian National Cabinet on July 30th 2021, has been used as the grounds for Australia's road map out of the pandemic, paving the way for political leaders to justify lockdowns and restrictions as part of the drive towards vaccinating 80% of the population. A critique of the Doherty report shows that, by our standards of modelling, the science was lacking.

The following is not about being pro- or anti-vaccination. It is simply another example of how poor the science applied to the pandemic has been, which begs the question: should we not re-examine the science using real-world experts instead of academia-based experts? A new approach is now needed.

Introduction

It suffices to note that the models used in Australia and overseas have failed and proven unrealistic. 'Agent-based' models of the kind used by the Doherty Institute are notorious for being based on TOO MANY ASSUMPTIONS.

The model was not used as part of a scientific process. Models are not science, but they can be used as part of the scientific process if, and only if, they are used as a basis for formulating a hypothesis which is then tested. In this instance there was no testing involved. The untested hypothesis is that the 80% vaccination target will return us to normal. Because this has not been tested, there is no scientific basis for following the recommendation.

There are many fundamental sources of error attached to the Doherty modelling report.

  • Reliance on unrealistic assumptions
  • Using model inputs from studies in other countries, mainly the UK, which are NOT representative of Australia
  • Failure to incorporate variability
  • Reliance on input from studies which themselves were based on models
  • Seemingly incorporating data from the Wuhan virus when the study was about the Delta variant
  • An unrealistic assumption that the Delta variant will still be the dominant variant by the time the vaccination target is reached. This problem was acknowledged, but it was overlooked that the model is based on a variant that most likely will no longer be dominant in the near future
  • Failure to incorporate real-world facts
  • Failure to demonstrate accuracy

It is worth noting that the Doherty modelling report states that vaccinating children will have minimal impact on reducing the transmission potential of Covid-19. SO WHY VACCINATE CHILDREN? Children are also at low risk from infection.

[Chart: BIS.Net Analyst (Covid-19 Science) - Why do children need vaccination if it will not reduce Covid transmission?]

The Doherty modelling report also states that there is LIMITED EVIDENCE that the Delta strain is more severe than previous strains. How, then, can we say with conviction that the Delta strain IS deadlier than previous strains?

[Chart: BIS.Net Analyst (Covid-19 Science) - No evidence that Delta is more severe]

The Doherty modelling report can be downloaded at the end of this article.

About Models In General

MODEL PERFORMANCE

George E. Box, who has been called "one of the great statistical minds of the 20th century", famously said, "All models are wrong, but some are useful."

If all models are wrong, then how can models be used as the basis for potentially harmful decisions, such as coercing people into vaccination with vaccines that we all know are experimental and that have already produced short-term adverse reactions, including deaths? We do not even know what the medium- to long-term effects are.

Models can of course be highly useful. The usefulness of weather forecasting has been established for some time. But we must bear in mind that those models have been tweaked over many years. This has not been possible for this pandemic, and may never be possible, given the unpredictability of pandemics, where continuous mutations disrupt the underlying assumptions.

As proof of the unreliability of the models applied to the pandemic so far, one need only look at the performance of models during past epidemics and pandemics, including the SARS-CoV-2 pandemic.

The original predictions used as the basis for shutting down countries around the world came from Professor Ferguson and Imperial College London. Professor Ferguson showed that he could not have believed in the seriousness of his own predictions when he violated stay-at-home orders to meet with his mistress.

BRIEF HISTORY OF CLASSIC FAILURES

FOOT AND MOUTH AND OTHER DISEASES

In 2001, Professor Ferguson and Imperial College produced modelling on foot-and-mouth disease suggesting that animals should be culled even where there was no evidence of infection. This led to the culling of more than six million cattle, sheep and pigs, which cost the UK economy an estimated £10 billion. The modelling was 'severely flawed' and made a 'serious error' by 'ignoring the species composition of farms'. In 2002, modelling predicted that up to 50,000 people would die from exposure to BSE (mad cow disease); in the UK, there have been only 177 such deaths. In 2005, Professor Ferguson predicted that up to 200 million people could die from bird flu; in the end, the figure was 282.

Why were there no lockdowns after it was predicted that 200 million people would die from bird flu?

[Chart: BIS.Net Analyst (Covid-19 Science) - Bird Flu]

MODELLING JUSTIFYING VICTORIA'S CONTAINMENT MEASURES IN 2020

During 2020, the Victorian State Government used models to convince the public, through fear tactics, to accept draconian containment measures. It was said that modelling showed 36,000 people would have died in Victoria if physical distancing had not been put in place, and that 650 people would have died each day without physical distancing measures. In reality, Victoria had only 820 deaths in total, or approximately 2 deaths per day. We do not know whether these people died WITH or FROM Covid; without autopsies we cannot even say that the deaths were due to Covid. There was also NO change in registered deaths, hence we cannot with a clear conscience say with conviction that Covid-19 took the lives of 820 Victorians.

We can argue that the predictions were not met because of our social distancing measures.

Hence let us compare daily deaths for countries that had little social distancing. Sweden, when adjusted for Victoria's population size, had about 20 deaths per day, nowhere near 650 per day and nowhere near 36,000 for the year. Sweden had around 8,000 deaths in 2020, and some social distancing was introduced only near the end of that year. Sweden now has close to zero deaths. This may of course change, as it does for all diseases. That is life as we have lived it since the beginning of time.

Japan, without strong social distancing, averaged about 1 death per day during a period of high deaths, when adjusted to Victoria's population size.
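For readers who wish to reproduce this kind of population adjustment, the following is a minimal sketch. The population figures are approximate assumptions on our part, the death count is the rough Sweden figure quoted above, and the exact per-day result will depend on the period and the estimates used.

    # Minimal sketch of per-capita scaling. Population figures are approximate
    # assumptions; the Sweden death count is the rough 2020 figure quoted above.

    def scale_to_population(count: float, source_pop: float, target_pop: float) -> float:
        """Scale a count from one population to another, assuming equal per-capita rates."""
        return count * (target_pop / source_pop)

    VIC_POP = 6.7e6       # Victoria, approximate
    SWEDEN_POP = 10.4e6   # Sweden, approximate

    sweden_2020_deaths = 8_000   # rough figure quoted in the text
    scaled = scale_to_population(sweden_2020_deaths, SWEDEN_POP, VIC_POP)
    print(f"Scaled to Victoria's population: ~{scaled:,.0f} for the year, "
          f"or ~{scaled / 365:.0f} per day")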

What Are Models?

A model is a simplification of the real world, used to help understand it by manipulating parameters and studying their effect. Being simplifications, models can never be entirely correct. Models can be physical, such as model aeroplanes, or they can be mathematical. Pandemic modelling is mathematical.

More elaborate models predict cases and deaths from trends by fitting theoretical distributions to the data and using them for mathematical projection. This was done by the Burnet Institute, with which the Victorian CHO is associated, to claim that Stage 3 lockdown averted between 9,000 and 37,000 cases in July 2020. That model was flawed and invalid because it assumed exponential growth when, in reality, growth was linear, a consequence of case reporting that never factored in testing numbers. Models are only as good as the modeller.
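To make the exponential-versus-linear point concrete, here is a small sketch using synthetic data (not the actual Victorian case series). It fits both forms to the same daily series and shows how sharply their forward projections diverge; every number in it is an illustrative assumption.

    # Illustrative sketch (synthetic data, NOT the Victorian case series) comparing a
    # linear and an exponential growth model fitted to the same daily case counts.
    import numpy as np

    rng = np.random.default_rng(0)
    days = np.arange(30)
    cases = np.clip(50 + 12 * days + rng.normal(0, 15, days.size), 1, None)  # roughly linear series

    # Linear fit: cases ~ a + b*day
    b_lin, a_lin = np.polyfit(days, cases, 1)
    lin_pred = a_lin + b_lin * days

    # Exponential fit: cases ~ A*exp(r*day), fitted on the log scale
    r_exp, log_a = np.polyfit(days, np.log(cases), 1)
    exp_pred = np.exp(log_a + r_exp * days)

    def rmse(pred):
        return float(np.sqrt(np.mean((cases - pred) ** 2)))

    print(f"Linear fit RMSE:      {rmse(lin_pred):.1f}")
    print(f"Exponential fit RMSE: {rmse(exp_pred):.1f}")

    # Projections two weeks beyond the data diverge sharply depending on the assumed form
    d = 45
    print(f"Day {d} projection, linear model:      {a_lin + b_lin * d:,.0f}")
    print(f"Day {d} projection, exponential model: {np.exp(log_a + r_exp * d):,.0f}")

The choice of functional form, which is itself an assumption, largely determines what the projection says.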

More elaborate still is the 'agent-based model', which is what the Doherty Institute has used. These are usually implemented as computer simulations. Such models are unrealistically complex: it is not possible to represent mathematically everything that happens in the real world. There are too many factors to consider, and too many changing factors, such as mutations, immunity, and the natural and induced behaviour of the public. They make use of probability distributions which are often unrealistic.

Epidemiological ABMs have been criticised for their simplifying and unrealistic assumptions.
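For readers unfamiliar with the term, the following is a deliberately simple, illustrative agent-based epidemic sketch. It is not the Doherty Institute's model, whose code has not been published; every parameter in it (contacts per day, transmission probability, infectious period) is an assumed value, which is exactly the dependence on assumptions discussed above.

    # A deliberately simple, illustrative agent-based S-I-R sketch. NOT the Doherty
    # Institute's model; every parameter here is an assumed value, chosen only to
    # show how such simulations are structured and how sensitive they are to inputs.
    import random

    def run_abm(n_agents=10_000, n_seed=10, contacts_per_day=8,
                p_transmit=0.05, infectious_days=7, n_days=120, seed=1):
        random.seed(seed)
        state = ['S'] * n_agents          # 'S' susceptible, 'I' infectious, 'R' recovered
        days_infected = [0] * n_agents
        for i in random.sample(range(n_agents), n_seed):
            state[i] = 'I'

        for _ in range(n_days):
            infectious = [i for i, s in enumerate(state) if s == 'I']
            for i in infectious:
                # Each infectious agent meets a handful of randomly chosen agents per day
                for j in random.choices(range(n_agents), k=contacts_per_day):
                    if state[j] == 'S' and random.random() < p_transmit:
                        state[j] = 'I'
            for i in infectious:
                # Progress the infection clock; recover after the assumed infectious period
                days_infected[i] += 1
                if days_infected[i] >= infectious_days:
                    state[i] = 'R'
        return n_agents - state.count('S')   # agents ever infected

    # Small changes to the assumed transmission probability give very different epidemics
    for p in (0.03, 0.05, 0.07):
        print(f"p_transmit={p}: ~{run_abm(p_transmit=p):,} of 10,000 agents ever infected")

Changing any single assumed input, and a real ABM has many, changes the projected epidemic substantially; this is the sense in which agent-based results depend on their assumptions.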

The Doherty Modelling Report

AN ISSUE OF MORALITY

The government has used the model to justify the vaccination of a high percentage of the population and, according to some reports, of 12-to-15-year-old school children.

Yet the report effectively states that:

"expanding the vaccine program to the 12–15-year age group has minimal impact on transmission and clinical outcomes for any achieved level of vaccine uptake"

The above now begs the question, WHY ARE WE VACCINATING CHILDREN?

From this perspective alone, there is no justification for young children, still not fully grown, to be given an experimental vaccine. Furthermore, to allow 16-year-old teenagers, who are not yet experienced enough in life to make responsible decisions, to be vaccinated without parental approval is, in our opinion, morally wrong and unethical, especially with a cloud over the true deadliness of the virus.

Why do young people need to be vaccinated with experimental vaccines, setting an extremely dangerous precedent, when vaccination will not stop them from infecting others and when they are in no danger beyond normal cold and flu symptoms, except perhaps in the extreme atypical cases occasionally reported? Vaccination does not stop infections, as can already be seen in highly vaccinated countries such as China, Israel and the Seychelles.

If vaccinations do not stop infections, then is it not immoral to push injections on young people, especially school children?

GENERAL RELIABILITY OF THE MODEL

It is very easy to be distracted from assessing the reliability of the report by reading the outcome of the model and forgetting that the outcome IS NOT A PROVEN FACT but simply the RESULT OF MODEL SIMULATIONS, which are only as good as the mathematics behind them. The outputs of models are not facts.

How does the end user know how reliable the algorithms are?

To the public, the Doherty model is a black box. The descriptions give no insight into the algorithms driving the output. We cannot test the reliability of the model, or whether it was calibrated properly, and we cannot repeat pandemics for the same virus. Since the model has not been disclosed, we cannot, as scientists, trust its results, especially considering the poor performance of other agent-based models, such as those used by Professor Ferguson to convince most of the world to lock down. Similarly, we cannot ignore the failure of Australian models, especially after learning that some were outright incompetent, at least by our standards.

RELIABILITY DUE TO ASSUMPTIONS

Academia relies on assumptions to compensate for a lack of real-world knowledge and to lighten the analysis burden. The reliability of the model is therefore dependent on those assumptions.

Let us list some of the assumptions used in the modelling exercise. Recall that the scientific community considers reliance on assumptions a weakness of agent-based models.

The following is a short list of assumptions:

1. COVID-19 WOULD SPREAD UNIFORMLY ACROSS THE AUSTRALIAN CONTINENT

As quoted in the report -

"Under the evidently coarse simplifying assumption that COVID-19 would spread uniformly across the Australian continent, we use an 'agent-based' model of the total population to represent epidemic dynamics and the combined impacts of vaccination, public health and social measures to limit transmission and reduce the outcomes of interest. Hospital and ICU admissions are benchmarked against stated national capacity, based on the additional simplifying assumption that such resources are equally accessible to every Australian"

There is not a single country where the virus has spread uniformly. Australia, with its population divided between the coastal fringe and the sparsely populated inland regions, which account for some 28% of the Australian population, has already shown that the virus does not spread uniformly across the country.

Resources are not equally accessible to every Australian. THIS UNREALISTIC ASSUMPTION SHOULD RING ALARM BELLS.

2. THE MODEL IS BASED ON A SIMPLIFYING ASSUMPTION

As quoted in the report -

"The model is based on a simplifying assumption of a single national epidemic, with Covid-19 transmission, severity and vaccine effectiveness of the delta variant"

We have already had the Wuhan, 'European', Alpha (Kent), Beta (South Africa), Gamma (Brazil) and Delta variants. Who knows how many others there are that we do not know of? Such a simplifying assumption cannot be trusted. We cannot assume that the Delta variant is now the only one simply because we have not found another; we have not tested 7 billion people continuously. The report does suggest that the model must be re-evaluated as new variants emerge. However, the conclusions are based on a variant which is unlikely to be the dominant variant by the time Australia has achieved its target.

The report assumes that the Delta variant has similar severity to the other variants. If this is true, and we do support this view, as have others, then why is the Australian government painting the Delta variant as extremely deadly and rampant? We and others have already determined that there is no evidence the Delta variant is far more rampant than anything before it. Its apparent fast spread in India was not necessarily because cases increased, but probably due to a lack of testing facilities, which forced diagnosis based on symptoms alone. This is the real world. Further, the current speed of spread in Australia is not as fast as we are told, once test numbers are taken into account.

Irrespective of the above, there is an issue of competence in relation to the model. On the one hand, it is stated that the model is based on the Delta variant; on the other hand, some model parameters were drawn from the 'extinct' Wuhan strain of over a year ago. How can we base a model on Wuhan data when it is supposed to be about the Delta variant?

3. DISEASE SEVERITY ASSUMPTIONS

We have analysed data from virtually every country affected by Covid. There is too much variation from country to country, region to region, and even from one time to another to make such assumptions. Nothing is static when it comes to the real world.

4. VIRUS ASSUMPTIONS

These are based on limited studies by several authors in countries with completely different characteristics to Australia. This IMMEDIATELY INVALIDATES THE MODELLING FOR AUSTRALIA.

Our own study spanning all countries has shown too much heterogeneity to trust conclusions carried over from other countries. DEATH RATES AND HOSPITALISATIONS VARIED FROM COUNTRY TO COUNTRY.

The studies themselves rely on other models. Some authors themselves showed how different models produced different outputs. Since the inputs used by the Doherty model were based on other models, we cannot rely on the reliability of its output.

All of the studies involved fitting theoretical models to data. That works for the existing data but cannot be used to predict the future or performance in other countries. THE WORLD DOES NOT FOLLOW MATHEMATICS. IT CHANGES CONTINUOUSLY. What works in one snapshot of time fails in another. Our work spanning the whole Covid period has shown this.

One author referenced by the institute concluded that the effectiveness of school closures might be less than for other respiratory illnesses. This begs the question: why are Australian political leaders further ruining anything that could make us the clever country by ruining the education of our children? Why are teachers scaremongering primary school children into believing that they will die if they don't get vaccinated? Where has conscience gone? Since when have we become people who use children in our battles against adults? Shame on our educators and government leaders.

5. VACCINE EFFECTIVENESS ASSUMPTIONS

These are based on limited data, hence are not representative and cannot be used to make predictions. We are already finding that effectiveness differs from country to country. The academic studies are just that: academic. They do not even consider the quality and potency of the vaccines delivered to different countries, which can be affected by logistics and many other factors.

One cited author states "Given the observational nature of these data, estimates of vaccine effectiveness need to be interpreted with caution."

Yet we are making soul-destroying, dictatorial decisions based on unrealistic academic studies which are unreliable even by their own admission. Pandemic modelling is soft science, and soft science may follow the scientific process, but its conclusions cannot be trusted.

6. REAL-WORLD FACTS IGNORED

Neither the Doherty model nor a single model used in its cited studies has considered the unreliability of the underlying data.

None have considered that case reporting is invalid because case numbers depend on test numbers, which keep changing: for example, 10,000 tests on day 1, 45,000 tests on day 2, and so on. Without factoring the varying testing numbers into case reporting, you get a misleading representation of the virulence of the virus. Our studies have shown that, after accounting for test numbers, many of the later waves were smaller in magnitude than the first. (A short sketch of such an adjustment appears after this list.)

None have considered the unreliability of the PCR test.

None have considered the variability of severity, or of the various indices such as the R0 factor and TP (Transmission Potential). It is unrealistic not to incorporate variability, and unrealistic to assume there is some magical R0 factor. Transmission depends on too many things. No virus has a unique R0 or TP value.

None have considered the effect of the academic strategy of playing it safe by treating a death as a Covid death whenever the patient had a positive test, even though there was no autopsy and no evidence that the person died from Covid.

None have incorporated registered deaths.
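The test-number point above can be illustrated with a short sketch. The figures below are purely illustrative, echoing the 10,000 and 45,000 tests example given earlier rather than real data, and the adjustment shown (test positivity and a count standardised to a fixed test volume) is just one simple way of accounting for changing test numbers.

    # Illustrative sketch of adjusting raw case counts for varying test volumes.
    # The figures are made up for illustration; they are not real data.

    def positivity(cases, tests):
        """Fraction of tests returning a positive result each day."""
        return [c / t for c, t in zip(cases, tests)]

    def standardised_cases(cases, tests, reference_tests):
        """Each day's case count rescaled to a fixed reference test volume."""
        return [c * reference_tests / t for c, t in zip(cases, tests)]

    cases = [100, 180, 300]           # raw daily cases (illustrative)
    tests = [10_000, 25_000, 45_000]  # daily tests (illustrative, per the example above)

    print("Positivity (%):      ", [round(p * 100, 2) for p in positivity(cases, tests)])
    print("Cases per 10k tests: ", [round(c, 1) for c in standardised_cases(cases, tests, 10_000)])
    # Raw counts rise from 100 to 300 while positivity falls from 1.0% to 0.67%,
    # showing how changing test volumes can distort the raw case curve.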

Concluding Comments

The above is not about the arguments for and against vaccination. It is about showing that, through the use of modelling based on unrealistic assumptions and without conclusive testing, no science was used in determining vaccination strategies. If the models that academics believe in so strongly have failed so badly, one cannot help wondering whether academia, working from a protected environment, fully understands the real world, and whether it is the best source of knowledge for making decisions over the lives of human beings living in that real world.

Experts see vaccination as the way out. One cannot argue against vaccination being the way out if vaccines are effective and without harmful side effects. So far, to the best of my knowledge, they have not been for the flu, SARS or the common cold. SARS-CoV-2 belongs to the coronavirus family, which also includes some common-cold viruses. Already there are indications that the hopes placed in vaccination were premature, and that if vaccines turn out to be ineffective we may need a new mindset drawing on real-world experts.

Maybe there is a better way out. Academic experts have assumed that the virus's deadliness, based on irrational definitions of a Covid death, justifies the destruction of lives, without considering that maybe, just maybe, instead of managing fear, our fear strategy has caused a major portion of the excess deaths. We have never adequately studied the effect of fear on deaths caused by overwhelmed hospitals, hospital aversion, hospital mismanagement, fear-induced compromise of the immune system, or over-treatment by panic-stricken doctors placing people on ventilators, often operated by inadequately trained technicians. Nor have we adequately studied the exacerbation of deaths caused by atypical health, health-system problems, and environmental differences such as the heavily polluted region of Lombardy or acid rain in Wuhan.

Interestingly, the common cold has 3 to 6 waves in Australia. The current waves all over the world look suspiciously similar. Maybe the PCR test is detecting common cold viruses. After denials that PCR tests could detect the flu, there has now been a backflip.

Maybe the way out is to use real-world experts, as explained in the Science Open Letter.

Download the full Doherty Report here

ABOUT THE AUTHOR

Dr Juergen Ude has a certificate in applied chemistry, a degree in applied science majoring in statistics and operations research (graduating as top student), a master's in economics with high distinctions in every subject, and a PhD in computer modelling and algorithms. He has lectured at Monash University on data analysis, computer modelling, and quality and reliability.

Prior to founding his own company, Qtech International Pty Ltd, Dr Ude worked as a statistician and operations researcher for 18 years in management roles, saving his employers millions of dollars through his AI and ML algorithms. Through Qtech International, Dr Ude has developed data analysis solutions in over 40 countries for leading corporations such as Alcoa, Black and Decker, Coca-Cola Amatil, US Vision and many more. Additionally, he has developed campaign analysis software for politicians.

Help support our Covid-19 Data Research

Over the last 18 months, we have volunteered our time to data analysis of the Covid-19 pandemic, publishing truthful, unbiased facts backed by real data evidence. We have also worked alongside doctors and lawyers, providing them with data evidence and statistical insights into the pandemic to help support their work.

For us to continue our data research, publish more articles, support the doctors and lawyers, and lobby the federal and state governments with evidence behind us, WE NEED YOUR HELP. If you can provide a small donation to support our work, it will be greatly appreciated!