Experts and the Media are Engaged in COVID-19 Propaganda Based on Error-Prone Tests (test sensitivity and test specificity)
With COVID-19, science is more of a slogan than anything approaching reality. Our “experts” continue to equate positive infection test results from highly suspect tests with actual infections.
COVID-19 tests in use have error rates that make the results highly suspect, particularly when applied to largely uninfected populations. And there is no significant randomized selection for testing, which is as anti-scientific as it gets.
The tests cannot even be considered valid science, since few have been validated under the myriad varying collection conditions (e.g., the nasal swab tests).
COVID-19 deaths in California are barely a blip, yet the economic destruction proceeds apace, with Palo Alto (for example) a ghost town compared to its usual beehive activity.
False positives and false negatives
Suppose that 1000 uninfected people are tested. If the test specificity is 97%, then on average 30 people will test as infected (false positives)!
Next, test 1000 infected people. If the test sensitivity is 97%, then on average 30 people will test as not infected (false negatives), and they will go on to infect others.
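The arithmetic in these two scenarios is just the error rate times the group size. A minimal sketch in Python (the function names are mine, and the figures are expected values, not guaranteed counts in any single batch of 1000 tests):

```python
def expected_false_positives(n_uninfected, specificity):
    """Expected number of uninfected people who wrongly test positive."""
    return n_uninfected * (1 - specificity)

def expected_false_negatives(n_infected, sensitivity):
    """Expected number of infected people who wrongly test negative."""
    return n_infected * (1 - sensitivity)

# 1000 uninfected people, 97% specificity -> about 30 false positives
fp = expected_false_positives(1000, 0.97)

# 1000 infected people, 97% sensitivity -> about 30 false negatives
fn = expected_false_negatives(1000, 0.97)
```

Real-world batches would scatter around these averages, and that is before accounting for the collection-condition problems noted above.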
Just try getting authoritative information (non-marketing, independently peer-validated research) on the real false positive and false negative rates. The news is propaganda, and press releases are not credible. Yet lives and businesses are being destroyed by the heavy hand of government using this anti-science GIGO “data”.
With COVID-19, the accuracy of testing depends on which test is used, the type of specimen tested, how it was collected and the duration of illness. Combine that with false positives and false negatives and count me out on giving the data credibility.
Sensitivity measures how often a test correctly generates a positive result for people who have the condition that’s being tested for (also known as the “true positive” rate). A test that’s highly sensitive will flag almost everyone who has the disease and not generate many false-negative results. (Example: a test with 90% sensitivity will correctly return a positive result for 90% of people who have the disease, but will return a negative result — a false-negative — for 10% of the people who have the disease and should have tested positive.)
Specificity measures a test’s ability to correctly generate a negative result for people who don’t have the condition that’s being tested for (also known as the “true negative” rate). A high-specificity test will correctly rule out almost everyone who doesn’t have the disease and won’t generate many false-positive results. (Example: a test with 90% specificity will correctly return a negative result for 90% of people who don’t have the disease, but will return a positive result — a false-positive — for 10% of the people who don’t have the disease and should have tested negative.)
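Sensitivity and specificity alone don't tell you what a positive result means; for that you also need the prevalence of infection in the tested population, combined via Bayes' rule into the positive predictive value. A short sketch (function name and example figures are mine, chosen to illustrate the point about largely uninfected populations):

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """P(actually infected | tested positive), via Bayes' rule."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# A 90%-sensitive, 90%-specific test in a population where
# only 1% are actually infected:
ppv = positive_predictive_value(0.90, 0.90, 0.01)
# ppv is roughly 0.083 -- under 10% of positives are real infections
```

This is exactly why applying an error-prone test to a largely uninfected population produces mostly false positives: the small error rate acts on the huge uninfected majority.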