



Is there a name for the phenomenon of false positives counterintuitively outstripping true positives?











39 votes
















It seems very counterintuitive to many people that a diagnostic test with very high accuracy (say 99%) can generate massively more false positives than true positives in some situations, namely where the population of true positives is very small compared to the whole population.

I often see people making this mistake, e.g. when arguing for wider public health screenings or wider anti-crime surveillance measures, but I am at a loss for how to succinctly describe the mistake they are making.

Does this phenomenon / statistical fallacy have a name? Failing that, has anyone got a good, terse, jargon-free intuition or example that would help me explain it to a lay person?

Apologies if this is the wrong forum to ask this. If so, please direct me to a more appropriate one.










terminology

asked Oct 14 at 11:29 by technicalbloke

Comments:

• As a quick comment, one would say that the scenario has poor "positive predictive value", which might be another avenue worth exploring when thinking about how to explain it. – James Stanley, Oct 15 at 4:15










• Do you mean that the test generates more false positives than true positives generally, despite its being 99% accurate over all cases, or do you mean that the exact same test has different behavior based on which subset of the population one is talking about? Because the overall accuracy rate already implies that the case it has difficulty identifying true positives of is the rarer condition. "When the population of true positives is very small compared..." sounds like it is characterizing the test over entire populations, not differences in its behavior over sub-populations. Is this correct? – pygosceles, Oct 15 at 19:18






• The current answer gives you the term, but you also asked for an example that could help to explain this to a layman: consider a disease that affects 1 in 1000 people. When you run a test with an accuracy of 99% on 1000 people, 10 people are classified incorrectly. So 1 person might be a true positive, but there may still be 9 false positives. In general, 'accuracy' (as a measure) only makes sense for balanced distributions; otherwise, 'informedness' may be a better measure. See en.wikipedia.org/wiki/Confusion_matrix#Table_of_confusion for more examples. (A short numeric sketch of this arithmetic follows the comment list below.) – Marco13, Oct 16 at 12:25











• @pygosceles Yes. Many, if not most, people have the intuition that a test that's 99% accurate implies a false positive rate of 1% regardless of the number of true positives in the population and the population size. It is counterintuitive to many people that a highly accurate test can give you way more false positives than true positives in some circumstances. – technicalbloke, Oct 17 at 11:02










• @technicalbloke It sounds like they aren't really even thinking about the true positive rate as its own thing, perhaps falsely conflating the huge proportion of true negatives with the true positives, since true negatives drive the accuracy measure for rare conditions and so say nothing about the true positive and false positive rates. Disregarding false positives sounds like they may also have conflated accuracy with recall, and so need to supplement their concept of recall with precision, which seems to be at the core of your concern. – pygosceles, Oct 17 at 14:11
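A minimal numeric sketch of Marco13's 1-in-1000 example. It assumes the "99% accurate" test is both 99% sensitive and 99% specific (one way to realize that accuracy) and scales the cohort to 100,000 to avoid fractional people; all numbers are illustrative, not from the thread.

    # Confusion-matrix counts for a rare condition under a "99% accurate" test.
    population = 100_000          # hypothetical cohort size
    prevalence = 1 / 1000         # 1 in 1000 people actually have the disease

    sick = population * prevalence        # 100 people
    healthy = population - sick           # 99,900 people

    true_positives = 0.99 * sick          # 99 sick people correctly flagged
    false_positives = 0.01 * healthy      # 999 healthy people wrongly flagged

    print(f"true positives:  {true_positives:.0f}")   # -> 99
    print(f"false positives: {false_positives:.0f}")  # -> 999

Despite the high accuracy, false positives outnumber true positives roughly ten to one, which is exactly the counterintuitive effect the question describes.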































3 Answers


















59 votes

Yes there is. Generally it is termed the base rate fallacy or, more specifically, the false positive paradox. There is even a Wikipedia article about it: see here.






answered Oct 14 at 14:29 by Mr Pi
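To connect the named fallacy to James Stanley's "positive predictive value" comment above, here is a small sketch of the Bayes computation behind the paradox; the 99% sensitivity/specificity and 0.1% prevalence are hypothetical values chosen for illustration.

    # P(disease | positive test) via Bayes' rule -- the positive predictive value.
    sensitivity = 0.99   # P(positive | disease)
    specificity = 0.99   # P(negative | no disease)
    prevalence  = 0.001  # P(disease): the base rate the fallacy ignores

    p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
    ppv = sensitivity * prevalence / p_positive

    print(f"P(disease | positive) = {ppv:.1%}")  # -> about 9.0%

With these numbers, a positive result still means only about a 9% chance of disease: the tiny base rate, not the test's accuracy, dominates the answer.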






















4 votes

Unfortunately I have no name for this fallacy. When I need to explain it, I have found it useful to refer to diseases that are commonly known among laypersons but are ridiculously rare. I live in Germany, and while everyone has read about the plague in their history books, everyone knows that as a German doctor I will never diagnose a true plague case nor take care of a shark bite.

When you tell people that there is a test for shark bites that is positive in one of a hundred healthy people, everyone will agree that the test does not make sense, no matter how good its positive predictive value is.

Depending on where in the world you are and who your audience is, possible examples are the plague, mad cow disease (BSE), progeria, or being struck by lightning. There are many well-known risks that people are aware affect far less than 1% of the population.

Edit/Addition: So far this has attracted 3 downvotes and no comments. Defending myself against the most likely objection: the original poster wrote

    Failing that has anyone got a good, terse, jargon free intuition/example that would help me explain it to a lay person

and I think that I did exactly that. Mr Pi posted his better answer later than I posted my lay-person explanation, and I upvoted his as soon as I saw it.






answered Oct 14 at 13:38 by Bernhard, edited Oct 17 at 12:33
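A back-of-the-envelope version of the shark-bite example, with invented numbers: when essentially nobody in the screened population has the condition, every positive the test produces is a false alarm.

    # Screening for an (effectively) zero-prevalence condition.
    screened = 1_000_000
    true_cases = 0                    # assume no actual shark bites in the sample
    false_positive_rate = 1 / 100     # positive in one of a hundred healthy people

    false_positives = (screened - true_cases) * false_positive_rate
    print(f"false positives: {false_positives:.0f}")  # -> 10,000, all of them wrong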






















3 votes

The base rate fallacy has to do with specialization to different populations, which does not capture a broader misconception: that high accuracy implies both low false positive and low false negative rates.

In addressing the conundrum of high accuracy with a high false positive rate, I find it impossible to go beyond very superficial, hand-wavy and inaccurate explanations without introducing people to the concepts of precision and recall.

In layman's terms, one can simply write out two values of interest instead of the over-simplified "accuracy" rate:

1. Of those people who have condition X, what proportion does the test indicate have condition X? This is the recall rate. Incorrect determinations are false negatives: people who should have been diagnosed as having the condition but were not.

2. Of those people whom the test said have condition X, what proportion actually have condition X? This is the precision rate. Incorrect determinations here are false positives: people we said have the condition but do not.

A diagnostic test is only useful if it imparts new information. You can show them that for any rare condition (say, <1% of cases), it is trivially easy to construct a test that is highly accurate (>99% accuracy!) while telling us nothing we didn't already know about who does or does not actually have it: simply tell everyone they don't have it. An infinite number of tests have the same accuracy but trade precision for recall and vice versa. One can get 100% precision or 100% recall with a trivial rule, but only a discriminating test will maximize both. Actually computing and showing them the precision and recall rates can inform them and help them think intelligently about the tradeoffs and the need for a more discerning test. Combining tests that offer different information can lead to a more accurate diagnosis even when the result of one test or the other is unacceptably inaccurate by itself.

This is key: does the test give us new information, or not?

Then there is also the dimension of risk aversion: how many false positives is it worth incurring to find one true positive? That is, how many people are you willing to mislead into thinking they have something they might not have in order to find one who does have it? This will depend on the danger of misdiagnosis, which usually differs for false positives and false negatives.

Edit: Further beneficial would be a confirming test or tests that are more and more precise, perhaps held out until later because they are more expensive. Diagnoses with a bias towards false positives can thus be used in concert to construct a sieve that is a cost-effective discriminator, eliminating most true negatives early on. However, this too comes at a cost of increased danger for true positives: you want cancer patients to get treatment as soon as possible, and having them jump through three or five hoops, each requiring two weeks to a month of advance scheduling, before they can even get access to treatment can worsen their prognosis by an order of magnitude. Therefore it is helpful to take other less expensive tests into consideration jointly when doing triage for follow-up, to prioritize those patients who have the greatest likelihood of having the condition, and to perform multiple tests simultaneously where possible.






answered Oct 15 at 19:31 by pygosceles (new contributor), edited Oct 15 at 20:48
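A sketch contrasting the degenerate "tell everyone they don't have it" test with a discriminating one, computing accuracy, precision, and recall from hypothetical counts (prevalence 1%, cohort of 10,000; the 90%/95% figures for the discriminating test are invented for illustration).

    # Accuracy alone cannot distinguish an uninformative test from a useful one.
    def metrics(tp, fp, fn, tn):
        total = tp + fp + fn + tn
        accuracy = (tp + tn) / total
        precision = tp / (tp + fp) if tp + fp else float("nan")  # no positive calls
        recall = tp / (tp + fn) if tp + fn else float("nan")
        return accuracy, precision, recall

    # Test A: always say "negative". 100 sick, 9,900 healthy.
    a = metrics(tp=0, fp=0, fn=100, tn=9_900)

    # Test B: a discriminating test, 90% sensitive and 95% specific.
    b = metrics(tp=90, fp=495, fn=10, tn=9_405)

    for name, (acc, prec, rec) in [("always-negative", a), ("discriminating", b)]:
        print(f"{name}: accuracy={acc:.1%}, precision={prec:.1%}, recall={rec:.1%}")

The always-negative test actually wins on accuracy (99% vs. about 95%) while conveying no information at all, which is the point of the answer: only precision and recall expose the difference.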















