Why is explainability not one of the criteria for publication?


In general, a paper is eligible for publication in a reputable journal if it satisfies the criteria of objectivity, reproducibility, and (optionally) novelty.

Why is explainability not also considered a criterion? If the model proposed in a paper satisfies the three criteria above but not explainability, how can it be considered a contribution to the field?

PS: Low "explainability" means proving something works without explaining how it works. See also "interpretability".










Tag: publishability

Score 2, asked 9 hours ago by hanugm; edited 25 mins ago by Anonymous Physicist

Comments:

  • If it didn't satisfy explainability, how did it get accepted by peer reviewers? – Coder, 9 hours ago

  • Some subfields of computer science have wide acceptance without explainability. – hanugm, 9 hours ago

  • What's explainability? Do you mean accessibility? – user2768, 9 hours ago

  • So experimental results should not be published until they are well understood? – fqq, 9 hours ago

  • There are huge realms of knowledge where we know what happens, but not why it happens... are you proposing that that should not be publishable? – Flyto, 4 hours ago
4 Answers

Answer by Bryan Krause (score 6, answered 8 hours ago)
Coming especially from a biomedical sciences perspective, and taking the definition of "explainability" given in a comment:

    I mean proving something works without explaining how it works.

such a requirement would be an absolute disaster for science. Many results are not explainable by that criterion; many treatments are known to be successful without being explained. If we waited until findings were understood before publishing, science would move a lot more slowly.

If you had a black-box image processing algorithm that, for example, beat the state of the art at detecting tumors in MRI images, that result would be very interesting and publishable without being able to explain the black box. In fact, it would likely be unethical not to publish such a finding.






Comments:

  • For the field of medical science your statement may be true, but that particular example from your last paragraph strikes a chord for me. Currently we're seeing a flood of papers in computer science that apply "deep learning"/"neural networks" to a variety of highly specific problems and do not say more than "It works (but we don't know why)". Many people are publishing like mad and literally do not have the slightest idea of what they are doing, even though that may seem hard to believe for other fields. Really, cf. xkcd.com/1838: if it looks right, it will be published. – Marco13, 1 hour ago

  • @Marco13 Indeed, though of course people still need to be cautious of xkcd.com/882. That shouldn't be a barrier to publishing, though; it should be a barrier to how studies are interpreted and how results are validated on independent data. I should make clear that I have in mind things like paracetamol, which is in wide use and definitely effective, and yet still oddly not well understood. – Bryan Krause, 1 hour ago

  • Superconductivity would be another example. New superconductors may require hundreds of papers before they are explained. – Anonymous Physicist, 23 mins ago

  • @Marco13 It's a horrible problem, really. How can we know that neural networks actually work if we don't know how they work? There was an AI that could check chest X-rays for tuberculosis, and while it did better than human X-ray technicians, many of the false positives it found were correctly assessed as negative by humans. The issue was that the AI knew the X-rays were coming from a hospital; it had learned that this meant the patient was sicker, and that biased the test toward tuberculosis. If you can't explain what patterns it's finding, how do you know they are real? – Ryan_L, 15 mins ago
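Ryan_L's chest X-ray anecdote describes a spurious-correlation failure, which is easy to reproduce in toy form. The sketch below is an illustration of that failure mode only, not anything from the thread; the dataset, feature names, and probabilities are all invented. A "model" that keys only on hospital provenance looks accurate on biased training data and drops to chance once that correlation disappears.

    import random

    random.seed(1)

    def make_data(n, p_spurious):
        """Toy dataset of (signal, from_hospital, sick) rows. The hospital flag
        tracks the label with probability p_spurious; all numbers are invented
        purely for illustration."""
        rows = []
        for _ in range(n):
            sick = random.random() < 0.5
            # A weak real feature: a noisy measurement that is higher when sick.
            signal = (1.0 if sick else 0.0) + random.gauss(0, 1.5)
            # The spurious feature: hospital provenance correlated with the label.
            hospital = sick if random.random() < p_spurious else not sick
            rows.append((signal, hospital, sick))
        return rows

    def flag_rule_accuracy(rows):
        """A 'model' that only looks at the spurious hospital flag."""
        return sum(h == y for _, h, y in rows) / len(rows)

    train = make_data(10_000, p_spurious=0.9)  # biased collection process
    test = make_data(10_000, p_spurious=0.5)   # deployment: flag uninformative

    print(f"train accuracy: {flag_rule_accuracy(train):.2f}")  # ~0.90
    print(f"test accuracy:  {flag_rule_accuracy(test):.2f}")   # ~0.50, chance

Without inspecting which feature the model relies on, the training score alone cannot distinguish the real signal from the artifact, which is the point of the comment.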


















Answer by DCTLib (score 3, answered 9 hours ago, edited 8 hours ago)
Papers are evaluated on a variety of criteria, including accessibility and the contribution to the field of research.

Papers that not only report findings but also analyze them and provide root causes for the observed effects are obviously more valuable and more likely to be accepted.

But from a scientific point of view, requiring that every paper have this property would not be a good idea. Quite often, the root cause of an observed phenomenon is not known. Not being able to publish papers without finding the root cause would mean that information stays "unknown" until the person making a discovery also finds the reason for the observed phenomenon, which could mean that it is never found at all. For instance, if Mendel had had to sit on his discovery that traits are inherited until DNA was found, that would have been quite a loss.

In computer science, you need to distinguish between pure theoretical computer science and the rest. While in the former the proofs provide all the reasoning you need, in the applied fields at least part of the argument is some utility of the finding. There are many subfields in which algorithms are published that work well in practice despite not giving theoretical guarantees that they always work. Finding out why certain algorithms work well in practice would require defining exactly what "practice" means, which changes over time. Machine learning is a good example: we know that many machine learning algorithms can get stuck in local optima, and we have some ideas on how to prevent that (in many interesting cases). There is also some theory that tries to capture this. But ultimately, the reason why many of the approaches work is that the models to be learned are easy enough and the algorithm is good enough, which is difficult or impossible to formalize to a level that would be acceptable in a scientific paper. Requiring an in-depth explanation of why a new approach works would essentially mean that there are almost no publications of practical relevance.
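The local-optima point is easy to make concrete. Below is a minimal sketch (an illustration, not part of the original answer; the objective and hyperparameters are arbitrary choices): plain gradient descent on a double-well function converges to whichever basin it starts in, while random restarts, a common heuristic, often recover the better optimum even though nothing guarantees they will.

    import random

    def f(x):
        """Double-well objective: local minimum near x ~ 1.13, global near x ~ -1.30."""
        return x**4 - 3 * x**2 + x

    def grad(x):
        return 4 * x**3 - 6 * x + 1

    def descend(x, lr=0.01, steps=2000):
        """Plain gradient descent: converges to whichever basin x starts in."""
        for _ in range(steps):
            x -= lr * grad(x)
        return x

    # Started in the wrong basin, descent settles for the local minimum.
    x_single = descend(2.0)
    print(f"single start: x = {x_single:+.3f}, f(x) = {f(x_single):+.3f}")

    # Random restarts: a standard, theory-free mitigation that often works in practice.
    random.seed(0)
    x_best = min((descend(random.uniform(-3, 3)) for _ in range(10)), key=f)
    print(f"best of 10:   x = {x_best:+.3f}, f(x) = {f(x_best):+.3f}")

The restart heuristic demonstrably helps here, yet characterizing exactly when it suffices is hard, which is the gap between practical success and theoretical explanation that the answer describes.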






Answer by sera (score 1, answered 9 hours ago)
I'm not sure what exactly you mean by explainability; it cannot be a scientific criterion if it doesn't exist in a dictionary.

So I conclude that what you have in mind is that the content of an article has to explain something: a not-well-understood process, a new method, a new theory.

Different fields have different standards and metrics. I'm sure they differ between publishing a new physical theory and an optimization of a machine learning algorithm for image recognition. But this is normally covered by a journal's novelty and significance criteria.

From a philosophy-of-science point of view, you should also inspect what the modus operandi of researchers in your field is. For example, in particle physics or cosmology, researchers try to falsify the prevailing scientific paradigm/theory, especially if there are too many flaws in the theory currently in use. I know some of the basics of machine learning theory, and much of it is based on mathematical methods developed in quantum physics, a theory that is pretty much bullet-proof: no one has falsified it to this day, and physicists still try. But in engineering, and even in applied physics depending on the topic/research question, researchers tend to use a positivistic modus operandi, e.g. optimizing/enhancing/backing up a machine learning algorithm without substantially questioning or falsifying the underlying theories. For minor incremental improvements, an explanation in the sense of why rather than how may not be necessary in your field, and is therefore not a general criterion, as long as the underlying theories are not really touched. As soon as you question a theory or a common measurement process, at least in physics, you need to give a good explanation in your article of why and how you are doing so: what the motivation is, and why your description is more accurate.

When you say in a comment "proving something works without explaining how it works", I think this is what sometimes happens in industrial machine learning: input, black box, output. But if you can explain neither how nor why your algorithm works (better), then in the best case you can call it smart engineering, but not science that can/should be published ;-)






Answer (score -1)
There is another aspect to this that applies in some fields, even surprisingly diverse ones: "explainable to whom, exactly?" I'll use math as an example, but I think it also applies to things like literary criticism and CS.

When a professional paper is written, it is written in such a way that people similar to the author can understand it. It isn't, normally, written for novices or people in other fields. The authors expect that most of their readers will be much like themselves, with a similar background and way of thinking. So a math proof can, in many (most?) cases, leave out many steps that would make the paper more understandable to a novice but would just slow down most of the readers.

I think any field with a large professional vocabulary that is well understood by experienced practitioners, even one not as "arcane" as mathematics, will have a lot of papers like this.

On the other hand, people who write for a general audience may need to do just the opposite: fill in more detail than professionals require, and resort to metaphor and analogy more than experts need, just to be understood at all.

Of course, the worst of all worlds is either to provide so much detail that the work becomes pedantic, pleasing no one, or to make unsupported statements requiring leaps of faith to follow (or not).

In any case, what may be easily understood by you may not be by me, and vice versa.

Moreover, since the reviewers of any paper are probably a lot like the authors, if they can understand it they won't object, and if they can't, they will require modifications. So your "requirement" is probably built into the process implicitly, as member Coder implies in a comment.






      share|improve this answer





























        Your Answer








        StackExchange.ready(function()
        var channelOptions =
        tags: "".split(" "),
        id: "415"
        ;
        initTagRenderer("".split(" "), "".split(" "), channelOptions);

        StackExchange.using("externalEditor", function()
        // Have to fire editor after snippets, if snippets enabled
        if (StackExchange.settings.snippets.snippetsEnabled)
        StackExchange.using("snippets", function()
        createEditor();
        );

        else
        createEditor();

        );

        function createEditor()
        StackExchange.prepareEditor(
        heartbeatType: 'answer',
        autoActivateHeartbeat: false,
        convertImagesToLinks: true,
        noModals: true,
        showLowRepImageUploadWarning: true,
        reputationToPostImages: 10,
        bindNavPrevention: true,
        postfix: "",
        imageUploader:
        brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
        contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
        allowUrls: true
        ,
        noCode: true, onDemand: true,
        discardSelector: ".discard-answer"
        ,immediatelyShowMarkdownHelp:true
        );



        );













        draft saved

        draft discarded


















        StackExchange.ready(
        function ()
        StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2facademia.stackexchange.com%2fquestions%2f135305%2fwhy-is-explainability-not-one-of-the-criteria-for-publication%23new-answer', 'question_page');

        );

        Post as a guest















        Required, but never shown

























        4 Answers
        4






        active

        oldest

        votes








        4 Answers
        4






        active

        oldest

        votes









        active

        oldest

        votes






        active

        oldest

        votes









        6















        Coming especially from a biomedical sciences perspective,




        I mean proving something works without explaining how it works.




        (from a comment describing what is meant by 'explainability')



        this would be an absolute disaster for science. Many results are not explainable according to that criteria; many treatments are known to be successful without being explained. If we waited until findings were understood before publishing, science would move a lot more slowly.



        If you had a black-box image processing algorithm that, for example, beat the state of the art in tumor detection in processing MRI images, that result would be very interesting and publishable without being able to explain the black-box. In fact, it would likely be unethical to not publish such a finding.






        share|improve this answer




















        • 2





          For the field of medical science your statement may be true, but that particular example from your last paragraph strikes a chord for me. Currently, we're seeing a flood of papers in computer science that apply "deep learning"/"neural networks" to a variety of highly specific problems, and do not say more than "It works (but we don't know why)". Many people are publishing like mad and literally do not have the slightest idea of what they are doing - even though that may seem hard to believe for other fields. Really, cf. xkcd.com/1838 : If it looks right, it will be published.

          – Marco13
          1 hour ago











        • @Marco13 Indeed, though of course people still need to be cautious of xkcd.com/882 - that shouldn't be a barrier to publishing, though, it should be a barrier to how studies are interpreted and how results are validated on independent data. I should make clear that I have in mind things like paracetamol which are in wide use, definitely effective, and yet...still oddly not well understood.

          – Bryan Krause
          1 hour ago












        • Superconductivity would be another example. New superconductors may require hundreds of papers before they are explained.

          – Anonymous Physicist
          23 mins ago











        • @Marco13 It's a horrible problem really. How can we know that the neural networks actually work if we don't know how they work? I know they had an AI that could check chest X-rays for tuberculosis, and while they did better than human x-ray technicians, many of the false positives it found were correctly assessed as negative by humans. The issue was the AI knew the xrays were coming from a hospital, it had learned this meant he or she was sicker, and this biased the test to think it was more likely tuberculosis. If you can't explain what patterns it's finding, how do you know they are real?

          – Ryan_L
          15 mins ago















        6















        Coming especially from a biomedical sciences perspective,




        I mean proving something works without explaining how it works.




        (from a comment describing what is meant by 'explainability')



        this would be an absolute disaster for science. Many results are not explainable according to that criteria; many treatments are known to be successful without being explained. If we waited until findings were understood before publishing, science would move a lot more slowly.



        If you had a black-box image processing algorithm that, for example, beat the state of the art in tumor detection in processing MRI images, that result would be very interesting and publishable without being able to explain the black-box. In fact, it would likely be unethical to not publish such a finding.






        share|improve this answer




















        • 2





          For the field of medical science your statement may be true, but that particular example from your last paragraph strikes a chord for me. Currently, we're seeing a flood of papers in computer science that apply "deep learning"/"neural networks" to a variety of highly specific problems, and do not say more than "It works (but we don't know why)". Many people are publishing like mad and literally do not have the slightest idea of what they are doing - even though that may seem hard to believe for other fields. Really, cf. xkcd.com/1838 : If it looks right, it will be published.

          – Marco13
          1 hour ago











        • @Marco13 Indeed, though of course people still need to be cautious of xkcd.com/882 - that shouldn't be a barrier to publishing, though, it should be a barrier to how studies are interpreted and how results are validated on independent data. I should make clear that I have in mind things like paracetamol which are in wide use, definitely effective, and yet...still oddly not well understood.

          – Bryan Krause
          1 hour ago












        • Superconductivity would be another example. New superconductors may require hundreds of papers before they are explained.

          – Anonymous Physicist
          23 mins ago











        • @Marco13 It's a horrible problem really. How can we know that the neural networks actually work if we don't know how they work? I know they had an AI that could check chest X-rays for tuberculosis, and while they did better than human x-ray technicians, many of the false positives it found were correctly assessed as negative by humans. The issue was the AI knew the xrays were coming from a hospital, it had learned this meant he or she was sicker, and this biased the test to think it was more likely tuberculosis. If you can't explain what patterns it's finding, how do you know they are real?

          – Ryan_L
          15 mins ago













        6














        6










        6









        Coming especially from a biomedical sciences perspective,




        I mean proving something works without explaining how it works.




        (from a comment describing what is meant by 'explainability')



        this would be an absolute disaster for science. Many results are not explainable according to that criteria; many treatments are known to be successful without being explained. If we waited until findings were understood before publishing, science would move a lot more slowly.



        If you had a black-box image processing algorithm that, for example, beat the state of the art in tumor detection in processing MRI images, that result would be very interesting and publishable without being able to explain the black-box. In fact, it would likely be unethical to not publish such a finding.






        share|improve this answer













        Coming especially from a biomedical sciences perspective,




        I mean proving something works without explaining how it works.




        (from a comment describing what is meant by 'explainability')



        this would be an absolute disaster for science. Many results are not explainable according to that criteria; many treatments are known to be successful without being explained. If we waited until findings were understood before publishing, science would move a lot more slowly.



        If you had a black-box image processing algorithm that, for example, beat the state of the art in tumor detection in processing MRI images, that result would be very interesting and publishable without being able to explain the black-box. In fact, it would likely be unethical to not publish such a finding.







        share|improve this answer












        share|improve this answer



        share|improve this answer










        answered 8 hours ago









        Bryan KrauseBryan Krause

        20.7k5 gold badges63 silver badges82 bronze badges




        20.7k5 gold badges63 silver badges82 bronze badges










        • 2





          For the field of medical science your statement may be true, but that particular example from your last paragraph strikes a chord for me. Currently, we're seeing a flood of papers in computer science that apply "deep learning"/"neural networks" to a variety of highly specific problems, and do not say more than "It works (but we don't know why)". Many people are publishing like mad and literally do not have the slightest idea of what they are doing - even though that may seem hard to believe for other fields. Really, cf. xkcd.com/1838 : If it looks right, it will be published.

          – Marco13
          1 hour ago











        • @Marco13 Indeed, though of course people still need to be cautious of xkcd.com/882 - that shouldn't be a barrier to publishing, though, it should be a barrier to how studies are interpreted and how results are validated on independent data. I should make clear that I have in mind things like paracetamol which are in wide use, definitely effective, and yet...still oddly not well understood.

          – Bryan Krause
          1 hour ago












        • Superconductivity would be another example. New superconductors may require hundreds of papers before they are explained.

          – Anonymous Physicist
          23 mins ago











        • @Marco13 It's a horrible problem really. How can we know that the neural networks actually work if we don't know how they work? I know they had an AI that could check chest X-rays for tuberculosis, and while they did better than human x-ray technicians, many of the false positives it found were correctly assessed as negative by humans. The issue was the AI knew the xrays were coming from a hospital, it had learned this meant he or she was sicker, and this biased the test to think it was more likely tuberculosis. If you can't explain what patterns it's finding, how do you know they are real?

          – Ryan_L
          15 mins ago












        • 2





          For the field of medical science your statement may be true, but that particular example from your last paragraph strikes a chord for me. Currently, we're seeing a flood of papers in computer science that apply "deep learning"/"neural networks" to a variety of highly specific problems, and do not say more than "It works (but we don't know why)". Many people are publishing like mad and literally do not have the slightest idea of what they are doing - even though that may seem hard to believe for other fields. Really, cf. xkcd.com/1838 : If it looks right, it will be published.

          – Marco13
          1 hour ago











        • @Marco13 Indeed, though of course people still need to be cautious of xkcd.com/882 - that shouldn't be a barrier to publishing, though, it should be a barrier to how studies are interpreted and how results are validated on independent data. I should make clear that I have in mind things like paracetamol which are in wide use, definitely effective, and yet...still oddly not well understood.

          – Bryan Krause
          1 hour ago












        • Superconductivity would be another example. New superconductors may require hundreds of papers before they are explained.

          – Anonymous Physicist
          23 mins ago











        • @Marco13 It's a horrible problem really. How can we know that the neural networks actually work if we don't know how they work? I know they had an AI that could check chest X-rays for tuberculosis, and while they did better than human x-ray technicians, many of the false positives it found were correctly assessed as negative by humans. The issue was the AI knew the xrays were coming from a hospital, it had learned this meant he or she was sicker, and this biased the test to think it was more likely tuberculosis. If you can't explain what patterns it's finding, how do you know they are real?

          – Ryan_L
          15 mins ago







        2




        2





        For the field of medical science your statement may be true, but that particular example from your last paragraph strikes a chord for me. Currently, we're seeing a flood of papers in computer science that apply "deep learning"/"neural networks" to a variety of highly specific problems, and do not say more than "It works (but we don't know why)". Many people are publishing like mad and literally do not have the slightest idea of what they are doing - even though that may seem hard to believe for other fields. Really, cf. xkcd.com/1838 : If it looks right, it will be published.

        – Marco13
        1 hour ago





        For the field of medical science your statement may be true, but that particular example from your last paragraph strikes a chord for me. Currently, we're seeing a flood of papers in computer science that apply "deep learning"/"neural networks" to a variety of highly specific problems, and do not say more than "It works (but we don't know why)". Many people are publishing like mad and literally do not have the slightest idea of what they are doing - even though that may seem hard to believe for other fields. Really, cf. xkcd.com/1838 : If it looks right, it will be published.

        – Marco13
        1 hour ago













        @Marco13 Indeed, though of course people still need to be cautious of xkcd.com/882 - that shouldn't be a barrier to publishing, though, it should be a barrier to how studies are interpreted and how results are validated on independent data. I should make clear that I have in mind things like paracetamol which are in wide use, definitely effective, and yet...still oddly not well understood.

        – Bryan Krause
        1 hour ago






        @Marco13 Indeed, though of course people still need to be cautious of xkcd.com/882 - that shouldn't be a barrier to publishing, though, it should be a barrier to how studies are interpreted and how results are validated on independent data. I should make clear that I have in mind things like paracetamol which are in wide use, definitely effective, and yet...still oddly not well understood.

        – Bryan Krause
        1 hour ago














        Superconductivity would be another example. New superconductors may require hundreds of papers before they are explained.

        – Anonymous Physicist
        23 mins ago





        Superconductivity would be another example. New superconductors may require hundreds of papers before they are explained.

        – Anonymous Physicist
        23 mins ago













        @Marco13 It's a horrible problem really. How can we know that the neural networks actually work if we don't know how they work? I know they had an AI that could check chest X-rays for tuberculosis, and while they did better than human x-ray technicians, many of the false positives it found were correctly assessed as negative by humans. The issue was the AI knew the xrays were coming from a hospital, it had learned this meant he or she was sicker, and this biased the test to think it was more likely tuberculosis. If you can't explain what patterns it's finding, how do you know they are real?

        – Ryan_L
        15 mins ago





        @Marco13 It's a horrible problem really. How can we know that the neural networks actually work if we don't know how they work? I know they had an AI that could check chest X-rays for tuberculosis, and while they did better than human x-ray technicians, many of the false positives it found were correctly assessed as negative by humans. The issue was the AI knew the xrays were coming from a hospital, it had learned this meant he or she was sicker, and this biased the test to think it was more likely tuberculosis. If you can't explain what patterns it's finding, how do you know they are real?

        – Ryan_L
        15 mins ago













        3















        Papers are evaluated on a variety of criteria, including accessibility and the contribution to the field of research.



        Now papers that not only report findings, but analyze findings and provide root causes for effects observed in the paper are obviously more valuable and are more likely to be accepted.



        But from a scientific point of view, requiring that papers have this property would not be a good idea. Quite often, the root cause of an observed phenomenon is not known. Not being able to publish papers without finding the root cause would mean that information stays "unknown" until the person making a discovery also finds out the reason for an observed phenomenon, which could mean that it is never found out. For instance, if Mendel with his discovery that traits are inherited until the DNA was found,
        that would have been quite a loss.



        In computer science, you need to distinguish between pure theoretical computer and the rest. While in the former, the proofs provide all the reason you need, in the applied fields, at least part of the argument is some utility of the finding. There are many subfields in which algorithms are published that work well in practice despite not giving theoretical guarantees that they always work. Finding out why certain algorithms work well in practice would require to define exactly what "practice" means, which changes over time. Machine learning is a good example: we know that many machine learning algorithms can get stuck in local optima, and we have some ideas on how to prevent that (in many interesting cases). And then there is some theory that tries to capture this. But ultimately, the reason for why many of the approaches work are that the models to be learned are easy enough and the algorithm is good enough, which is very difficult to impossible to formalize to a level that it would be acceptable in a scientific paper. And then requiring an in-depth explanation of why a new approach works would essentially mean that there will be almost no publications of practical relevance.






        share|improve this answer































          3















          Papers are evaluated on a variety of criteria, including accessibility and the contribution to the field of research.



          Now papers that not only report findings, but analyze findings and provide root causes for effects observed in the paper are obviously more valuable and are more likely to be accepted.



          But from a scientific point of view, requiring that papers have this property would not be a good idea. Quite often, the root cause of an observed phenomenon is not known. Not being able to publish papers without finding the root cause would mean that information stays "unknown" until the person making a discovery also finds out the reason for an observed phenomenon, which could mean that it is never found out. For instance, if Mendel with his discovery that traits are inherited until the DNA was found,
          that would have been quite a loss.



          In computer science, you need to distinguish between pure theoretical computer and the rest. While in the former, the proofs provide all the reason you need, in the applied fields, at least part of the argument is some utility of the finding. There are many subfields in which algorithms are published that work well in practice despite not giving theoretical guarantees that they always work. Finding out why certain algorithms work well in practice would require to define exactly what "practice" means, which changes over time. Machine learning is a good example: we know that many machine learning algorithms can get stuck in local optima, and we have some ideas on how to prevent that (in many interesting cases). And then there is some theory that tries to capture this. But ultimately, the reason for why many of the approaches work are that the models to be learned are easy enough and the algorithm is good enough, which is very difficult to impossible to formalize to a level that it would be acceptable in a scientific paper. And then requiring an in-depth explanation of why a new approach works would essentially mean that there will be almost no publications of practical relevance.






          share|improve this answer





























            3














            3










            3









            Papers are evaluated on a variety of criteria, including accessibility and the contribution to the field of research.



            Now papers that not only report findings, but analyze findings and provide root causes for effects observed in the paper are obviously more valuable and are more likely to be accepted.



            But from a scientific point of view, requiring that papers have this property would not be a good idea. Quite often, the root cause of an observed phenomenon is not known. Not being able to publish papers without finding the root cause would mean that information stays "unknown" until the person making a discovery also finds out the reason for an observed phenomenon, which could mean that it is never found out. For instance, if Mendel with his discovery that traits are inherited until the DNA was found,
            that would have been quite a loss.



            In computer science, you need to distinguish between pure theoretical computer and the rest. While in the former, the proofs provide all the reason you need, in the applied fields, at least part of the argument is some utility of the finding. There are many subfields in which algorithms are published that work well in practice despite not giving theoretical guarantees that they always work. Finding out why certain algorithms work well in practice would require to define exactly what "practice" means, which changes over time. Machine learning is a good example: we know that many machine learning algorithms can get stuck in local optima, and we have some ideas on how to prevent that (in many interesting cases). And then there is some theory that tries to capture this. But ultimately, the reason for why many of the approaches work are that the models to be learned are easy enough and the algorithm is good enough, which is very difficult to impossible to formalize to a level that it would be acceptable in a scientific paper. And then requiring an in-depth explanation of why a new approach works would essentially mean that there will be almost no publications of practical relevance.






            share|improve this answer















            Papers are evaluated on a variety of criteria, including accessibility and the contribution to the field of research.



            Now papers that not only report findings, but analyze findings and provide root causes for effects observed in the paper are obviously more valuable and are more likely to be accepted.



            But from a scientific point of view, requiring that papers have this property would not be a good idea. Quite often, the root cause of an observed phenomenon is not known. Not being able to publish papers without finding the root cause would mean that information stays "unknown" until the person making a discovery also finds out the reason for an observed phenomenon, which could mean that it is never found out. For instance, if Mendel with his discovery that traits are inherited until the DNA was found,
            that would have been quite a loss.



            In computer science, you need to distinguish between pure theoretical computer and the rest. While in the former, the proofs provide all the reason you need, in the applied fields, at least part of the argument is some utility of the finding. There are many subfields in which algorithms are published that work well in practice despite not giving theoretical guarantees that they always work. Finding out why certain algorithms work well in practice would require to define exactly what "practice" means, which changes over time. Machine learning is a good example: we know that many machine learning algorithms can get stuck in local optima, and we have some ideas on how to prevent that (in many interesting cases). And then there is some theory that tries to capture this. But ultimately, the reason for why many of the approaches work are that the models to be learned are easy enough and the algorithm is good enough, which is very difficult to impossible to formalize to a level that it would be acceptable in a scientific paper. And then requiring an in-depth explanation of why a new approach works would essentially mean that there will be almost no publications of practical relevance.







            share|improve this answer














            share|improve this answer



            share|improve this answer








            edited 8 hours ago

























            answered 9 hours ago









            DCTLibDCTLib

            9,35631 silver badges40 bronze badges




            9,35631 silver badges40 bronze badges
























                1















                I'm not sure what you mean exactly by explainability and it cannot be a scientific metric if it doesn't exist in a dictionary.



                So I conclude what you are thinking about is that the content of an article has to explain something: an not well understood process, a new method, a new theory.



                Different fields have different standards and metrics. I'm sure there are different for publishing a new physical theory vs. an optimization of a machine learning algorithm for image recognition. But this is normally covered by the novelty and significance metric by a journal.



                From a philosophy of science point of view you also should see or inspect what the modus operandi of researchers in your field is. For example, in particle physics or cosmology researchers try to falsify the scientific paradigm/theory, especially if there are too many flaws in a currently used theory. I know some of the basics of machine learning theory and that many of it is based on mathematical methods developed in quantum physics. This is a bullet-proof theory pretty much, no one has falsified it until this day and physicists still try. But in engineering and even in applied physics depending on the topic/resarch question rather a positivistic modus operandi is used by researchers, e.g. optimizing/enhancing/backing up a machine learning algorithm without substantial questioning or falsification underlying theories. And for minor incremental improvements an explanation in the sense of why rather then how may be not necessary in your field and therefore no general metric if the underlying theories are not really touched. As soon as you question a theory or common measurement process, at least in physics, you need to input a good explanation in your article, why and how you do this. What is the motivation, why it is more accurate to describe something.



                When you say in the comment "proving something works without how it works", I think this is what sometimes in industrial machine learning happens, input - black box - output. But if you can neither explain how or why your algorithm works (better), in the best case you can call it smart engineering but not science that can/should be published ;-)






                share|improve this answer





























                  1















                  I'm not sure what you mean exactly by explainability and it cannot be a scientific metric if it doesn't exist in a dictionary.



                  So I conclude what you are thinking about is that the content of an article has to explain something: an not well understood process, a new method, a new theory.



                  Different fields have different standards and metrics. I'm sure there are different for publishing a new physical theory vs. an optimization of a machine learning algorithm for image recognition. But this is normally covered by the novelty and significance metric by a journal.



                  From a philosophy of science point of view you also should see or inspect what the modus operandi of researchers in your field is. For example, in particle physics or cosmology researchers try to falsify the scientific paradigm/theory, especially if there are too many flaws in a currently used theory. I know some of the basics of machine learning theory and that many of it is based on mathematical methods developed in quantum physics. This is a bullet-proof theory pretty much, no one has falsified it until this day and physicists still try. But in engineering and even in applied physics depending on the topic/resarch question rather a positivistic modus operandi is used by researchers, e.g. optimizing/enhancing/backing up a machine learning algorithm without substantial questioning or falsification underlying theories. And for minor incremental improvements an explanation in the sense of why rather then how may be not necessary in your field and therefore no general metric if the underlying theories are not really touched. As soon as you question a theory or common measurement process, at least in physics, you need to input a good explanation in your article, why and how you do this. What is the motivation, why it is more accurate to describe something.



                  When you say in the comment "proving something works without how it works", I think this is what sometimes in industrial machine learning happens, input - black box - output. But if you can neither explain how or why your algorithm works (better), in the best case you can call it smart engineering but not science that can/should be published ;-)






                  share|improve this answer



























                    1














                    1










                    1









                    I'm not sure what you mean exactly by explainability and it cannot be a scientific metric if it doesn't exist in a dictionary.



                    So I conclude what you are thinking about is that the content of an article has to explain something: an not well understood process, a new method, a new theory.



                    Different fields have different standards and metrics. I'm sure there are different for publishing a new physical theory vs. an optimization of a machine learning algorithm for image recognition. But this is normally covered by the novelty and significance metric by a journal.



                    From a philosophy of science point of view you also should see or inspect what the modus operandi of researchers in your field is. For example, in particle physics or cosmology researchers try to falsify the scientific paradigm/theory, especially if there are too many flaws in a currently used theory. I know some of the basics of machine learning theory and that many of it is based on mathematical methods developed in quantum physics. This is a bullet-proof theory pretty much, no one has falsified it until this day and physicists still try. But in engineering and even in applied physics depending on the topic/resarch question rather a positivistic modus operandi is used by researchers, e.g. optimizing/enhancing/backing up a machine learning algorithm without substantial questioning or falsification underlying theories. And for minor incremental improvements an explanation in the sense of why rather then how may be not necessary in your field and therefore no general metric if the underlying theories are not really touched. As soon as you question a theory or common measurement process, at least in physics, you need to input a good explanation in your article, why and how you do this. What is the motivation, why it is more accurate to describe something.



                    When you say in the comment "proving something works without how it works", I think this is what sometimes in industrial machine learning happens, input - black box - output. But if you can neither explain how or why your algorithm works (better), in the best case you can call it smart engineering but not science that can/should be published ;-)






                    share|improve this answer













                    I'm not sure what you mean exactly by explainability and it cannot be a scientific metric if it doesn't exist in a dictionary.



                    So I conclude what you are thinking about is that the content of an article has to explain something: an not well understood process, a new method, a new theory.



                    Different fields have different standards and metrics. I'm sure there are different for publishing a new physical theory vs. an optimization of a machine learning algorithm for image recognition. But this is normally covered by the novelty and significance metric by a journal.



                    From a philosophy of science point of view you also should see or inspect what the modus operandi of researchers in your field is. For example, in particle physics or cosmology researchers try to falsify the scientific paradigm/theory, especially if there are too many flaws in a currently used theory. I know some of the basics of machine learning theory and that many of it is based on mathematical methods developed in quantum physics. This is a bullet-proof theory pretty much, no one has falsified it until this day and physicists still try. But in engineering and even in applied physics depending on the topic/resarch question rather a positivistic modus operandi is used by researchers, e.g. optimizing/enhancing/backing up a machine learning algorithm without substantial questioning or falsification underlying theories. And for minor incremental improvements an explanation in the sense of why rather then how may be not necessary in your field and therefore no general metric if the underlying theories are not really touched. As soon as you question a theory or common measurement process, at least in physics, you need to input a good explanation in your article, why and how you do this. What is the motivation, why it is more accurate to describe something.



                    When you say in the comment "proving something works without how it works", I think this is what sometimes in industrial machine learning happens, input - black box - output. But if you can neither explain how or why your algorithm works (better), in the best case you can call it smart engineering but not science that can/should be published ;-)







                    share|improve this answer












                    share|improve this answer



                    share|improve this answer










                    answered 9 hours ago









                    serasera

                    2,5375 silver badges16 bronze badges




                    2,5375 silver badges16 bronze badges
























                        -1















                        There is another aspect to this that applies in some fields, even surprisingly diverse ones. It is "explainable to whom, exactly?" I'll use math as an example but it also applies to things like literary criticism and CS, I think.



                        When a professional paper is written, it is written in such a way that people similar to the author can understand it. It isn't, normally, written for novices or people in other fields. The author(s) suspect that most of their readers will be just like themselves with a similar background and way of thinking. So a math proof, can, in many (most?) cases, leave out many steps that would make the paper more understandable to a novice, but would just slow down most of the readers.



                        I think that any field, even one not as "arcane" as mathematics, but which has a large professional vocabulary that is well understood by experienced practitioners will have a lot of papers like this.



                        On the other hand, people that write for a general audience may need to do just the opposite. Fill in more detail than professionals require and resort to metaphor and analogy more than experts need, just to be understood at all.



Of course, the worst of all worlds is either to provide so much detail that the work becomes pedantic, pleasing no one, or to make unsupported statements requiring leaps of faith to follow (or not).



In any case, what may be easily understood by you may not be by me, and vice versa.



Moreover, since the reviewers of any paper are probably a lot like the authors, if they can understand it they won't object, and if they can't, they will require modifications. So your "requirement" is probably built into the process implicitly, as member Coder implies in a comment.






edited 8 hours ago

answered 8 hours ago

Buffy — 79.4k reputation, 21 gold badges, 244 silver badges, 352 bronze badges





























