Why do we need explainable AI?


I assume the original purpose of creating an AI is to help humans with some tasks. If that's the case, why should we care about its explainability? For example, in deep learning, as long as the "intelligence" helps us to the best of its ability and reaches its decisions carefully, why would we need to know how that intelligence works?










Tags: philosophy, explainable-ai






asked by malioboro, edited by nbro
          5 Answers

          As argued by Selvaraju et al., there are three stages of AI evolution, in all of which interpretability is helpful.



1. In the early stages of AI development, when AI is weaker than human performance, transparency can help us build better models. It can give a better understanding of how a model works and help us answer several key questions: for example, why a model works in some cases and not in others, why some examples confuse the model more than others, why these types of models work while others don't, etc.


          2. When AI is on par with human performance and ML models are starting to be deployed in several industries, it can help build trust for these models. I'll elaborate a bit on this later, because I think that it is the most important reason.


          3. When AI significantly outperforms humans (e.g. AI playing chess or Go), it can help with machine teaching (i.e. learning from the machine on how to improve human performance on that specific task).


          Why is trust so important?



          First, let me give you a couple of examples of industries where trust is paramount:



• In healthcare, imagine a deep neural network performing diagnosis for a specific disease. A classic black-box NN would just output a binary "yes" or "no". Even if it could outperform humans in raw predictive accuracy, it would be practically useless. What if the doctor disagreed with the model's assessment? Shouldn't she know why the model made that prediction; maybe it saw something the doctor missed. Furthermore, if it made a misdiagnosis (e.g. a sick person was classified as healthy and didn't get the proper treatment), who would take responsibility: the model's user, the hospital, or the company that designed the model? The legal framework surrounding this is a bit blurry.


• Another example is self-driving cars. The same questions arise: if a car crashes, whose fault is it? The driver's? The car manufacturer's? The company that designed the AI? Legal accountability is key for the development of this industry.


In fact, according to many, this lack of trust has hindered the adoption of AI in many fields (sources: 1, 2, 3). Meanwhile, there is a running hypothesis that with more transparent, interpretable, or explainable systems, users will be better equipped to understand and therefore trust the intelligent agents (sources: 1, 2, 3).



In several real-world applications, you can't just say "it works 94% of the time". You might also need to provide a justification...



          Government regulations



          Several governments are slowly proceeding to regulate AI and transparency seems to be at the center of all of this.



The first to move in this direction is the EU, which has set out several guidelines stating that AI should be transparent (sources: 1, 2, 3). For instance, the GDPR states that if a person's data has been subject to "automated decision-making" or "profiling" systems, then they have a right to access




          "meaningful information about the logic involved"




          (Article 15, EU GDPR)



Now, this is a bit blurry, but there is clearly an intent to require some form of explainability from these systems. The general idea the EU is trying to convey is that "if you have an automated decision-making system affecting people's lives, then the people affected have a right to know why a certain decision has been made." For example, if a bank has an AI accepting and declining loan applications, then the applicants have a right to know why their application was rejected.
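
To make the loan example concrete, here is a minimal sketch of one way to produce such an explanation, assuming a scikit-learn workflow; the feature names, data, and the applicant are purely illustrative and not taken from any real system:

```python
# Minimal sketch: explaining one loan decision with an interpretable linear model.
# Feature names and data are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["income", "debt_ratio", "years_employed", "late_payments"]

# Toy history: six past applicants (rows) and whether they repaid (1) or not (0).
X = np.array([
    [60, 0.2,  5, 0],
    [25, 0.6,  1, 3],
    [40, 0.4,  3, 1],
    [80, 0.1, 10, 0],
    [30, 0.7,  2, 4],
    [55, 0.3,  6, 1],
], dtype=float)
y = np.array([1, 0, 1, 1, 0, 1])

model = LogisticRegression(max_iter=1000).fit(X, y)

applicant = np.array([[28, 0.65, 1, 2]], dtype=float)
approved = bool(model.predict(applicant)[0])

# Each feature's contribution to the decision score (log-odds): coefficient * value.
contributions = model.coef_[0] * applicant[0]
print("approved" if approved else "rejected")
for name, c in sorted(zip(feature_names, contributions), key=lambda t: t[1]):
    print(f"  {name}: {c:+.2f}")
```

Pairing the prediction with a ranked list of contributions like this is one simple way to approach the "meaningful information about the logic involved" that the regulation asks for.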



          To sum up...



Explainable AI is necessary because:

• It gives us a better understanding of our models, which helps us improve them.

• In some cases, we can learn from AI how to make better decisions in certain tasks.

• It helps users trust AI, which leads to wider adoption.

• Deployed AI systems in the (not too distant) future might be required to be more "transparent".





answered by Djib2011

Another reason: in the future, AI might be used for tasks that human beings cannot understand on their own. By studying how a given AI algorithm works on such a problem, we might come to understand the nature of the underlying phenomenon.






answered by Makintosz
If you're a bank, hospital, or any other entity that uses predictive analytics to make decisions with a huge impact on people's lives, you should not make important decisions just because gradient-boosted trees told you to do so. Firstly, because it's risky and the underlying model might be wrong, and secondly, because in some cases it is illegal; see the right to explanation.
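
As a rough illustration of how one might probe a gradient-boosted model rather than taking its word for it, here is a minimal sketch assuming scikit-learn; the bundled breast-cancer dataset stands in for real decision data purely for illustration:

```python
# Minimal sketch: asking a gradient-boosted model which features drive its predictions.
# The dataset is scikit-learn's bundled breast-cancer data, used only as a stand-in.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: how much held-out accuracy drops when a feature is shuffled.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])
for name, drop in ranked[:5]:
    print(f"{name}: {drop:.3f}")
```

Global importances like these are only a first step (they say nothing about an individual case), but they already give a human reviewer something concrete to challenge.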






answered by Tomasz Bartkowiak
Explainable AI is often desirable because:



1. AI systems (in particular, artificial neural networks) can catastrophically fail to do their intended job. More specifically, they can be hacked or attacked with adversarial examples (see the toy sketch after this list), or they can take unexpected wrong decisions whose consequences are catastrophic (for example, leading to the death of people). For instance, imagine that an AI is responsible for determining the dosage of a drug to be given to a patient, based on the patient's condition. What if the AI makes a wrong prediction and this leads to the death of the patient? Who will be responsible for such an action? In order to accept the dosage prediction of the AI, doctors need to trust the AI, but trust only comes with understanding, which requires an explanation. So, to avoid such failures, it is fundamental to understand the inner workings of the AI, so that it does not make those wrong decisions again.


2. AI often needs to interact with humans, who are sentient beings (we have feelings) and who often need an explanation or reassurance (regarding some topic or event).


                3. In general, humans are often looking for an explanation and understanding of their surroundings and the world. By nature, we are curious and exploratory beings. Why does an apple fall?
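
To give point 1 above a concrete shape, here is a toy, FGSM-style sketch on a plain linear classifier; the weights and input are made up for illustration and are not meant to represent any deployed system:

```python
# Toy sketch: an adversarial (FGSM-style) perturbation flipping a linear classifier.
# Weights and input are illustrative only.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A "trained" logistic-regression classifier (made-up weights).
w = np.array([2.0, -3.0, 1.5])
b = 0.1

x = np.array([0.5, 0.2, 0.4])            # original input, scored as positive
p = sigmoid(w @ x + b)

# For this model the loss gradient w.r.t. the input is (p - y) * w; with y = 1 its
# sign is -sign(w), so the FGSM step x + eps * sign(grad) becomes x - eps * sign(w).
eps = 0.25
x_adv = x - eps * np.sign(w)
p_adv = sigmoid(w @ x_adv + b)

print(f"original:    score {p:.3f} -> class {int(p > 0.5)}")
print(f"adversarial: score {p_adv:.3f} -> class {int(p_adv > 0.5)}")
```

A small, targeted nudge to the input is enough to flip the decision, which is exactly the kind of failure an explanation can help surface.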






answered by nbro
                  IMHO, the most important need for explainable AI is to prevent us from becoming intellectually lazy. If we stop trying to understand how answers are found, we have conceded the game to our machines.






answered by S. McGrew