Why do we need explainable AI?
I assume the original purpose of creating an AI is to help humans with some tasks. Then why should we care about its explainability? For example, in deep learning, as long as this "intelligence" helps us to the best of its ability and makes its decisions carefully, why do we need to know how its intelligence works?
Tags: philosophy, explainable-ai
asked by malioboro
5 Answers
As argued by Selvaraju et al., there are three stages of AI evolution, in all of which interpretability is helpful.
In the early stages of AI development, when AI is weaker than human performance, transparency can help us build better models. It gives us a better understanding of how a model works and helps us answer several key questions: for example, why a model works in some cases and not in others, why some examples confuse the model more than others, or why some types of models work while others don't (a minimal model-inspection sketch illustrating this appears after this list).
When AI is on par with human performance and ML models are starting to be deployed in several industries, interpretability can help build trust in these models. I'll elaborate a bit on this later, because I think it is the most important reason.
When AI significantly outperforms humans (e.g. AI playing chess or Go), it can help with machine teaching (i.e. learning from the machine how to improve human performance on that specific task).
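To make the first point a bit more concrete, here is a minimal sketch of one common inspection technique, an input-gradient saliency map, which highlights the input pixels that most influence a prediction. The model and the input tensor are placeholders for illustration (it assumes torchvision >= 0.13 for the `weights=` argument), not part of any specific paper:

```python
import torch
import torchvision.models as models

# Any differentiable classifier works; a pretrained ResNet-18 is used as a stand-in.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

# Placeholder input: one 224x224 RGB "image" (a real use case would load and
# normalise an actual image the same way the model expects).
image = torch.randn(1, 3, 224, 224, requires_grad=True)

# Forward pass, then backpropagate the score of the predicted class to the pixels.
scores = model(image)
top_class = scores.argmax(dim=1).item()
scores[0, top_class].backward()

# Saliency = per-pixel gradient magnitude: large values mark pixels that most
# influence the prediction, giving a rough "why" behind the decision.
saliency = image.grad.abs().max(dim=1)[0]  # shape: (1, 224, 224)
print(saliency.shape)
```

This kind of map is exactly the sort of artifact that lets you see why some examples confuse the model more than others.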
Why is trust so important?
First, let me give you a couple of examples of industries where trust is paramount:
In healthcare, imagine a deep neural network performing diagnosis for a specific disease. A classic black-box NN would just output a binary "yes" or "no". Even if it could outperform humans in sheer predictive accuracy, it would be utterly useless in practice. What if the doctor disagreed with the model's assessment? Shouldn't they know why the model made that prediction? Maybe it saw something the doctor missed. Furthermore, if it made a misdiagnosis (e.g. a sick person was classified as healthy and didn't get the proper treatment), who would take responsibility: the model's user? The hospital? The company that designed the model? The legal framework surrounding this is still blurry.
Another example is self-driving cars. The same questions arise: if a car crashes, whose fault is it: the driver's? The car manufacturer's? The company that designed the AI? Legal accountability is key for the development of this industry.
In fact, this lack of trust has, according to many, hindered the adoption of AI in many fields (sources: 1, 2, 3). Meanwhile, there is a running hypothesis that with more transparent, interpretable, or explainable systems, users will be better equipped to understand and therefore trust the intelligent agents (sources: 1, 2, 3).
In several real-world applications, you can't just say "it works 94% of the time". You might also need to provide a justification.
Government regulations
Several governments are slowly proceeding to regulate AI, and transparency seems to be at the center of all of this.
The first to move in this direction is the EU, which has set out several guidelines stating that AI should be transparent (sources: 1, 2, 3). For instance, the GDPR states that if a person's data has been subject to "automated decision-making" or "profiling" systems, then they have a right to access
"meaningful information about the logic involved"
(Article 15, EU GDPR)
Now, this is a bit blurry, but there is clearly the intent of requiring some form of explainability from these systems. The general idea the EU is trying to convey is: "if you have an automated decision-making system affecting people's lives, then they have a right to know why a certain decision has been made." For example, if a bank has an AI accepting and declining loan applications, then the applicants have a right to know why their applications were rejected.
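As a purely hypothetical illustration of what such an explanation could look like, the sketch below fits an interpretable logistic-regression loan model on synthetic data and reports which features pushed a particular application towards rejection. All feature names, values, and the model itself are made up; a real bank would use more careful attribution methods:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Made-up features for illustration only.
feature_names = ["income", "debt_ratio", "years_employed", "missed_payments"]
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
# Synthetic labels: 1 = approved, 0 = rejected.
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2] - X[:, 3] > 0).astype(int)

model = LogisticRegression().fit(X, y)

applicant = np.array([[-0.5, 1.2, 0.1, 2.0]])  # hypothetical applicant
# Per-feature contribution to the decision score (coefficient * value):
# negative contributions pushed the application towards rejection.
contributions = model.coef_[0] * applicant[0]
for name, c in sorted(zip(feature_names, contributions), key=lambda t: t[1]):
    print(f"{name}: {c:+.2f}")
print("decision:", "approved" if model.predict(applicant)[0] else "rejected")
```

The point is that a simple, interpretable model can directly produce a human-readable "reason" for its decision, which is much harder for an opaque black box.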
To sum up...
Explainable AI is necessary because:
- It gives us a better understanding of our models, which helps us improve them.
- In some cases, we can learn from the AI how to make better decisions in certain tasks.
- It helps users trust AI, which leads to wider adoption.
- Deployed AIs in the (not too distant) future might be required to be more "transparent".
answered by Djib2011
Another reason: in the future, AI might be used for tasks that human beings cannot yet understand. By understanding how a given AI algorithm works on such a problem, we might come to understand the nature of the underlying phenomenon.
answered by Makintosz
If you're a bank, hospital, or any other entity that uses predictive analytics to make decisions with a huge impact on people's lives, you would not make important decisions just because gradient-boosted trees told you to do so. Firstly, because it's risky and the underlying model might be wrong, and secondly, because in some cases it is illegal - see Right to explanation.
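One simple way to get at least a global explanation out of a gradient-boosted model is permutation feature importance. The sketch below is a minimal illustration using scikit-learn on synthetic data, so the dataset and numbers are placeholders rather than a real banking or medical workload:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a loan/medical dataset (no real data here).
X, y = make_classification(n_samples=2000, n_features=8, n_informative=4,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure how much
# the held-out score drops. Features with a large drop drive the predictions.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature {i}: importance {result.importances_mean[i]:.3f}")
```

This does not explain an individual decision, but it tells you which inputs the model relies on, which is a first sanity check before trusting it with high-stakes calls.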
answered by Tomasz Bartkowiak
Explainable AI is often desirable because
AI (in particular, artificial neural networks) can catastrophically fail to do its intended job. More specifically, it can be hacked or attacked with adversarial examples, or it can make unexpectedly wrong decisions whose consequences are catastrophic (for example, they can lead to the death of people); a minimal sketch of such an adversarial perturbation appears after this list. For instance, imagine that an AI is responsible for determining the dosage of a drug to be given to a patient, based on the patient's conditions. What if the AI makes a wrong prediction and this leads to the death of the patient? Who will be responsible for such an action? In order to accept the AI's dosage prediction, doctors need to trust the AI, but trust only comes with understanding, which requires an explanation. So, to avoid such failures, it is fundamental to understand the inner workings of the AI, so that it does not make those wrong decisions again.
AI often needs to interact with humans, who are sentient beings (we have feelings) and who often need an explanation or reassurance (regarding some topic or event).
In general, humans are often looking for an explanation and understanding of their surroundings and the world. By nature, we are curious and exploratory beings. Why does an apple fall?
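To make the adversarial-example point in the first item more concrete, here is a minimal sketch of the fast gradient sign method (FGSM). The model, inputs, and epsilon are placeholders chosen for illustration, not a recipe tied to any particular deployed system:

```python
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon=0.03):
    """Return an adversarially perturbed copy of x (fast gradient sign method)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss the most, staying within
    # an epsilon-ball around the original input.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

# Placeholder model and data: a tiny classifier on 10-dimensional inputs.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
x = torch.randn(4, 10)
y = torch.randint(0, 2, (4,))

x_adv = fgsm_attack(model, x, y)
# On a trained model, a small perturbation like this can flip predictions
# even though the inputs look almost unchanged to a human.
print(model(x).argmax(dim=1), model(x_adv).argmax(dim=1))
```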
answered by nbro
IMHO, the most important need for explainable AI is to prevent us from becoming intellectually lazy. If we stop trying to understand how answers are found, we have conceded the game to our machines.
answered by S. McGrew