Is there any research on the development of attacks against artificial intelligence systems?
For example, is there a way to generate a letter "A" that every human being in the world can recognize, but that a state-of-the-art character recognition system will fail to recognize? Or spoken audio that everyone can easily recognize, but that a state-of-the-art speech recognition system will fail on?
If such a thing exists, is this technology a theory-based science (mathematically proven) or an experimental science (randomly adding different types of noise, feeding it into the AI system, and seeing how it behaves)? Where can I find material on this?
Tags: image-recognition, voice-recognition, security, adversarial-ml
– DuttaA (9 hours ago): Why would a state-of-the-art character recognition system fail to recognize a digit, the sole task for which it was created? Current state-of-the-art systems have higher object recognition accuracy than humans. I think you have misworded your question. But, yes, things like Actor-Critic exist, in which one AI tries to fool or defeat the other (used in GANs and, probably, in Reinforcement Learning).
– Lion Lai (9 hours ago): I didn't misword my question. This question is what I want to know. I will look into Actor-Critic. Thanks.
– DuttaA (9 hours ago): I wouldn't call it "anti-artificial intelligence"; it's as if we are trying to create something to combat AI.
– Manuel Rodriguez (8 hours ago): It sounds similar to the invention of a cap of invisibility, which allows the user to hide from a powerful Artificial Intelligence and creates a space in which not technology but fantasy rules the world. The only thing more powerful than a computer program is a magic spell that can easily neutralize all OCR systems ...
– nbro (5 hours ago): I edited your question to hopefully clarify it. If I changed the meaning of the question, please edit it again.
2 Answers
Sometimes, if the rules used by an AI to identify characters are discovered, and if the rules used by a human being to identify the same characters are different, it is possible to design characters that are recognized by a human being but not by the AI. However, if the human being and the AI both use the same rules, they will recognize the same characters equally well.
A student I advised once trained a neural network to recognize a set of numerals, then used a genetic algorithm to alter the shapes and connectivity of the numerals so that a human could still recognize them but the neural network could not. Of course, if he had then re-trained the neural network on the expanded set of numerals, it probably would have been able to recognize the new ones.
– S. McGrew (answered 8 hours ago)
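As a rough illustration of the approach this answer describes, here is a minimal, hypothetical Python sketch (not the student's actual code): a toy evolutionary loop mutates an image until a placeholder classifier changes its prediction, while a distance penalty keeps the perturbed image close to the original as a crude stand-in for "still recognizable to a human". The `predict` function below is a random linear model, used purely as a placeholder for a trained digit recognizer.

```python
# Hypothetical sketch of the genetic-algorithm idea described in the answer above.
import numpy as np

rng = np.random.default_rng(0)

# Placeholder classifier: a random linear model standing in for a trained
# digit recognizer. Swap `predict` for a real model's prediction function.
W = rng.normal(size=(10, 28 * 28))

def predict(image):
    """Return the predicted class (0-9) for a 28x28 image."""
    return int(np.argmax(W @ image.ravel()))

def fitness(candidate, original, true_label, dist_weight=0.05):
    """Higher is better: reward fooling the model, penalize large changes."""
    fooled = 1.0 if predict(candidate) != true_label else 0.0
    distance = np.linalg.norm(candidate - original)
    return fooled - dist_weight * distance

def evolve(original, true_label, pop_size=50, generations=200, sigma=0.1):
    population = [original + rng.normal(scale=sigma, size=original.shape)
                  for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population,
                        key=lambda c: fitness(c, original, true_label),
                        reverse=True)
        best = scored[0]
        if predict(best) != true_label:      # the model is fooled; stop early
            return best
        parents = scored[: pop_size // 5]    # keep the top 20%
        population = [p + rng.normal(scale=sigma, size=p.shape)  # mutate
                      for p in parents for _ in range(5)]
    return scored[0]

original = rng.random((28, 28))              # stand-in for a digit image
adversarial = evolve(original, true_label=predict(original))
print(predict(original), predict(adversarial))
```

In a real experiment, the distance penalty would be replaced (or supplemented) by an actual human-readability check, which is what makes this kind of search expensive in practice.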
– Lion Lai (8 hours ago): Nice answer. Yeah, if the system is trained with the expanded data, then it will learn to recognize the exceptions. So I think the answer to this question might be related to signal processing, like when we humans take a color blindness test. It's just my guess.
Yes, there is some research on this topic, which can be called adversarial machine learning, and it is more of an experimental field than a theory-based one.
An adversarial example is an input similar to the ones used to train the model, but one that leads the model to produce an unexpected outcome. For example, consider an artificial neural network (ANN) trained to distinguish between oranges and apples. You are then given an image of an apple that is similar to another image used to train the ANN, but slightly blurred. You pass it to the ANN, which predicts the object to be an orange.
Several machine learning and optimization methods have been used to detect the boundary behaviour of machine learning models, that is, the unexpected behaviour of a model that produces different outcomes given two slightly different inputs (which correspond to the same object). For example, evolutionary algorithms have been used to develop tests for self-driving cars. See, for example, Automatically testing self-driving cars with search-based procedural content generation (2019) by Alessio Gambi et al.
– nbro (answered 5 hours ago, edited 53 minutes ago)
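To make the adversarial-example idea concrete, below is a minimal, hypothetical sketch of the Fast Gradient Sign Method (FGSM) applied to a toy logistic-regression "apple vs. orange" classifier. The weights are random and merely stand in for a trained model; with a real neural network, the input gradient would be obtained by automatic differentiation (e.g. torch.autograd or tf.GradientTape) rather than the closed-form expression used here.

```python
# Hypothetical FGSM sketch on a toy logistic-regression classifier.
import numpy as np

rng = np.random.default_rng(1)

# Toy binary classifier: sigmoid(w . x + b), class 1 = "apple", 0 = "orange".
w = rng.normal(size=100)
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict_proba(x):
    return sigmoid(w @ x + b)

# An input the model currently classifies as "apple".
x = rng.normal(size=100)
if predict_proba(x) < 0.5:       # flip the sign so the starting prediction is "apple"
    x = -x
y = 1.0                          # true label: apple

# For logistic regression with cross-entropy loss, the gradient of the loss
# with respect to the input is (p - y) * w.
grad_x = (predict_proba(x) - y) * w

# FGSM: take a small step in the direction of the sign of the input gradient.
epsilon = 0.3
x_adv = x + epsilon * np.sign(grad_x)

print("clean prediction:      ", predict_proba(x))      # > 0.5, i.e. "apple"
print("adversarial prediction:", predict_proba(x_adv))  # pushed toward "orange"
```

The key point is that the perturbation is small per component (bounded by epsilon) yet systematically aligned with the loss gradient, which is why the prediction can flip even though the input barely changes.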