Optimization terminology: "Exact" v. "Approximate"
In optimization literature, I frequently see solution methods termed "exact" or "approximate". (I use the term "method" here because I suspect exactness, or its lack, is a function of both algorithm and model.) My understanding is that "exact" translates to "produces a provably optimal solution" and "approximate" translates to "may or may not find an optimal solution but will not provide proof of optimality". I'd like to know how others view the terms.
The motivation is a review of a recent paper submission. In it, we posed a binary linear programming model and a solution procedure that used branch-and-cut with a variant of Benders decomposition. We called the combination an exact method for solving the underlying problem. We also provided computational examples, all but one of which we solved to optimality. In the one remaining case, I cut off the run at 10 hours with an optimality gap of around 12%. The reviewer pointed to that case as an indication the procedure was not "exact". My initial inclination is to dismiss the comment, since branch-and-cut on a bounded ILP will always reach proven optimality given enough resources (time and memory). On the other hand, from a practical standpoint, if you're not going to get a proven optimal solution within your lifetime, perhaps the method really should not be considered exact.
I know we're not supposed to ask for opinions here, so instead I'll ask which is the proper interpretation of "exact" (or whether I'm completely off base in my interpretation of the term).
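For context, the "optimality gap" above is the solver's certificate of how far the incumbent might be from optimal. A minimal sketch of the usual relative-gap formula for a minimization problem (exact definitions vary slightly between solvers, and the numbers below are illustrative, not from the paper):

```python
def relative_gap(incumbent: float, best_bound: float) -> float:
    """Relative optimality gap, as reported by branch-and-cut solvers
    when a run is stopped before the bounds meet (minimization)."""
    if incumbent == 0.0:
        return 0.0 if best_bound == 0.0 else float("inf")
    return abs(incumbent - best_bound) / abs(incumbent)

# Illustrative numbers: an incumbent of 112 against a best lower
# bound of 100 gives a relative gap of about 10.7%.
print(round(relative_gap(112.0, 100.0), 4))  # 0.1071
```

The point of the formula is that the gap is a statement about the proof, not the solution: the incumbent may already be optimal, but until the bound catches up the solver cannot certify it.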
combinatorial-optimization terminology
asked 8 hours ago by prubin
I would also ask about "heuristic" methods, since I like to think of an "approximate" solution as the result of an "approximation algorithm", which provides a theoretical bound (worst case or expected) on the optimal value. For instance, the Goemans-Williamson max-cut algorithm is an approximation algorithm. – Stefano Gualandi, 8 hours ago
5 Answers
Exact: the algorithm will eventually provide a provably optimal solution.
Approximate: the algorithm will eventually produce a solution with some guarantee (e.g., a tour being at most twice as long as the shortest tour).
Heuristic: algorithms that do not give any worst-case guarantee whatsoever.
Finite convergence and runtime are separate from the "exactness" properties of an algorithm.
– Michael Feldmeier, answered 8 hours ago
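The three categories can be made concrete on minimum vertex cover (my own illustration, not from the answer): exhaustive search is exact, taking both endpoints of a maximal matching is a classic 2-approximation, and, say, greedy-by-degree would be a heuristic with no worst-case guarantee.

```python
from itertools import combinations

# A small graph: a 4-cycle with one chord.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]
nodes = {u for e in edges for u in e}

def is_cover(cover):
    return all(u in cover or v in cover for u, v in edges)

def exact_cover():
    # Exact: try all subsets in increasing size; the first cover
    # found is provably minimum (brute force, exponential time).
    for k in range(len(nodes) + 1):
        for subset in combinations(nodes, k):
            if is_cover(set(subset)):
                return set(subset)

def approx_cover():
    # Approximate: both endpoints of a maximal matching.
    # Provably within a factor of 2 of the minimum cover.
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover |= {u, v}
    return cover

exact, approx = exact_cover(), approx_cover()
assert is_cover(exact) and is_cover(approx)
assert len(approx) <= 2 * len(exact)  # the provable guarantee
```

The distinction in the answer shows up in the last assertion: the approximation carries a worst-case guarantee even when it does not return the optimum, whereas a heuristic would offer no such bound.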
Exact: provably optimal.
Approximate: offers an upper bound on the gap.
I would add heuristics: procedures that, as you described, may or may not provide an optimal solution (without any proof or guarantee).
– Daniel Duque (new contributor), answered 8 hours ago
An exact method will (typically within a bounded number of steps) provide a proven optimal solution: that is, a solution x* and a guarantee that no other feasible solution has an objective better than that of x*. Typically, exact methods compute two types of bounds, lower (L) and upper (U); optimality is then proven whenever the two bounds coincide.
In contrast to exact methods, heuristics provide no such guarantee of optimality. In the OR literature it is common to refer to those methods as heuristics rather than approximate methods. The latter term is more commonly used for algorithms with a proof of solution quality, although not necessarily exact ones. For instance, the TSP admits a 1.5-approximation algorithm (Christofides' method): a method that constructs a solution whose cost is at most 1.5 times that of an optimal TSP solution.
With respect to the reviewer's comment, it is indeed a little odd. By the same rationale, one would argue that Dijkstra's method is not exact because it will fail at solving shortest-path problems containing trillions of points. The CONCORDE solver for the TSP is another remarkable example of an exact method; like any solver for a reasonably difficult optimization problem, it will fail on extremely large instances, yet no one would argue that CONCORDE is not exact. It seems to me that you need to make your point using this type of argument. As I noted above, an exact algorithm is one that constructs, within a finite number of steps, a solution x* and a guarantee that no other feasible solution of better objective value exists (usually by computing lower and upper bounds). No such algorithm will scale to solve every possible instance on earth, unless of course the problem is trivial.
– Claudio Contardo, answered 8 hours ago
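The bound-matching certificate described above can be sketched generically: an exact method maintains L ≤ optimum ≤ U and declares proven optimality exactly when the bounds coincide. The bound sequences below are hypothetical, standing in for what a branch-and-bound run would produce:

```python
def solve_to_optimality(lower_bounds, upper_bounds, tol=1e-9):
    """Generic stopping loop of an exact method (minimization):
    consume improving lower/upper bound sequences and stop once
    they coincide, which is the proof of optimality."""
    L, U = float("-inf"), float("inf")
    for l, u in zip(lower_bounds, upper_bounds):
        L, U = max(L, l), min(U, u)  # bounds only ever tighten
        if U - L <= tol:
            return U  # proven optimal: no feasible solution beats U
    return None  # resources exhausted before the bounds met

# Hypothetical bound sequences; the bounds meet at 100.
print(solve_to_optimality([90, 95, 100], [120, 104, 100]))  # 100
```

Note that returning `None` (the run was cut off, as in the question's 10-hour instance) says nothing about exactness; it only means the certificate was not produced within the resource budget.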
In addition to the other answers posted already, I'll add that the term approximation algorithm means an algorithm with a provable worst-case error bound that (as @MarcoLubbecke reminded me in the comments) has polynomial runtime. But the term is often misused to refer to a heuristic that may or may not have such a provable bound.
I would have interpreted the term approximate algorithm as roughly equivalent to heuristic, with or without error bounds. From others' answers, it seems they would disagree and would argue that approximate algorithm = approximation algorithm. From a purely linguistic point of view, that certainly seems preferable; it would be really annoying if -e and -ion meant two very different things. But I am not sure I have seen the terms used this way consistently.
– LarrySnyder610, answered 8 hours ago, edited 3 hours ago
Plus, an approximation algorithm must have polynomial runtime. – Marco Lübbecke, 6 hours ago
I'm glad to see that I'm not the only one to have come across "approximate algorithm" as a code-phrase for (meta)heuristic. – prubin, 4 hours ago
According to L. Jourdan et al. [1], "NP-hard problems are difficult to solve and no polynomial-time algorithms are known for solving them. Most combinatorial optimization problems are NP-hard. Two approaches can be considered to solve this kind of problems depending on their size. For small instances, researchers usually use exact methods. Exact methods find the optimal solution and assess its optimality. There exist numerous exact methods such as the family of Branch and X, LP, DP and etc. Exact methods are known to be time expensive, so they can not be applied to large NP-hard problems or difficult ones. When instances become too large for exact methods, heuristics and in particular metaheuristics are often used."
The authors of [2] likewise treat solving time as the only practical difference between exact and approximate methods for combinatorial optimization problems, saying: "The available techniques for COPs can roughly be classified into two main categories: exact and heuristic methods. Exact algorithms are guaranteed to find an optimal solution and to prove its optimality for every instance of a COP. The run-time, however, often increases dramatically with the instance size, and often only small or moderately-sized instances can be practically solved to provable optimality. In this case, the only possibility for larger instances is to trade optimality for run-time, yielding heuristic algorithms. In other words, the guarantee of finding optimal solutions is sacrificed for the sake of getting good solutions in a limited time."
The last two sentences of that paragraph describe exactly the approach you followed in your paper.
So my interpretation is: all exact methods can solve large instances of COPs optimally if there are no limitations on their resources in terms of time and computational power; on the other hand, no method can be considered exact unless the optimality of its solution can be proven.
[1]: Jourdan, Laetitia, Matthieu Basseur, and E-G. Talbi. "Hybridizing exact methods and metaheuristics: A taxonomy." European Journal of Operational Research 199.3 (2009): 620-629.
[2]: Puchinger, Jakob, and Günther R. Raidl. "Combining metaheuristics and exact algorithms in combinatorial optimization: A survey and classification." International Work-Conference on the Interplay Between Natural and Artificial Computation. Springer, Berlin, Heidelberg, 2005.
5 Answers
5
active
oldest
votes
5 Answers
5
active
oldest
votes
active
oldest
votes
active
oldest
votes
$begingroup$
Exact: algorithm will eventually provide a provably optimal solution.
Approximate: algorithm will eventually produce a solution with some guarantees (e.g. a tour being at most twice as long as the shortest tour)
Heuristic: Algorithms that do not give any worst-case guarantee whatsoever.
Finite convergence and runtime are separate from "exactness" properties of an algorithm.
$endgroup$
add a comment |
$begingroup$
Exact: algorithm will eventually provide a provably optimal solution.
Approximate: algorithm will eventually produce a solution with some guarantees (e.g. a tour being at most twice as long as the shortest tour)
Heuristic: Algorithms that do not give any worst-case guarantee whatsoever.
Finite convergence and runtime are separate from "exactness" properties of an algorithm.
$endgroup$
add a comment |
$begingroup$
Exact: algorithm will eventually provide a provably optimal solution.
Approximate: algorithm will eventually produce a solution with some guarantees (e.g. a tour being at most twice as long as the shortest tour)
Heuristic: Algorithms that do not give any worst-case guarantee whatsoever.
Finite convergence and runtime are separate from "exactness" properties of an algorithm.
$endgroup$
Exact: algorithm will eventually provide a provably optimal solution.
Approximate: algorithm will eventually produce a solution with some guarantees (e.g. a tour being at most twice as long as the shortest tour)
Heuristic: Algorithms that do not give any worst-case guarantee whatsoever.
Finite convergence and runtime are separate from "exactness" properties of an algorithm.
answered 8 hours ago
Michael FeldmeierMichael Feldmeier
1,6065 silver badges31 bronze badges
1,6065 silver badges31 bronze badges
add a comment |
add a comment |
$begingroup$
Exact: Provably optimal
Approximate: offers an upper bound on the gap
I would add heuristics: procedures that, as you described, may or may not provide an optimal solution (with out any proof or guarantee).
New contributor
$endgroup$
add a comment |
$begingroup$
Exact: Provably optimal
Approximate: offers an upper bound on the gap
I would add heuristics: procedures that, as you described, may or may not provide an optimal solution (with out any proof or guarantee).
New contributor
$endgroup$
add a comment |
$begingroup$
Exact: Provably optimal
Approximate: offers an upper bound on the gap
I would add heuristics: procedures that, as you described, may or may not provide an optimal solution (with out any proof or guarantee).
New contributor
$endgroup$
Exact: Provably optimal
Approximate: offers an upper bound on the gap
I would add heuristics: procedures that, as you described, may or may not provide an optimal solution (with out any proof or guarantee).
New contributor
edited 8 hours ago
New contributor
answered 8 hours ago
Daniel DuqueDaniel Duque
3267 bronze badges
3267 bronze badges
New contributor
New contributor
add a comment |
add a comment |
$begingroup$
An exact method will (typically within a bounded number of steps) provide a proven optimal solution. This is, a solution x*
and a guarantee that no other feasible solution has an objective better than that of x*
. Typically, exact methods compute two types of bounds: lower (L
) and upper (U
) bounds. Optimality is then proven whenever both bounds coincide.
In contrast to exact methods that provide a guarantee of optimality, heuristics provide no such guarantee. In the OR literature, it is common to refer to those methods as heuristics rather than approximate methods. The latter is more commonly used to refer to algorithms with a proof of solution quality, although not necessarily exact. For instance, the TSP admits a 1.5-approximate algorithm (Christofides method), this is a method that will construct a solution whose cost is at most 1.5x that of an optimal TSP solution.
With respect to the reviewer comment, it is indeed a little odd. Using the same rationale, one would argue that Dijkstra's method is not exact because it will fail at solving shortest path problems containing trillions of points. The CONCORDE solver for the TSP is another remarkable example of exact method. Like any other problem for a reasonably difficult optimization problem, it will fail at solving extremely large problems. No one would argue that CONCORDE is not exact though. It seems to me that you need to make your point using such type of argument. As I pointed before, an exact algorithm is one that constructs, within a finite number of steps, a solution x*
and a guarantee that no other feasible solution of better objective value exists (usually by computing lower and upper bounds). No such algorithm will scale to solve every single possible instance on earth, unless of course the problem is trivial.
$endgroup$
add a comment |
$begingroup$
An exact method will (typically within a bounded number of steps) provide a proven optimal solution. This is, a solution x*
and a guarantee that no other feasible solution has an objective better than that of x*
. Typically, exact methods compute two types of bounds: lower (L
) and upper (U
) bounds. Optimality is then proven whenever both bounds coincide.
In contrast to exact methods that provide a guarantee of optimality, heuristics provide no such guarantee. In the OR literature, it is common to refer to those methods as heuristics rather than approximate methods. The latter is more commonly used to refer to algorithms with a proof of solution quality, although not necessarily exact. For instance, the TSP admits a 1.5-approximate algorithm (Christofides method), this is a method that will construct a solution whose cost is at most 1.5x that of an optimal TSP solution.
With respect to the reviewer comment, it is indeed a little odd. Using the same rationale, one would argue that Dijkstra's method is not exact because it will fail at solving shortest path problems containing trillions of points. The CONCORDE solver for the TSP is another remarkable example of exact method. Like any other problem for a reasonably difficult optimization problem, it will fail at solving extremely large problems. No one would argue that CONCORDE is not exact though. It seems to me that you need to make your point using such type of argument. As I pointed before, an exact algorithm is one that constructs, within a finite number of steps, a solution x*
and a guarantee that no other feasible solution of better objective value exists (usually by computing lower and upper bounds). No such algorithm will scale to solve every single possible instance on earth, unless of course the problem is trivial.
$endgroup$
add a comment |
$begingroup$
An exact method will (typically within a bounded number of steps) provide a proven optimal solution. This is, a solution x*
and a guarantee that no other feasible solution has an objective better than that of x*
. Typically, exact methods compute two types of bounds: lower (L
) and upper (U
) bounds. Optimality is then proven whenever both bounds coincide.
In contrast to exact methods that provide a guarantee of optimality, heuristics provide no such guarantee. In the OR literature, it is common to refer to those methods as heuristics rather than approximate methods. The latter is more commonly used to refer to algorithms with a proof of solution quality, although not necessarily exact. For instance, the TSP admits a 1.5-approximate algorithm (Christofides method), this is a method that will construct a solution whose cost is at most 1.5x that of an optimal TSP solution.
With respect to the reviewer comment, it is indeed a little odd. Using the same rationale, one would argue that Dijkstra's method is not exact because it will fail at solving shortest path problems containing trillions of points. The CONCORDE solver for the TSP is another remarkable example of exact method. Like any other problem for a reasonably difficult optimization problem, it will fail at solving extremely large problems. No one would argue that CONCORDE is not exact though. It seems to me that you need to make your point using such type of argument. As I pointed before, an exact algorithm is one that constructs, within a finite number of steps, a solution x*
and a guarantee that no other feasible solution of better objective value exists (usually by computing lower and upper bounds). No such algorithm will scale to solve every single possible instance on earth, unless of course the problem is trivial.
$endgroup$
An exact method will (typically within a bounded number of steps) provide a proven optimal solution. This is, a solution x*
and a guarantee that no other feasible solution has an objective better than that of x*
. Typically, exact methods compute two types of bounds: lower (L
) and upper (U
) bounds. Optimality is then proven whenever both bounds coincide.
In contrast to exact methods that provide a guarantee of optimality, heuristics provide no such guarantee. In the OR literature, it is common to refer to those methods as heuristics rather than approximate methods. The latter is more commonly used to refer to algorithms with a proof of solution quality, although not necessarily exact. For instance, the TSP admits a 1.5-approximate algorithm (Christofides method), this is a method that will construct a solution whose cost is at most 1.5x that of an optimal TSP solution.
With respect to the reviewer comment, it is indeed a little odd. Using the same rationale, one would argue that Dijkstra's method is not exact because it will fail at solving shortest path problems containing trillions of points. The CONCORDE solver for the TSP is another remarkable example of exact method. Like any other problem for a reasonably difficult optimization problem, it will fail at solving extremely large problems. No one would argue that CONCORDE is not exact though. It seems to me that you need to make your point using such type of argument. As I pointed before, an exact algorithm is one that constructs, within a finite number of steps, a solution x*
and a guarantee that no other feasible solution of better objective value exists (usually by computing lower and upper bounds). No such algorithm will scale to solve every single possible instance on earth, unless of course the problem is trivial.
answered 8 hours ago
Claudio ContardoClaudio Contardo
6261 silver badge6 bronze badges
6261 silver badge6 bronze badges
add a comment |
add a comment |
$begingroup$
In addition to the other answers posted already, I'll add that the term approximation algorithm means an algorithm with a provable worst-case error bound that (as @MarcoLubbecke reminded me in the comments) has polynomial runtime. But the term is often misused to refer to a heuristic that may or may not have such a provable bound.
I would have interpreted the term approximate algorithm as roughly equivalent to heuristic, with or without error bounds. From others' answers, it seems they would disagree, and that they would argue that approximate algorithm = approximation algorithm. From a purely linguistic point of view, that certainly seems preferable; it would be really annoying if -e and -ion meant two very different things. But I am not sure I have seen the terms used this way consistently.
$endgroup$
2
$begingroup$
plus, an approximation algorithm must have polynomial runtime
$endgroup$
– Marco Lübbecke
6 hours ago
1
$begingroup$
I'm glad to see that I'm not the only one to have come across "approximate algorithm" as a code-phrase for (meta)heuristic.
$endgroup$
– prubin
4 hours ago
add a comment |
$begingroup$
In addition to the other answers posted already, I'll add that the term approximation algorithm means an algorithm with a provable worst-case error bound that (as @MarcoLubbecke reminded me in the comments) has polynomial runtime. But the term is often misused to refer to a heuristic that may or may not have such a provable bound.
I would have interpreted the term approximate algorithm as roughly equivalent to heuristic, with or without error bounds. From others' answers, it seems they would disagree, and that they would argue that approximate algorithm = approximation algorithm. From a purely linguistic point of view, that certainly seems preferable; it would be really annoying if -e and -ion meant two very different things. But I am not sure I have seen the terms used this way consistently.
$endgroup$
edited 3 hours ago
answered 8 hours ago
LarrySnyder610
3,743 · 9 silver badges · 51 bronze badges
2
$begingroup$
plus, an approximation algorithm must have polynomial runtime
$endgroup$
– Marco Lübbecke
6 hours ago
1
$begingroup$
I'm glad to see that I'm not the only one to have come across "approximate algorithm" as a code-phrase for (meta)heuristic.
$endgroup$
– prubin
4 hours ago
add a comment |
$begingroup$
According to L. Jourdan et al. [1], "NP-hard problems are difficult to solve and no polynomial-time algorithms are known for solving them. Most combinatorial optimization problems are NP-hard. Two approaches can be considered to solve this kind of problems depending on their size. For small instances, researchers usually use exact methods. Exact methods find the optimal solution and assess its optimality. There exist numerous exact methods such as the family of Branch and X, LP, DP and etc. Exact methods are known to be time expensive, so they can not be applied to large NP-hard problems or difficult ones. When instances become too large for exact methods, heuristics and in particular metaheuristics are often used".
The authors of [2] likewise point to solving time as the decisive difference between exact and approximate methods for combinatorial optimization problems (COPs): "The available techniques for COPs can roughly be classified into two main categories: exact and heuristic methods. Exact algorithms are guaranteed to find an optimal solution and to prove its optimality for every instance of a COP. The run-time, however, often increases dramatically with the instance size, and often only small or moderately-sized instances can be practically solved to provable optimality. In this case, the only possibility for larger instances is to trade optimality for run-time, yielding heuristic algorithms. In other words, the guarantee of finding optimal solutions is sacrificed for the sake of getting good solutions in a limited time."
The last two sentences of that paragraph describe exactly the approach you followed in your paper.
So my interpretation is: all exact methods can solve even large instances of COPs optimally if there are no limits on time and computational power; conversely, no method can be considered exact unless the optimality of its solutions can be proven.
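As a toy illustration of this trade-off (a sketch I am adding, not drawn from the cited papers), compare exhaustive search, which certifies optimality at exponential cost, with a fast value-density greedy heuristic that offers no such certificate, on a small 0/1 knapsack instance:

```python
# Toy illustration of the exact-vs-heuristic trade-off on 0/1 knapsack.
# The exact method (exhaustive search) proves optimality but takes
# O(2^n) time; the greedy heuristic is fast but has no guarantee.
from itertools import combinations

values, weights, capacity = [60, 100, 120], [10, 20, 30], 50

def exact(values, weights, capacity):
    best = 0
    items = range(len(values))
    for r in range(len(values) + 1):
        for subset in combinations(items, r):
            if sum(weights[i] for i in subset) <= capacity:
                best = max(best, sum(values[i] for i in subset))
    return best  # provably optimal: every feasible subset was checked

def greedy(values, weights, capacity):
    # Sort by value density, then fill while capacity remains.
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    total = used = 0
    for i in order:
        if used + weights[i] <= capacity:
            used += weights[i]
            total += values[i]
    return total  # fast, but comes with no optimality certificate

print(exact(values, weights, capacity))   # 220 (optimal)
print(greedy(values, weights, capacity))  # 160 (suboptimal, no proof)
```

Here the exhaustive search certifies 220 as optimal, while the greedy heuristic stops at 160; on larger instances only the heuristic remains tractable, which is precisely the trade-off the quoted passage describes.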
[1]: Jourdan, Laetitia, Matthieu Basseur, and E-G. Talbi. "Hybridizing exact methods and metaheuristics: A taxonomy." European Journal of Operational Research 199.3 (2009): 620-629.
[2]: Puchinger, Jakob, and Günther R. Raidl. "Combining metaheuristics and exact algorithms in combinatorial optimization: A survey and classification." International Work-Conference on the Interplay Between Natural and Artificial Computation. Springer, Berlin, Heidelberg, 2005.
$endgroup$
answered 7 hours ago
Oguz Toragay
775 · 13 bronze badges
add a comment |
2
$begingroup$
I would also ask about "heuristic" methods, since I like to think of an "approximate" solution as the result of an "approximation algorithm", which provides a theoretical bound (worst case or expected) relative to the optimal value. For instance, the Goemans-Williamson max-cut algorithm is an approximation algorithm.
$endgroup$
– Stefano Gualandi
8 hours ago