

Efficient Algorithms for Destroyed Document Reconstruction


I am not certain this is the proper site for this question; however, I am mainly looking for resources on this topic (perhaps code). I was watching TV, and one of the characters had a lawyer who destroyed his documents using a paper shredder. A lab tech said that the shredder was special.



I am not familiar with this area of computer science and mathematics, but I am looking for information on efficient algorithms to reconstruct destroyed documents. I imagine I could fairly easily come up with a naive brute-force approach, just going through all the pieces and looking for edges that match, but this doesn't sound feasible, as the number of combinations will explode.



Note: by destroyed documents I mean taking a (printed) document, shredding it into small pieces, and reassembling it by determining which pieces fit together.
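For concreteness, the naive brute-force idea described above can be sketched as follows. This is a toy model, not a real pipeline: strips are small pixel arrays, the cost of an ordering is the squared mismatch summed over every junction, and all names are illustrative:

```python
import itertools
import numpy as np

def junction_cost(left, right):
    # Squared mismatch between the rightmost pixel column of one strip
    # and the leftmost pixel column of the candidate next strip.
    return float(np.sum((left[:, -1] - right[:, 0]) ** 2))

def brute_force_order(strips):
    # Score every permutation and keep the cheapest: O(n!) orderings,
    # which is exactly why this explodes beyond a handful of strips.
    return min(
        itertools.permutations(range(len(strips))),
        key=lambda p: sum(junction_cost(strips[a], strips[b])
                          for a, b in zip(p, p[1:])),
    )
```

On a toy "document" whose pixel values increase smoothly left to right, cut into four strips and scrambled, this recovers the original left-to-right order; with 20 strips it would already need 20! ≈ 2.4 × 10^18 permutation evaluations.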










  • lox (3 hours ago): Can you edit your question to define "destroyed documents"?

  • David Richerby (3 hours ago): You should look at the methods used to recover the Stasi (East German secret police) archives, which were shredded or, mostly (all the shredders having broken from overuse), torn up after the fall of the Berlin Wall. The BBC has a very high-level summary.















Tags: image-processing






– asked 3 hours ago by Shogun (edited 3 hours ago)




1 Answer

Your problem is NP-complete even for strips (n strips yield (2n)! orderings, counting flips), so in practice people use heuristics: transforms such as the Hough transform plus morphological filters (to match the continuity of text lines, though this heavily increases the cost of matching), or search methods such as genetic algorithms, neural-network-guided search, and Ant Colony Optimization.
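As a contrast to exhaustive search, the simplest heuristic is a greedy chain built on an edge-cost function. A minimal sketch (the names and the cost function are illustrative, and the leftmost starting strip is assumed known):

```python
import numpy as np

def edge_cost(left, right):
    # Mean squared difference between adjacent edge columns:
    # a low cost suggests the two strips were neighbours.
    return float(np.mean((left[:, -1] - right[:, 0]) ** 2))

def greedy_chain(strips, start=0):
    # Repeatedly append the unused strip whose left edge best matches
    # the current right edge. Fast, but one bad local match derails the
    # whole chain -- hence the global search methods mentioned above.
    order = [start]
    remaining = set(range(len(strips))) - {start}
    while remaining:
        best = min(remaining, key=lambda i: edge_cost(strips[order[-1]], strips[i]))
        order.append(best)
        remaining.remove(best)
    return order
```

On clean synthetic strips this recovers the original order in O(n²) cost evaluations instead of factorial time; on blurred or physically cut strips the cost function becomes unreliable, which is the failure mode discussed below.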



For a summary of the consecutive steps and the various algorithms, I recommend "An Investigation into Automated Shredded Document Reconstruction using Heuristic Search Algorithms".



The problem gets nasty when the document is not fully sharp (blurred, or printed at low resolution) and the strips are narrow and cut by a physical cutter with dulled edges: standard merging methods, such as panorama photo stitchers, get lost and yield improper results, because information is destroyed in the thin slivers lost along each cut. If instead you have a full digital image cut losslessly into pieces, the task is as hard as a jigsaw puzzle; a non-digital original forces approximate search.



Another problem in making the algorithm automatic is feeding the pieces: you can rarely supply axis-aligned strips, so it is convenient to start from a single scan with all the pieces laid out by hand, which adds another (easy) subproblem: detecting the blobs and rotating them upright.
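The blob-detection step can be sketched with a simple 4-connected flood fill; this is a toy labeler for illustration only (a real pipeline would more likely use OpenCV's findContours and minAreaRect, which also give each piece's rotation):

```python
import numpy as np

def label_blobs(mask):
    # 4-connected flood fill over a boolean foreground mask:
    # returns one set of pixel coordinates per scanned piece.
    mask = np.asarray(mask, dtype=bool)
    seen = np.zeros_like(mask)
    blobs = []
    for seed in zip(*np.nonzero(mask)):
        if seen[seed]:
            continue
        stack, blob = [seed], set()
        while stack:
            r, c = stack.pop()
            if not (0 <= r < mask.shape[0] and 0 <= c < mask.shape[1]):
                continue
            if seen[r, c] or not mask[r, c]:
                continue
            seen[r, c] = True
            blob.add((r, c))
            stack += [(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)]
        blobs.append(blob)
    return blobs
```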



A special shredder yields very small rectangles instead of strips. For comparison, a class P-1 shredder gives strips 6-12 mm wide of any length (an area of about 1800 mm^2), while class P-7 gives rectangles with an area below 5 mm^2. With rectangles instead of strips the problem yields (4n)! permutations, assuming a single one-sided document; if one bag holds shreds of many unrelated text-only documents (no pictures), the problem is not really tractable.
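To see how fast those counts explode, a quick computation (taking the (2n)! and (4n)! figures quoted above at face value):

```python
import math

# Compare orderings of n strips vs n rectangular shreds, per the counts above.
for n in (5, 10, 20):
    strips = math.factorial(2 * n)   # (2n)! orderings for n strips
    shreds = math.factorial(4 * n)   # (4n)! orderings for n rectangular shreds
    print(f"n={n}: strips {strips:.3e}  shreds {shreds:.3e}")
```

Already at n=20 the strip case has 40! ≈ 8 × 10^47 orderings, and the rectangle case 80! ≈ 7 × 10^118, which is why only heuristic search is realistic.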






– answered 2 hours ago by Evil
  • Alexander (59 mins ago): There may be (2n)! arrangements of the shredded strips, but does that still determine the time complexity? Whenever you find matches you can group them together into a "thick strip", where only the first and last edges matter for comparison against other strips. This clumping should reduce the search space hugely, but I don't know whether it is still O(n!).

  • Evil (43 mins ago): @Alexander That count is not the complexity per se. The true hardness comes from the fact that you are never fully sure whether a match is really good. Look at figure 6.1 on page 69 of the PDF (the tiger picture and those that follow): there are errors. You still have to check the fitness of all edges pairwise; grouping several pieces seems nice, but committing to them rules out other matches that might have scored a lower fit despite a lower MSE. If exact matching of the edges were a viable option, it would be blazingly fast; in my answer I assume it is not possible.

  • Alexander (16 mins ago): Makes sense! Thanks.










