
Why is differential privacy defined over the exponential function?


For adjacent databases $D, D'$, a randomized algorithm $A$ is $\varepsilon$-differentially private if, for every set $S$ in the range of $A$,

$$\frac{\Pr(A(D) \in S)}{\Pr(A(D') \in S)} \leq e^{\varepsilon}.$$

Why is the exponential function used for the upper bound?

Is this related to Chernoff's inequality? None of the textbooks I have seen explain why the exponential is used, so I have no idea.
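To make the definition concrete, here is a minimal numerical check in Python. It uses randomized response on a single bit as an (assumed) example mechanism; `rr_probs` is an illustrative helper, not from any library:

```python
import math

def rr_probs(bit, eps):
    """Output distribution of randomized response on a single bit:
    report the true bit with probability e^eps / (1 + e^eps)."""
    p_true = math.exp(eps) / (1 + math.exp(eps))
    return {bit: p_true, 1 - bit: 1 - p_true}

eps = 1.0
p0 = rr_probs(0, eps)  # distribution when the database is {0}
p1 = rr_probs(1, eps)  # distribution on the adjacent database {1}

# For every output s, Pr[A(D) = s] / Pr[A(D') = s] <= e^eps
# (with equality at s = 0 for this mechanism).
for s in (0, 1):
    assert p0[s] / p1[s] <= math.exp(eps) + 1e-12
```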






























      pr.probability definitions privacy






edited 3 hours ago by Clement C.






asked 8 hours ago by user9414424




1 Answer














This answer may be disappointing, but working on a log scale really mostly just makes the formulas nicer. The definition, as written, has the following important properties:

• Composition: If $A(\cdot)$ is an $\varepsilon$-DP algorithm, and for any $a$ in the range of $A$, $A'(\cdot, a)$ is an $\varepsilon'$-DP algorithm, then the composed algorithm $A' \circ A$, defined by $A' \circ A(D) = A'(D, A(D))$, is $(\varepsilon + \varepsilon')$-DP.

• Group Privacy: If $A$ is $\varepsilon$-DP, then it satisfies $k\varepsilon$-DP on pairs of data sets that differ in at most $k$ data points.

It may be more natural to define $\varepsilon$-DP with $(1+\varepsilon)$ in place of $e^{\varepsilon}$, but then the formulas above would be far less nice. There is no real connection with Chernoff bounds here.
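As an illustration of why the $\varepsilon$'s add, here is a small sketch (assuming Python, with randomized response on a single bit as an illustrative mechanism): the probability ratios of two independently composed mechanisms multiply, so on the log scale their bounds add.

```python
import math
import itertools

def rr_probs(bit, eps):
    # Randomized response: report the true bit w.p. e^eps / (1 + e^eps).
    p_true = math.exp(eps) / (1 + math.exp(eps))
    return {bit: p_true, 1 - bit: 1 - p_true}

eps1, eps2 = 0.5, 0.3

def joint(bit):
    # Joint output distribution of two independent randomized responses.
    p, q = rr_probs(bit, eps1), rr_probs(bit, eps2)
    return {(s, t): p[s] * q[t] for s, t in itertools.product((0, 1), repeat=2)}

j0, j1 = joint(0), joint(1)  # adjacent single-bit databases {0} and {1}

# Each joint ratio is a product of per-mechanism ratios, so the composed
# mechanism is (eps1 + eps2)-DP: the epsilons add because the bound is e^eps.
for out in j0:
    assert j0[out] / j1[out] <= math.exp(eps1 + eps2) + 1e-12
```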



Another reason is that this definition makes it clearer how differential privacy is related to divergences between distributions. To see what I mean, define the privacy loss of an output $a$ of an algorithm $A$ (with respect to datasets $D$ and $D'$) as
$$
\ell_{D,D'}(a) = \log\left(\frac{\Pr[A(D) = a]}{\Pr[A(D') = a]}\right).
$$

Then the expectation $\mathbb{E}[\ell_{D,D'}(A(D))]$ is simply the KL-divergence between $A(D)$ and $A(D')$. The differential privacy condition asks that this KL-divergence be bounded by $\varepsilon$, but in fact it asks much more: that the random variable $\ell_{D,D'}(A(D))$ be bounded by $\varepsilon$ everywhere in its support. There are also intermediate definitions which put bounds on the moments of $\ell_{D,D'}(A(D))$ and correspond to bounding Rényi divergences between $A(D)$ and $A(D')$.
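The privacy-loss view can be sketched numerically (a minimal Python example, again assuming randomized response on a single bit; all names are illustrative): the loss variable is bounded by $\varepsilon$ pointwise, while its expectation, the KL-divergence, is strictly smaller.

```python
import math

def rr_probs(bit, eps):
    # Randomized response: report the true bit w.p. e^eps / (1 + e^eps).
    p_true = math.exp(eps) / (1 + math.exp(eps))
    return {bit: p_true, 1 - bit: 1 - p_true}

eps = 1.0
p, q = rr_probs(0, eps), rr_probs(1, eps)  # adjacent single-bit databases

# Privacy loss ell(a) = log(Pr[A(D) = a] / Pr[A(D') = a]) for each output a.
loss = {a: math.log(p[a] / q[a]) for a in p}

# KL(A(D) || A(D')) is the expectation of the privacy loss under A(D)...
kl = sum(p[a] * loss[a] for a in p)

# ...and eps-DP bounds the loss pointwise, so the KL-divergence is below eps.
assert max(loss.values()) <= eps + 1e-12
assert kl < eps
```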








answered 7 hours ago by Sasho Nikolov























