Why use regularization instead of decreasing the model?


Regularization is used to decrease the capacity of a machine learning model in order to avoid overfitting. Why don't we just use a model with less capacity in the first place (e.g. fewer layers)? That would also save computation time and memory.

My guess is that different regularization methods make different assumptions about the dataset. If so, what assumptions are made by the common regularizers (L1, L2, dropout, and others)?

Thanks in advance!










      machine-learning neural-network regularization






asked 9 hours ago by Deep_Ozean




          2 Answers
Regularization does decrease the capacity of the model in some sense, but as you already guessed, different capacity reductions result in models of different quality and are not interchangeable.

L1 can be interpreted as the assumption that the influence of one factor (represented by a neuron) on another should not be assumed without significant support from the data, i.e. the gain achieved by a larger influence has to outweigh the L1 loss associated with the increased absolute value of the parameter that "connects" them.

L2 does the same, but makes this dependent on the connection strength: very light connections basically need no support (and are therefore not driven further towards exactly zero), while very large connections become almost impossible.

Dropout can be interpreted as training a large number of smaller networks and using the approximated average network for inference: "So training a neural network with dropout can be seen as training a collection of 2^n thinned networks with extensive weight sharing, where each thinned network gets trained very rarely, if at all." (Dropout: A Simple Way to Prevent Neural Networks from Overfitting)

All these methods make certain network parameter combinations highly improbable, or even impossible, to reach for a given dataset, even though they could otherwise have been the result of training. In this sense the capacity of the model is reduced, but as one can imagine, some capacity reductions are more useful than others.
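For concreteness, here is a minimal Keras sketch of where these three regularizers attach to a network; the layer sizes, penalty strengths and dropout rate are illustrative assumptions, not values taken from the answer.

    # Minimal sketch, assuming TensorFlow/Keras; sizes and penalties are illustrative only.
    from tensorflow import keras
    from tensorflow.keras import layers, regularizers

    model = keras.Sequential([
        # L1 penalty: weak, unsupported connections are pushed to exactly zero
        layers.Dense(128, activation="relu", input_shape=(20,),
                     kernel_regularizer=regularizers.l1(1e-4)),
        # L2 penalty: shrinkage grows with the weight, so large weights are discouraged
        # while tiny weights are barely affected
        layers.Dense(64, activation="relu",
                     kernel_regularizer=regularizers.l2(1e-4)),
        # Dropout: randomly silences half of the units each step, approximating an
        # average over many "thinned" sub-networks
        layers.Dropout(0.5),
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")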






answered 6 hours ago by leonard (edited 6 hours ago)

Regularization is not primarily used to avoid overfitting. Regularization shrinks weights that are not "useful" for making good predictions, and it is also used in many other models, where it acts more like feature or model selection (regression, logit models, boosting).

The benefit of regularization is that you can work with a high-capacity model and, thanks to the regularizer, not worry too much about the features (and their representation in the network). Regularization automatically down-weights parameters that are not important, so it is a really useful tool, e.g. when you have a lot of information but don't know which parts of it are actually needed to make good predictions.

Dropout is a different mechanism, since it randomly drops units (and their connections) during training. Shrinking means that weights which do not contribute much to good predictions receive less attention from the model. L1 can shrink weights to exactly zero, while L2 shrinks them towards zero but never makes them exactly zero.

To learn more about regularization, have a look at An Introduction to Statistical Learning; the book has a really instructive chapter on the topic.
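As a small illustration of that last point, a sketch using scikit-learn's Lasso and Ridge (not mentioned in the answer, but they implement exactly the L1 and L2 penalties discussed) on made-up data where only two of ten features matter:

    # Minimal sketch, assuming scikit-learn; data and alpha values are illustrative only.
    import numpy as np
    from sklearn.linear_model import Lasso, Ridge

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 10))
    # Only the first two features carry signal; the other eight are pure noise.
    y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.1, size=200)

    lasso = Lasso(alpha=0.1).fit(X, y)   # L1 penalty
    ridge = Ridge(alpha=10.0).fit(X, y)  # L2 penalty

    print("coefficients set exactly to zero by L1:", np.sum(lasso.coef_ == 0.0))  # typically 8
    print("coefficients set exactly to zero by L2:", np.sum(ridge.coef_ == 0.0))  # typically 0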






answered 8 hours ago by Peter