I can see my two means are different. What information can a t test add?


asked 8 hours ago by Yash (new contributor); edited 8 hours ago by Harvey Motulsky. Tags: anova, t-test, mean.

Methods such as t-tests and ANOVA assess the difference between two means using a statistical procedure.

This seems strange to a beginner like me: one could simply compute the mean of each group and see whether the means are different or not.

What is the intuition behind these methods?

• Hi Yash, welcome to Cross Validated. If you calculate the means from the observed data, you will see that they are numerically different. However, when testing hypotheses we treat the data as realizations of an underlying data-generating process, so it is more insightful to work with them as random variables. Once we treat them as random variables, we need a way to determine whether the observed difference in means reflects a genuine difference in the underlying parameters. T-tests and ANOVA are examples of such procedures. I hope this helps. – MauOlivares, 8 hours ago
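The comment's point can be sketched with a small simulation (the group names and distribution parameters here are illustrative, not from the question): two samples drawn from the very same population almost always have numerically different sample means, so an observed difference alone says nothing about the underlying means.

```python
import random
import statistics

# Two groups drawn from the SAME data-generating process: the true means
# are identical by construction (both Normal(100, 15)).
random.seed(42)
group_a = [random.gauss(100, 15) for _ in range(30)]
group_b = [random.gauss(100, 15) for _ in range(30)]

mean_a = statistics.mean(group_a)
mean_b = statistics.mean(group_b)

# The sample means differ even though the underlying parameters do not,
# which is exactly why "the means look different" is not evidence by itself.
print(mean_a, mean_b, mean_a != mean_b)
```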



















2 Answers
This is a fundamental question about statistics, and every stats text will discuss it, so I'll be brief.



The t test is not asking whether the means you observed in your sample are different. As you say, you can see those. The t test is more abstract. It asks about the population or distribution the data were sampled from. Given assumptions (normal distribution; equal standard deviations; random sampling...), the t test quantifies what can be known about the difference between the means of the populations (or distributions) the data were drawn from.



That may be too cryptic, but should point you to the right chapters in texts to read.
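As a sketch of what the test actually computes (standard-library only, with made-up data, not anything from this answer): the pooled two-sample t statistic divides the observed mean difference by an estimate of its sampling variability, under the equal-standard-deviations assumption listed above.

```python
import math
import statistics

def two_sample_t(x, y):
    """Pooled two-sample t statistic (assumes equal population SDs)."""
    nx, ny = len(x), len(y)
    vx, vy = statistics.variance(x), statistics.variance(y)  # sample variances (n-1)
    # Pooled variance estimate under the equal-SD assumption
    sp2 = ((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2)
    # Standard error of the difference between the two sample means
    se = math.sqrt(sp2 * (1 / nx + 1 / ny))
    t = (statistics.mean(x) - statistics.mean(y)) / se
    return t, nx + ny - 2  # t statistic and degrees of freedom

# Hypothetical measurements: the sample means obviously differ (5.12 vs 4.80),
# but the t statistic expresses that difference relative to its noise.
x = [5.1, 4.9, 5.4, 5.0, 5.2]
y = [4.8, 5.0, 4.7, 4.9, 4.6]
t, df = two_sample_t(x, y)
print(t, df)  # t ≈ 2.87 with df = 8
```

Referring t to the t distribution with those degrees of freedom is what turns a statement about two sample means into a statement about the populations they were drawn from.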






answered 8 hours ago by Harvey Motulsky

You raise a good point. In some fields, such as neurobiology, statistical hypothesis tests are often merely supplementary, because the data come from direct causal manipulations. Assumptions are frequently violated, but the difference between groups is typically clear-cut, and the results of the statistical tests reflect that. This is one reason fields that use animal models can "get away with" small samples.

In most cases, however, you are not working with a predominantly causal design, and that approach is not recommended. Even with a causal experimental design, the t-test is not redundant: it can help you understand how well your manipulation worked.

Building on that and on Harvey's point, it also helps to think in terms of measuring how much the dependent variable (DV) is affected by the independent variable (IV). A t-test does not just report a "significant difference" via the p-value. Statistical tests are often more useful for measuring the extent to which your experimental manipulation (the IV) explains the change in the DV, as captured by the partial eta squared value.

For a t-test example: hospital employees are randomly assigned either to have a socialization break period (experimental group) or not (control group), to see if and how it affects employee satisfaction ratings. The partial eta squared value for "break or no break" represents the extent to which the break period explains the variance in satisfaction ratings. A partial eta squared of 0.137 would tell you that the break period explains about 13.7% of the variance in satisfaction ratings.

Hopefully that makes sense. I think this StackExchange answer helps; it is not exclusively about t-tests, but the idea is much the same.
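For a two-group comparison, a standard conversion (illustrative numbers below, not from the answer's hospital study) recovers eta squared directly from the t statistic and its degrees of freedom:

```python
def eta_squared_from_t(t, df):
    """Eta squared for a two-group design: proportion of DV variance
    explained by group membership, via eta^2 = t^2 / (t^2 + df)."""
    return t ** 2 / (t ** 2 + df)

# Hypothetical result: t = 2.1 with df = 28 (two groups of 15).
effect = eta_squared_from_t(2.1, 28)
print(round(effect, 3))  # about 0.136, i.e. ~13.6% of variance explained
```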






answered 6 hours ago by atamalu