I can see my two means are different. What information can a t test add?
Methods such as t-tests and ANOVA test for a difference between two means using a statistical procedure.
This seems strange to a beginner like me: couldn't one simply compute the mean of each group and see whether the two numbers are different?
What is the intuition behind these methods?
Tags: anova, t-test, mean
Comment (3 upvotes) – MauOlivares, 8 hours ago:
Hi Yash, welcome to Cross Validated. If you calculate the means from the observed data, you will find that they are numerically different. However, when testing hypotheses we treat the data as realizations of an underlying data-generating process, so it is more insightful to work with them as random variables. Once we do that, we need a way to determine whether the observed difference in sample means reveals a genuine difference in the underlying parameters. The t-test and ANOVA are examples of such procedures. I hope this helps.
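A quick simulation makes the comment's point concrete: even when two groups come from the *same* population, their sample means will almost never be numerically equal. (A minimal sketch using NumPy; the sample sizes and seed are arbitrary choices, not from any real data.)

```python
import numpy as np

# Draw two independent samples from the SAME population (mean 0, sd 1).
rng = np.random.default_rng(0)
a = rng.normal(loc=0.0, scale=1.0, size=30)
b = rng.normal(loc=0.0, scale=1.0, size=30)

# The sample means differ even though the population means are identical,
# which is why "just compare the two numbers" is not enough: some observed
# difference is expected from sampling variability alone.
print(a.mean(), b.mean())
```

Run this a few times with different seeds: the two printed means are always a bit different, yet no real difference exists in the populations.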
asked 8 hours ago by Yash (new contributor); edited 8 hours ago by Harvey Motulsky
2 Answers
This is a fundamental question about statistics, and every statistics text discusses it, so I'll be brief.
The t-test is not asking whether the means you observed in your sample are different; as you say, you can see that they are. The t-test is more abstract: it asks about the population or distribution the data were sampled from. Given its assumptions (normal distributions, equal standard deviations, random sampling, ...), the t-test quantifies what can be known about the difference between the means of the populations (or distributions) the data were drawn from.
That may be too cryptic, but it should point you to the right chapters in texts to read.
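As a sketch of that idea (assuming SciPy is available; the data below are simulated, not from any real study), a two-sample t-test asks how surprising the observed mean difference would be if both populations had the same mean:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Two samples whose POPULATION means really do differ by 0.5.
x = rng.normal(loc=0.0, scale=1.0, size=40)
y = rng.normal(loc=0.5, scale=1.0, size=40)

# The t statistic scales the observed mean difference by its sampling
# variability; the p-value answers "how often would a difference at least
# this large arise if the population means were actually equal?"
t_stat, p_value = stats.ttest_ind(x, y)
print(f"observed mean difference = {x.mean() - y.mean():.3f}")
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```

The point is that the output is a statement about the populations, not merely a restatement of the two sample means.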
– Harvey Motulsky, answered 8 hours ago
You raise a good point. In some fields, such as neurobiology, statistical hypothesis tests are often just supplementary, because the data come from direct causal manipulations: assumptions are frequently broken, but the difference between groups is typically clear-cut and the test results reflect that. This is one reason fields that use animal models can "get away with" small samples.
In most cases, though, you are not working with a predominantly causal design, and that approach is not recommended. Even with a causal experimental design, the t-test is not redundant: it can help you understand how well your manipulation worked.
Building on that and on Harvey's point, it is also worth thinking in terms of measuring how much the dependent variable (DV) is affected by the independent variable (IV). A t-test does not just flag a "significant difference" via the p-value; statistical tests can also measure the extent to which your experimental manipulation (the IV) explains the change in the DV, for example via partial eta squared.
A t-test example: hospital employees are randomly assigned either to have a socialization break (experimental group) or not (control group), to see whether and how this affects employee satisfaction ratings. The partial eta squared value for "break or no break" represents the extent to which the break explains the variance in satisfaction ratings, so a partial eta squared of 0.137 would tell you that the break period explains about 13.7% of the variance in the ratings.
Hopefully that makes sense. I think this Stack Exchange answer helps; it is not exclusively about t-tests, but the idea is much the same.
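For a two-group t-test, eta squared can be recovered directly from the t statistic and its degrees of freedom via the standard identity η² = t² / (t² + df). A small sketch with made-up numbers (not values from the hospital example above):

```python
def eta_squared_from_t(t: float, df: int) -> float:
    """Proportion of variance in the DV explained by a two-group IV,
    computed from a t statistic and its degrees of freedom."""
    return t**2 / (t**2 + df)

# Hypothetical values: t = 2.49 with df = 38 (two groups of 20).
eta_sq = eta_squared_from_t(2.49, 38)
print(f"eta squared = {eta_sq:.3f}")  # about 0.14, i.e. ~14% of variance explained
```

This makes the "13.7% of the variance" style of statement in the example reproducible from the test output itself.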
– atamalu, answered 6 hours ago