Why does the likelihood function of a binomial distribution not include the combinatorics term? [duplicate]
This question already has an answer here:
What does “likelihood is only defined up to a multiplicative constant of proportionality” mean in practice? (5 answers)
So the likelihood function for a binomial distribution is:
$$L(\theta \mid x) = \theta^x (1 - \theta)^{n - x}.$$
Why is the likelihood function above not multiplied by the combinatorics term $\binom{n}{x} = \frac{n!}{x!\,(n-x)!}$?
If the likelihood function is interpreted as the probability of the outcome occurring $x$ times in $n$ trials as the parameter $\theta$ varies, shouldn't it include the combinatorics term, since that would give the actual probability?
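For reference, the full binomial pmf, with the combinatorics term included, is
$$p(x \mid n, \theta) = \binom{n}{x}\,\theta^x (1 - \theta)^{n - x}.$$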
Thanks!
bayesian binomial loss-functions combinatorics
asked Oct 14 at 1:25 by confused; edited Oct 14 at 16:54
marked as duplicate by Xi'an, Oct 15 at 4:45
This question has been asked before and already has an answer. If those answers do not fully address your question, please ask a new question.
1 Answer
We often don’t care about the value of the likelihood itself, just the parameter value at which the likelihood is maximized.
When you use the likelihood function to find a maximum likelihood estimator, you get the same maximizing point whether or not you include constant factors out front. The maximum value itself will differ, but that is not our concern.
So let’s make it convenient for ourselves and drop the constants out front, especially bulky combinatorics terms!
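Here is a minimal numerical sketch of that point (the data $n = 10$, $x = 7$ and the grid search are illustrative assumptions, not anything from the question):

    import numpy as np
    from math import comb  # Python 3.8+

    n, x = 10, 7  # illustrative data: 7 successes in 10 trials
    theta = np.linspace(0.001, 0.999, 9999)  # grid over the parameter space

    kernel = theta**x * (1 - theta)**(n - x)  # likelihood without the combinatorics term
    full = comb(n, x) * kernel                # full binomial pmf viewed as a function of theta

    # The constant comb(n, x) rescales the curve but does not move its peak:
    print(theta[np.argmax(kernel)])  # ~0.7
    print(theta[np.argmax(full)])    # ~0.7, the same argmax, x/n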
While we’re at it, we usually take the logarithm of the likelihood function, since its derivative is easier to calculate and, because the log is monotonic, it doesn’t change the point at which the maximum occurs.
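To see concretely why the dropped constant is harmless, write out the binomial log-likelihood (a standard derivation, spelled out here for completeness):
$$\ell(\theta) = \log\binom{n}{x} + x \log\theta + (n - x)\log(1 - \theta).$$
The first term does not involve $\theta$, so it vanishes on differentiation:
$$\ell'(\theta) = \frac{x}{\theta} - \frac{n - x}{1 - \theta} = 0 \quad\Longrightarrow\quad \hat\theta = \frac{x}{n}.$$
The combinatorics term never touches the estimate.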
Edit
This is in my comment but ought to be in the main post. We typically care about the argmax of a likelihood function, not the max itself.
answered Oct 14 at 1:47 by Dave; edited Oct 14 at 15:57
Thanks! I'm guessing there isn't a case where we actually need the likelihood itself? I know that in a Bayesian analysis the constant cancels out when we divide the joint by the marginal.
– confused, Oct 14 at 2:10
I don’t want to go as far as saying that we’d never care about the likelihood, but we can often get away with dropping coefficients, either because they cancel or because we care about the argmax rather than the max itself.
– Dave, Oct 14 at 2:25
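For the record, here is the cancellation the comments describe, written out with a generic prior $\pi(\theta)$ (the prior is an assumption for illustration):
$$p(\theta \mid x) = \frac{\binom{n}{x}\,\theta^x (1-\theta)^{n-x}\,\pi(\theta)}{\int_0^1 \binom{n}{x}\,t^x (1-t)^{n-x}\,\pi(t)\,dt} = \frac{\theta^x (1-\theta)^{n-x}\,\pi(\theta)}{\int_0^1 t^x (1-t)^{n-x}\,\pi(t)\,dt},$$
because $\binom{n}{x}$ appears in both the joint and the marginal and cancels exactly.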