Which comes first? Multiple Imputation, Splitting into train/test, or Standardization/Normalization
I am working on a multi-class classification problem with ~65 features and ~150K instances. 30% of the features are categorical and the rest are numerical (continuous). I understand that standardization or normalization should be done after splitting the data into train and test subsets, but I am still not sure about the imputation step. For the classification task, I plan to use Random Forest, Logistic Regression, and XGBoost (none of which are distance-based).
Could someone please explain which should come first: split > imputation, or imputation > split? And if split > imputation is correct, should I then follow imputation > standardization or standardization > imputation?
multiclass-classification normalization data-imputation
asked 9 hours ago, edited 8 hours ago — Sarah
2 Answers
Always split before you do any data pre-processing. Performing pre-processing before splitting means that information from your test set is present during training, causing a data leak.
Think of it like this: the test set is supposed to be a way of estimating performance on totally unseen data. If it affects the training, then it becomes partially seen data.
I don't think the order of scaling and imputing is as strict. I would impute first if the imputation method might throw off the scaling/centering.
Your steps should be:
- Splitting
- Imputing
- Scaling
Here are some related questions to support this:
Imputation before or after splitting into train and test?
Imputation of missing data before or after centering and scaling?
answered 8 hours ago, edited 8 hours ago — Simon Larsson
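To make the split > impute > scale ordering concrete, here is a minimal scikit-learn sketch (not part of the original answer; the toy data, random seed, and model choices are illustrative assumptions). A Pipeline fit only on the training fold guarantees the imputer and scaler learn their parameters from training data alone:

    # Sketch only: toy data and parameter choices are illustrative assumptions,
    # not taken from the original answer.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import Pipeline
    from sklearn.impute import SimpleImputer
    from sklearn.preprocessing import StandardScaler
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 5))
    X[rng.random(X.shape) < 0.1] = np.nan   # ~10% missing values at random
    y = rng.integers(0, 3, size=1000)       # three classes

    # 1. Split before any preprocessing.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42, stratify=y)

    # 2.-3. Impute, then scale. Putting both inside a Pipeline means they
    # are fit on the training fold only and merely applied to the test fold.
    model = Pipeline([
        ("impute", SimpleImputer(strategy="median")),
        ("scale", StandardScaler()),
        ("clf", LogisticRegression(max_iter=1000)),
    ])
    model.fit(X_train, y_train)   # imputation/scaling parameters learned from train
    print("test accuracy:", model.score(X_test, y_test))

A pipeline also keeps cross-validation honest: each CV fold refits the imputer and scaler on that fold's training portion, so no fold leaks into another.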
Thank you for adding those references, they were very helpful. I am persuaded, and have removed my answer. – Upper_Case, 8 hours ago
Glad it helped, @Upper_Case. I find it odd that ISLR had examples where this was not the case. – Simon Larsson, 8 hours ago
The copy I have is a first printing, so possibly it was updated later, and the example I referenced doesn't deal with imputation, so details may differ with that element. I'm also not clear on how "bad" it is to do it one way versus the other (I agree that the test-training "leakage" is bad, but post-split data transformation causes arbitrary data-segmentation features to "leak" into the model, which is also bad). As I'm not sure which is worse, especially in the general case, I'm deferring to the votes on CrossValidated.SE. – Upper_Case, 8 hours ago
Can you elaborate on what "arbitrary data segmentation features" means? Like the training set having a mean/standard deviation that is not reflective of the entire population as a whole? – aranglol, 1 hour ago
If you impute/standardize before splitting, and then split into train/test, you are leaking data from your test set (which is supposed to be completely withheld) into your training set. This will yield extremely biased estimates of model performance.
The correct way is to split your data first, and then apply imputation/standardization (the order will depend on whether the imputation method requires standardization).
The key here is that you learn everything from the training set and then "predict" onto the test set. For normalization/standardization, you learn the sample mean and sample standard deviation from the training set, treat them as constants, and use these learned values to transform the test set. You don't use the test-set mean or standard deviation in any of these calculations.
For imputation the idea is similar: you learn the required parameters from the training set only and then predict the required test-set values.
This way your performance metrics will not be optimistically biased by your methods inadvertently seeing the test-set observations.
answered 8 hours ago — aranglol
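As a minimal numpy sketch of this fit-on-train, transform-test idea (the tiny arrays below are made-up examples, not from the answer):

    # Sketch only: the arrays are made-up assumptions for illustration.
    import numpy as np

    X_train = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0], [4.0, 40.0]])
    X_test  = np.array([[2.5, 25.0], [5.0, 50.0]])

    # Learn the parameters from the training set only...
    mu    = X_train.mean(axis=0)
    sigma = X_train.std(axis=0, ddof=1)   # sample standard deviation

    # ...then treat them as constants when transforming BOTH sets.
    X_train_std = (X_train - mu) / sigma
    X_test_std  = (X_test  - mu) / sigma  # test-set mean/std never enter the math

    print(X_train_std.mean(axis=0))  # ~[0, 0] on the training set, by construction
    print(X_test_std.mean(axis=0))   # generally nonzero on the test set; expected

Note that the test set will generally not have exactly zero mean and unit variance after the transform; that asymmetry is precisely what keeps the evaluation honest.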