How to feed LSTM with different input array sizes?
If I want to write an LSTM network and feed it input arrays of different sizes, how is that possible?

For example, I want to take voice messages or text messages in different languages and translate them. So the first input might be "hello" but the second is "how are you doing". How can I design an LSTM that can handle different input array sizes?

I am using the Keras implementation of LSTM.

keras lstm
asked 9 hours ago by user145959
2 Answers
LSTM layers can be used with inputs of multiple sizes, but you need to preprocess the sequences before they are fed to the LSTM.

Padding the sequences:

You need to pad the sequences of varying length to a fixed length. For this preprocessing step, determine the maximum sequence length in your dataset.

The sequences are usually padded with the value 0. You can do this in Keras with:

y = keras.preprocessing.sequence.pad_sequences(x, maxlen=10)

If a sequence is shorter than the max length, it is padded with zeros until it reaches the max length (by default the zeros are prepended; pass padding='post' to append them instead).
If a sequence is longer than the max length, it is truncated to the max length.
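For concreteness, here is a small self-contained sketch of this preprocessing step; the token ids below are made up purely for illustration:

from keras.preprocessing.sequence import pad_sequences

# Hypothetical toy data: tokenised sentences of different lengths.
x = [[3, 7, 12],
     [5, 9],
     [2, 4, 6, 8, 10, 1, 3, 5, 7, 9, 11]]

# Pad/truncate every sequence to 10 timesteps; padding='post' appends zeros,
# and truncating='post' drops the extra tokens from the end.
y = pad_sequences(x, maxlen=10, padding='post', truncating='post')
print(y.shape)   # (3, 10)
print(y[1])      # [5 9 0 0 0 0 0 0 0 0]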
answered 6 hours ago by Shubham Panchal
The easiest way is to use padding and masking.

There are three general ways to handle variable-length sequences:

- Batch size = 1,
- Batch size > 1, with equal-length samples in each batch, and
- Padding and masking (which can also be used for (2)).

For cases (1) and (2), you need to set the timesteps of the LSTM to None, e.g.
model.add(LSTM(units, input_shape=(None, dimension)))
This way, the LSTM accepts batches that have different lengths, although the samples inside each batch must have the same length. Then, you need to feed a custom data generator to model.fit_generator.
I have provided a complete example for the simple case (1) at the end. Based on this example and the link, you should be able to build a generator for case (2). Specifically, we either (a) return batch_size sequences that all have the same length (a sketch of such a bucketing generator is given below, after the Masking example), or (b) select sequences with almost the same length, pad the shorter ones as illustrated for case (3), and use a Masking layer before the LSTM layer to ignore the padded timesteps, e.g.
model.add(Masking(mask_value=special_value, input_shape=(None, dimension)))
model.add(LSTM(lstm_units))
The first dimension of input_shape in Masking (the timesteps) is again None, for the same reason as above.
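Not part of the original answer, but as a minimal sketch of option (a): a bucketing generator for case (2) could look like the following. The class name EqualLengthBatchGenerator is hypothetical, and X and y are assumed to hold variable-length sequences and their labels, as in the full example further down.

import numpy as np
from keras.utils import Sequence

class EqualLengthBatchGenerator(Sequence):
    'Hypothetical sketch for case (2): every batch contains only sequences of one length.'
    def __init__(self, X, y, batch_size=32):
        self.X, self.y, self.batch_size = X, y, batch_size
        # group sample indices by sequence length
        buckets = {}
        for i, x in enumerate(X):
            buckets.setdefault(len(x), []).append(i)
        # split every bucket into batches of at most batch_size samples
        self.batches = [idx[s:s + batch_size]
                        for idx in buckets.values()
                        for s in range(0, len(idx), batch_size)]

    def __len__(self):
        return len(self.batches)

    def __getitem__(self, index):
        idx = self.batches[index]
        # all sequences in this batch have the same length, so they stack cleanly
        Xb = np.stack([self.X[i] for i in idx])
        yb = np.stack([self.y[i] for i in idx])
        return Xb, yb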
Padding and masking
In this approach, we pad the shorter sequences with a special value that is masked (skipped) later. For example, suppose each timestep has dimension 2 and -10 is the special value; then
X = [
[[1, 1.1],
[0.9, 0.95]],
[[2, 2.2],
[1.9, 1.95],
[1.8, 1.85]],
]
should be converted to
X2 = [
[[1, 1.1],
[0.9, 0.95],
[-10, -10]],
[[2, 2.2],
[1.9, 1.95],
[1.8, 1.85]],
]
This way, all instances have the same length. Then, we use a Masking layer that skips those special timesteps as if they don't exist.
Here is the code for cases (1) and (3):
from keras import Sequential
from keras.utils import Sequence
from keras.layers import LSTM, Dense, Masking
import numpy as np
class MyBatchGenerator(Sequence):
    'Generates data for Keras'
    def __init__(self, X, y, batch_size=1, shuffle=True):
        'Initialization'
        self.X = X
        self.y = y
        self.batch_size = batch_size
        self.shuffle = shuffle
        self.on_epoch_end()

    def __len__(self):
        'Denotes the number of batches per epoch'
        return int(np.floor(len(self.y) / self.batch_size))

    def __getitem__(self, index):
        # map the batch index through the (possibly shuffled) sample indexes
        return self.__data_generation(self.indexes[index])

    def on_epoch_end(self):
        'Shuffles indexes after each epoch'
        self.indexes = np.arange(len(self.y))
        if self.shuffle:
            np.random.shuffle(self.indexes)

    def __data_generation(self, index):
        Xb = np.empty((self.batch_size, *self.X[index].shape))
        yb = np.empty((self.batch_size, *self.y[index].shape))
        # naively use the same sample over and over again
        for s in range(0, self.batch_size):
            Xb[s] = self.X[index]
            yb[s] = self.y[index]
        return Xb, yb
# Parameters
N = 1000
halfN = int(N/2)
dimension = 2
lstm_units = 3
# Data
np.random.seed(123) # to generate the same numbers
timestamps = np.random.randint(1, 10, halfN) # sequence lengths between 1 and 9
X_zero = np.array([np.random.normal(0, 1, size=(timestamp, dimension)) for timestamp in timestamps], dtype=object) # ragged, so use an object array
y_zero = np.zeros((halfN, 1))
X_one = np.array([np.random.normal(1, 1, size=(timestamp, dimension)) for timestamp in timestamps], dtype=object)
y_one = np.ones((halfN, 1))
p = np.random.permutation(N) # to shuffle zero and one classes
X = np.concatenate((X_zero, X_one))[p]
y = np.concatenate((y_zero, y_one))[p]
# Batch = 1
model = Sequential()
model.add(LSTM(lstm_units, input_shape=(None, dimension)))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
print(model.summary())
model.fit_generator(MyBatchGenerator(X, y, batch_size=1), epochs=2)
# Padding and Masking
special_value = -10.0
max_timestamp = max(timestamps)
Xpad = np.full((N, max_timestamp, dimension), fill_value=special_value)
for i, x in enumerate(X):
timestamp = x.shape[0]
Xpad[i, 0:timestamp, :] = x
model2 = Sequential()
model2.add(Masking(mask_value=special_value, input_shape=(None, dimension)))
model2.add(LSTM(lstm_units))
model2.add(Dense(1, activation='sigmoid'))
model2.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
print(model2.summary())
model2.fit(Xpad, y, epochs=50, batch_size=32)
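As a small follow-up sketch (not from the original answer): because the timesteps dimension is None, the batch-size-1 model can score a new sequence of any length directly, while for the masked model it is natural to pad the new sequence with the same special value used in training:

# Sketch: scoring a new variable-length sequence with both models.
x_new = np.random.normal(1, 1, size=(5, dimension))  # 5 timesteps, same feature dimension

# Case (1): the model accepts any sequence length, one sample at a time.
print(model.predict(x_new[np.newaxis, :, :]))

# Case (3): pad with the special value (assumes the sequence is not longer than max_timestamp).
x_new_pad = np.full((1, max_timestamp, dimension), fill_value=special_value)
x_new_pad[0, :x_new.shape[0], :] = x_new
print(model2.predict(x_new_pad))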
edited 5 hours ago, answered 6 hours ago by Esmailian