How to feed LSTM with different input array sizes?





If I want to build an LSTM network and feed it inputs of different array sizes, how can I do that?



For example, I want to take voice messages or text messages in a different language and translate them. So the first input might be "hello", but the second might be "how are you doing". How can I design an LSTM that can handle different input array sizes?



I am using the Keras implementation of LSTM.










keras lstm

asked 9 hours ago by user145959
          2 Answers



















You can use LSTM layers with inputs of different sizes, but you need to preprocess the sequences before feeding them to the LSTM.

Padding the sequences:

You need to pad the variable-length sequences to a fixed length. For this preprocessing, determine the maximum sequence length in your dataset.

Sequences are most commonly padded with the value 0. You can do this in Keras with:



y = keras.preprocessing.sequence.pad_sequences(x, maxlen=10)


• If a sequence is shorter than the max length, zeros are appended until its length equals the max length.

• If a sequence is longer than the max length, it is truncated to the max length.
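For concreteness, here is a minimal sketch of padding a small batch of sequences; the sequence values and maxlen below are made up for illustration and are not from the question:

from keras.preprocessing.sequence import pad_sequences

x = [[1, 2, 3], [4, 5], [6, 7, 8, 9, 10, 11]]   # hypothetical variable-length sequences
y = pad_sequences(x, maxlen=5, padding='post', value=0)
print(y.shape)   # (3, 5) -- every sequence now has length 5
# shorter sequences are zero-padded at the end; the longest one is truncated to 5 steps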






answered 6 hours ago by Shubham Panchal

The easiest way is to use Padding and Masking.

There are three general ways to handle variable-length sequences:

1. Batch size = 1,
2. Batch size > 1, with equal-length samples in each batch, and
3. Padding and masking (which can also be used for (2)).

For cases (1) and (2), you need to set the timesteps dimension of the LSTM to None, e.g.

model.add(LSTM(units, input_shape=(None, dimension)))

This way the LSTM accepts batches of different lengths, although the samples inside each batch must have the same length. Then, you need to feed a custom data generator to model.fit_generator.

I have provided a complete example for the simple case (1) at the end. Based on this example and the link, you should be able to build a generator for case (2). Specifically, you either (a) return batch_size sequences of the same length, or (b) select sequences of almost the same length and pad the shorter ones as will be illustrated for case (3), then use a Masking layer before the LSTM layer to ignore the padded timesteps, e.g.



model.add(Masking(mask_value=special_value, input_shape=(None, dimension)))
model.add(LSTM(lstm_units))


The first dimension of input_shape in Masking (the timesteps) is again None, for the same reason as above.

Padding and masking

In this approach, we pad the shorter sequences with a special value that is masked (skipped) later. For example, suppose each timestep has dimension 2 and -10 is the special value; then



X = [
    [[1, 1.1],
     [0.9, 0.95]],

    [[2, 2.2],
     [1.9, 1.95],
     [1.8, 1.85]],
]


should be converted to

X2 = [
    [[1, 1.1],
     [0.9, 0.95],
     [-10, -10]],

    [[2, 2.2],
     [1.9, 1.95],
     [1.8, 1.85]],
]

This way, all instances have the same length. Then, we use a Masking layer that skips those special timesteps as if they don't exist.

Here is the code for cases (1) and (3):



from keras import Sequential
from keras.utils import Sequence
from keras.layers import LSTM, Dense, Masking
import numpy as np


class MyBatchGenerator(Sequence):
    'Generates data for Keras'

    def __init__(self, X, y, batch_size=1, shuffle=True):
        'Initialization'
        self.X = X
        self.y = y
        self.batch_size = batch_size
        self.shuffle = shuffle
        self.on_epoch_end()

    def __len__(self):
        'Denotes the number of batches per epoch'
        return int(np.floor(len(self.y) / self.batch_size))

    def __getitem__(self, index):
        return self.__data_generation(index)

    def on_epoch_end(self):
        'Shuffles indexes after each epoch'
        self.indexes = np.arange(len(self.y))
        if self.shuffle:
            np.random.shuffle(self.indexes)

    def __data_generation(self, index):
        Xb = np.empty((self.batch_size, *self.X[index].shape))
        yb = np.empty((self.batch_size, *self.y[index].shape))
        # naively use the same sample over and over again
        for s in range(0, self.batch_size):
            Xb[s] = self.X[index]
            yb[s] = self.y[index]
        return Xb, yb


# Parameters
N = 1000
halfN = int(N / 2)
dimension = 2
lstm_units = 3

# Data
np.random.seed(123)                           # to generate the same numbers
timestamps = np.random.randint(1, 10, halfN)  # sequence lengths between 1 and 9
X_zero = np.array([np.random.normal(0, 1, size=(timestamp, dimension)) for timestamp in timestamps])
y_zero = np.zeros((halfN, 1))
X_one = np.array([np.random.normal(1, 1, size=(timestamp, dimension)) for timestamp in timestamps])
y_one = np.ones((halfN, 1))
p = np.random.permutation(N)                  # to shuffle zero and one classes
X = np.concatenate((X_zero, X_one))[p]
y = np.concatenate((y_zero, y_one))[p]

# Batch = 1
model = Sequential()
model.add(LSTM(lstm_units, input_shape=(None, dimension)))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
print(model.summary())
model.fit_generator(MyBatchGenerator(X, y, batch_size=1), epochs=2)

# Padding and Masking
special_value = -10.0
max_timestamp = max(timestamps)
Xpad = np.full((N, max_timestamp, dimension), fill_value=special_value)
for i, x in enumerate(X):
    timestamp = x.shape[0]
    Xpad[i, 0:timestamp, :] = x
model2 = Sequential()
model2.add(Masking(mask_value=special_value, input_shape=(None, dimension)))
model2.add(LSTM(lstm_units))
model2.add(Dense(1, activation='sigmoid'))
model2.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
print(model2.summary())
model2.fit(Xpad, y, epochs=50, batch_size=32)
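For case (2), one possible approach (a hypothetical sketch, not part of the original answer) is a generator that groups samples by sequence length, so every batch can be stacked without padding; it reuses the Sequence base class and the X, y arrays defined above:

class EqualLengthBatchGenerator(Sequence):
    'Yields batches in which all sequences share the same length (case 2); illustrative only'

    def __init__(self, X, y):
        self.X = X
        self.y = y
        buckets = {}
        for i, x in enumerate(X):
            buckets.setdefault(x.shape[0], []).append(i)  # group sample indices by length
        self.batches = list(buckets.values())             # one batch per distinct length

    def __len__(self):
        return len(self.batches)

    def __getitem__(self, index):
        idx = self.batches[index]
        Xb = np.stack([self.X[i] for i in idx])           # stacking is safe: equal lengths
        yb = np.stack([self.y[i] for i in idx])
        return Xb, yb

model.fit_generator(EqualLengthBatchGenerator(X, y), epochs=2)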





edited 5 hours ago

answered 6 hours ago by Esmailian