The value to use for the dropout layer, which is used to prevent over-fitting. This must be a value greater than 0 and less than 1.
The output dimension used for the embedding layer.
See here for further details.
The total number of iterations to use when training a model. The actual number of iterations may be lower than this if the conditions for earlyStopping are met.
The number of units in the Long Short-Term Memory (LSTM) layer. See lstm for more details.
The maximum sequence length. Each sequence is normalised (padded or truncated) to this length. Larger values can improve accuracy, but take longer to train and use more memory.
The number of epochs with no improvement after which training will be stopped. See earlyStopping for more details.
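For illustration, a minimal sketch of what a patience-based early-stopping check could look like, assuming validation loss is the monitored metric. The helper name shouldStop is hypothetical and not part of this library.

```ts
// Sketch of a patience-based early-stopping check (illustrative only).
function shouldStop(valLosses: number[], patience: number): boolean {
  if (valLosses.length <= patience) return false;
  // Best (lowest) validation loss seen before the most recent `patience` epochs.
  const best = Math.min(...valLosses.slice(0, valLosses.length - patience));
  // Stop when none of the last `patience` epochs improved on that best.
  return valLosses.slice(-patience).every((loss) => loss >= best);
}

// Example: with patience = 2, training stops once two consecutive epochs
// fail to improve on the best validation loss so far.
console.log(shouldStop([0.9, 0.7, 0.71, 0.72], 2)); // true
```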
The ratio used to split messages between training and validation sets. This must be a value greater than 0 and less than 1.
The maximum vocabulary size. This value corresponds to the maximum number of distinct symbols (usually words) stored. Larger values can improve accuracy, but take longer to train and use more memory.
Configuration used for training models.
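For illustration, here is a minimal TypeScript sketch of what such a configuration could look like. The interface and field names (TrainingConfig, dropout, embeddingDimension, epochs, lstmUnits, maxSequenceLength, patience, trainingRatio, vocabSize) are assumptions chosen to match the descriptions above, not this library's actual identifiers, and the values are placeholders.

```ts
// Illustrative training configuration; names and values are assumptions.
interface TrainingConfig {
  /** Dropout rate used to prevent over-fitting; must be in (0, 1). */
  dropout: number;
  /** Output dimension of the embedding layer. */
  embeddingDimension: number;
  /** Maximum number of training iterations; may stop earlier via early stopping. */
  epochs: number;
  /** Number of units in the LSTM layer. */
  lstmUnits: number;
  /** Length each sequence is normalised (padded or truncated) to. */
  maxSequenceLength: number;
  /** Epochs with no improvement before training stops. */
  patience: number;
  /** Ratio used to split messages between training and validation; must be in (0, 1). */
  trainingRatio: number;
  /** Maximum number of distinct symbols (usually words) stored. */
  vocabSize: number;
}

// Example values; tune these for your data set and available memory.
const config: TrainingConfig = {
  dropout: 0.2,
  embeddingDimension: 128,
  epochs: 50,
  lstmUnits: 64,
  maxSequenceLength: 100,
  patience: 5,
  trainingRatio: 0.7,
  vocabSize: 20000,
};
```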