class tf.contrib.keras.optimizers.RMSprop
Defined in tensorflow/contrib/keras/python/keras/optimizers.py.
RMSProp optimizer.
It is recommended to leave the parameters of this optimizer at their default values (except the learning rate, which can be freely tuned).
This optimizer is usually a good choice for recurrent neural networks.
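As a rough illustration of what the optimizer does per step, here is a plain-Python sketch of the standard RMSprop update for a single scalar parameter (function and variable names are illustrative, not part of the TensorFlow API): the gradient is divided by a running root-mean-square of its recent magnitudes.

```python
import math

def rmsprop_step(param, grad, avg_sq, lr=0.001, rho=0.9, epsilon=1e-08):
    """One RMSprop update on a scalar parameter (illustrative sketch).

    avg_sq is the running average of squared gradients carried
    between steps; rho controls how quickly old gradients are
    forgotten, and epsilon guards against division by zero.
    """
    # Update the moving average of squared gradient magnitudes.
    avg_sq = rho * avg_sq + (1.0 - rho) * grad ** 2
    # Scale the step by the inverse RMS of recent gradients.
    param = param - lr * grad / (math.sqrt(avg_sq) + epsilon)
    return param, avg_sq
```

Because each parameter's step is normalized by its own gradient history, large or noisy gradients are damped, which is one reason this rule tends to work well for recurrent networks.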
Arguments:
lr: float >= 0. Learning rate.
rho: float >= 0. Decay rate for the moving average of squared gradients.
epsilon: float >= 0. Fuzz factor.
decay: float >= 0. Learning rate decay over each update.
References:
- rmsprop: Divide the gradient by a running average of its recent magnitude
Methods
__init__
__init__(lr=0.001, rho=0.9, epsilon=1e-08, decay=0.0, **kwargs)
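In practice one constructs the optimizer tuning only the learning rate, e.g. `RMSprop(lr=0.001)`, per the recommendation above. The `decay` argument shrinks the effective learning rate as training progresses; a plain-Python sketch of the schedule, assuming the `1 / (1 + decay * iterations)` form used in the Keras source (the helper name is ours, not part of the API):

```python
def decayed_lr(lr, decay, iteration):
    """Effective learning rate after `iteration` updates (sketch,
    assuming the 1 / (1 + decay * iterations) Keras schedule)."""
    return lr * (1.0 / (1.0 + decay * iteration))
```

With the default `decay=0.0` the learning rate stays constant.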
from_config
from_config(cls, config)
get_config
get_config()
get_gradients
get_gradients(loss, params)
get_updates
get_updates(params, constraints, loss)
get_weights
get_weights()
Returns the current value of the weights of the optimizer.
Returns:
A list of numpy arrays.
set_weights
set_weights(weights)
Sets the weights of the optimizer from Numpy arrays.
Should only be called after computing the gradients (otherwise the optimizer has no weights).
Arguments:
weights: a list of Numpy arrays. The number of arrays and their shapes must match those of the optimizer's weights (i.e. the list should match the output of `get_weights`).
Raises:
ValueError: in case of incompatible weight shapes.
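To make the shape contract concrete, here is a small sketch of the kind of validation `set_weights` performs before a ValueError is raised; it works on plain shape tuples rather than real Numpy arrays, and the function name is ours, not part of the API:

```python
def check_weight_shapes(current_shapes, new_shapes):
    """Raise ValueError when a proposed weight list is incompatible
    with the optimizer's current weights (illustrative sketch).

    Both arguments are lists of shape tuples, e.g. [(3, 4), (4,)].
    """
    if len(current_shapes) != len(new_shapes):
        raise ValueError(
            "Length mismatch: expected %d weight arrays, got %d"
            % (len(current_shapes), len(new_shapes)))
    for cur, new in zip(current_shapes, new_shapes):
        if cur != new:
            raise ValueError(
                "Shape mismatch: expected %s, got %s" % (cur, new))
```

Passing the shapes of `get_weights()` output back in (the round-trip case) always validates cleanly.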
© 2017 The TensorFlow Authors. All rights reserved.
Licensed under the Creative Commons Attribution License 3.0.
Code samples licensed under the Apache 2.0 License.
https://www.tensorflow.org/api_docs/python/tf/contrib/keras/optimizers/RMSprop