public <T,U extends Number> SparseApplyRmsProp<T> sparseApplyRmsProp(Operand<T> var, Operand<T> ms, Operand<T> mom, Operand<T> lr, Operand<T> rho, Operand<T> momentum, Operand<T> epsilon, Operand<T> grad, Operand<U> indices, SparseApplyRmsProp.Options... options)
Builds a SparseApplyRmsProp operation.
var - Should be from a Variable().
ms - Should be from a Variable().
mom - Should be from a Variable().
lr - Scaling factor. Must be a scalar.
rho - Decay rate. Must be a scalar.
momentum -
epsilon - Ridge term. Must be a scalar.
grad - The gradient.
indices - A vector of indices into the first dimension of var, ms and mom.
options - carries optional attributes values
Returns a new instance of SparseApplyRmsProp.
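The sparse-update semantics can be sketched in plain Java. This is an illustrative re-implementation of the documented RMSProp math applied only at the given indices, not the TensorFlow kernel; the class and method names are hypothetical:

```java
// Sketch of the update SparseApplyRmsProp applies at the selected rows:
//   ms  <- rho * ms + (1 - rho) * grad^2
//   mom <- momentum * mom + lr * grad / sqrt(ms + epsilon)
//   var <- var - mom
public class SparseRmsPropSketch {
    // Each array holds one value per variable row; only rows listed in `indices` change.
    static void apply(double[] var, double[] ms, double[] mom,
                      double lr, double rho, double momentum, double epsilon,
                      double[] grad, int[] indices) {
        for (int i = 0; i < indices.length; i++) {
            int row = indices[i];   // row of var/ms/mom to update
            double g = grad[i];     // gradient for that row
            ms[row] = rho * ms[row] + (1 - rho) * g * g;
            mom[row] = momentum * mom[row] + lr * g / Math.sqrt(ms[row] + epsilon);
            var[row] -= mom[row];
        }
    }
}
```

Rows not named in `indices` are left untouched, which is the point of the sparse variant.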
public <T> ApplyMomentum<T> applyMomentum(Operand<T> var, Operand<T> accum, Operand<T> lr, Operand<T> grad, Operand<T> momentum, ApplyMomentum.Options... options)
Builds an ApplyMomentum operation.
var - Should be from a Variable().
accum - Should be from a Variable().
lr - Scaling factor. Must be a scalar.
grad - The gradient.
momentum - Momentum. Must be a scalar.
options - carries optional attributes values
Returns a new instance of ApplyMomentum.
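The momentum update rule can be sketched in plain Java. This is a minimal illustration of the documented math (including the Nesterov variant selectable through the op's options), not the TensorFlow kernel; the names are hypothetical:

```java
// Sketch of the ApplyMomentum update:
//   accum <- accum * momentum + grad
//   var   <- var - lr * accum                                  (plain momentum)
//   var   <- var - lr * grad - lr * momentum * accum           (Nesterov variant)
public class MomentumSketch {
    static void apply(double[] var, double[] accum,
                      double lr, double momentum, double[] grad, boolean useNesterov) {
        for (int i = 0; i < var.length; i++) {
            accum[i] = accum[i] * momentum + grad[i];
            var[i] -= useNesterov
                    ? lr * grad[i] + lr * momentum * accum[i]   // look-ahead step
                    : lr * accum[i];
        }
    }
}
```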
public <T,U extends Number> SparseApplyMomentum<T> sparseApplyMomentum(Operand<T> var, Operand<T> accum, Operand<T> lr, Operand<T> grad, Operand<U> indices, Operand<T> momentum, SparseApplyMomentum.Options... options)
Builds a SparseApplyMomentum operation.
var - Should be from a Variable().
accum - Should be from a Variable().
lr - Learning rate. Must be a scalar.
grad - The gradient.
indices - A vector of indices into the first dimension of var and accum.
momentum - Momentum. Must be a scalar.
options - carries optional attributes values
Returns a new instance of SparseApplyMomentum.
public <T,U extends Number> ResourceSparseApplyRmsProp resourceSparseApplyRmsProp(Operand<?> var, Operand<?> ms, Operand<?> mom, Operand<T> lr, Operand<T> rho, Operand<T> momentum, Operand<T> epsilon, Operand<T> grad, Operand<U> indices, ResourceSparseApplyRmsProp.Options... options)
Builds a ResourceSparseApplyRmsProp operation.
var - Should be from a Variable().
ms - Should be from a Variable().
mom - Should be from a Variable().
lr - Scaling factor. Must be a scalar.
rho - Decay rate. Must be a scalar.
momentum -
epsilon - Ridge term. Must be a scalar.
grad - The gradient.
indices - A vector of indices into the first dimension of var, ms and mom.
options - carries optional attributes values
Returns a new instance of ResourceSparseApplyRmsProp.
public <T> ResourceApplyAdagradDa resourceApplyAdagradDa(Operand<?> var, Operand<?> gradientAccumulator, Operand<?> gradientSquaredAccumulator, Operand<T> grad, Operand<T> lr, Operand<T> l1, Operand<T> l2, Operand<Long> globalStep, ResourceApplyAdagradDa.Options... options)
Builds a ResourceApplyAdagradDa operation.
var - Should be from a Variable().
gradientAccumulator - Should be from a Variable().
gradientSquaredAccumulator - Should be from a Variable().
grad - The gradient.
lr - Scaling factor. Must be a scalar.
l1 - L1 regularization. Must be a scalar.
l2 - L2 regularization. Must be a scalar.
globalStep - Training step number. Must be a scalar.
options - carries optional attributes values
Returns a new instance of ResourceApplyAdagradDa.
public <T> ApplyAddSign<T> applyAddSign(Operand<T> var, Operand<T> m, Operand<T> lr, Operand<T> alpha, Operand<T> signDecay, Operand<T> beta, Operand<T> grad, ApplyAddSign.Options... options)
Builds an ApplyAddSign operation.
var - Should be from a Variable().
m - Should be from a Variable().
lr - Scaling factor. Must be a scalar.
alpha - Must be a scalar.
signDecay - Must be a scalar.
beta - Must be a scalar.
grad - The gradient.
options - carries optional attributes values
Returns a new instance of ApplyAddSign.
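The AddSign update rule can be sketched in plain Java. This illustrates the documented math (step size grows when the gradient and its moving average agree in sign), not the TensorFlow kernel; the names are hypothetical:

```java
// Sketch of the ApplyAddSign update:
//   m      <- beta * m + (1 - beta) * grad
//   update <- (alpha + signDecay * sign(grad) * sign(m)) * grad
//   var    <- var - lr * update
public class AddSignSketch {
    static void apply(double[] var, double[] m,
                      double lr, double alpha, double signDecay, double beta, double[] grad) {
        for (int i = 0; i < var.length; i++) {
            m[i] = beta * m[i] + (1 - beta) * grad[i];          // moving average of gradients
            double sg = Math.signum(grad[i]) * Math.signum(m[i]);
            var[i] -= lr * (alpha + signDecay * sg) * grad[i];  // sign agreement scales the step
        }
    }
}
```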
public <T> ApplyGradientDescent<T> applyGradientDescent(Operand<T> var, Operand<T> alpha, Operand<T> delta, ApplyGradientDescent.Options... options)
Builds an ApplyGradientDescent operation.
var - Should be from a Variable().
alpha - Scaling factor. Must be a scalar.
delta - The change.
options - carries optional attributes values
Returns a new instance of ApplyGradientDescent.
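The update this op performs is the plain gradient-descent step. A minimal plain-Java sketch of the documented math (hypothetical names, not the TensorFlow kernel):

```java
// Sketch of the ApplyGradientDescent update:  var <- var - alpha * delta
public class GradientDescentSketch {
    static void apply(double[] var, double alpha, double[] delta) {
        for (int i = 0; i < var.length; i++) {
            var[i] -= alpha * delta[i];
        }
    }
}
```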
public <T,U extends Number> SparseApplyFtrl<T> sparseApplyFtrl(Operand<T> var, Operand<T> accum, Operand<T> linear, Operand<T> grad, Operand<U> indices, Operand<T> lr, Operand<T> l1, Operand<T> l2, Operand<T> l2Shrinkage, Operand<T> lrPower, SparseApplyFtrl.Options... options)
Builds a SparseApplyFtrl operation.
var - Should be from a Variable().
accum - Should be from a Variable().
linear - Should be from a Variable().
grad - The gradient.
indices - A vector of indices into the first dimension of var and accum.
lr - Scaling factor. Must be a scalar.
l1 - L1 regularization. Must be a scalar.
l2 - L2 shrinkage regularization. Must be a scalar.
l2Shrinkage -
lrPower - Scaling factor. Must be a scalar.
options - carries optional attributes values
Returns a new instance of SparseApplyFtrl.
public <T> ApplyAdam<T> applyAdam(Operand<T> var, Operand<T> m, Operand<T> v, Operand<T> beta1Power, Operand<T> beta2Power, Operand<T> lr, Operand<T> beta1, Operand<T> beta2, Operand<T> epsilon, Operand<T> grad, ApplyAdam.Options... options)
Builds an ApplyAdam operation.
var - Should be from a Variable().
m - Should be from a Variable().
v - Should be from a Variable().
beta1Power - Must be a scalar.
beta2Power - Must be a scalar.
lr - Scaling factor. Must be a scalar.
beta1 - Momentum factor. Must be a scalar.
beta2 - Momentum factor. Must be a scalar.
epsilon - Ridge term. Must be a scalar.
grad - The gradient.
options - carries optional attributes values
Returns a new instance of ApplyAdam.
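The Adam update rule, including the bias correction implied by the beta1Power and beta2Power inputs, can be sketched in plain Java. This mirrors the documented math, not the TensorFlow kernel; the names are hypothetical:

```java
// Sketch of the ApplyAdam update:
//   lr_t <- lr * sqrt(1 - beta2^t) / (1 - beta1^t)   (bias-corrected step size)
//   m    <- beta1 * m + (1 - beta1) * grad
//   v    <- beta2 * v + (1 - beta2) * grad^2
//   var  <- var - lr_t * m / (sqrt(v) + epsilon)
public class AdamSketch {
    static void apply(double[] var, double[] m, double[] v,
                      double beta1Power, double beta2Power, double lr,
                      double beta1, double beta2, double epsilon, double[] grad) {
        double lrT = lr * Math.sqrt(1 - beta2Power) / (1 - beta1Power);
        for (int i = 0; i < var.length; i++) {
            m[i] = beta1 * m[i] + (1 - beta1) * grad[i];            // first moment estimate
            v[i] = beta2 * v[i] + (1 - beta2) * grad[i] * grad[i];  // second moment estimate
            var[i] -= lrT * m[i] / (Math.sqrt(v[i]) + epsilon);
        }
    }
}
```

Note that the caller supplies beta1Power and beta2Power (the betas raised to the current step count); the op does not track the step itself.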
public NegTrain negTrain(Operand<Float> wIn, Operand<Float> wOut, Operand<Integer> examples, Operand<Integer> labels, Operand<Float> lr, List<Long> vocabCount, Long numNegativeSamples)
Builds a NegTrain operation.
wIn - input word embedding.
wOut - output word embedding.
examples - A vector of word ids.
labels - A vector of word ids.
lr -
vocabCount - Count of words in the vocabulary.
numNegativeSamples - Number of negative samples per example.
Returns a new instance of NegTrain.
public <T> PreventGradient<T> preventGradient(Operand<T> input, PreventGradient.Options... options)
Builds a PreventGradient operation.
input - any tensor.
options - carries optional attributes values
Returns a new instance of PreventGradient.
public <T> ApplyProximalAdagrad<T> applyProximalAdagrad(Operand<T> var, Operand<T> accum, Operand<T> lr, Operand<T> l1, Operand<T> l2, Operand<T> grad, ApplyProximalAdagrad.Options... options)
Builds an ApplyProximalAdagrad operation.
var - Should be from a Variable().
accum - Should be from a Variable().
lr - Scaling factor. Must be a scalar.
l1 - L1 regularization. Must be a scalar.
l2 - L2 regularization. Must be a scalar.
grad - The gradient.
options - carries optional attributes values
Returns a new instance of ApplyProximalAdagrad.
public AccumulatorNumAccumulated accumulatorNumAccumulated(Operand<String> handle)
Builds an AccumulatorNumAccumulated operation.
handle - The handle to an accumulator.
Returns a new instance of AccumulatorNumAccumulated.
public <T> ApplyAdagrad<T> applyAdagrad(Operand<T> var, Operand<T> accum, Operand<T> lr, Operand<T> grad, ApplyAdagrad.Options... options)
Builds an ApplyAdagrad operation.
var - Should be from a Variable().
accum - Should be from a Variable().
lr - Scaling factor. Must be a scalar.
grad - The gradient.
options - carries optional attributes values
Returns a new instance of ApplyAdagrad.
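The Adagrad update rule can be sketched in plain Java. This illustrates the documented math (per-coordinate step sizes shrink as squared gradients accumulate), not the TensorFlow kernel; the names are hypothetical:

```java
// Sketch of the ApplyAdagrad update:
//   accum <- accum + grad^2
//   var   <- var - lr * grad / sqrt(accum)
public class AdagradSketch {
    static void apply(double[] var, double[] accum, double lr, double[] grad) {
        for (int i = 0; i < var.length; i++) {
            accum[i] += grad[i] * grad[i];                 // accumulate squared gradients
            var[i] -= lr * grad[i] / Math.sqrt(accum[i]);  // per-coordinate scaled step
        }
    }
}
```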
public <T> ResourceApplyProximalAdagrad resourceApplyProximalAdagrad(Operand<?> var, Operand<?> accum, Operand<T> lr, Operand<T> l1, Operand<T> l2, Operand<T> grad, ResourceApplyProximalAdagrad.Options... options)
Builds a ResourceApplyProximalAdagrad operation.
var - Should be from a Variable().
accum - Should be from a Variable().
lr - Scaling factor. Must be a scalar.
l1 - L1 regularization. Must be a scalar.
l2 - L2 regularization. Must be a scalar.
grad - The gradient.
options - carries optional attributes values
Returns a new instance of ResourceApplyProximalAdagrad.
public <T,U extends Number> ResourceSparseApplyAdagradDa resourceSparseApplyAdagradDa(Operand<?> var, Operand<?> gradientAccumulator, Operand<?> gradientSquaredAccumulator, Operand<T> grad, Operand<U> indices, Operand<T> lr, Operand<T> l1, Operand<T> l2, Operand<Long> globalStep, ResourceSparseApplyAdagradDa.Options... options)
Builds a ResourceSparseApplyAdagradDa operation.
var - Should be from a Variable().
gradientAccumulator - Should be from a Variable().
gradientSquaredAccumulator - Should be from a Variable().
grad - The gradient.
indices - A vector of indices into the first dimension of var and accum.
lr - Learning rate. Must be a scalar.
l1 - L1 regularization. Must be a scalar.
l2 - L2 regularization. Must be a scalar.
globalStep - Training step number. Must be a scalar.
options - carries optional attributes values
Returns a new instance of ResourceSparseApplyAdagradDa.
public <T,U extends Number> SparseApplyCenteredRmsProp<T> sparseApplyCenteredRmsProp(Operand<T> var, Operand<T> mg, Operand<T> ms, Operand<T> mom, Operand<T> lr, Operand<T> rho, Operand<T> momentum, Operand<T> epsilon, Operand<T> grad, Operand<U> indices, SparseApplyCenteredRmsProp.Options... options)
Builds a SparseApplyCenteredRmsProp operation.
var - Should be from a Variable().
mg - Should be from a Variable().
ms - Should be from a Variable().
mom - Should be from a Variable().
lr - Scaling factor. Must be a scalar.
rho - Decay rate. Must be a scalar.
momentum -
epsilon - Ridge term. Must be a scalar.
grad - The gradient.
indices - A vector of indices into the first dimension of var, ms and mom.
options - carries optional attributes values
Returns a new instance of SparseApplyCenteredRmsProp.
public AccumulatorSetGlobalStep accumulatorSetGlobalStep(Operand<String> handle, Operand<Long> newGlobalStep)
Builds an AccumulatorSetGlobalStep operation.
handle - The handle to an accumulator.
newGlobalStep - The new global_step value to set.
Returns a new instance of AccumulatorSetGlobalStep.
public Save save(Operand<String> prefix, Operand<String> tensorNames, Operand<String> shapeAndSlices, Iterable<Operand<?>> tensors)
Builds a Save operation.
prefix - Must have a single element. The prefix of the V2 checkpoint to which we write the tensors.
tensorNames - shape {N}. The names of the tensors to be saved.
shapeAndSlices - shape {N}. The slice specs of the tensors to be saved.
tensors - `N` tensors to save.
Returns a new instance of Save.
public <T> ApplyProximalGradientDescent<T> applyProximalGradientDescent(Operand<T> var, Operand<T> alpha, Operand<T> l1, Operand<T> l2, Operand<T> delta, ApplyProximalGradientDescent.Options... options)
Builds an ApplyProximalGradientDescent operation.
var - Should be from a Variable().
alpha - Scaling factor. Must be a scalar.
l1 - L1 regularization. Must be a scalar.
l2 - L2 regularization. Must be a scalar.
delta - The change.
options - carries optional attributes values
Returns a new instance of ApplyProximalGradientDescent.
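The proximal update can be sketched in plain Java: a plain gradient step followed by the L1 soft-threshold and L2 shrinkage. This illustrates the documented math, not the TensorFlow kernel; the names are hypothetical:

```java
// Sketch of the ApplyProximalGradientDescent update:
//   prox <- var - alpha * delta
//   var  <- sign(prox) * max(|prox| - alpha * l1, 0) / (1 + alpha * l2)
public class ProximalGradientDescentSketch {
    static void apply(double[] var, double alpha, double l1, double l2, double[] delta) {
        for (int i = 0; i < var.length; i++) {
            double prox = var[i] - alpha * delta[i];       // plain gradient step
            var[i] = Math.signum(prox)                     // soft-threshold for L1,
                   * Math.max(Math.abs(prox) - alpha * l1, 0.0)
                   / (1.0 + alpha * l2);                   // shrink for L2
        }
    }
}
```

With l1 = l2 = 0 this reduces to ordinary gradient descent; a nonzero l1 zeroes out coordinates whose magnitude falls below alpha * l1.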
public <T> ApplyPowerSign<T> applyPowerSign(Operand<T> var, Operand<T> m, Operand<T> lr, Operand<T> logbase, Operand<T> signDecay, Operand<T> beta, Operand<T> grad, ApplyPowerSign.Options... options)
Builds an ApplyPowerSign operation.
var - Should be from a Variable().
m - Should be from a Variable().
lr - Scaling factor. Must be a scalar.
logbase - Must be a scalar.
signDecay - Must be a scalar.
beta - Must be a scalar.
grad - The gradient.
options - carries optional attributes values
Returns a new instance of ApplyPowerSign.
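The PowerSign rule is AddSign's multiplicative sibling: the sign agreement appears in an exponent rather than an additive term. A plain-Java sketch of the documented math (hypothetical names, not the TensorFlow kernel):

```java
// Sketch of the ApplyPowerSign update:
//   m      <- beta * m + (1 - beta) * grad
//   update <- exp(logbase * signDecay * sign(grad) * sign(m)) * grad
//   var    <- var - lr * update
public class PowerSignSketch {
    static void apply(double[] var, double[] m,
                      double lr, double logbase, double signDecay, double beta, double[] grad) {
        for (int i = 0; i < var.length; i++) {
            m[i] = beta * m[i] + (1 - beta) * grad[i];          // moving average of gradients
            double sg = Math.signum(grad[i]) * Math.signum(m[i]);
            var[i] -= lr * Math.exp(logbase * signDecay * sg) * grad[i];
        }
    }
}
```

With logbase = ln 2 the step is doubled when gradient and average agree in sign and halved when they disagree.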
public <T> ApplyAdadelta<T> applyAdadelta(Operand<T> var, Operand<T> accum, Operand<T> accumUpdate, Operand<T> lr, Operand<T> rho, Operand<T> epsilon, Operand<T> grad, ApplyAdadelta.Options... options)
Builds an ApplyAdadelta operation.
var - Should be from a Variable().
accum - Should be from a Variable().
accumUpdate - Should be from a Variable().
lr - Scaling factor. Must be a scalar.
rho - Decay factor. Must be a scalar.
epsilon - Constant factor. Must be a scalar.
grad - The gradient.
options - carries optional attributes values
Returns a new instance of ApplyAdadelta.
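The Adadelta update rule can be sketched in plain Java. This illustrates the documented math (the step size is the ratio of two decayed RMS accumulators), not the TensorFlow kernel; the names are hypothetical:

```java
// Sketch of the ApplyAdadelta update:
//   accum       <- rho * accum + (1 - rho) * grad^2
//   update      <- sqrt(accumUpdate + epsilon) / sqrt(accum + epsilon) * grad
//   accumUpdate <- rho * accumUpdate + (1 - rho) * update^2
//   var         <- var - lr * update
public class AdadeltaSketch {
    static void apply(double[] var, double[] accum, double[] accumUpdate,
                      double lr, double rho, double epsilon, double[] grad) {
        for (int i = 0; i < var.length; i++) {
            accum[i] = rho * accum[i] + (1 - rho) * grad[i] * grad[i];
            double update = Math.sqrt(accumUpdate[i] + epsilon)
                          / Math.sqrt(accum[i] + epsilon) * grad[i];
            accumUpdate[i] = rho * accumUpdate[i] + (1 - rho) * update * update;
            var[i] -= lr * update;
        }
    }
}
```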
public Restore restore(Operand<String> prefix, Operand<String> tensorNames, Operand<String> shapeAndSlices, List<Class<?>> dtypes)
Builds a Restore operation.
prefix - Must have a single element. The prefix of a V2 checkpoint.
tensorNames - shape {N}. The names of the tensors to be restored.
shapeAndSlices - shape {N}. The slice specs of the tensors to be restored.
dtypes - shape {N}. The list of expected dtype for the tensors. Must match those stored in the checkpoint.
Returns a new instance of Restore.
public <T> ResourceApplyRmsProp resourceApplyRmsProp(Operand<?> var, Operand<?> ms, Operand<?> mom, Operand<T> lr, Operand<T> rho, Operand<T> momentum, Operand<T> epsilon, Operand<T> grad, ResourceApplyRmsProp.Options... options)
Builds a ResourceApplyRmsProp operation.
var - Should be from a Variable().
ms - Should be from a Variable().
mom - Should be from a Variable().
lr - Scaling factor. Must be a scalar.
rho - Decay rate. Must be a scalar.
momentum -
epsilon - Ridge term. Must be a scalar.
grad - The gradient.
options - carries optional attributes values
Returns a new instance of ResourceApplyRmsProp.
public <T> RestoreSlice<T> restoreSlice(Operand<String> filePattern, Operand<String> tensorName, Operand<String> shapeAndSlice, Class<T> dt, RestoreSlice.Options... options)
Builds a RestoreSlice operation.
filePattern - Must have a single element. The pattern of the files from which we read the tensor.
tensorName - Must have a single element. The name of the tensor to be restored.
shapeAndSlice - Scalar. The shapes and slice specifications to use when restoring the tensor.
dt - The type of the tensor to be restored.
options - carries optional attributes values
Returns a new instance of RestoreSlice.
public <T,U extends Number> SparseApplyAdagrad<T> sparseApplyAdagrad(Operand<T> var, Operand<T> accum, Operand<T> lr, Operand<T> grad, Operand<U> indices, SparseApplyAdagrad.Options... options)
Builds a SparseApplyAdagrad operation.
var - Should be from a Variable().
accum - Should be from a Variable().
lr - Learning rate. Must be a scalar.
grad - The gradient.
indices - A vector of indices into the first dimension of var and accum.
options - carries optional attributes values
Returns a new instance of SparseApplyAdagrad.
public <T> ResourceApplyCenteredRmsProp resourceApplyCenteredRmsProp(Operand<?> var, Operand<?> mg, Operand<?> ms, Operand<?> mom, Operand<T> lr, Operand<T> rho, Operand<T> momentum, Operand<T> epsilon, Operand<T> grad, ResourceApplyCenteredRmsProp.Options... options)
Builds a ResourceApplyCenteredRmsProp operation.
var - Should be from a Variable().
mg - Should be from a Variable().
ms - Should be from a Variable().
mom - Should be from a Variable().
lr - Scaling factor. Must be a scalar.
rho - Decay rate. Must be a scalar.
momentum -
epsilon - Ridge term. Must be a scalar.
grad - The gradient.
options - carries optional attributes values
Returns a new instance of ResourceApplyCenteredRmsProp.
public <T> ResourceApplyProximalGradientDescent resourceApplyProximalGradientDescent(Operand<?> var, Operand<T> alpha, Operand<T> l1, Operand<T> l2, Operand<T> delta, ResourceApplyProximalGradientDescent.Options... options)
Builds a ResourceApplyProximalGradientDescent operation.
var - Should be from a Variable().
alpha - Scaling factor. Must be a scalar.
l1 - L1 regularization. Must be a scalar.
l2 - L2 regularization. Must be a scalar.
delta - The change.
options - carries optional attributes values
Returns a new instance of ResourceApplyProximalGradientDescent.
public <T> ApplyFtrl<T> applyFtrl(Operand<T> var, Operand<T> accum, Operand<T> linear, Operand<T> grad, Operand<T> lr, Operand<T> l1, Operand<T> l2, Operand<T> l2Shrinkage, Operand<T> lrPower, ApplyFtrl.Options... options)
Builds an ApplyFtrl operation.
var - Should be from a Variable().
accum - Should be from a Variable().
linear - Should be from a Variable().
grad - The gradient.
lr - Scaling factor. Must be a scalar.
l1 - L1 regularization. Must be a scalar.
l2 - L2 shrinkage regularization. Must be a scalar.
l2Shrinkage -
lrPower - Scaling factor. Must be a scalar.
options - carries optional attributes values
Returns a new instance of ApplyFtrl.
public <T> ResourceApplyFtrl resourceApplyFtrl(Operand<?> var, Operand<?> accum, Operand<?> linear, Operand<T> grad, Operand<T> lr, Operand<T> l1, Operand<T> l2, Operand<T> l2Shrinkage, Operand<T> lrPower, ResourceApplyFtrl.Options... options)
Builds a ResourceApplyFtrl operation.
var - Should be from a Variable().
accum - Should be from a Variable().
linear - Should be from a Variable().
grad - The gradient.
lr - Scaling factor. Must be a scalar.
l1 - L1 regularization. Must be a scalar.
l2 - L2 shrinkage regularization. Must be a scalar.
l2Shrinkage -
lrPower - Scaling factor. Must be a scalar.
options - carries optional attributes values
Returns a new instance of ResourceApplyFtrl.
public <T,U extends Number> ResourceSparseApplyFtrl resourceSparseApplyFtrl(Operand<?> var, Operand<?> accum, Operand<?> linear, Operand<T> grad, Operand<U> indices, Operand<T> lr, Operand<T> l1, Operand<T> l2, Operand<T> l2Shrinkage, Operand<T> lrPower, ResourceSparseApplyFtrl.Options... options)
Builds a ResourceSparseApplyFtrl operation.
var - Should be from a Variable().
accum - Should be from a Variable().
linear - Should be from a Variable().
grad - The gradient.
indices - A vector of indices into the first dimension of var and accum.
lr - Scaling factor. Must be a scalar.
l1 - L1 regularization. Must be a scalar.
l2 - L2 shrinkage regularization. Must be a scalar.
l2Shrinkage -
lrPower - Scaling factor. Must be a scalar.
options - carries optional attributes values
Returns a new instance of ResourceSparseApplyFtrl.
public <T,U extends Number> ResourceSparseApplyProximalGradientDescent resourceSparseApplyProximalGradientDescent(Operand<?> var, Operand<T> alpha, Operand<T> l1, Operand<T> l2, Operand<T> grad, Operand<U> indices, ResourceSparseApplyProximalGradientDescent.Options... options)
Builds a ResourceSparseApplyProximalGradientDescent operation.
var - Should be from a Variable().
alpha - Scaling factor. Must be a scalar.
l1 - L1 regularization. Must be a scalar.
l2 - L2 regularization. Must be a scalar.
grad - The gradient.
indices - A vector of indices into the first dimension of var and accum.
options - carries optional attributes values
Returns a new instance of ResourceSparseApplyProximalGradientDescent.
public <T,U extends Number> SparseApplyAdagradDa<T> sparseApplyAdagradDa(Operand<T> var, Operand<T> gradientAccumulator, Operand<T> gradientSquaredAccumulator, Operand<T> grad, Operand<U> indices, Operand<T> lr, Operand<T> l1, Operand<T> l2, Operand<Long> globalStep, SparseApplyAdagradDa.Options... options)
Builds a SparseApplyAdagradDa operation.
var - Should be from a Variable().
gradientAccumulator - Should be from a Variable().
gradientSquaredAccumulator - Should be from a Variable().
grad - The gradient.
indices - A vector of indices into the first dimension of var and accum.
lr - Learning rate. Must be a scalar.
l1 - L1 regularization. Must be a scalar.
l2 - L2 regularization. Must be a scalar.
globalStep - Training step number. Must be a scalar.
options - carries optional attributes values
Returns a new instance of SparseApplyAdagradDa.
public <T> ResourceApplyAdagrad resourceApplyAdagrad(Operand<?> var, Operand<?> accum, Operand<T> lr, Operand<T> grad, ResourceApplyAdagrad.Options... options)
Builds a ResourceApplyAdagrad operation.
var - Should be from a Variable().
accum - Should be from a Variable().
lr - Scaling factor. Must be a scalar.
grad - The gradient.
options - carries optional attributes values
Returns a new instance of ResourceApplyAdagrad.
public <T,U extends Number> ResourceSparseApplyAdagrad resourceSparseApplyAdagrad(Operand<?> var, Operand<?> accum, Operand<T> lr, Operand<T> grad, Operand<U> indices, ResourceSparseApplyAdagrad.Options... options)
Builds a ResourceSparseApplyAdagrad operation.
var - Should be from a Variable().
accum - Should be from a Variable().
lr - Learning rate. Must be a scalar.
grad - The gradient.
indices - A vector of indices into the first dimension of var and accum.
options - carries optional attributes values
Returns a new instance of ResourceSparseApplyAdagrad.
public <T,U extends Number> ResourceSparseApplyMomentum resourceSparseApplyMomentum(Operand<?> var, Operand<?> accum, Operand<T> lr, Operand<T> grad, Operand<U> indices, Operand<T> momentum, ResourceSparseApplyMomentum.Options... options)
Builds a ResourceSparseApplyMomentum operation.
var - Should be from a Variable().
accum - Should be from a Variable().
lr - Learning rate. Must be a scalar.
grad - The gradient.
indices - A vector of indices into the first dimension of var and accum.
momentum - Momentum. Must be a scalar.
options - carries optional attributes values
Returns a new instance of ResourceSparseApplyMomentum.
public SdcaShrinkL1 sdcaShrinkL1(Iterable<Operand<Float>> weights, Float l1, Float l2)
Builds an SdcaShrinkL1 operation.
weights - a list of vectors where each value is the weight associated with a feature group.
l1 - Symmetric l1 regularization strength.
l2 - Symmetric l2 regularization strength. Should be a positive float.
Returns a new instance of SdcaShrinkL1.
public <T> ResourceApplyAddSign resourceApplyAddSign(Operand<?> var, Operand<?> m, Operand<T> lr, Operand<T> alpha, Operand<T> signDecay, Operand<T> beta, Operand<T> grad, ResourceApplyAddSign.Options... options)
Builds a ResourceApplyAddSign operation.
var - Should be from a Variable().
m - Should be from a Variable().
lr - Scaling factor. Must be a scalar.
alpha - Must be a scalar.
signDecay - Must be a scalar.
beta - Must be a scalar.
grad - The gradient.
options - carries optional attributes values
Returns a new instance of ResourceApplyAddSign.
public <T> ResourceApplyAdadelta resourceApplyAdadelta(Operand<?> var, Operand<?> accum, Operand<?> accumUpdate, Operand<T> lr, Operand<T> rho, Operand<T> epsilon, Operand<T> grad, ResourceApplyAdadelta.Options... options)
Builds a ResourceApplyAdadelta operation.
var - Should be from a Variable().
accum - Should be from a Variable().
accumUpdate - Should be from a Variable().
lr - Scaling factor. Must be a scalar.
rho - Decay factor. Must be a scalar.
epsilon - Constant factor. Must be a scalar.
grad - The gradient.
options - carries optional attributes values
Returns a new instance of ResourceApplyAdadelta.
public <T> ApplyCenteredRmsProp<T> applyCenteredRmsProp(Operand<T> var, Operand<T> mg, Operand<T> ms, Operand<T> mom, Operand<T> lr, Operand<T> rho, Operand<T> momentum, Operand<T> epsilon, Operand<T> grad, ApplyCenteredRmsProp.Options... options)
Builds an ApplyCenteredRmsProp operation.
var - Should be from a Variable().
mg - Should be from a Variable().
ms - Should be from a Variable().
mom - Should be from a Variable().
lr - Scaling factor. Must be a scalar.
rho - Decay rate. Must be a scalar.
momentum -
epsilon - Ridge term. Must be a scalar.
grad - The gradient.
options - carries optional attributes values
Returns a new instance of ApplyCenteredRmsProp.
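Centered RMSProp additionally tracks the mean gradient mg and normalizes by an estimate of the gradient variance (ms - mg^2) instead of the raw second moment. A plain-Java sketch of the documented math (hypothetical names, not the TensorFlow kernel):

```java
// Sketch of the ApplyCenteredRmsProp update:
//   mg  <- rho * mg + (1 - rho) * grad
//   ms  <- rho * ms + (1 - rho) * grad^2
//   mom <- momentum * mom + lr * grad / sqrt(ms - mg^2 + epsilon)
//   var <- var - mom
public class CenteredRmsPropSketch {
    static void apply(double[] var, double[] mg, double[] ms, double[] mom,
                      double lr, double rho, double momentum, double epsilon, double[] grad) {
        for (int i = 0; i < var.length; i++) {
            mg[i] = rho * mg[i] + (1 - rho) * grad[i];            // mean gradient
            ms[i] = rho * ms[i] + (1 - rho) * grad[i] * grad[i];  // mean squared gradient
            // Denominator uses the variance estimate (ms - mg^2):
            mom[i] = momentum * mom[i]
                   + lr * grad[i] / Math.sqrt(ms[i] - mg[i] * mg[i] + epsilon);
            var[i] -= mom[i];
        }
    }
}
```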
public <T,U extends Number> ResourceSparseApplyAdadelta resourceSparseApplyAdadelta(Operand<?> var, Operand<?> accum, Operand<?> accumUpdate, Operand<T> lr, Operand<T> rho, Operand<T> epsilon, Operand<T> grad, Operand<U> indices, ResourceSparseApplyAdadelta.Options... options)
Builds a ResourceSparseApplyAdadelta operation.
var -
accum - Should be from a Variable().
accumUpdate - Should be from a Variable().
lr - Learning rate. Must be a scalar.
rho - Decay factor. Must be a scalar.
epsilon - Constant factor. Must be a scalar.
grad - The gradient.
indices - A vector of indices into the first dimension of var and accum.
options - carries optional attributes values
Returns a new instance of ResourceSparseApplyAdadelta.
public <T,U extends Number> SparseApplyProximalGradientDescent<T> sparseApplyProximalGradientDescent(Operand<T> var, Operand<T> alpha, Operand<T> l1, Operand<T> l2, Operand<T> grad, Operand<U> indices, SparseApplyProximalGradientDescent.Options... options)
Builds a SparseApplyProximalGradientDescent operation.
var - Should be from a Variable().
alpha - Scaling factor. Must be a scalar.
l1 - L1 regularization. Must be a scalar.
l2 - L2 regularization. Must be a scalar.
grad - The gradient.
indices - A vector of indices into the first dimension of var and accum.
options - carries optional attributes values
Returns a new instance of SparseApplyProximalGradientDescent.
public <T> ResourceApplyGradientDescent resourceApplyGradientDescent(Operand<?> var, Operand<T> alpha, Operand<T> delta, ResourceApplyGradientDescent.Options... options)
Builds a ResourceApplyGradientDescent operation.
var - Should be from a Variable().
alpha - Scaling factor. Must be a scalar.
delta - The change.
options - carries optional attributes values
Returns a new instance of ResourceApplyGradientDescent.
public <T,U extends Number> SparseApplyAdadelta<T> sparseApplyAdadelta(Operand<T> var, Operand<T> accum, Operand<T> accumUpdate, Operand<T> lr, Operand<T> rho, Operand<T> epsilon, Operand<T> grad, Operand<U> indices, SparseApplyAdadelta.Options... options)
Builds a SparseApplyAdadelta operation.
var -
accum - Should be from a Variable().
accumUpdate - Should be from a Variable().
lr - Learning rate. Must be a scalar.
rho - Decay factor. Must be a scalar.
epsilon - Constant factor. Must be a scalar.
grad - The gradient.
indices - A vector of indices into the first dimension of var and accum.
options - carries optional attributes values
Returns a new instance of SparseApplyAdadelta.
public <T> ResourceApplyMomentum resourceApplyMomentum(Operand<?> var, Operand<?> accum, Operand<T> lr, Operand<T> grad, Operand<T> momentum, ResourceApplyMomentum.Options... options)
Builds a ResourceApplyMomentum operation.
var - Should be from a Variable().
accum - Should be from a Variable().
lr - Scaling factor. Must be a scalar.
grad - The gradient.
momentum - Momentum. Must be a scalar.
options - carries optional attributes values
Returns a new instance of ResourceApplyMomentum.
public <T> ApplyRmsProp<T> applyRmsProp(Operand<T> var, Operand<T> ms, Operand<T> mom, Operand<T> lr, Operand<T> rho, Operand<T> momentum, Operand<T> epsilon, Operand<T> grad, ApplyRmsProp.Options... options)
Builds an ApplyRmsProp operation.
var - Should be from a Variable().
ms - Should be from a Variable().
mom - Should be from a Variable().
lr - Scaling factor. Must be a scalar.
rho - Decay rate. Must be a scalar.
momentum -
epsilon - Ridge term. Must be a scalar.
grad - The gradient.
options - carries optional attributes values
Returns a new instance of ApplyRmsProp.
public <T,U extends Number> ResourceSparseApplyProximalAdagrad resourceSparseApplyProximalAdagrad(Operand<?> var, Operand<?> accum, Operand<T> lr, Operand<T> l1, Operand<T> l2, Operand<T> grad, Operand<U> indices, ResourceSparseApplyProximalAdagrad.Options... options)
Builds a ResourceSparseApplyProximalAdagrad operation.
var - Should be from a Variable().
accum - Should be from a Variable().
lr - Learning rate. Must be a scalar.
l1 - L1 regularization. Must be a scalar.
l2 - L2 regularization. Must be a scalar.
grad - The gradient.
indices - A vector of indices into the first dimension of var and accum.
options - carries optional attributes values
Returns a new instance of ResourceSparseApplyProximalAdagrad.
public GenerateVocabRemapping generateVocabRemapping(Operand<String> newVocabFile, Operand<String> oldVocabFile, Long newVocabOffset, Long numNewVocab, GenerateVocabRemapping.Options... options)
Builds a GenerateVocabRemapping operation.
newVocabFile - Path to the new vocab file.
oldVocabFile - Path to the old vocab file.
newVocabOffset - How many entries into the new vocab file to start reading.
numNewVocab - Number of entries in the new vocab file to remap.
options - carries optional attributes values
Returns a new instance of GenerateVocabRemapping.
public <T> AccumulatorTakeGradient<T> accumulatorTakeGradient(Operand<String> handle, Operand<Integer> numRequired, Class<T> dtype)
Builds an AccumulatorTakeGradient operation.
handle - The handle to an accumulator.
numRequired - Number of gradients required before we return an aggregate.
dtype - The data type of accumulated gradients. Needs to correspond to the type of the accumulator.
Returns a new instance of AccumulatorTakeGradient.
public <T,U extends Number> ResourceSparseApplyCenteredRmsProp resourceSparseApplyCenteredRmsProp(Operand<?> var, Operand<?> mg, Operand<?> ms, Operand<?> mom, Operand<T> lr, Operand<T> rho, Operand<T> momentum, Operand<T> epsilon, Operand<T> grad, Operand<U> indices, ResourceSparseApplyCenteredRmsProp.Options... options)
Builds a ResourceSparseApplyCenteredRmsProp operation.
var - Should be from a Variable().
mg - Should be from a Variable().
ms - Should be from a Variable().
mom - Should be from a Variable().
lr - Scaling factor. Must be a scalar.
rho - Decay rate. Must be a scalar.
momentum -
epsilon - Ridge term. Must be a scalar.
grad - The gradient.
indices - A vector of indices into the first dimension of var, ms and mom.
options - carries optional attributes values
Returns a new instance of ResourceSparseApplyCenteredRmsProp.
public <T,U extends Number> SparseApplyProximalAdagrad<T> sparseApplyProximalAdagrad(Operand<T> var, Operand<T> accum, Operand<T> lr, Operand<T> l1, Operand<T> l2, Operand<T> grad, Operand<U> indices, SparseApplyProximalAdagrad.Options... options)
Builds a SparseApplyProximalAdagrad operation.
var - Should be from a Variable().
accum - Should be from a Variable().
lr - Learning rate. Must be a scalar.
l1 - L1 regularization. Must be a scalar.
l2 - L2 regularization. Must be a scalar.
grad - The gradient.
indices - A vector of indices into the first dimension of var and accum.
options - carries optional attributes values
Returns a new instance of SparseApplyProximalAdagrad.
public <T> AccumulatorApplyGradient accumulatorApplyGradient(Operand<String> handle, Operand<Long> localStep, Operand<T> gradient)
Builds an AccumulatorApplyGradient operation.
handle - The handle to an accumulator.
localStep - The local_step value at which the gradient was computed.
gradient - A tensor of the gradient to be accumulated.
Returns a new instance of AccumulatorApplyGradient.
public <T> ResourceApplyPowerSign resourceApplyPowerSign(Operand<?> var, Operand<?> m, Operand<T> lr, Operand<T> logbase, Operand<T> signDecay, Operand<T> beta, Operand<T> grad, ResourceApplyPowerSign.Options... options)
Builds a ResourceApplyPowerSign operation.
var - Should be from a Variable().
m - Should be from a Variable().
lr - Scaling factor. Must be a scalar.
logbase - Must be a scalar.
signDecay - Must be a scalar.
beta - Must be a scalar.
grad - The gradient.
options - carries optional attributes values
Returns a new instance of ResourceApplyPowerSign.
public <T> ConditionalAccumulator conditionalAccumulator(Class<T> dtype, Shape shape, ConditionalAccumulator.Options... options)
Builds a ConditionalAccumulator operation.
dtype - The type of the value being accumulated.
shape - The shape of the values, can be [], in which case shape is unknown.
options - carries optional attributes values
Returns a new instance of ConditionalAccumulator.
public SdcaFprint sdcaFprint(Operand<String> input)
Builds an SdcaFprint operation.
input - vector of strings to compute fingerprints on.
Returns a new instance of SdcaFprint.
public <T> ApplyAdagradDa<T> applyAdagradDa(Operand<T> var, Operand<T> gradientAccumulator, Operand<T> gradientSquaredAccumulator, Operand<T> grad, Operand<T> lr, Operand<T> l1, Operand<T> l2, Operand<Long> globalStep, ApplyAdagradDa.Options... options)
Builds an ApplyAdagradDa operation.
var - Should be from a Variable().
gradientAccumulator - Should be from a Variable().
gradientSquaredAccumulator - Should be from a Variable().
grad - The gradient.
lr - Scaling factor. Must be a scalar.
l1 - L1 regularization. Must be a scalar.
l2 - L2 regularization. Must be a scalar.
globalStep - Training step number. Must be a scalar.
options - carries optional attributes values
Returns a new instance of ApplyAdagradDa.
public SaveSlices saveSlices(Operand<String> filename, Operand<String> tensorNames, Operand<String> shapesAndSlices, Iterable<Operand<?>> data)
Builds a SaveSlices operation.
filename - Must have a single element. The name of the file to which we write the tensor.
tensorNames - Shape `[N]`. The names of the tensors to be saved.
shapesAndSlices - Shape `[N]`. The shapes and slice specifications to use when saving the tensors.
data - `N` tensors to save.
Returns a new instance of SaveSlices.
public MergeV2Checkpoints mergeV2Checkpoints(Operand<String> checkpointPrefixes, Operand<String> destinationPrefix, MergeV2Checkpoints.Options... options)
Builds a MergeV2Checkpoints operation.
checkpointPrefixes - prefixes of V2 checkpoints to merge.
destinationPrefix - scalar. The desired final prefix. Allowed to be the same as one of the checkpointPrefixes.
options - carries optional attributes values
Returns a new instance of MergeV2Checkpoints.
public <T> ResourceApplyAdam resourceApplyAdam(Operand<?> var, Operand<?> m, Operand<?> v, Operand<T> beta1Power, Operand<T> beta2Power, Operand<T> lr, Operand<T> beta1, Operand<T> beta2, Operand<T> epsilon, Operand<T> grad, ResourceApplyAdam.Options... options)
Builds a ResourceApplyAdam operation.
var - Should be from a Variable().
m - Should be from a Variable().
v - Should be from a Variable().
beta1Power - Must be a scalar.
beta2Power - Must be a scalar.
lr - Scaling factor. Must be a scalar.
beta1 - Momentum factor. Must be a scalar.
beta2 - Momentum factor. Must be a scalar.
epsilon - Ridge term. Must be a scalar.
grad - The gradient.
options - carries optional attributes values
Returns a new instance of ResourceApplyAdam.
Copyright © 2022. All rights reserved.