Package | Description |
---|---|
org.bytedeco.tensorflow | |
Modifier and Type | Method and Description |
---|---|
SparseApplyAdagradDA.Attrs | SparseApplyAdagradDA.Attrs.getPointer(long i) |
SparseApplyAdagradDA.Attrs | SparseApplyAdagradDA.Attrs.position(long position) |
SparseApplyAdagradDA.Attrs | SparseApplyAdagradDA.Attrs.use_locking_(boolean setter) |
static SparseApplyAdagradDA.Attrs | SparseApplyAdagradDA.UseLocking(boolean x) |
SparseApplyAdagradDA.Attrs | SparseApplyAdagradDA.Attrs.UseLocking(boolean x): If True, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention. |
Constructor and Description |
---|
SparseApplyAdagradDA(Scope scope, Input var, Input gradient_accumulator, Input gradient_squared_accumulator, Input grad, Input indices, Input lr, Input l1, Input l2, Input global_step, SparseApplyAdagradDA.Attrs attrs) |
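Putting the tables above together, here is a minimal usage sketch. Only the `SparseApplyAdagradDA` constructor and the static `UseLocking(boolean)` attrs factory come from this page; the `scope` and the `Input` operands are placeholders assumed to have been built elsewhere in the graph, so this fragment is illustrative rather than runnable on its own.

```java
import org.bytedeco.tensorflow.*;

// Sketch only: scope and all Input values below are placeholders for
// previously constructed Scope and Input objects (not shown here).
SparseApplyAdagradDA update = new SparseApplyAdagradDA(
    scope,
    var,                        // variable to update
    gradientAccumulator,        // running sum of gradients
    gradientSquaredAccumulator, // running sum of squared gradients
    grad,                       // current sparse gradient values
    indices,                    // rows of var/accum that grad applies to
    lr,                         // learning rate
    l1,                         // L1 regularization strength
    l2,                         // L2 regularization strength
    globalStep,                 // training step counter
    // UseLocking(true): protect the var/accum update with a lock, at the
    // cost of possible extra contention (see the method table above).
    SparseApplyAdagradDA.UseLocking(true));
```

The static `UseLocking(boolean)` returns a fresh `Attrs` instance, so it can be passed directly as the final constructor argument without building an `Attrs` object by hand.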
Copyright © 2022. All rights reserved.