Package | Description
---|---
org.bytedeco.tensorflow |
Modifier and Type | Method and Description
---|---
ApplyAdagradDA.Attrs | ApplyAdagradDA.Attrs.getPointer(long i)
ApplyAdagradDA.Attrs | ApplyAdagradDA.Attrs.position(long position)
ApplyAdagradDA.Attrs | ApplyAdagradDA.Attrs.use_locking_(boolean setter)
static ApplyAdagradDA.Attrs | ApplyAdagradDA.UseLocking(boolean x)
ApplyAdagradDA.Attrs | ApplyAdagradDA.Attrs.UseLocking(boolean x) If True, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
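The setters above all configure the same underlying `use_locking` attribute. A minimal sketch of the equivalent call styles, assuming the no-argument `Attrs` constructor that JavaCPP normally generates for these nested classes:

```java
import org.bytedeco.tensorflow.ApplyAdagradDA;

// Static factory documented above: builds a fresh Attrs with use_locking set.
ApplyAdagradDA.Attrs viaFactory = ApplyAdagradDA.UseLocking(true);

// Chained instance setter on an explicitly constructed Attrs
// (the no-arg constructor is assumed, per the usual JavaCPP pattern).
ApplyAdagradDA.Attrs viaSetter = new ApplyAdagradDA.Attrs().UseLocking(true);

// Raw field setter; it also returns the Attrs, so it chains the same way.
ApplyAdagradDA.Attrs viaField = new ApplyAdagradDA.Attrs().use_locking_(true);
```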
Constructor and Description
---|
ApplyAdagradDA(Scope scope, Input var, Input gradient_accumulator, Input gradient_squared_accumulator, Input grad, Input lr, Input l1, Input l2, Input global_step, ApplyAdagradDA.Attrs attrs)
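A minimal construction sketch using only the signature above. How the `Input` arguments are produced (variable, accumulator, and scalar hyperparameter ops built elsewhere in the graph) is outside this page, so they are taken as parameters here rather than constructed:

```java
import org.bytedeco.tensorflow.ApplyAdagradDA;
import org.bytedeco.tensorflow.Input;
import org.bytedeco.tensorflow.Scope;

public class ApplyAdagradDASketch {
    // Wires an ApplyAdagradDA node into the graph with lock-protected
    // updates. All Input arguments are assumed to come from earlier
    // graph construction.
    static ApplyAdagradDA applyWithLocking(
            Scope scope,
            Input var,
            Input gradientAccumulator,
            Input gradientSquaredAccumulator,
            Input grad,
            Input lr,
            Input l1,
            Input l2,
            Input globalStep) {
        // Request lock-protected updates via the documented static factory.
        ApplyAdagradDA.Attrs attrs = ApplyAdagradDA.UseLocking(true);
        return new ApplyAdagradDA(scope, var, gradientAccumulator,
                gradientSquaredAccumulator, grad, lr, l1, l2,
                globalStep, attrs);
    }
}
```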