@Namespace(value="tensorflow::ops") @NoOffset @Properties(inherit=tensorflow.class) public class SparseApplyAdagradDA extends Pointer
Update entries in '*var' and '*accum' according to the proximal adagrad scheme.

Optional attributes (see Attrs):
* use_locking: If True, updating of the var and accum tensors will be protected by
a lock; otherwise the behavior is undefined, but may exhibit less contention.

Returns:
* Output: Same as "var".

Modifier and Type | Class and Description |
---|---|
static class | SparseApplyAdagradDA.Attrs: Optional attribute setters for SparseApplyAdagradDA |
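For orientation, the per-row update this op performs follows the AdagradDA (dual-averaging) scheme. The sketch below is a reconstruction from that scheme, not a transcription of the TensorFlow kernel, so treat the exact thresholding form as an assumption:

```latex
\begin{aligned}
g_a &\leftarrow g_a + g, \qquad s_a \leftarrow s_a + g^2,\\
\mathit{var} &\leftarrow -\operatorname{sign}(g_a)\,
  \frac{lr \cdot \max\bigl(|g_a| - l1 \cdot k,\ 0\bigr)}{l2 \cdot k \cdot lr + \sqrt{s_a}},
\end{aligned}
```

where g is the gradient row selected by indices, g_a and s_a are the matching rows of gradient_accumulator and gradient_squared_accumulator, and k is global_step. Only the rows named in indices are touched, which is what distinguishes this op from the dense ApplyAdagradDA.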
Nested classes/interfaces inherited from class org.bytedeco.javacpp.Pointer:
Pointer.CustomDeallocator, Pointer.Deallocator, Pointer.NativeDeallocator, Pointer.ReferenceCounter
Constructor and Description |
---|
SparseApplyAdagradDA(Pointer p): Pointer cast constructor. |
SparseApplyAdagradDA(Scope scope, Input var, Input gradient_accumulator, Input gradient_squared_accumulator, Input grad, Input indices, Input lr, Input l1, Input l2, Input global_step) |
SparseApplyAdagradDA(Scope scope, Input var, Input gradient_accumulator, Input gradient_squared_accumulator, Input grad, Input indices, Input lr, Input l1, Input l2, Input global_step, SparseApplyAdagradDA.Attrs attrs) |
Modifier and Type | Method and Description |
---|---|
Input | asInput() |
Output | asOutput() |
Node | node() |
Operation | operation() |
SparseApplyAdagradDA | operation(Operation setter) |
Output | out() |
SparseApplyAdagradDA | out(Output setter) |
static SparseApplyAdagradDA.Attrs | UseLocking(boolean x) |
Methods inherited from class org.bytedeco.javacpp.Pointer:
address, asBuffer, asByteBuffer, availablePhysicalBytes, calloc, capacity, capacity, close, deallocate, deallocate, deallocateReferences, deallocator, deallocator, equals, fill, formatBytes, free, getDirectBufferAddress, getPointer, getPointer, getPointer, getPointer, hashCode, interruptDeallocatorThread, isNull, isNull, limit, limit, malloc, maxBytes, maxPhysicalBytes, memchr, memcmp, memcpy, memmove, memset, offsetAddress, offsetof, offsetof, parseBytes, physicalBytes, physicalBytesInaccurate, position, position, put, realloc, referenceCount, releaseReference, retainReference, setNull, sizeof, sizeof, toString, totalBytes, totalCount, totalPhysicalBytes, withDeallocator, zero
public SparseApplyAdagradDA(Pointer p)
Pointer cast constructor. See also: Pointer(Pointer)
public SparseApplyAdagradDA(@Const @ByRef Scope scope, @ByVal Input var, @ByVal Input gradient_accumulator, @ByVal Input gradient_squared_accumulator, @ByVal Input grad, @ByVal Input indices, @ByVal Input lr, @ByVal Input l1, @ByVal Input l2, @ByVal Input global_step)
public SparseApplyAdagradDA(@Const @ByRef Scope scope, @ByVal Input var, @ByVal Input gradient_accumulator, @ByVal Input gradient_squared_accumulator, @ByVal Input grad, @ByVal Input indices, @ByVal Input lr, @ByVal Input l1, @ByVal Input l2, @ByVal Input global_step, @Const @ByRef SparseApplyAdagradDA.Attrs attrs)
public Node node()
@ByVal public static SparseApplyAdagradDA.Attrs UseLocking(@Cast(value="bool") boolean x)
public SparseApplyAdagradDA operation(Operation setter)
public SparseApplyAdagradDA out(Output setter)
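To make the summary above concrete, here is a minimal plain-Java sketch of the sparse update this op wraps. It has no JavaCPP or TensorFlow dependency; SparseAdagradDASketch and its apply method are hypothetical names, and the thresholded dual-averaging formula is an assumption based on the AdagradDA scheme rather than the authoritative kernel.

```java
// Plain-Java sketch of a sparse AdagradDA-style update (no JavaCPP/TensorFlow
// dependency). The formula is an assumption based on the AdagradDA
// dual-averaging scheme, not a transcription of the TensorFlow kernel.
public class SparseAdagradDASketch {

    // Applies the update to the rows of var named by indices; grad[i] is the
    // gradient for row indices[i]. Rows not listed in indices are untouched,
    // which is what makes the op "sparse".
    static void apply(double[] var, double[] gAcc, double[] ggAcc,
                      double[] grad, int[] indices,
                      double lr, double l1, double l2, long globalStep) {
        for (int i = 0; i < indices.length; i++) {
            int row = indices[i];
            double g = grad[i];
            gAcc[row] += g;          // gradient_accumulator += grad
            ggAcc[row] += g * g;     // gradient_squared_accumulator += grad^2
            double denom = l2 * globalStep * lr + Math.sqrt(ggAcc[row]);
            double num = Math.max(Math.abs(gAcc[row]) - l1 * globalStep, 0.0);
            var[row] = -Math.signum(gAcc[row]) * lr * num / denom;
        }
    }

    public static void main(String[] args) {
        double[] var = {0, 0, 0}, gAcc = {0, 0, 0}, ggAcc = {0, 0, 0};
        // One sparse step touching only row 1, with l1 = l2 = 0.
        apply(var, gAcc, ggAcc, new double[]{1.0}, new int[]{1}, 0.1, 0.0, 0.0, 1);
        System.out.println(var[1]);  // row 1 moves; rows 0 and 2 stay at 0
    }
}
```

Note how l1 acts as a soft threshold on the accumulated gradient: once |g_a| falls below l1 * global_step, the numerator clamps to zero and the variable row is driven to exactly 0, which is the sparsity-inducing behavior of dual averaging.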
Copyright © 2022. All rights reserved.