@Namespace(value="tensorflow::ops") @NoOffset @Properties(inherit=tensorflow.class) public class ResourceApplyAdagradDA extends Pointer
Optional attributes (see `Attrs`):
* use_locking: If True, updating of the var and accum tensors will be protected by
  a lock; otherwise the behavior is undefined, but may exhibit less contention.

Returns:
* the created Operation
Modifier and Type | Class and Description |
---|---|
static class | ResourceApplyAdagradDA.Attrs — Optional attribute setters for ResourceApplyAdagradDA |
Nested classes/interfaces inherited from class org.bytedeco.javacpp.Pointer:
Pointer.CustomDeallocator, Pointer.Deallocator, Pointer.NativeDeallocator, Pointer.ReferenceCounter
Constructor and Description |
---|
ResourceApplyAdagradDA(Pointer p) — Pointer cast constructor. |
ResourceApplyAdagradDA(Scope scope, Input var, Input gradient_accumulator, Input gradient_squared_accumulator, Input grad, Input lr, Input l1, Input l2, Input global_step) |
ResourceApplyAdagradDA(Scope scope, Input var, Input gradient_accumulator, Input gradient_squared_accumulator, Input grad, Input lr, Input l1, Input l2, Input global_step, ResourceApplyAdagradDA.Attrs attrs) |
Modifier and Type | Method and Description |
---|---|
Operation | asOperation() |
Operation | operation() |
ResourceApplyAdagradDA | operation(Operation setter) |
static ResourceApplyAdagradDA.Attrs | UseLocking(boolean x) |
Methods inherited from class org.bytedeco.javacpp.Pointer:
address, asBuffer, asByteBuffer, availablePhysicalBytes, calloc, capacity, capacity, close, deallocate, deallocate, deallocateReferences, deallocator, deallocator, equals, fill, formatBytes, free, getDirectBufferAddress, getPointer, getPointer, getPointer, getPointer, hashCode, interruptDeallocatorThread, isNull, isNull, limit, limit, malloc, maxBytes, maxPhysicalBytes, memchr, memcmp, memcpy, memmove, memset, offsetAddress, offsetof, offsetof, parseBytes, physicalBytes, physicalBytesInaccurate, position, position, put, realloc, referenceCount, releaseReference, retainReference, setNull, sizeof, sizeof, toString, totalBytes, totalCount, totalPhysicalBytes, withDeallocator, zero
public ResourceApplyAdagradDA(Pointer p)
Pointer cast constructor. See also: Pointer(Pointer)
public ResourceApplyAdagradDA(@Const @ByRef Scope scope, @ByVal Input var, @ByVal Input gradient_accumulator, @ByVal Input gradient_squared_accumulator, @ByVal Input grad, @ByVal Input lr, @ByVal Input l1, @ByVal Input l2, @ByVal Input global_step)
public ResourceApplyAdagradDA(@Const @ByRef Scope scope, @ByVal Input var, @ByVal Input gradient_accumulator, @ByVal Input gradient_squared_accumulator, @ByVal Input grad, @ByVal Input lr, @ByVal Input l1, @ByVal Input l2, @ByVal Input global_step, @Const @ByRef ResourceApplyAdagradDA.Attrs attrs)
@ByVal public static ResourceApplyAdagradDA.Attrs UseLocking(@Cast(value="bool") boolean x)
public ResourceApplyAdagradDA operation(Operation setter)
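This class is a binding to the native op that updates `var` according to the AdagradDA (adaptive gradient with dual averaging) scheme. As a rough orientation on what the op computes per element, here is a minimal plain-Java sketch of the update rule; the formula is assumed from the TensorFlow ApplyAdagradDA kernel, and the class and field names below are illustrative, not part of this API:

```java
// Sketch of the per-element AdagradDA update this op is believed to perform
// (assumed from the TensorFlow ApplyAdagradDA kernel; not part of this binding).
public class AdagradDASketch {
    double var;               // the variable being optimized
    double gradAccum;         // running sum of gradients (gradient_accumulator)
    double gradSquaredAccum;  // running sum of squared gradients (gradient_squared_accumulator)

    void step(double grad, double lr, double l1, double l2, long globalStep) {
        gradAccum += grad;
        gradSquaredAccum += grad * grad;
        // l1 shrinkage on the accumulated gradient (proximal step)
        double shrunk = l1 > 0
                ? Math.signum(gradAccum) * Math.max(Math.abs(gradAccum) - l1 * globalStep, 0)
                : gradAccum;
        // Dual averaging: var is recomputed from the accumulators each step
        var = -lr * shrunk / (l2 * globalStep * lr + Math.sqrt(gradSquaredAccum));
    }

    public static void main(String[] args) {
        AdagradDASketch s = new AdagradDASketch();
        s.step(1.0, 0.1, 0.0, 0.0, 1);
        System.out.println(s.var); // -0.1 on this first step
    }
}
```

In the actual op, the accumulators and `var` are resource tensors updated in place; with `UseLocking(true)` those updates are guarded by a lock, as described in the attribute documentation above.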
Copyright © 2022. All rights reserved.