Package | Description |
---|---|
org.tensorflow | Defines classes to build, save, load and execute TensorFlow models. |
org.tensorflow.op | |
org.tensorflow.op.audio | |
org.tensorflow.op.bitwise | |
org.tensorflow.op.collective | |
org.tensorflow.op.core | |
org.tensorflow.op.data | |
org.tensorflow.op.dtypes | |
org.tensorflow.op.image | |
org.tensorflow.op.io | |
org.tensorflow.op.linalg | |
org.tensorflow.op.math | |
org.tensorflow.op.nn | |
org.tensorflow.op.quantization | |
org.tensorflow.op.random | |
org.tensorflow.op.signal | |
org.tensorflow.op.sparse | |
org.tensorflow.op.strings | |
org.tensorflow.op.summary | |
org.tensorflow.op.train | |
Modifier and Type | Method and Description |
---|---|
Output<?>[] | Graph.addGradients(Output<?> y, Output<?>[] x) Adds operations to compute the partial derivatives of the sum of ys w.r.t. xs, i.e., dy/dx_1, dy/dx_2... |
Output<?>[] | Graph.addGradients(String prefix, Output<?>[] y, Output<?>[] x, Output<?>[] dx) Adds operations to compute the partial derivatives of the sum of ys w.r.t. xs, i.e., d(y_1 + y_2 + ...)/dx_1, d(y_1 + y_2 + ...)/dx_2... |
Output<T> | Operand.asOutput() Returns the symbolic handle of a tensor. |
Output<T> | Output.asOutput() |
<T> Output<T> | Operation.output(int idx) Returns a symbolic handle to one of the tensors produced by this operation. |
Output<?>[] | Operation.outputList(int idx, int length) Returns symbolic handles to a list of tensors produced by this operation. |
Output<?>[] | Graph.whileLoop(Output<?>[] inputs, Graph.WhileSubgraphBuilder cgBuilder, Graph.WhileSubgraphBuilder bgBuilder, String name) Builds a while loop. |
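The gradient entry points above can be combined as in the following sketch. It assumes the TensorFlow Java 1.x library (the `org.tensorflow` artifact and its native binding) is on the classpath; the class name, node names, and the choice of a `Square` op are illustrative, not part of the API.

```java
import org.tensorflow.DataType;
import org.tensorflow.Graph;
import org.tensorflow.Output;
import org.tensorflow.Session;
import org.tensorflow.Tensor;
import org.tensorflow.Tensors;

public class GradientsExample {
  // Builds y = x^2, asks the graph for dy/dx via addGradients,
  // then evaluates the gradient at x = 3.
  static float run() {
    try (Graph g = new Graph()) {
      Output<Float> x = g.opBuilder("Placeholder", "x")
          .setAttr("dtype", DataType.FLOAT)
          .build().output(0);
      Output<Float> y = g.opBuilder("Square", "y")
          .addInput(x)
          .build().output(0);

      // One gradient output is returned per entry of the x array.
      Output<?>[] grads = g.addGradients(y, new Output<?>[] {x});

      try (Session s = new Session(g);
           Tensor<Float> in = Tensors.create(3.0f);
           Tensor<?> dydx = s.runner().feed(x, in).fetch(grads[0]).run().get(0)) {
        return dydx.floatValue();  // dy/dx = 2x
      }
    }
  }

  public static void main(String[] args) {
    System.out.println(GradientsExample.run());
  }
}
```

Note that `addGradients` returns symbolic handles, so the gradient is itself a graph node that must be fetched through a `Session` to obtain a value.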
Modifier and Type | Method and Description |
---|---|
Output<?>[] | Graph.addGradients(Output<?> y, Output<?>[] x) Adds operations to compute the partial derivatives of the sum of ys w.r.t. xs, i.e., dy/dx_1, dy/dx_2... |
Output<?>[] | Graph.addGradients(String prefix, Output<?>[] y, Output<?>[] x, Output<?>[] dx) Adds operations to compute the partial derivatives of the sum of ys w.r.t. xs, i.e., d(y_1 + y_2 + ...)/dx_1, d(y_1 + y_2 + ...)/dx_2... |
GraphOperationBuilder | GraphOperationBuilder.addInput(Output<?> input) |
OperationBuilder | OperationBuilder.addInput(Output<?> input) Adds the output of another operation as the next input of the operation being built. |
GraphOperationBuilder | GraphOperationBuilder.addInputList(Output<?>[] inputs) |
OperationBuilder | OperationBuilder.addInputList(Output<?>[] inputs) Adds the outputs of another operation as the next inputs of the operation being built. |
void | Graph.WhileSubgraphBuilder.buildSubgraph(Graph g, Output<?>[] inputs, Output<?>[] outputs) To be overridden by the user with code that builds the condition or body subgraph of a while loop. |
Session.Runner | Session.Runner.fetch(Output<?> output) Makes Session.Runner.run() return the Tensor referred to by output. |
Output<?>[] | Graph.whileLoop(Output<?>[] inputs, Graph.WhileSubgraphBuilder cgBuilder, Graph.WhileSubgraphBuilder bgBuilder, String name) Builds a while loop. |
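A minimal sketch of how `OperationBuilder.addInput` and `Session.Runner.fetch` fit together, again assuming the TensorFlow Java 1.x library is available; the class and node names are illustrative.

```java
import org.tensorflow.Graph;
import org.tensorflow.Output;
import org.tensorflow.Session;
import org.tensorflow.Tensor;
import org.tensorflow.Tensors;

public class AddInputFetchExample {
  // Builds c = a + b with OperationBuilder.addInput, then fetches c
  // by its Output handle rather than by operation name.
  static int run() {
    try (Graph g = new Graph();
         Tensor<Integer> ta = Tensors.create(40);
         Tensor<Integer> tb = Tensors.create(2)) {
      Output<Integer> a = g.opBuilder("Const", "a")
          .setAttr("dtype", ta.dataType()).setAttr("value", ta)
          .build().output(0);
      Output<Integer> b = g.opBuilder("Const", "b")
          .setAttr("dtype", tb.dataType()).setAttr("value", tb)
          .build().output(0);
      Output<Integer> c = g.opBuilder("Add", "c")
          .addInput(a)   // next input of the op being built
          .addInput(b)
          .build().output(0);
      try (Session s = new Session(g);
           Tensor<?> result = s.runner().fetch(c).run().get(0)) {
        return result.intValue();
      }
    }
  }

  public static void main(String[] args) {
    System.out.println(run());
  }
}
```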
Modifier and Type | Method and Description |
---|---|
static Output<?>[] | Operands.asOutputs(Iterable<? extends Operand<?>> inputs) |
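Since Output implements Operand, Operands.asOutputs can flatten any collection of operands into the Output<?>[] arrays that methods such as Graph.addGradients expect. A small sketch, assuming TensorFlow Java 1.x; names are illustrative.

```java
import java.util.Arrays;
import org.tensorflow.Graph;
import org.tensorflow.Operand;
import org.tensorflow.Output;
import org.tensorflow.Tensor;
import org.tensorflow.Tensors;
import org.tensorflow.op.Operands;

public class AsOutputsExample {
  static int run() {
    try (Graph g = new Graph();
         Tensor<Integer> t = Tensors.create(1)) {
      Output<Integer> a = g.opBuilder("Const", "a")
          .setAttr("dtype", t.dataType()).setAttr("value", t)
          .build().output(0);
      Output<Integer> b = g.opBuilder("Const", "b")
          .setAttr("dtype", t.dataType()).setAttr("value", t)
          .build().output(0);
      // Output implements Operand, so a list of outputs is a valid argument.
      Output<?>[] outs = Operands.asOutputs(Arrays.<Operand<?>>asList(a, b));
      return outs.length;
    }
  }

  public static void main(String[] args) {
    System.out.println(run());
  }
}
```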
Modifier and Type | Method and Description |
---|---|
Output<Float> | AudioSpectrogram.asOutput() |
Output<String> | EncodeWav.asOutput() |
Output<Float> | Mfcc.asOutput() |
Output<Float> | DecodeWav.audio() 2-D with shape `[length, channels]`. |
Output<String> | EncodeWav.contents() 0-D. |
Output<Float> | Mfcc.output() |
Output<Integer> | DecodeWav.sampleRate() Scalar holding the sample rate found in the WAV header. |
Output<Float> | AudioSpectrogram.spectrogram() 3-D representation of the audio frequencies as an image. |
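The EncodeWav/DecodeWav pair round-trips naturally: encode a `[length, channels]` float signal to WAV bytes, decode it, and read the sample rate back from the header. A sketch under the assumption that TensorFlow Java 1.x and its `org.tensorflow.op.Ops` API are available; the class name and the tiny 3-sample signal are illustrative.

```java
import org.tensorflow.Graph;
import org.tensorflow.Session;
import org.tensorflow.Tensor;
import org.tensorflow.op.Ops;
import org.tensorflow.op.audio.DecodeWav;
import org.tensorflow.op.audio.EncodeWav;

public class WavRoundTrip {
  // Encodes a tiny mono signal to WAV bytes, decodes it back, and returns
  // the sample rate recovered from the WAV header.
  static int run() {
    try (Graph g = new Graph()) {
      Ops tf = Ops.create(g);
      // audio is [length, channels] = [3, 1]; values are illustrative.
      EncodeWav enc = EncodeWav.create(tf.scope(),
          tf.constant(new float[][] {{0.0f}, {0.5f}, {-0.5f}}),
          tf.constant(16000));
      DecodeWav dec = DecodeWav.create(tf.scope(), enc.contents());
      try (Session s = new Session(g);
           Tensor<?> rate = s.runner().fetch(dec.sampleRate()).run().get(0)) {
        return rate.intValue();
      }
    }
  }

  public static void main(String[] args) {
    System.out.println(run());
  }
}
```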
Modifier and Type | Method and Description |
---|---|
Output<T> | BitwiseAnd.asOutput() |
Output<T> | BitwiseOr.asOutput() |
Output<T> | BitwiseXor.asOutput() |
Output<T> | Invert.asOutput() |
Output<T> | LeftShift.asOutput() |
Output<T> | RightShift.asOutput() |
Output<T> | Invert.y() |
Output<T> | BitwiseAnd.z() |
Output<T> | BitwiseOr.z() |
Output<T> | BitwiseXor.z() |
Output<T> | LeftShift.z() |
Output<T> | RightShift.z() |
Modifier and Type | Method and Description |
---|---|
Output<T> | AllReduce.asOutput() |
Output<T> | BroadcastRecv.asOutput() |
Output<T> | BroadcastSend.asOutput() |
Output<T> | AllReduce.data() |
Output<T> | BroadcastRecv.data() |
Output<T> | BroadcastSend.data() |
Modifier and Type | Method and Description |
---|---|
Output<Float> | RetrieveTPUEmbeddingAdadeltaParameters.accumulators() Parameter accumulators updated by the Adadelta optimization algorithm. |
Output<Float> | RetrieveTPUEmbeddingAdadeltaParametersGradAccumDebug.accumulators() Parameter accumulators updated by the Adadelta optimization algorithm. |
Output<Float> | RetrieveTPUEmbeddingAdagradParameters.accumulators() Parameter accumulators updated by the Adagrad optimization algorithm. |
Output<Float> | RetrieveTPUEmbeddingAdagradParametersGradAccumDebug.accumulators() Parameter accumulators updated by the Adagrad optimization algorithm. |
Output<Float> | RetrieveTPUEmbeddingFTRLParameters.accumulators() Parameter accumulators updated by the FTRL optimization algorithm. |
Output<Float> | RetrieveTPUEmbeddingFTRLParametersGradAccumDebug.accumulators() Parameter accumulators updated by the FTRL optimization algorithm. |
Output<Float> | RetrieveTPUEmbeddingMDLAdagradLightParameters.accumulators() Parameter accumulators updated by the MDL Adagrad Light optimization algorithm. |
Output<Float> | RetrieveTPUEmbeddingProximalAdagradParameters.accumulators() Parameter accumulators updated by the proximal Adagrad optimization algorithm. |
Output<Float> | RetrieveTPUEmbeddingProximalAdagradParametersGradAccumDebug.accumulators() Parameter accumulators updated by the proximal Adagrad optimization algorithm. |
Output<Boolean> | All.asOutput() |
Output<T> | AllToAll.asOutput() |
Output<Boolean> | Any.asOutput() |
Output<T> | ApplyAdagradV2.asOutput() |
Output<Object> | AssertNextDataset.asOutput() |
Output<T> | Assign.asOutput() |
Output<T> | AssignAdd.asOutput() |
Output<T> | AssignSub.asOutput() |
Output<Object> | AutoShardDataset.asOutput() |
Output<String> | Barrier.asOutput() |
Output<Integer> | BarrierIncompleteSize.asOutput() |
Output<Integer> | BarrierReadySize.asOutput() |
Output<T> | BatchMatMulV2.asOutput() |
Output<T> | BatchToSpace.asOutput() |
Output<T> | BatchToSpaceNd.asOutput() |
Output<U> | Bitcast.asOutput() |
Output<Float> | BoostedTreesAggregateStats.asOutput() |
Output<Boolean> | BoostedTreesCenterBias.asOutput() |
Output<Object> | BoostedTreesEnsembleResourceHandleOp.asOutput() |
Output<String> | BoostedTreesExampleDebugOutputs.asOutput() |
Output<Float> | BoostedTreesMakeStatsSummary.asOutput() |
Output<Float> | BoostedTreesPredict.asOutput() |
Output<Object> | BoostedTreesQuantileStreamResourceHandleOp.asOutput() |
Output<T> | BroadcastDynamicShape.asOutput() |
Output<T> | BroadcastTo.asOutput() |
Output<Integer> | Bucketize.asOutput() |
Output<Object> | CacheDatasetV2.asOutput() |
Output<Object> | ChooseFastestDataset.asOutput() |
Output<T> | ClipByValue.asOutput() |
Output<T> | CollectiveGather.asOutput() |
Output<T> | CollectivePermute.asOutput() |
Output<T> | Concat.asOutput() |
Output<String> | ConfigureDistributedTPU.asOutput() |
Output<T> | Constant.asOutput() |
Output<T> | CountUpTo.asOutput() |
Output<T> | CrossReplicaSum.asOutput() |
Output<Object> | CSVDataset.asOutput() |
Output<T> | CudnnRNNCanonicalToParamsV2.asOutput() |
Output<T> | CumulativeLogsumexp.asOutput() |
Output<Long> | DatasetCardinality.asOutput() |
Output<Object> | DatasetFromGraph.asOutput() |
Output<T> | DebugGradientIdentity.asOutput() |
Output<T> | DebugGradientRefIdentity.asOutput() |
Output<T> | DecodePaddedRaw.asOutput() |
Output<T> | DeepCopy.asOutput() |
Output<T> | DestroyTemporaryVariable.asOutput() |
Output<Object> | DirectedInterleaveDataset.asOutput() |
Output<T> | DrawBoundingBoxesV2.asOutput() |
Output<T> | DynamicStitch.asOutput() |
Output<Float> | EditDistance.asOutput() |
Output<T> | Einsum.asOutput() |
Output<T> | Empty.asOutput() |
Output<Object> | EmptyTensorList.asOutput() |
Output<String> | EncodeProto.asOutput() |
Output<T> | EnsureShape.asOutput() |
Output<T> | Enter.asOutput() |
Output<T> | EuclideanNorm.asOutput() |
Output<T> | Exit.asOutput() |
Output<T> | ExpandDims.asOutput() |
Output<Object> | ExperimentalAutoShardDataset.asOutput() |
Output<Object> | ExperimentalBytesProducedStatsDataset.asOutput() |
Output<Object> | ExperimentalChooseFastestDataset.asOutput() |
Output<Long> | ExperimentalDatasetCardinality.asOutput() |
Output<Object> | ExperimentalDenseToSparseBatchDataset.asOutput() |
Output<Object> | ExperimentalLatencyStatsDataset.asOutput() |
Output<Object> | ExperimentalMatchingFilesDataset.asOutput() |
Output<Object> | ExperimentalMaxIntraOpParallelismDataset.asOutput() |
Output<Object> | ExperimentalParseExampleDataset.asOutput() |
Output<Object> | ExperimentalPrivateThreadPoolDataset.asOutput() |
Output<Object> | ExperimentalRandomDataset.asOutput() |
Output<Object> | ExperimentalRebatchDataset.asOutput() |
Output<Object> | ExperimentalSetStatsAggregatorDataset.asOutput() |
Output<Object> | ExperimentalSlidingWindowDataset.asOutput() |
Output<Object> | ExperimentalSqlDataset.asOutput() |
Output<Object> | ExperimentalStatsAggregatorHandle.asOutput() |
Output<String> | ExperimentalStatsAggregatorSummary.asOutput() |
Output<Object> | ExperimentalUnbatchDataset.asOutput() |
Output<T> | ExtractVolumePatches.asOutput() |
Output<U> | Fill.asOutput() |
Output<UInt8> | Fingerprint.asOutput() |
Output<T> | Gather.asOutput() |
Output<T> | GatherNd.asOutput() |
Output<Object> | GetSessionHandle.asOutput() |
Output<T> | GetSessionTensor.asOutput() |
Output<T> | GuaranteeConst.asOutput() |
Output<Object> | HashTable.asOutput() |
Output<U> | HistogramFixedWidth.asOutput() |
Output<T> | Identity.asOutput() |
Output<Object> | IgnoreErrorsDataset.asOutput() |
Output<T> | ImmutableConst.asOutput() |
Output<T> | InfeedDequeue.asOutput() |
Output<T> | InplaceAdd.asOutput() |
Output<T> | InplaceSub.asOutput() |
Output<T> | InplaceUpdate.asOutput() |
Output<Boolean> | IsBoostedTreesEnsembleInitialized.asOutput() |
Output<Boolean> | IsBoostedTreesQuantileStreamResourceInitialized.asOutput() |
Output<Boolean> | IsVariableInitialized.asOutput() |
Output<String> | IteratorGetDevice.asOutput() |
Output<Long> | KMC2ChainInitialization.asOutput() |
Output<Float> | KmeansPlusPlusInitialization.asOutput() |
Output<T> | LinSpace.asOutput() |
Output<Object> | LMDBDataset.asOutput() |
Output<U> | LookupTableFind.asOutput() |
Output<Long> | LookupTableSize.asOutput() |
Output<Boolean> | LoopCond.asOutput() |
Output<U> | LowerBound.asOutput() |
Output<Integer> | MapIncompleteSize.asOutput() |
Output<Integer> | MapSize.asOutput() |
Output<T> | MatrixDiagPartV2.asOutput() |
Output<T> | MatrixDiagV2.asOutput() |
Output<T> | MatrixSetDiagV2.asOutput() |
Output<T> | Max.asOutput() |
Output<Object> | MaxIntraOpParallelismDataset.asOutput() |
Output<T> | Min.asOutput() |
Output<T> | MirrorPad.asOutput() |
Output<T> | MirrorPadGrad.asOutput() |
Output<T> | MulNoNan.asOutput() |
Output<Object> | MutableDenseHashTable.asOutput() |
Output<Object> | MutableHashTable.asOutput() |
Output<Object> | MutableHashTableOfTensors.asOutput() |
Output<Object> | Mutex.asOutput() |
Output<Object> | MutexLock.asOutput() |
Output<T> | NcclAllReduce.asOutput() |
Output<T> | NcclBroadcast.asOutput() |
Output<T> | NcclReduce.asOutput() |
Output<T> | NextAfter.asOutput() |
Output<T> | NextIteration.asOutput() |
Output<U> | NonDeterministicInts.asOutput() |
Output<Object> | NonSerializableDataset.asOutput() |
Output<U> | OneHot.asOutput() |
Output<T> | OnesLike.asOutput() |
Output<Integer> | OrderedMapIncompleteSize.asOutput() |
Output<Integer> | OrderedMapSize.asOutput() |
Output<T> | OutfeedDequeue.asOutput() |
Output<T> | Pad.asOutput() |
Output<T> | ParallelConcat.asOutput() |
Output<T> | ParallelDynamicStitch.asOutput() |
Output<T> | Placeholder.asOutput() |
Output<T> | PlaceholderWithDefault.asOutput() |
Output<Object> | Prelinearize.asOutput() |
Output<Object> | PrelinearizeTuple.asOutput() |
Output<Object> | PrivateThreadPoolDataset.asOutput() |
Output<T> | Prod.asOutput() |
Output<U> | RaggedTensorToTensor.asOutput() |
Output<Object> | RaggedTensorToVariant.asOutput() |
Output<T> | Range.asOutput() |
Output<Integer> | Rank.asOutput() |
Output<T> | ReadVariableOp.asOutput() |
Output<Object> | RebatchDataset.asOutput() |
Output<Boolean> | ReduceAll.asOutput() |
Output<Boolean> | ReduceAny.asOutput() |
Output<T> | ReduceMax.asOutput() |
Output<T> | ReduceMin.asOutput() |
Output<T> | ReduceProd.asOutput() |
Output<T> | ReduceSum.asOutput() |
Output<T> | RefEnter.asOutput() |
Output<T> | RefExit.asOutput() |
Output<T> | RefIdentity.asOutput() |
Output<T> | RefNextIteration.asOutput() |
Output<T> | RefSelect.asOutput() |
Output<T> | Reshape.asOutput() |
Output<Integer> | ResourceAccumulatorNumAccumulated.asOutput() |
Output<T> | ResourceAccumulatorTakeGradient.asOutput() |
Output<Object> | ResourceConditionalAccumulator.asOutput() |
Output<T> | ResourceCountUpTo.asOutput() |
Output<U> | ResourceGather.asOutput() |
Output<U> | ResourceGatherNd.asOutput() |
Output<Float> | RetrieveTPUEmbeddingStochasticGradientDescentParameters.asOutput() |
Output<T> | Reverse.asOutput() |
Output<T> | ReverseSequence.asOutput() |
Output<T> | Roll.asOutput() |
Output<String> | Rpc.asOutput() |
Output<Object> | SamplingDataset.asOutput() |
Output<Float> | ScaleAndTranslate.asOutput() |
Output<T> | ScaleAndTranslateGrad.asOutput() |
Output<T> | ScatterAdd.asOutput() |
Output<T> | ScatterDiv.asOutput() |
Output<T> | ScatterMax.asOutput() |
Output<T> | ScatterMin.asOutput() |
Output<T> | ScatterMul.asOutput() |
Output<U> | ScatterNd.asOutput() |
Output<T> | ScatterNdAdd.asOutput() |
Output<T> | ScatterNdNonAliasingAdd.asOutput() |
Output<T> | ScatterNdSub.asOutput() |
Output<T> | ScatterNdUpdate.asOutput() |
Output<T> | ScatterSub.asOutput() |
Output<T> | ScatterUpdate.asOutput() |
Output<T> | SelectV2.asOutput() |
Output<Integer> | SetSize.asOutput() |
Output<U> | Shape.asOutput() |
Output<Object> | ShardDataset.asOutput() |
Output<Object> | ShuffleDatasetV2.asOutput() |
Output<U> | Size.asOutput() |
Output<Object> | SleepDataset.asOutput() |
Output<T> | Slice.asOutput() |
Output<Object> | SlidingWindowDataset.asOutput() |
Output<T> | Snapshot.asOutput() |
Output<Object> | SnapshotDataset.asOutput() |
Output<T> | SpaceToBatchNd.asOutput() |
Output<T> | SparseApplyAdagradV2.asOutput() |
Output<T> | Squeeze.asOutput() |
Output<T> | Stack.asOutput() |
Output<Integer> | StageSize.asOutput() |
Output<V> | StatefulRandomBinomial.asOutput() |
Output<U> | StatefulStandardNormal.asOutput() |
Output<U> | StatefulStandardNormalV2.asOutput() |
Output<U> | StatefulTruncatedNormal.asOutput() |
Output<U> | StatefulUniform.asOutput() |
Output<U> | StatefulUniformFullInt.asOutput() |
Output<U> | StatefulUniformInt.asOutput() |
Output<Object> | StatsAggregatorHandleV2.asOutput() |
Output<T> | StopGradient.asOutput() |
Output<T> | StridedSlice.asOutput() |
Output<T> | StridedSliceAssign.asOutput() |
Output<U> | StridedSliceGrad.asOutput() |
Output<String> | StringLower.asOutput() |
Output<String> | StringUpper.asOutput() |
Output<T> | Sum.asOutput() |
Output<T> | TemporaryVariable.asOutput() |
Output<T> | TensorArrayGather.asOutput() |
Output<T> | TensorArrayPack.asOutput() |
Output<T> | TensorArrayRead.asOutput() |
Output<Float> | TensorArrayScatter.asOutput() |
Output<Integer> | TensorArraySize.asOutput() |
Output<Float> | TensorArraySplit.asOutput() |
Output<Float> | TensorArrayUnpack.asOutput() |
Output<Float> | TensorArrayWrite.asOutput() |
Output<Boolean> | TensorForestTreeIsInitializedOp.asOutput() |
Output<Float> | TensorForestTreePredict.asOutput() |
Output<Object> | TensorForestTreeResourceHandleOp.asOutput() |
Output<String> | TensorForestTreeSerialize.asOutput() |
Output<Integer> | TensorForestTreeSize.asOutput() |
Output<Object> | TensorListConcatLists.asOutput() |
Output<T> | TensorListElementShape.asOutput() |
Output<Object> | TensorListFromTensor.asOutput() |
Output<T> | TensorListGather.asOutput() |
Output<T> | TensorListGetItem.asOutput() |
Output<Integer> | TensorListLength.asOutput() |
Output<Object> | TensorListPushBack.asOutput() |
Output<Object> | TensorListPushBackBatch.asOutput() |
Output<Object> | TensorListReserve.asOutput() |
Output<Object> | TensorListResize.asOutput() |
Output<Object> | TensorListScatter.asOutput() |
Output<Object> | TensorListScatterIntoExistingList.asOutput() |
Output<Object> | TensorListScatterV2.asOutput() |
Output<Object> | TensorListSetItem.asOutput() |
Output<Object> | TensorListSplit.asOutput() |
Output<T> | TensorListStack.asOutput() |
Output<T> | TensorScatterAdd.asOutput() |
Output<T> | TensorScatterSub.asOutput() |
Output<T> | TensorScatterUpdate.asOutput() |
Output<T> | TensorStridedSliceUpdate.asOutput() |
Output<Object> | ThreadPoolDataset.asOutput() |
Output<Object> | ThreadPoolHandle.asOutput() |
Output<T> | Tile.asOutput() |
Output<Double> | Timestamp.asOutput() |
Output<String> | TPUCompilationResult.asOutput() |
Output<Float> | TPUEmbeddingActivations.asOutput() |
Output<Integer> | TPUOrdinalSelector.asOutput() |
Output<T> | TPUReplicatedInput.asOutput() |
Output<T> | TridiagonalMatMul.asOutput() |
Output<T> | TridiagonalSolve.asOutput() |
Output<T> | Unbatch.asOutput() |
Output<T> | UnbatchGrad.asOutput() |
Output<String> | UnicodeEncode.asOutput() |
Output<Object> | UniqueDataset.asOutput() |
Output<T> | UnravelIndex.asOutput() |
Output<String> | UnsortedSegmentJoin.asOutput() |
Output<Object> | UnwrapDatasetVariant.asOutput() |
Output<U> | UpperBound.asOutput() |
Output<Object> | VarHandleOp.asOutput() |
Output<T> | Variable.asOutput() |
Output<T> | VariableShape.asOutput() |
Output<Boolean> | VarIsInitializedOp.asOutput() |
Output<Long> | Where.asOutput() |
Output<T> | Where3.asOutput() |
Output<String> | WorkerHeartbeat.asOutput() |
Output<Object> | WrapDatasetVariant.asOutput() |
Output<T> | Zeros.asOutput() |
Output<T> | ZerosLike.asOutput() |
Output<Object> |
ExperimentalLatencyStatsDataset.asOutput() |
Output<Object> |
ExperimentalDenseToSparseBatchDataset.asOutput() |
Output<Long> |
ExperimentalDatasetCardinality.asOutput() |
Output<Object> |
ExperimentalChooseFastestDataset.asOutput() |
Output<Object> |
ExperimentalBytesProducedStatsDataset.asOutput() |
Output<Object> |
ExperimentalAutoShardDataset.asOutput() |
Output<T> |
ExpandDims.asOutput() |
Output<T> |
Exit.asOutput() |
Output<T> |
EuclideanNorm.asOutput() |
Output<T> |
Enter.asOutput() |
Output<T> |
EnsureShape.asOutput() |
Output<String> |
EncodeProto.asOutput() |
Output<Object> |
EmptyTensorList.asOutput() |
Output<T> |
Empty.asOutput() |
Output<T> |
Einsum.asOutput() |
Output<Float> |
EditDistance.asOutput() |
Output<T> |
DynamicStitch.asOutput() |
Output<T> |
DrawBoundingBoxesV2.asOutput() |
Output<Object> |
DirectedInterleaveDataset.asOutput() |
Output<T> |
DestroyTemporaryVariable.asOutput() |
Output<T> |
DeepCopy.asOutput() |
Output<T> |
DecodePaddedRaw.asOutput() |
Output<T> |
DebugGradientRefIdentity.asOutput() |
Output<T> |
DebugGradientIdentity.asOutput() |
Output<Object> |
DatasetFromGraph.asOutput() |
Output<Long> |
DatasetCardinality.asOutput() |
Output<T> |
CumulativeLogsumexp.asOutput() |
Output<T> |
CudnnRNNCanonicalToParamsV2.asOutput() |
Output<Object> |
CSVDataset.asOutput() |
Output<T> |
CrossReplicaSum.asOutput() |
Output<T> |
CountUpTo.asOutput() |
Output<T> |
Constant.asOutput() |
Output<String> |
ConfigureDistributedTPU.asOutput() |
Output<T> |
Concat.asOutput() |
Output<T> |
CollectivePermute.asOutput() |
Output<T> |
CollectiveGather.asOutput() |
Output<T> |
ClipByValue.asOutput() |
Output<Object> |
ChooseFastestDataset.asOutput() |
Output<Object> |
CacheDatasetV2.asOutput() |
Output<Integer> |
Bucketize.asOutput() |
Output<T> |
BroadcastTo.asOutput() |
Output<T> |
BroadcastDynamicShape.asOutput() |
Output<Object> |
BoostedTreesQuantileStreamResourceHandleOp.asOutput() |
Output<Float> |
BoostedTreesPredict.asOutput() |
Output<Float> |
BoostedTreesMakeStatsSummary.asOutput() |
Output<String> |
BoostedTreesExampleDebugOutputs.asOutput() |
Output<Object> |
BoostedTreesEnsembleResourceHandleOp.asOutput() |
Output<Boolean> |
BoostedTreesCenterBias.asOutput() |
Output<Float> |
BoostedTreesAggregateStats.asOutput() |
Output<U> |
Bitcast.asOutput() |
Output<T> |
BatchToSpaceNd.asOutput() |
Output<T> |
BatchToSpace.asOutput() |
Output<T> |
BatchMatMulV2.asOutput() |
Output<Integer> |
BarrierReadySize.asOutput() |
Output<Integer> |
BarrierIncompleteSize.asOutput() |
Output<String> |
Barrier.asOutput() |
Output<Object> |
AutoShardDataset.asOutput() |
Output<T> |
AssignSub.asOutput() |
Output<T> |
AssignAdd.asOutput() |
Output<T> |
Assign.asOutput() |
Output<Object> |
AssertNextDataset.asOutput() |
Output<T> |
ApplyAdagradV2.asOutput() |
Output<Boolean> |
Any.asOutput() |
Output<T> |
AllToAll.asOutput() |
Output<Boolean> |
All.asOutput() |
Output<T> |
ResourceAccumulatorTakeGradient.average()
The average of the accumulated gradients.
|
Output<T> |
UnbatchGrad.batchedGrad() |
Output<Long> |
Batch.batchIndex() |
Output<U> |
FusedBatchNormV3.batchMean()
A 1D Tensor for the computed batch mean, to be used by TensorFlow
to compute the running mean.
|
Output<U> |
FusedBatchNormV3.batchVariance()
A 1D Tensor for the computed batch variance, to be used by
TensorFlow to compute the running variance.
|
Output<Float> |
RetrieveTPUEmbeddingMDLAdagradLightParameters.benefits()
Parameter benefits updated by the MDL Adagrad Light optimization algorithm.
|
Output<T> |
BlockLSTMGrad.bGrad()
The gradient for b to be back-propped.
|
Output<T> |
BlockLSTMGradV2.bGrad()
The gradient for b to be back-propped.
|
Output<String> |
EncodeProto.bytes()
Tensor of serialized protos with shape `batch_shape`.
|
Output<T> |
GRUBlockCell.c() |
Output<Long> |
DatasetCardinality.cardinality()
The cardinality of `input_dataset`.
|
Output<Long> |
ExperimentalDatasetCardinality.cardinality()
The cardinality of `input_dataset`.
|
Output<Integer> |
UnicodeDecode.charValues()
A 1D int32 Tensor containing the decoded codepoints.
|
Output<T> |
BlockLSTM.ci()
The cell input over the whole time sequence.
|
Output<T> |
BlockLSTMV2.ci()
The cell input over the whole time sequence.
|
Output<T> |
LSTMBlockCell.ci()
The cell input.
|
Output<T> |
BlockLSTM.co()
The cell after the tanh over the whole time sequence.
|
Output<T> |
BlockLSTMV2.co()
The cell after the tanh over the whole time sequence.
|
Output<T> |
LSTMBlockCell.co()
The cell after the tanh.
|
Output<Boolean> |
BoostedTreesCenterBias.continueCentering()
Bool, whether to continue bias centering.
|
Output<V> |
UniqueWithCounts.count()
A 1-D Tensor.
|
Output<T> |
BlockLSTM.cs()
The cell state before the tanh over the whole time sequence.
|
Output<T> |
BlockLSTMV2.cs()
The cell state before the tanh over the whole time sequence.
|
Output<T> |
LSTMBlockCell.cs()
The cell state before the tanh.
|
Output<T> |
BlockLSTMGrad.csPrevGrad()
The gradient of cs_prev to be back-propped.
|
Output<T> |
BlockLSTMGradV2.csPrevGrad()
The gradient of cs_prev to be back-propped.
|
Output<T> |
LSTMBlockCellGrad.csPrevGrad()
The gradient of cs to be back-propped.
|
Output<Integer> |
Skipgram.currentEpoch()
The current epoch number.
|
Output<T> |
CollectiveGather.data() |
Output<T> |
NcclAllReduce.data() |
Output<T> |
NcclReduce.data() |
Output<T> |
GRUBlockCellGrad.dCBar() |
Output<?> |
AnonymousIteratorV2.deleter()
A variant deleter that should be passed into the op that deletes the iterator.
|
Output<?> |
AnonymousMemoryCache.deleter() |
Output<?> |
AnonymousMultiDeviceIterator.deleter()
A variant deleter that should be passed into the op that deletes the iterator.
|
Output<?> |
AnonymousRandomSeedGenerator.deleter() |
Output<String> |
IteratorGetDevice.device() |
Output<Integer> |
TPUOrdinalSelector.deviceOrdinals()
A vector of 1 or more TPU cores.
|
Output<T> |
GRUBlockCellGrad.dHPrev() |
Output<T> |
MatrixDiagPartV2.diagonal()
The extracted diagonal(s).
|
Output<T> |
LSTMBlockCellGrad.dicfo()
The derivative w.r.t. [i, cs, f, o].
|
Output<T> |
GRUBlockCellGrad.dRBarUBar() |
Output<T> |
GRUBlockCellGrad.dX() |
<T> Output<T> |
Gradients.dy(int index)
Returns a symbolic handle to one of the gradient operation outputs.
|
Output<T> |
TensorListElementShape.elementShape() |
Output<?> |
RaggedTensorToVariant.encodedRagged()
A `variant` Tensor containing the encoded `RaggedTensor`.
|
Output<Integer> |
Skipgram.examples()
A vector of word ids.
|
Output<String> |
BoostedTreesExampleDebugOutputs.examplesDebugOutputsSerialized()
Output rank 1 Tensor containing a proto serialized as a string for each example.
|
Output<T> |
BlockLSTM.f()
The forget gate over the whole time sequence.
|
Output<T> |
BlockLSTMV2.f()
The forget gate over the whole time sequence.
|
Output<T> |
LSTMBlockCell.f()
The forget gate.
|
Output<Integer> |
BoostedTreesCalculateBestFeatureSplit.featureDimensions()
A Rank 1 tensor indicating the best feature dimension for each feature to split for certain nodes if the feature is multi-dimensional.
|
Output<Integer> |
BoostedTreesSparseCalculateBestFeatureSplit.featureDimensions()
A Rank 1 tensor indicating the best feature dimension for each feature to split for each node.
|
Output<UInt8> |
Fingerprint.fingerprint()
A two-dimensional `Tensor` of type `tf.uint8`.
|
Output<Float> |
TensorArray.flow()
A scalar used to control gradient flow.
|
Output<Float> |
TensorArrayGrad.flowOut() |
Output<Float> |
TensorArrayGradWithShape.flowOut() |
Output<Float> |
TensorArrayScatter.flowOut()
A float scalar that enforces proper chaining of operations.
|
Output<Float> |
TensorArraySplit.flowOut()
A float scalar that enforces proper chaining of operations.
|
Output<Float> |
TensorArrayUnpack.flowOut() |
Output<Float> |
TensorArrayWrite.flowOut()
A float scalar that enforces proper chaining of operations.
|
Output<Float> |
BoostedTreesCalculateBestFeatureSplit.gains()
A Rank 1 tensor indicating the best gains for each feature to split for certain nodes.
|
Output<Float> |
BoostedTreesSparseCalculateBestFeatureSplit.gains()
A Rank 1 tensor indicating the best gains to split each node.
|
Output<?> |
TensorArrayGrad.gradHandle() |
Output<?> |
TensorArrayGradWithShape.gradHandle() |
Output<Float> |
RetrieveTPUEmbeddingAdadeltaParametersGradAccumDebug.gradientAccumulators()
Parameter gradient_accumulators updated by the Adadelta optimization algorithm.
|
Output<Float> |
RetrieveTPUEmbeddingAdagradParametersGradAccumDebug.gradientAccumulators()
Parameter gradient_accumulators updated by the Adagrad optimization algorithm.
|
Output<Float> |
RetrieveTPUEmbeddingADAMParametersGradAccumDebug.gradientAccumulators()
Parameter gradient_accumulators updated by the ADAM optimization algorithm.
|
Output<Float> |
RetrieveTPUEmbeddingFTRLParametersGradAccumDebug.gradientAccumulators()
Parameter gradient_accumulators updated by the FTRL optimization algorithm.
|
Output<Float> |
RetrieveTPUEmbeddingMomentumParametersGradAccumDebug.gradientAccumulators()
Parameter gradient_accumulators updated by the Momentum optimization algorithm.
|
Output<Float> |
RetrieveTPUEmbeddingProximalAdagradParametersGradAccumDebug.gradientAccumulators()
Parameter gradient_accumulators updated by the proximal Adagrad optimization algorithm.
|
Output<Float> |
RetrieveTPUEmbeddingRMSPropParametersGradAccumDebug.gradientAccumulators()
Parameter gradient_accumulators updated by the RMSProp optimization algorithm.
|
Output<T> |
BlockLSTM.h()
The output h vector over the whole time sequence.
|
Output<T> |
BlockLSTMV2.h()
The output h vector over the whole time sequence.
|
Output<T> |
GRUBlockCell.h() |
Output<T> |
LSTMBlockCell.h()
The output h vector.
|
Output<?> |
AnonymousIteratorV2.handle()
A handle to the iterator that can be passed to a "MakeIterator" or
"IteratorGetNext" op.
|
Output<?> |
AnonymousMemoryCache.handle() |
Output<?> |
AnonymousMultiDeviceIterator.handle()
A handle to a multi device iterator that can be passed to a
"MultiDeviceIteratorGetNextFromShard" op.
|
Output<?> |
AnonymousRandomSeedGenerator.handle() |
Output<?> |
AssertNextDataset.handle() |
Output<?> |
AutoShardDataset.handle() |
Output<String> |
Barrier.handle()
The handle to the barrier.
|
Output<?> |
CacheDatasetV2.handle() |
Output<?> |
ChooseFastestDataset.handle() |
Output<?> |
CSVDataset.handle() |
Output<?> |
DatasetFromGraph.handle()
A variant tensor representing the dataset.
|
Output<?> |
DirectedInterleaveDataset.handle() |
Output<?> |
EmptyTensorList.handle() |
Output<?> |
ExperimentalAutoShardDataset.handle() |
Output<?> |
ExperimentalBytesProducedStatsDataset.handle() |
Output<?> |
ExperimentalChooseFastestDataset.handle() |
Output<?> |
ExperimentalDenseToSparseBatchDataset.handle() |
Output<?> |
ExperimentalLatencyStatsDataset.handle() |
Output<?> |
ExperimentalMatchingFilesDataset.handle() |
Output<?> |
ExperimentalMaxIntraOpParallelismDataset.handle() |
Output<?> |
ExperimentalParseExampleDataset.handle() |
Output<?> |
ExperimentalPrivateThreadPoolDataset.handle() |
Output<?> |
ExperimentalRandomDataset.handle() |
Output<?> |
ExperimentalRebatchDataset.handle() |
Output<?> |
ExperimentalSetStatsAggregatorDataset.handle() |
Output<?> |
ExperimentalSlidingWindowDataset.handle() |
Output<?> |
ExperimentalSqlDataset.handle() |
Output<?> |
ExperimentalStatsAggregatorHandle.handle() |
Output<?> |
ExperimentalUnbatchDataset.handle() |
Output<?> |
GetSessionHandle.handle()
The handle for the tensor stored in the session state, represented
as a ResourceHandle object.
|
Output<?> |
IgnoreErrorsDataset.handle() |
Output<?> |
LMDBDataset.handle() |
Output<?> |
MaxIntraOpParallelismDataset.handle() |
Output<?> |
NonSerializableDataset.handle() |
Output<?> |
PrivateThreadPoolDataset.handle() |
Output<?> |
RebatchDataset.handle() |
Output<?> |
ResourceConditionalAccumulator.handle()
The handle to the accumulator.
|
Output<?> |
SamplingDataset.handle() |
Output<?> |
ShardDataset.handle() |
Output<?> |
ShuffleDatasetV2.handle() |
Output<?> |
SleepDataset.handle() |
Output<?> |
SlidingWindowDataset.handle() |
Output<?> |
SnapshotDataset.handle() |
Output<?> |
StatsAggregatorHandleV2.handle() |
Output<?> |
TensorArray.handle()
The handle to the TensorArray.
|
Output<?> |
TensorListReserve.handle() |
Output<?> |
ThreadPoolDataset.handle() |
Output<?> |
ThreadPoolHandle.handle()
A resource that can be consumed by one or more ExperimentalThreadPoolDataset
ops.
|
Output<?> |
UniqueDataset.handle() |
Output<?> |
CudnnRNNV3.hostReserved() |
Output<T> |
BlockLSTMGrad.hPrevGrad()
The gradient of h_prev to be back-propped.
|
Output<T> |
BlockLSTMGradV2.hPrevGrad()
The gradient of h_prev to be back-propped.
|
Output<T> |
BlockLSTM.i()
The input gate over the whole time sequence.
|
Output<T> |
BlockLSTMV2.i()
The input gate over the whole time sequence.
|
Output<T> |
LSTMBlockCell.i()
The input gate.
|
Output<Long> |
Batch.id() |
Output<U> |
SetDiff1d.idx()
1-D.
|
Output<V> |
Unique.idx()
A 1-D Tensor.
|
Output<V> |
UniqueWithCounts.idx()
A 1-D Tensor.
|
Output<Long> |
KMC2ChainInitialization.index()
Scalar with the index of the sampled point.
|
Output<Long> |
Where.index() |
Output<Long> |
BarrierTakeMany.indices()
A one-dimensional tensor of indices, with length num_elems.
|
Output<T> |
CudnnRNNBackpropV3.inputBackprop() |
Output<T> |
CudnnRNNBackpropV3.inputCBackprop() |
Output<T> |
CudnnRNNBackpropV3.inputHBackprop() |
Output<Boolean> |
IsBoostedTreesEnsembleInitialized.isInitialized()
Output boolean indicating whether the ensemble is initialized.
|
Output<Boolean> |
IsBoostedTreesQuantileStreamResourceInitialized.isInitialized()
bool; True if the resource is initialized, False otherwise.
|
Output<Boolean> |
IsVariableInitialized.isInitialized() |
Output<Boolean> |
TensorForestTreeIsInitializedOp.isInitialized()
Whether the tree is initialized.
|
Output<Boolean> |
VarIsInitializedOp.isInitialized()
a scalar boolean which is true if the variable has been
initialized.
|
Output<T> |
TensorListGetItem.item() |
Output<Long> |
MapUnstageNoKey.key() |
Output<Long> |
OrderedMapUnstageNoKey.key() |
Output<String> |
BarrierTakeMany.keys()
A one-dimensional tensor of keys, with length num_elements.
|
Output<T> |
LookupTableExport.keys()
Vector of all keys present in the table.
|
Output<Integer> |
Skipgram.labels()
A vector of word ids.
|
Output<Integer> |
BoostedTreesGetEnsembleStates.lastLayerNodesRange()
A rank 2 tensor containing the start and end ids of the nodes in the latest
layer.
|
Output<Float> |
BoostedTreesCalculateBestFeatureSplit.leftNodeContribs()
A Rank 2 tensor indicating the contribution of the left nodes when branching from parent nodes (given by the tensor element in the output node_ids_list) to the left direction by the given threshold for each feature.
|
Output<Float> |
BoostedTreesSparseCalculateBestFeatureSplit.leftNodeContribs()
A Rank 2 tensor indicating the contribution of the left nodes when branching from parent nodes to the left direction by the given threshold for each feature.
|
Output<Integer> |
TensorListLength.length() |
Output<Long> |
TensorArrayConcat.lengths()
A vector of the row sizes of the original T elements in the
value output.
|
Output<Long> |
TensorListConcat.lengths() |
Output<Long> |
TensorListConcatV2.lengths() |
Output<Float> |
RetrieveTPUEmbeddingFTRLParameters.linears()
Parameter linears updated by the FTRL optimization algorithm.
|
Output<Float> |
RetrieveTPUEmbeddingFTRLParametersGradAccumDebug.linears()
Parameter linears updated by the FTRL optimization algorithm.
|
Output<Float> |
BoostedTreesPredict.logits()
Output rank 2 Tensor containing logits for each example.
|
Output<Float> |
TensorForestTreePredict.logits()
The logits predictions from the tree for each instance in the batch.
|
Output<T> |
Lu.lu()
A tensor of shape `[..., M, M]` whose strictly lower triangular part denotes the
lower triangular factor `L` with unit diagonal, and whose upper triangular part
denotes the upper triangular factor `U`.
|
Output<Float> |
QuantizedMatMulWithBias.maxOut()
The float value that the highest quantized output value represents.
|
Output<Float> |
QuantizedMatMulWithBiasAndRelu.maxOut()
The float value that the highest quantized output value represents.
|
Output<Float> |
QuantizedMatMulWithBiasAndReluAndRequantize.maxOut()
The float value that the highest quantized output value represents.
|
Output<Float> |
QuantizedConv2DAndRelu.maxOutput() |
Output<Float> |
QuantizedConv2DAndReluAndRequantize.maxOutput() |
Output<Float> |
QuantizedConv2DAndRequantize.maxOutput() |
Output<Float> |
QuantizedConv2DPerChannel.maxOutput()
The maximum value of the final output tensor.
|
Output<Float> |
QuantizedConv2DWithBias.maxOutput() |
Output<Float> |
QuantizedConv2DWithBiasAndRelu.maxOutput() |
Output<Float> |
QuantizedConv2DWithBiasAndReluAndRequantize.maxOutput() |
Output<Float> |
QuantizedConv2DWithBiasAndRequantize.maxOutput() |
Output<Float> |
QuantizedConv2DWithBiasSignedSumAndReluAndRequantize.maxOutput() |
Output<Float> |
QuantizedConv2DWithBiasSumAndRelu.maxOutput() |
Output<Float> |
QuantizedConv2DWithBiasSumAndReluAndRequantize.maxOutput() |
Output<Float> |
QuantizedDepthwiseConv2D.maxOutput()
The float value that the maximum quantized output value represents.
|
Output<Float> |
QuantizedDepthwiseConv2DWithBias.maxOutput()
The float value that the maximum quantized output value represents.
|
Output<Float> |
QuantizedDepthwiseConv2DWithBiasAndRelu.maxOutput()
The float value that the maximum quantized output value represents.
|
Output<Float> |
QuantizedDepthwiseConv2DWithBiasAndReluAndRequantize.maxOutput()
The float value that the maximum quantized output value represents.
|
Output<T> |
DynamicStitch.merged() |
Output<T> |
ParallelDynamicStitch.merged() |
Output<Float> |
RetrieveTPUEmbeddingCenteredRMSPropParameters.mg()
Parameter mg updated by the centered RMSProp optimization algorithm.
|
Output<Float> |
QuantizedMatMulWithBias.minOut()
The float value that the lowest quantized output value represents.
|
Output<Float> |
QuantizedMatMulWithBiasAndRelu.minOut()
The float value that the lowest quantized output value represents.
|
Output<Float> |
QuantizedMatMulWithBiasAndReluAndRequantize.minOut()
The float value that the lowest quantized output value represents.
|
Output<Float> |
QuantizedConv2DAndRelu.minOutput() |
Output<Float> |
QuantizedConv2DAndReluAndRequantize.minOutput() |
Output<Float> |
QuantizedConv2DAndRequantize.minOutput() |
Output<Float> |
QuantizedConv2DPerChannel.minOutput()
The minimum value of the final output tensor.
|
Output<Float> |
QuantizedConv2DWithBias.minOutput() |
Output<Float> |
QuantizedConv2DWithBiasAndRelu.minOutput() |
Output<Float> |
QuantizedConv2DWithBiasAndReluAndRequantize.minOutput() |
Output<Float> |
QuantizedConv2DWithBiasAndRequantize.minOutput() |
Output<Float> |
QuantizedConv2DWithBiasSignedSumAndReluAndRequantize.minOutput() |
Output<Float> |
QuantizedConv2DWithBiasSumAndRelu.minOutput() |
Output<Float> |
QuantizedConv2DWithBiasSumAndReluAndRequantize.minOutput() |
Output<Float> |
QuantizedDepthwiseConv2D.minOutput()
The float value that the minimum quantized output value represents.
|
Output<Float> |
QuantizedDepthwiseConv2DWithBias.minOutput()
The float value that the minimum quantized output value represents.
|
Output<Float> |
QuantizedDepthwiseConv2DWithBiasAndRelu.minOutput()
The float value that the minimum quantized output value represents.
|
Output<Float> |
QuantizedDepthwiseConv2DWithBiasAndReluAndRequantize.minOutput()
The float value that the minimum quantized output value represents.
|
Output<Float> |
RetrieveTPUEmbeddingCenteredRMSPropParameters.mom()
Parameter mom updated by the centered RMSProp optimization algorithm.
|
Output<Float> |
RetrieveTPUEmbeddingRMSPropParameters.mom()
Parameter mom updated by the RMSProp optimization algorithm.
|
Output<Float> |
RetrieveTPUEmbeddingRMSPropParametersGradAccumDebug.mom()
Parameter mom updated by the RMSProp optimization algorithm.
|
Output<Float> |
RetrieveTPUEmbeddingADAMParameters.momenta()
Parameter momenta updated by the ADAM optimization algorithm.
|
Output<Float> |
RetrieveTPUEmbeddingADAMParametersGradAccumDebug.momenta()
Parameter momenta updated by the ADAM optimization algorithm.
|
Output<Float> |
RetrieveTPUEmbeddingMomentumParameters.momenta()
Parameter momenta updated by the Momentum optimization algorithm.
|
Output<Float> |
RetrieveTPUEmbeddingMomentumParametersGradAccumDebug.momenta()
Parameter momenta updated by the Momentum optimization algorithm.
|
Output<Float> |
RetrieveTPUEmbeddingCenteredRMSPropParameters.ms()
Parameter ms updated by the centered RMSProp optimization algorithm.
|
Output<Float> |
RetrieveTPUEmbeddingRMSPropParameters.ms()
Parameter ms updated by the RMSProp optimization algorithm.
|
Output<Float> |
RetrieveTPUEmbeddingRMSPropParametersGradAccumDebug.ms()
Parameter ms updated by the RMSProp optimization algorithm.
|
Output<?> |
MutexLock.mutexLock()
A tensor that keeps a shared pointer to a lock on the mutex;
when the Tensor is destroyed, the use count on the shared pointer is decreased
by 1.
|
Output<Float> |
NearestNeighbors.nearestCenterDistances()
Matrix of shape (n, min(m, k)).
|
Output<Long> |
NearestNeighbors.nearestCenterIndices()
Matrix of shape (n, min(m, k)).
|
Output<String> |
StringNGrams.ngrams()
The values tensor of the output ngrams ragged tensor.
|
Output<T> |
StringNGrams.ngramsSplits()
The splits tensor of the output ngrams ragged tensor.
|
Output<Float> |
CombinedNonMaxSuppression.nmsedBoxes()
A [batch_size, max_detections, 4] float32 tensor
containing the non-max suppressed boxes.
|
Output<Float> |
CombinedNonMaxSuppression.nmsedClasses()
A [batch_size, max_detections] float32 tensor
containing the classes for the boxes.
|
Output<Float> |
CombinedNonMaxSuppression.nmsedScores()
A [batch_size, max_detections] float32 tensor
containing the scores for the boxes.
|
Output<Integer> |
BoostedTreesCalculateBestFeatureSplit.nodeIds()
A Rank 1 tensor indicating possible split node ids for each feature.
|
Output<Integer> |
BoostedTreesSparseCalculateBestFeatureSplit.nodeIds()
A Rank 1 tensor indicating possible node ids that can be split.
|
Output<Integer> |
BoostedTreesTrainingPredict.nodeIds()
Rank 1 Tensor containing new node ids in the new tree_ids.
|
Output<Integer> |
ResourceAccumulatorNumAccumulated.numAccumulated()
The number of gradients aggregated in the given accumulator.
|
Output<Integer> |
BoostedTreesGetEnsembleStates.numAttemptedLayers()
The number of layers we attempted to build (but did not necessarily succeed in building).
|
Output<Integer> |
BoostedTreesGetEnsembleStates.numFinalizedTrees()
The number of trees that were finished successfully.
|
Output<Integer> |
BoostedTreesGetEnsembleStates.numTrees()
The number of trees in the tree ensemble resource.
|
Output<T> |
BlockLSTM.o()
The output gate over the whole time sequence.
|
Output<T> |
BlockLSTMV2.o()
The output gate over the whole time sequence.
|
Output<T> |
LSTMBlockCell.o()
The output gate.
|
Output<U> |
FusedBatchNormGradV3.offsetBackprop()
A 1D Tensor for the gradient with respect to offset.
|
Output<T> |
ApplyAdagradV2.out()
Same as "var".
|
Output<T> |
CumulativeLogsumexp.out() |
Output<U> |
HistogramFixedWidth.out()
A 1-D `Tensor` holding histogram of values.
|
Output<W> |
QuantizedMatMulWithBias.out() |
Output<V> |
QuantizedMatMulWithBiasAndRelu.out() |
Output<W> |
QuantizedMatMulWithBiasAndReluAndRequantize.out() |
Output<T> |
SetDiff1d.out()
1-D.
|
Output<T> |
SparseApplyAdagradV2.out()
Same as "var".
|
Output<Boolean> |
All.output()
The reduced tensor.
|
Output<T> |
AllToAll.output()
The exchanged result.
|
Output<Boolean> |
Any.output()
The reduced tensor.
|
Output<T> |
BatchMatMulV2.output()
3-D or higher with shape `[..., r_o, c_o]`
|
Output<T> |
BatchToSpace.output()
4-D with shape `[batch, height, width, depth]`, where:
|
Output<T> |
BatchToSpaceNd.output() |
Output<U> |
Bitcast.output() |
Output<T> |
BroadcastTo.output()
A Tensor.
|
Output<Integer> |
Bucketize.output()
Same shape as 'input'; each value of the input is replaced with its bucket index.
|
Output<T> |
ClipByValue.output()
A clipped `Tensor` with the same shape as input 't'.
|
Output<T> |
CollectivePermute.output()
The permuted input.
|
Output<T> |
Concat.output()
A `Tensor` with the concatenation of values stacked along the
`concat_dim` dimension.
|
Output<T> |
CountUpTo.output()
A copy of the input before increment.
|
Output<T> |
CrossReplicaSum.output()
The sum of all the distributed inputs.
|
Output<T> |
CudnnRNNV3.output() |
Output<T> |
DebugGradientIdentity.output() |
Output<T> |
DebugGradientRefIdentity.output() |
Output<T> |
DecodePaddedRaw.output()
A Tensor with one more dimension than the input `bytes`.
|
Output<T> |
DrawBoundingBoxesV2.output()
4-D with the same shape as `images`.
|
Output<Float> |
EditDistance.output()
A dense float tensor with rank R - 1.
|
Output<T> |
Einsum.output()
Output Tensor with shape depending upon `equation`.
|
Output<T> |
Empty.output()
A `Tensor` of type `T`.
|
Output<T> |
EnsureShape.output()
A tensor with the same shape and contents as the input tensor or value.
|
Output<T> |
Enter.output()
The same tensor as `data`.
|
Output<T> |
EuclideanNorm.output()
The reduced tensor.
|
Output<T> |
Exit.output()
The same tensor as `data`.
|
Output<T> |
ExpandDims.output()
Contains the same data as `input`, but its shape has an additional
dimension of size 1 added.
|
Output<U> |
Fill.output() |
Output<T> |
Gather.output()
Values from `params` gathered from indices given by `indices`, with
shape `params.shape[:axis] + indices.shape + params.shape[axis + 1:]`.
|
Output<T> |
GatherNd.output()
Values from `params` gathered from indices given by `indices`, with
shape `indices.shape[:-1] + params.shape[indices.shape[-1]:]`.
|
Output<T> |
GuaranteeConst.output() |
Output<T> |
Identity.output() |
Output<T> |
InfeedDequeue.output()
A tensor that will be provided using the infeed mechanism.
|
Output<T> |
LinSpace.output()
1-D.
|
Output<Boolean> |
LoopCond.output()
The same tensor as `input`.
|
Output<U> |
LowerBound.output()
A `Tensor` with the same shape as `values`.
|
Output<T> |
MatrixDiagV2.output()
Has rank `r+1` when `k` is an integer or `k[0] == k[1]`, rank `r` otherwise.
|
Output<T> |
MatrixSetDiagV2.output()
Rank `r+1`, with `output.shape = input.shape`.
|
Output<T> |
Max.output()
The reduced tensor.
|
Output<T> |
Merge.output()
Will be set to the available input tensor.
|
Output<T> |
Min.output()
The reduced tensor.
|
Output<T> |
MirrorPad.output()
The padded tensor.
|
Output<T> |
MirrorPadGrad.output()
The folded tensor.
|
Output<T> |
NcclBroadcast.output() |
Output<T> |
NextAfter.output() |
Output<T> |
NextIteration.output()
The same tensor as `data`.
|
Output<U> |
NonDeterministicInts.output()
Non-deterministic integer values with specified shape.
|
Output<U> |
OneHot.output()
The one-hot tensor.
|
Output<T> |
OutfeedDequeue.output()
A tensor that will be read from the device outfeed.
|
Output<T> |
Pad.output() |
Output<T> |
ParallelConcat.output()
The concatenated tensor.
|
Output<T> |
Placeholder.output()
A placeholder tensor that must be replaced using the feed mechanism.
|
Output<T> |
PlaceholderWithDefault.output()
A placeholder tensor that defaults to `input` if it is not fed.
|
Output<?> |
Prelinearize.output() |
Output<?> |
PrelinearizeTuple.output() |
Output<T> |
Prod.output()
The reduced tensor.
|
Output<T> |
QuantizedConcat.output()
A `Tensor` with the concatenation of values stacked along the
`concat_dim` dimension.
|
Output<T> |
QuantizedConcatV2.output() |
Output<V> |
QuantizedConv2DAndRelu.output() |
Output<V> |
QuantizedConv2DAndReluAndRequantize.output() |
Output<V> |
QuantizedConv2DAndRequantize.output() |
Output<V> |
QuantizedConv2DPerChannel.output()
The output tensor.
|
Output<V> |
QuantizedConv2DWithBias.output() |
Output<V> |
QuantizedConv2DWithBiasAndRelu.output() |
Output<W> |
QuantizedConv2DWithBiasAndReluAndRequantize.output() |
Output<W> |
QuantizedConv2DWithBiasAndRequantize.output() |
Output<X> |
QuantizedConv2DWithBiasSignedSumAndReluAndRequantize.output() |
Output<V> |
QuantizedConv2DWithBiasSumAndRelu.output() |
Output<X> |
QuantizedConv2DWithBiasSumAndReluAndRequantize.output() |
Output<V> |
QuantizedDepthwiseConv2D.output()
The output tensor.
|
Output<V> |
QuantizedDepthwiseConv2DWithBias.output()
The output tensor.
|
Output<V> |
QuantizedDepthwiseConv2DWithBiasAndRelu.output()
The output tensor.
|
Output<W> |
QuantizedDepthwiseConv2DWithBiasAndReluAndRequantize.output()
The output tensor.
|
Output<T> |
QuantizedReshape.output() |
Output<T> |
Range.output()
1-D.
|
Output<Integer> |
Rank.output() |
Output<Boolean> |
ReduceAll.output()
The reduced tensor.
|
Output<Boolean> |
ReduceAny.output()
The reduced tensor.
|
Output<T> |
ReduceMax.output()
The reduced tensor.
|
Output<T> |
ReduceMin.output()
The reduced tensor.
|
Output<T> |
ReduceProd.output()
The reduced tensor.
|
Output<T> |
ReduceSum.output()
The reduced tensor.
|
Output<T> |
RefEnter.output()
The same tensor as `data`.
|
Output<T> |
RefExit.output()
The same tensor as `data`.
|
Output<T> |
RefIdentity.output() |
Output<T> |
RefMerge.output()
Will be set to the available input tensor.
|
Output<T> |
RefNextIteration.output()
The same tensor as `data`.
|
Output<T> |
RefSelect.output()
The forwarded tensor.
|
Output<U> |
RequantizePerChannel.output()
Output tensor.
|
Output<T> |
Reshape.output() |
Output<T> |
ResourceCountUpTo.output()
A copy of the input before increment.
|
Output<U> |
ResourceGather.output() |
Output<U> |
ResourceGatherNd.output() |
Output<T> |
Reverse.output()
The same shape as `tensor`.
|
Output<T> |
ReverseSequence.output()
The partially reversed input.
|
Output<T> |
Roll.output()
Has the same shape and size as the input.
|
Output<T> |
ScaleAndTranslateGrad.output() |
Output<U> |
ScatterNd.output()
A new tensor with the given shape and updates applied according
to the indices.
|
Output<T> |
ScatterNdNonAliasingAdd.output()
A `Tensor` with the same shape as `input`, containing values of `input`
updated with `updates`.
|
Output<T> |
SelectV2.output() |
Output<U> |
Shape.output() |
Output<U> |
Size.output() |
Output<T> |
Slice.output() |
Output<T> |
Snapshot.output() |
Output<T> |
SpaceToBatchNd.output() |
Output<T> |
Squeeze.output()
Contains the same data as `input`, but has one or more dimensions of
size 1 removed.
|
Output<T> |
Stack.output()
The packed tensor.
|
Output<V> |
StatefulRandomBinomial.output() |
Output<U> |
StatefulStandardNormal.output()
A tensor of the specified shape filled with random normal values.
|
Output<U> |
StatefulStandardNormalV2.output()
A tensor of the specified shape filled with random normal values.
|
Output<U> |
StatefulTruncatedNormal.output()
Random values with specified shape.
|
Output<U> |
StatefulUniform.output()
Random values with specified shape.
|
Output<U> |
StatefulUniformFullInt.output()
Random values with specified shape.
|
Output<U> |
StatefulUniformInt.output()
Random values with specified shape.
|
Output<T> |
StopGradient.output() |
Output<T> |
StridedSlice.output() |
Output<U> |
StridedSliceGrad.output() |
Output<String> |
StringLower.output() |
Output<String> |
StringUpper.output() |
Output<T> |
Sum.output()
The reduced tensor.
|
Output<?> |
TensorListConcatLists.output() |
Output<T> |
TensorScatterAdd.output()
A new tensor copied from tensor and updates added according to the indices.
|
Output<T> |
TensorScatterSub.output()
A new tensor copied from tensor and updates subtracted according to the indices.
|
Output<T> |
TensorScatterUpdate.output()
A new tensor with the given shape and updates applied according
to the indices.
|
Output<T> |
TensorStridedSliceUpdate.output() |
Output<T> |
Tile.output() |
Output<String> |
TPUCompilationResult.output() |
Output<Float> |
TPUEmbeddingActivations.output() |
Output<T> |
TPUReplicatedInput.output() |
Output<T> |
TridiagonalMatMul.output()
Tensor of shape `[..., M, N]` containing the product.
|
Output<T> |
TridiagonalSolve.output()
Tensor of shape `[..., M, K]` containing the solutions.
|
Output<String> |
UnicodeEncode.output()
The 1-D Tensor of strings encoded from the provided unicode codepoints.
|
Output<T> |
UnravelIndex.output()
A 2-D (or 1-D if indices is 0-D) tensor where each row has the
same shape as the indices array.
|
Output<String> |
UnsortedSegmentJoin.output() |
Output<U> |
UpperBound.output()
A `Tensor` with the same shape as `values`.
|
Output<T> |
VariableShape.output() |
Output<T> |
Where3.output()
A `Tensor` with the same type and shape as `x` and `y`.
|
Output<T> |
CudnnRNNV3.outputC() |
Output<U> |
RaggedGather.outputDenseValues()
The `flat_values` for the returned RaggedTensor.
|
Output<T> |
RaggedTensorFromVariant.outputDenseValues()
A Tensor representing the values of the output `RaggedTensor`.
|
Output<T> |
RefSwitch.outputFalse()
If `pred` is false, data will be forwarded to this output.
|
Output<T> |
SwitchCond.outputFalse()
If `pred` is false, data will be forwarded to this output.
|
Output<T> |
CudnnRNNV3.outputH() |
Output<?> |
TensorListFromTensor.outputHandle() |
Output<?> |
TensorListPopBack.outputHandle() |
Output<?> |
TensorListPushBack.outputHandle() |
Output<?> |
TensorListResize.outputHandle() |
Output<?> |
TensorListScatter.outputHandle() |
Output<?> |
TensorListScatterIntoExistingList.outputHandle() |
Output<?> |
TensorListScatterV2.outputHandle() |
Output<?> |
TensorListSetItem.outputHandle() |
Output<?> |
TensorListSplit.outputHandle() |
Output<?> |
UnwrapDatasetVariant.outputHandle() |
Output<?> |
WrapDatasetVariant.outputHandle() |
Output<?> |
TensorListPushBackBatch.outputHandles() |
Output<Float> |
QuantizedConcat.outputMax()
The float value that the maximum quantized output value represents.
|
Output<Float> |
QuantizedConcatV2.outputMax() |
Output<Float> |
QuantizedReshape.outputMax()
This value is copied from input_max.
|
Output<Float> |
RequantizationRangePerChannel.outputMax()
The maximum value of the final output tensor.
|
Output<Float> |
RequantizePerChannel.outputMax()
The maximum value of the final output tensor.
|
Output<Float> |
QuantizedConcat.outputMin()
The float value that the minimum quantized output value represents.
|
Output<Float> |
QuantizedConcatV2.outputMin() |
Output<Float> |
QuantizedReshape.outputMin()
This value is copied from input_min.
|
Output<Float> |
RequantizationRangePerChannel.outputMin()
The minimum value of the final output tensor
|
Output<Float> |
RequantizePerChannel.outputMin()
The minimum value of the final output tensor
|
Output<T> |
Assign.outputRef()
= Same as "ref".
|
Output<T> |
AssignAdd.outputRef()
= Same as "ref".
|
Output<T> |
AssignSub.outputRef()
= Same as "ref".
|
Output<T> |
ScatterAdd.outputRef()
= Same as `ref`.
|
Output<T> |
ScatterDiv.outputRef()
= Same as `ref`.
|
Output<T> |
ScatterMax.outputRef()
= Same as `ref`.
|
Output<T> |
ScatterMin.outputRef()
= Same as `ref`.
|
Output<T> |
ScatterMul.outputRef()
= Same as `ref`.
|
Output<T> |
ScatterNdAdd.outputRef()
Same as ref.
|
Output<T> |
ScatterNdSub.outputRef()
Same as ref.
|
Output<T> |
ScatterNdUpdate.outputRef()
Same as ref.
|
Output<T> |
ScatterSub.outputRef()
= Same as `ref`.
|
Output<T> |
ScatterUpdate.outputRef()
= Same as `ref`.
|
Output<T> |
StridedSliceAssign.outputRef() |
Output<T> |
RefSwitch.outputTrue()
If `pred` is true, data will be forwarded to this output.
|
Output<T> |
SwitchCond.outputTrue()
If `pred` is true, data will be forwarded to this output.
|
Output<U> |
Lu.p()
Permutation of the rows encoded as a list of indices in `0..M-1`.
|
Output<Float> |
RetrieveTPUEmbeddingAdadeltaParameters.parameters()
Parameter parameters updated by the Adadelta optimization algorithm.
|
Output<Float> |
RetrieveTPUEmbeddingAdadeltaParametersGradAccumDebug.parameters()
Parameter parameters updated by the Adadelta optimization algorithm.
|
Output<Float> |
RetrieveTPUEmbeddingAdagradParameters.parameters()
Parameter parameters updated by the Adagrad optimization algorithm.
|
Output<Float> |
RetrieveTPUEmbeddingAdagradParametersGradAccumDebug.parameters()
Parameter parameters updated by the Adagrad optimization algorithm.
|
Output<Float> |
RetrieveTPUEmbeddingADAMParameters.parameters()
Parameter parameters updated by the ADAM optimization algorithm.
|
Output<Float> |
RetrieveTPUEmbeddingADAMParametersGradAccumDebug.parameters()
Parameter parameters updated by the ADAM optimization algorithm.
|
Output<Float> |
RetrieveTPUEmbeddingCenteredRMSPropParameters.parameters()
Parameter parameters updated by the centered RMSProp optimization algorithm.
|
Output<Float> |
RetrieveTPUEmbeddingFTRLParameters.parameters()
Parameter parameters updated by the FTRL optimization algorithm.
|
Output<Float> |
RetrieveTPUEmbeddingFTRLParametersGradAccumDebug.parameters()
Parameter parameters updated by the FTRL optimization algorithm.
|
Output<Float> |
RetrieveTPUEmbeddingMDLAdagradLightParameters.parameters()
Parameter parameters updated by the MDL Adagrad Light optimization algorithm.
|
Output<Float> |
RetrieveTPUEmbeddingMomentumParameters.parameters()
Parameter parameters updated by the Momentum optimization algorithm.
|
Output<Float> |
RetrieveTPUEmbeddingMomentumParametersGradAccumDebug.parameters()
Parameter parameters updated by the Momentum optimization algorithm.
|
Output<Float> |
RetrieveTPUEmbeddingProximalAdagradParameters.parameters()
Parameter parameters updated by the proximal Adagrad optimization algorithm.
|
Output<Float> |
RetrieveTPUEmbeddingProximalAdagradParametersGradAccumDebug.parameters()
Parameter parameters updated by the proximal Adagrad optimization algorithm.
|
Output<Float> |
RetrieveTPUEmbeddingRMSPropParameters.parameters()
Parameter parameters updated by the RMSProp optimization algorithm.
|
Output<Float> |
RetrieveTPUEmbeddingRMSPropParametersGradAccumDebug.parameters()
Parameter parameters updated by the RMSProp optimization algorithm.
|
Output<Float> |
RetrieveTPUEmbeddingStochasticGradientDescentParameters.parameters()
Parameter parameters updated by the stochastic gradient descent optimization algorithm.
|
Output<T> |
CudnnRNNCanonicalToParamsV2.params() |
Output<T> |
CudnnRNNBackpropV3.paramsBackprop() |
Output<Float> |
BoostedTreesTrainingPredict.partialLogits()
Rank 2 Tensor containing logits update (with respect to cached
values stored) for each example.
|
Output<T> |
ExtractVolumePatches.patches()
5-D Tensor with shape `[batch, out_planes, out_rows, out_cols,
ksize_planes * ksize_rows * ksize_cols * depth]` containing patches
with size `ksize_planes x ksize_rows x ksize_cols x depth` vectorized
in the "depth" dimension.
|
Output<T> |
GRUBlockCell.r() |
Output<T> |
BroadcastDynamicShape.r0() |
Output<T> |
BroadcastGradientArgs.r0() |
Output<T> |
BroadcastGradientArgs.r1() |
Output<T> |
TemporaryVariable.ref()
A reference to the variable tensor.
|
Output<T> |
Variable.ref()
A reference to the variable tensor.
|
Output<T> |
CudnnRNNV3.reserveSpace() |
Output<U> |
FusedBatchNormV3.reserveSpace1()
A 1D Tensor for the computed batch mean, to be reused
in the gradient computation.
|
Output<U> |
FusedBatchNormV3.reserveSpace2()
A 1D Tensor for the computed batch variance (inverted variance
in the cuDNN case), to be reused in the gradient computation.
|
Output<U> |
FusedBatchNormV3.reserveSpace3()
A 1D Tensor for some intermediate results, to be reused in the gradient
computation for better efficiency.
|
Output<U> |
FusedBatchNormGradV3.reserveSpace4()
Unused placeholder to match the mean input in FusedBatchNorm.
|
Output<U> |
FusedBatchNormGradV3.reserveSpace5()
Unused placeholder to match the variance input
in FusedBatchNorm.
|
Output<Float> |
ScaleAndTranslate.resizedImages() |
Output<?> |
BoostedTreesEnsembleResourceHandleOp.resource() |
Output<?> |
BoostedTreesQuantileStreamResourceHandleOp.resource() |
Output<?> |
Mutex.resource()
The mutex resource.
|
Output<?> |
TensorForestTreeResourceHandleOp.resource() |
Output<?> |
VarHandleOp.resource() |
Output<String> |
Rpc.response()
Same shape as `request`.
|
Output<String> |
TryRpc.response()
Same shape as `request`.
|
Output<String> |
WorkerHeartbeat.response()
A string tensor containing a serialized WorkerHeartbeatResponse.
|
Output<U> |
RaggedTensorToTensor.result()
The resulting dense tensor.
|
Output<Float> |
BoostedTreesCalculateBestFeatureSplit.rightNodeContribs()
A Rank 2 tensor with the same shape/conditions as left_node_contribs_list, but with the values for the right node.
|
Output<Float> |
BoostedTreesSparseCalculateBestFeatureSplit.rightNodeContribs()
A Rank 2 tensor with the same shape/conditions as left_node_contribs_list, but with the values for the right node.
|
Output<T> |
UnicodeDecode.rowSplits()
A 1D int32 tensor containing the row splits.
|
Output<T> |
RaggedRange.rtDenseValues()
The `flat_values` for the returned `RaggedTensor`.
|
Output<U> |
RaggedRange.rtNestedSplits()
The `row_splits` for the returned `RaggedTensor`.
|
Output<Float> |
KmeansPlusPlusInitialization.samples()
Matrix of shape (num_to_sample, d).
|
Output<U> |
FusedBatchNormGradV3.scaleBackprop()
A 1D Tensor for the gradient with respect to scale.
|
Output<Integer> |
NonMaxSuppressionV5.selectedIndices()
A 1-D integer tensor of shape `[M]` representing the selected
indices from the boxes tensor, where `M <= max_output_size`.
|
Output<T> |
NonMaxSuppressionV5.selectedScores()
A 1-D float tensor of shape `[M]` representing the corresponding
scores for each selected box, where `M <= max_output_size`.
|
Output<Integer> |
BarrierIncompleteSize.size()
The number of incomplete elements (i.e. those with some of their value components not set) in the barrier.
|
Output<Integer> |
BarrierReadySize.size()
The number of complete elements (i.e. those with all of their value components set) in the barrier.
|
Output<Long> |
LookupTableSize.size()
Scalar that contains the number of elements in the table.
|
Output<Integer> |
MapIncompleteSize.size() |
Output<Integer> |
MapSize.size() |
Output<Integer> |
OrderedMapIncompleteSize.size() |
Output<Integer> |
OrderedMapSize.size() |
Output<Integer> |
SetSize.size()
For `set` ranked `n`, this is a `Tensor` with rank `n-1`, and the same 1st
`n-1` dimensions as `set`.
|
Output<Integer> |
StageSize.size() |
Output<Integer> |
TensorArraySize.size()
The current size of the TensorArray.
|
Output<Integer> |
DecodeProto.sizes()
Tensor of int32 with shape `[batch_shape, len(field_names)]`.
|
Output<Long> |
RaggedTensorToSparse.sparseDenseShape()
`sparse_dense_shape` is a tight bounding box of the input `RaggedTensor`.
|
Output<Long> |
RaggedTensorToSparse.sparseIndices()
The indices for the `SparseTensor`.
|
Output<U> |
RaggedTensorToSparse.sparseValues()
The values of the `SparseTensor`.
|
Output<String> |
BoostedTreesCalculateBestFeatureSplit.splitWithDefaultDirections()
A Rank 1 tensor indicating which direction to go if data is missing.
|
Output<String> |
BoostedTreesSparseCalculateBestFeatureSplit.splitWithDefaultDirections()
A Rank 1 tensor indicating which direction to go if data is missing.
|
Output<Long> |
BoostedTreesGetEnsembleStates.stampToken()
Stamp token of the tree ensemble resource.
|
Output<Long> |
BoostedTreesSerializeEnsemble.stampToken()
Stamp token of the tree ensemble resource.
|
Output<Float> |
BoostedTreesAggregateStats.statsSummary()
output Rank 4 Tensor (shape=[splits, feature_dimension, buckets, logits_dimension + hessian_dimension])
containing accumulated stats for each node, feature dimension and bucket.
|
Output<Float> |
BoostedTreesMakeStatsSummary.statsSummary()
output Rank 4 Tensor (shape=[#features, #splits, #buckets, 2]) containing accumulated stats put into the corresponding node and bucket.
|
Output<Integer> |
BoostedTreesSparseAggregateStats.statsSummaryIndices()
int32; Rank 2 indices of summary sparse Tensors (shape=[number of non-zero statistics, 4]).
The second axis has size 4: node id, feature dimension, bucket id, and statistics dimension.
|
Output<Integer> |
BoostedTreesSparseAggregateStats.statsSummaryShape()
output Rank 1 Tensor (shape=[4])
The tensor has the following 4 values: [max_splits, feature_dimension, num_buckets, statistics_dimension],
where statistics_dimension = gradient_dimension + hessian_dimension.
|
Output<Float> |
BoostedTreesSparseAggregateStats.statsSummaryValues()
output Rank 1 Tensor (shape=[number of non zero statistics])
|
Output<Integer> |
TryRpc.statusCode()
Same shape as `request`.
|
Output<String> |
TryRpc.statusMessage()
Same shape as `request`.
|
Output<String> |
ExperimentalStatsAggregatorSummary.summary() |
Output<?> |
HashTable.tableHandle()
Handle to a table.
|
Output<?> |
MutableDenseHashTable.tableHandle()
Handle to a table.
|
Output<?> |
MutableHashTable.tableHandle()
Handle to a table.
|
Output<?> |
MutableHashTableOfTensors.tableHandle()
Handle to a table.
|
Output<T> |
ImmutableConst.tensor() |
Output<T> |
TensorListConcat.tensor() |
Output<U> |
TensorListConcatV2.tensor() |
Output<T> |
TensorListPopBack.tensor() |
Output<T> |
TensorListStack.tensor() |
Output<Integer> |
BoostedTreesCalculateBestFeatureSplit.thresholds()
A Rank 1 tensor indicating the bucket id to compare with (as a threshold) for the split in each node.
|
Output<Integer> |
BoostedTreesSparseCalculateBestFeatureSplit.thresholds()
A Rank 1 tensor indicating the bucket id to compare with (as a threshold) for split in each node.
|
Output<String> |
ConfigureDistributedTPU.topology()
A serialized tensorflow.tpu.TopologyProto that describes the TPU
topology.
|
Output<Long> |
Skipgram.totalWordsProcessed()
The total number of words processed so far.
|
Output<String> |
TensorForestTreeSerialize.treeConfig()
Serialized proto string of the tree resource.
|
Output<String> |
BoostedTreesSerializeEnsemble.treeEnsembleSerialized()
Serialized proto of the ensemble.
|
Output<Integer> |
BoostedTreesTrainingPredict.treeIds()
Rank 1 Tensor containing new tree ids for each example.
|
Output<Integer> |
TensorForestTreeSize.treeSize()
The size of the tree.
|
Output<Double> |
Timestamp.ts() |
Output<T> |
GRUBlockCell.u() |
Output<T> |
Unbatch.unbatchedTensor() |
Output<Float> |
RetrieveTPUEmbeddingAdadeltaParameters.updates()
Parameter updates updated by the Adadelta optimization algorithm.
|
Output<Float> |
RetrieveTPUEmbeddingAdadeltaParametersGradAccumDebug.updates()
Parameter updates updated by the Adadelta optimization algorithm.
|
Output<Integer> |
CombinedNonMaxSuppression.validDetections()
A [batch_size] int32 tensor indicating the number of
valid detections per batch item.
|
Output<Integer> |
NonMaxSuppressionV5.validOutputs()
A 0-D integer tensor representing the number of valid elements in
`selected_indices`, with the valid elements appearing first.
|
Output<T> |
DestroyTemporaryVariable.value() |
Output<T> |
GetSessionTensor.value()
The tensor for the given handle.
|
Output<T> |
ReadVariableOp.value() |
Output<T> |
TensorArrayConcat.value()
All of the elements in the TensorArray, concatenated along the first
axis.
|
Output<T> |
TensorArrayGather.value()
All of the elements in the TensorArray, concatenated along a new
axis (the new dimension 0).
|
Output<T> |
TensorArrayPack.value() |
Output<T> |
TensorArrayRead.value()
The tensor that is read from the TensorArray.
|
Output<Integer> |
Merge.valueIndex()
The index of the chosen input tensor in `inputs`.
|
Output<Integer> |
RefMerge.valueIndex()
The index of the chosen input tensor in `inputs`.
|
Output<U> |
LookupTableExport.values()
Tensor of all values in the table.
|
Output<U> |
LookupTableFind.values()
Same shape as `keys`.
|
Output<T> |
TensorListGather.values() |
Output<Float> |
RetrieveTPUEmbeddingADAMParameters.velocities()
Parameter velocities updated by the ADAM optimization algorithm.
|
Output<Float> |
RetrieveTPUEmbeddingADAMParametersGradAccumDebug.velocities()
Parameter velocities updated by the ADAM optimization algorithm.
|
Output<Integer> |
Skipgram.vocabFreq()
Frequencies of words.
|
Output<String> |
Skipgram.vocabWord()
A vector of words in the corpus.
|
Output<T> |
BlockLSTMGrad.wcfGrad()
The gradient for wcf to be back-propped.
|
Output<T> |
BlockLSTMGradV2.wcfGrad()
The gradient for wcf to be back-propped.
|
Output<T> |
LSTMBlockCellGrad.wcfGrad()
The gradient for wcf to be back-propped.
|
Output<T> |
BlockLSTMGrad.wciGrad()
The gradient for wci to be back-propped.
|
Output<T> |
BlockLSTMGradV2.wciGrad()
The gradient for wci to be back-propped.
|
Output<T> |
LSTMBlockCellGrad.wciGrad()
The gradient for wci to be back-propped.
|
Output<T> |
BlockLSTMGrad.wcoGrad()
The gradient for wco to be back-propped.
|
Output<T> |
BlockLSTMGradV2.wcoGrad()
The gradient for wco to be back-propped.
|
Output<T> |
LSTMBlockCellGrad.wcoGrad()
The gradient for wco to be back-propped.
|
Output<Float> |
RetrieveTPUEmbeddingMDLAdagradLightParameters.weights()
Parameter weights updated by the MDL Adagrad Light optimization algorithm.
|
Output<T> |
BlockLSTMGrad.wGrad()
The gradient for w to be back-propped.
|
Output<T> |
BlockLSTMGradV2.wGrad()
The gradient for w to be back-propped.
|
Output<Long> |
Skipgram.wordsPerEpoch()
Number of words per epoch in the data file.
|
Output<T> |
FusedBatchNormGradV3.xBackprop()
A 4D Tensor for the gradient with respect to x.
|
Output<T> |
BlockLSTMGrad.xGrad()
The gradient of x to be back-propped.
|
Output<T> |
BlockLSTMGradV2.xGrad()
The gradient of x to be back-propped.
|
Output<T> |
DeepCopy.y()
A `Tensor` of type `T`.
|
Output<T> |
FusedBatchNormV3.y()
A 4D Tensor for output data.
|
Output<T> |
InplaceAdd.y()
A `Tensor` of type T.
|
Output<T> |
InplaceSub.y()
A `Tensor` of type T.
|
Output<T> |
InplaceUpdate.y()
A `Tensor` of type T.
|
Output<T> |
OnesLike.y()
a tensor of the same shape and type as x but filled with ones.
|
Output<T> |
Unique.y()
A `Tensor`.
|
Output<T> |
UniqueWithCounts.y()
A `Tensor`.
|
Output<T> |
ZerosLike.y()
a tensor of the same shape and type as x but filled with zeros.
|
Output<T> |
MulNoNan.z() |
Modifier and Type | Method and Description |
---|---|
List<Output<?>> |
Batch.batchedTensors() |
List<Output<T>> |
CudnnRNNParamsToCanonicalV2.biases() |
List<Output<Float>> |
BoostedTreesQuantileStreamResourceGetBucketBoundaries.bucketBoundaries()
float; List of Rank 1 Tensors each containing the bucket boundaries for a feature.
|
List<Output<Integer>> |
BoostedTreesBucketize.buckets()
int; List of Rank 1 Tensors each containing the bucketized values for a single feature.
|
List<Output<?>> |
Gradients.dy()
Partial derivatives of `y`s w.r.t. `x`s.
|
List<Output<Float>> |
BoostedTreesCalculateBestGainsPerFeature.gainsList()
An output list of Rank 1 tensors indicating the best gains for each feature to split for certain nodes.
|
List<Output<Float>> |
BoostedTreesCalculateBestGainsPerFeature.leftNodeContribsList()
A list of Rank 2 tensors indicating the contribution of the left nodes when branching from parent nodes (given by the tensor element in the output node_ids_list) to the left direction by the given threshold for each feature.
|
List<Output<Integer>> |
BoostedTreesCalculateBestGainsPerFeature.nodeIdsList()
An output list of Rank 1 tensors indicating possible split node ids for each feature.
|
List<Output<?>> |
IdentityN.output() |
List<Output<U>> |
ShapeN.output() |
List<Output<T>> |
Split.output()
They are identically shaped tensors, whose shape matches that of `value`
except along `axis`, where their sizes are
`values.shape[split_dim] / num_split`.
|
List<Output<T>> |
SplitV.output()
Tensors whose shape matches that of `value`
except along `axis`, where their sizes are
`size_splits[i]`.
|
List<Output<T>> |
Unstack.output()
The list of tensors unpacked from `value`.
|
List<Output<T>> |
RaggedGather.outputNestedSplits()
The `nested_row_splits` tensors that define the row-partitioning for the
returned RaggedTensor.
|
List<Output<U>> |
RaggedTensorFromVariant.outputNestedSplits()
A list of one or more Tensors representing the splits of the output
`RaggedTensor`.
|
List<Output<T>> |
DynamicPartition.outputs() |
List<Output<?>> |
InfeedDequeueTuple.outputs()
A list of tensors that will be provided using the infeed mechanism.
|
List<Output<?>> |
OutfeedDequeueTuple.outputs()
A list of tensors that will be read from the outfeed.
|
List<Output<Float>> |
RecvTPUEmbeddingActivations.outputs()
A TensorList of embedding activations containing one Tensor per
embedding table in the model.
|
List<Output<?>> |
RemoteFusedGraphExecute.outputs()
Arbitrary number of tensors with arbitrary data types
|
List<Output<T>> |
TPUReplicatedOutput.outputs() |
List<Output<Float>> |
BoostedTreesCalculateBestGainsPerFeature.rightNodeContribsList()
A list of Rank 2 tensors, with the same shape/conditions as left_node_contribs_list, but just that the value is for the right node.
|
List<Output<Float>> |
BoostedTreesFlushQuantileSummaries.summaries() |
List<Output<Float>> |
BoostedTreesMakeQuantileSummaries.summaries()
float; List of Rank 2 Tensors each containing the quantile summary
(value, weight, min_rank, max_rank) of a single feature.
|
List<Output<Integer>> |
BoostedTreesCalculateBestGainsPerFeature.thresholdsList()
An output list of Rank 1 tensors indicating the bucket id to compare with (as a threshold) for split in each node.
|
List<Output<?>> |
BarrierTakeMany.values()
One any-dimensional tensor per component in a barrier element.
|
List<Output<?>> |
DecodeProto.values()
List of tensors containing values for the corresponding field.
|
List<Output<?>> |
MapPeek.values() |
List<Output<?>> |
MapUnstage.values() |
List<Output<?>> |
MapUnstageNoKey.values() |
List<Output<?>> |
OrderedMapPeek.values() |
List<Output<?>> |
OrderedMapUnstage.values() |
List<Output<?>> |
OrderedMapUnstageNoKey.values() |
List<Output<?>> |
StagePeek.values() |
List<Output<?>> |
Unstage.values() |
List<Output<T>> |
CudnnRNNParamsToCanonicalV2.weights() |
Modifier and Type | Method and Description |
---|---|
Output<Object> |
AnonymousIterator.asOutput() |
Output<Object> |
BatchDataset.asOutput() |
Output<Object> |
BytesProducedStatsDataset.asOutput() |
Output<Object> |
CacheDataset.asOutput() |
Output<Object> |
ConcatenateDataset.asOutput() |
Output<String> |
DatasetToGraph.asOutput() |
Output<Object> |
DenseToSparseBatchDataset.asOutput() |
Output<Object> |
ExperimentalAssertNextDataset.asOutput() |
Output<Object> |
ExperimentalCsvDataset.asOutput() |
Output<Object> |
ExperimentalDirectedInterleaveDataset.asOutput() |
Output<Object> |
ExperimentalIgnoreErrorsDataset.asOutput() |
Output<String> |
ExperimentalIteratorGetDevice.asOutput() |
Output<Object> |
ExperimentalLmdbDataset.asOutput() |
Output<Object> |
ExperimentalNonSerializableDataset.asOutput() |
Output<Object> |
ExperimentalSleepDataset.asOutput() |
Output<Object> |
ExperimentalThreadPoolDataset.asOutput() |
Output<Object> |
ExperimentalThreadPoolHandle.asOutput() |
Output<Object> |
ExperimentalUniqueDataset.asOutput() |
Output<Object> |
FilterByLastComponentDataset.asOutput() |
Output<Object> |
FixedLengthRecordDataset.asOutput() |
Output<Object> |
Iterator.asOutput() |
Output<Object> |
IteratorFromStringHandle.asOutput() |
Output<Object> |
IteratorGetNextAsOptional.asOutput() |
Output<String> |
IteratorToStringHandle.asOutput() |
Output<Object> |
LatencyStatsDataset.asOutput() |
Output<T> |
LeakyReluGrad.asOutput() |
Output<Object> |
MatchingFilesDataset.asOutput() |
Output<Object> |
ModelDataset.asOutput() |
Output<Object> |
MultiDeviceIterator.asOutput() |
Output<Object> |
MultiDeviceIteratorFromStringHandle.asOutput() |
Output<Long> |
MultiDeviceIteratorInit.asOutput() |
Output<String> |
MultiDeviceIteratorToStringHandle.asOutput() |
Output<Object> |
OptimizeDataset.asOutput() |
Output<Object> |
OptionalFromValue.asOutput() |
Output<Boolean> |
OptionalHasValue.asOutput() |
Output<Object> |
OptionalNone.asOutput() |
Output<Object> |
PaddedBatchDataset.asOutput() |
Output<Object> |
ParseExampleDataset.asOutput() |
Output<Object> |
PrefetchDataset.asOutput() |
Output<Object> |
RandomDataset.asOutput() |
Output<Object> |
RangeDataset.asOutput() |
Output<Object> |
RepeatDataset.asOutput() |
Output<Object> |
SerializeIterator.asOutput() |
Output<Object> |
SetStatsAggregatorDataset.asOutput() |
Output<Object> |
ShuffleAndRepeatDataset.asOutput() |
Output<Object> |
ShuffleDataset.asOutput() |
Output<Object> |
SkipDataset.asOutput() |
Output<Object> |
SparseTensorSliceDataset.asOutput() |
Output<Object> |
SqlDataset.asOutput() |
Output<Object> |
StatsAggregatorHandle.asOutput() |
Output<Object> |
TakeDataset.asOutput() |
Output<Object> |
TensorDataset.asOutput() |
Output<Object> |
TensorSliceDataset.asOutput() |
Output<Object> |
TextLineDataset.asOutput() |
Output<Object> |
TfRecordDataset.asOutput() |
Output<Object> |
UnbatchDataset.asOutput() |
Output<Object> |
WindowDataset.asOutput() |
Output<Object> |
ZipDataset.asOutput() |
Output<T> |
LeakyReluGrad.backprops()
`gradients * (features > 0) + alpha * gradients * (features <= 0)`.
|
Output<String> |
ExperimentalIteratorGetDevice.device() |
Output<String> |
DatasetToGraph.graph()
The graph representation of the dataset (as serialized GraphDef).
|
Output<?> |
AnonymousIterator.handle()
A handle to the iterator that can be passed to a "MakeIterator" or
"IteratorGetNext" op.
|
Output<?> |
BatchDataset.handle() |
Output<?> |
BytesProducedStatsDataset.handle() |
Output<?> |
CacheDataset.handle() |
Output<?> |
ConcatenateDataset.handle() |
Output<?> |
DenseToSparseBatchDataset.handle() |
Output<?> |
ExperimentalAssertNextDataset.handle() |
Output<?> |
ExperimentalCsvDataset.handle() |
Output<?> |
ExperimentalDirectedInterleaveDataset.handle() |
Output<?> |
ExperimentalIgnoreErrorsDataset.handle() |
Output<?> |
ExperimentalLmdbDataset.handle() |
Output<?> |
ExperimentalNonSerializableDataset.handle() |
Output<?> |
ExperimentalSleepDataset.handle() |
Output<?> |
ExperimentalThreadPoolDataset.handle() |
Output<?> |
ExperimentalThreadPoolHandle.handle()
A resource that can be consumed by one or more ExperimentalThreadPoolDataset
ops.
|
Output<?> |
ExperimentalUniqueDataset.handle() |
Output<?> |
FixedLengthRecordDataset.handle() |
Output<?> |
Iterator.handle() |
Output<?> |
LatencyStatsDataset.handle() |
Output<?> |
MatchingFilesDataset.handle() |
Output<?> |
ModelDataset.handle() |
Output<?> |
MultiDeviceIterator.handle()
Handle to the resource created.
|
Output<?> |
OptimizeDataset.handle() |
Output<?> |
PaddedBatchDataset.handle() |
Output<?> |
ParseExampleDataset.handle() |
Output<?> |
PrefetchDataset.handle() |
Output<?> |
RandomDataset.handle() |
Output<?> |
RangeDataset.handle() |
Output<?> |
RepeatDataset.handle() |
Output<?> |
SetStatsAggregatorDataset.handle() |
Output<?> |
ShuffleAndRepeatDataset.handle() |
Output<?> |
ShuffleDataset.handle() |
Output<?> |
SkipDataset.handle() |
Output<?> |
SparseTensorSliceDataset.handle() |
Output<?> |
SqlDataset.handle() |
Output<?> |
StatsAggregatorHandle.handle() |
Output<?> |
TakeDataset.handle() |
Output<?> |
TensorDataset.handle() |
Output<?> |
TensorSliceDataset.handle() |
Output<?> |
TextLineDataset.handle() |
Output<?> |
TfRecordDataset.handle() |
Output<?> |
UnbatchDataset.handle() |
Output<?> |
WindowDataset.handle() |
Output<?> |
ZipDataset.handle() |
Output<Boolean> |
OptionalHasValue.hasValue() |
Output<Long> |
MultiDeviceIteratorInit.incarnationId()
An int64 indicating which incarnation of the MultiDeviceIterator
is running.
|
Output<?> |
MultiDeviceIteratorFromStringHandle.multiDeviceIterator()
A MultiDeviceIterator resource.
|
Output<?> |
IteratorGetNextAsOptional.optional() |
Output<?> |
OptionalFromValue.optional() |
Output<?> |
OptionalNone.optional() |
Output<?> |
FilterByLastComponentDataset.output() |
Output<?> |
IteratorFromStringHandle.resourceHandle() |
Output<?> |
SerializeIterator.serialized()
A variant tensor storing the state of the iterator contained in the
resource.
|
Output<String> |
IteratorToStringHandle.stringHandle()
A string representation of the given handle.
|
Output<String> |
MultiDeviceIteratorToStringHandle.stringHandle()
A string representing the resource.
|
Modifier and Type | Method and Description |
---|---|
List<Output<?>> |
DatasetToSingleElement.components()
The components of the single element of `input`.
|
List<Output<?>> |
IteratorGetNext.components() |
List<Output<?>> |
IteratorGetNextSync.components() |
List<Output<?>> |
MultiDeviceIteratorGetNextFromShard.components()
Result of the get_next on the dataset.
|
List<Output<?>> |
OptionalGetValue.components() |
Modifier and Type | Method and Description |
---|---|
Output<String> |
AsString.asOutput() |
Output<U> |
Cast.asOutput() |
Output<U> |
Complex.asOutput() |
Output<U> |
Complex.out() |
Output<String> |
AsString.output() |
Output<U> |
Cast.y() |
Modifier and Type | Method and Description |
---|---|
Output<T> |
AdjustContrast.asOutput() |
Output<T> |
AdjustHue.asOutput() |
Output<T> |
AdjustSaturation.asOutput() |
Output<Float> |
CropAndResize.asOutput() |
Output<Float> |
CropAndResizeGradBoxes.asOutput() |
Output<T> |
CropAndResizeGradImage.asOutput() |
Output<UInt8> |
DecodeAndCropJpeg.asOutput() |
Output<UInt8> |
DecodeBmp.asOutput() |
Output<UInt8> |
DecodeGif.asOutput() |
Output<UInt8> |
DecodeJpeg.asOutput() |
Output<T> |
DecodePng.asOutput() |
Output<T> |
DrawBoundingBoxes.asOutput() |
Output<String> |
EncodeJpeg.asOutput() |
Output<String> |
EncodeJpegVariableQuality.asOutput() |
Output<String> |
EncodePng.asOutput() |
Output<Float> |
ExtractGlimpse.asOutput() |
Output<T> |
ExtractImagePatches.asOutput() |
Output<T> |
ExtractJpegShape.asOutput() |
Output<T> |
HsvToRgb.asOutput() |
Output<Integer> |
NonMaxSuppressionWithOverlaps.asOutput() |
Output<T> |
RandomCrop.asOutput() |
Output<Float> |
ResizeArea.asOutput() |
Output<Float> |
ResizeBicubic.asOutput() |
Output<T> |
ResizeBicubicGrad.asOutput() |
Output<Float> |
ResizeBilinear.asOutput() |
Output<T> |
ResizeBilinearGrad.asOutput() |
Output<T> |
ResizeNearestNeighbor.asOutput() |
Output<T> |
ResizeNearestNeighborGrad.asOutput() |
Output<T> |
RgbToHsv.asOutput() |
Output<Float> |
SampleDistortedBoundingBox.bboxes()
3-D with shape `[1, 1, 4]` containing the distorted bounding box.
|
Output<T> |
SampleDistortedBoundingBox.begin()
1-D, containing `[offset_height, offset_width, 0]`.
|
Output<String> |
EncodeJpeg.contents()
0-D.
|
Output<String> |
EncodeJpegVariableQuality.contents()
0-D.
|
Output<String> |
EncodePng.contents()
0-D.
|
Output<Float> |
CropAndResize.crops()
A 4-D tensor of shape `[num_boxes, crop_height, crop_width, depth]`.
|
Output<Float> |
ExtractGlimpse.glimpse()
A tensor representing the glimpses `[batch_size,
glimpse_height, glimpse_width, channels]`.
|
Output<UInt8> |
DecodeAndCropJpeg.image()
3-D with shape `[height, width, channels]`.
|
Output<UInt8> |
DecodeBmp.image()
3-D with shape `[height, width, channels]`.
|
Output<UInt8> |
DecodeGif.image()
4-D with shape `[num_frames, height, width, 3]`.
|
Output<UInt8> |
DecodeJpeg.image()
3-D with shape `[height, width, channels]`.
|
Output<T> |
DecodePng.image()
3-D with shape `[height, width, channels]`.
|
Output<T> |
ExtractJpegShape.imageShape()
1-D.
|
Output<Float> |
QuantizedResizeBilinear.outMax() |
Output<Float> |
QuantizedResizeBilinear.outMin() |
Output<T> |
AdjustContrast.output()
The contrast-adjusted image or images.
|
Output<T> |
AdjustHue.output()
The hue-adjusted image or images.
|
Output<T> |
AdjustSaturation.output()
The saturation-adjusted image or images.
|
Output<Float> |
CropAndResizeGradBoxes.output()
A 2-D tensor of shape `[num_boxes, 4]`.
|
Output<T> |
CropAndResizeGradImage.output()
A 4-D tensor of shape `[batch, image_height, image_width, depth]`.
|
Output<T> |
DrawBoundingBoxes.output()
4-D with the same shape as `images`.
|
Output<T> |
HsvToRgb.output()
`images` converted to RGB.
|
Output<T> |
RandomCrop.output()
3-D of shape `[crop_height, crop_width, channels]`.
|
Output<T> |
ResizeBicubicGrad.output()
4-D with shape `[batch, orig_height, orig_width, channels]`.
|
Output<T> |
ResizeBilinearGrad.output()
4-D with shape `[batch, orig_height, orig_width, channels]`.
|
Output<T> |
ResizeNearestNeighborGrad.output()
4-D with shape `[batch, orig_height, orig_width, channels]`.
|
Output<T> |
RgbToHsv.output()
`images` converted to HSV.
|
Output<T> |
ExtractImagePatches.patches()
4-D Tensor with shape `[batch, out_rows, out_cols, ksize_rows *
ksize_cols * depth]` containing image patches with size
`ksize_rows x ksize_cols x depth` vectorized in the "depth" dimension.
|
Output<T> |
QuantizedResizeBilinear.resizedImages()
4-D with shape
`[batch, new_height, new_width, channels]`.
|
Output<Float> |
ResizeArea.resizedImages()
4-D with shape
`[batch, new_height, new_width, channels]`.
|
Output<Float> |
ResizeBicubic.resizedImages()
4-D with shape
`[batch, new_height, new_width, channels]`.
|
Output<Float> |
ResizeBilinear.resizedImages()
4-D with shape
`[batch, new_height, new_width, channels]`.
|
Output<T> |
ResizeNearestNeighbor.resizedImages()
4-D with shape
`[batch, new_height, new_width, channels]`.
|
Output<Integer> |
NonMaxSuppression.selectedIndices()
A 1-D integer tensor of shape `[M]` representing the selected
indices from the boxes tensor, where `M <= max_output_size`.
|
Output<Integer> |
NonMaxSuppressionWithOverlaps.selectedIndices()
A 1-D integer tensor of shape `[M]` representing the selected
indices from the boxes tensor, where `M <= max_output_size`.
|
Output<T> |
SampleDistortedBoundingBox.size()
1-D, containing `[target_height, target_width, -1]`.
|
Output<Integer> |
NonMaxSuppression.validOutputs()
A 0-D integer tensor representing the number of valid elements in
`selected_indices`, with the valid elements appearing first.
|
Modifier and Type | Method and Description |
---|---|
Output<String> |
DecodeBase64.asOutput() |
Output<String> |
DecodeCompressed.asOutput() |
Output<String> |
DecodeJsonExample.asOutput() |
Output<T> |
DecodeRaw.asOutput() |
Output<String> |
EncodeBase64.asOutput() |
Output<Object> |
FifoQueue.asOutput() |
Output<Object> |
FixedLengthRecordReader.asOutput() |
Output<Object> |
IdentityReader.asOutput() |
Output<String> |
LmdbReader.asOutput() |
Output<String> |
MatchingFiles.asOutput() |
Output<Object> |
PaddingFifoQueue.asOutput() |
Output<T> |
ParseTensor.asOutput() |
Output<Object> |
PriorityQueue.asOutput() |
Output<Boolean> |
QueueIsClosed.asOutput() |
Output<Integer> |
QueueSize.asOutput() |
Output<Object> |
RandomShuffleQueue.asOutput() |
Output<Long> |
ReaderNumRecordsProduced.asOutput() |
Output<Long> |
ReaderNumWorkUnitsCompleted.asOutput() |
Output<String> |
ReaderSerializeState.asOutput() |
Output<String> |
ReadFile.asOutput() |
Output<U> |
SerializeManySparse.asOutput() |
Output<U> |
SerializeSparse.asOutput() |
Output<String> |
SerializeTensor.asOutput() |
Output<String> |
ShardedFilename.asOutput() |
Output<String> |
ShardedFilespec.asOutput() |
Output<Object> |
TextLineReader.asOutput() |
Output<Object> |
TfRecordReader.asOutput() |
Output<Object> |
WholeFileReader.asOutput() |
Output<String> |
DecodeJsonExample.binaryExamples()
Each string is a binary Example protocol buffer corresponding
to the respective element of `json_examples`.
|
Output<String> |
ReadFile.contents() |
Output<String> |
ShardedFilename.filename() |
Output<String> |
ShardedFilespec.filename() |
Output<String> |
MatchingFiles.filenames()
A vector of matching filenames.
|
Output<?> |
FifoQueue.handle()
The handle to the queue.
|
Output<?> |
PaddingFifoQueue.handle()
The handle to the queue.
|
Output<?> |
PriorityQueue.handle()
The handle to the queue.
|
Output<?> |
RandomShuffleQueue.handle()
The handle to the queue.
|
Output<Boolean> |
QueueIsClosed.isClosed() |
Output<String> |
ReaderRead.key()
A scalar.
|
Output<String> |
ReaderReadUpTo.keys()
A 1-D tensor.
|
Output<String> |
DecodeBase64.output()
Decoded strings.
|
Output<String> |
DecodeCompressed.output()
A Tensor with the same shape as input `bytes`, uncompressed
from bytes.
|
Output<T> |
DecodeRaw.output()
A Tensor with one more dimension than the input `bytes`.
|
Output<String> |
EncodeBase64.output()
Input strings encoded in base64.
|
Output<T> |
ParseTensor.output()
A Tensor of type `out_type`.
|
Output<?> |
FixedLengthRecordReader.readerHandle()
The handle to reference the Reader.
|
Output<?> |
IdentityReader.readerHandle()
The handle to reference the Reader.
|
Output<String> |
LmdbReader.readerHandle()
The handle to reference the Reader.
|
Output<?> |
TextLineReader.readerHandle()
The handle to reference the Reader.
|
Output<?> |
TfRecordReader.readerHandle()
The handle to reference the Reader.
|
Output<?> |
WholeFileReader.readerHandle()
The handle to reference the Reader.
|
Output<Long> |
ReaderNumRecordsProduced.recordsProduced() |
Output<String> |
SerializeTensor.serialized()
A serialized TensorProto proto of the input tensor.
|
Output<U> |
SerializeManySparse.serializedSparse() |
Output<U> |
SerializeSparse.serializedSparse() |
Output<Integer> |
QueueSize.size()
The number of elements in the given queue.
|
Output<Long> |
DeserializeManySparse.sparseIndices() |
Output<Long> |
DeserializeManySparse.sparseShape() |
Output<T> |
DeserializeManySparse.sparseValues() |
Output<String> |
ReaderSerializeState.state() |
Output<Long> |
ReaderNumWorkUnitsCompleted.unitsCompleted() |
Output<String> |
ReaderRead.value()
A scalar.
|
Output<String> |
ReaderReadUpTo.values()
A 1-D tensor.
|
Modifier and Type | Method and Description |
---|---|
List<Output<?>> |
QueueDequeue.components()
One or more tensors that were dequeued as a tuple.
|
List<Output<?>> |
QueueDequeueMany.components()
One or more tensors that were dequeued as a tuple.
|
List<Output<?>> |
QueueDequeueUpTo.components()
One or more tensors that were dequeued as a tuple.
|
List<Output<?>> |
ParseSequenceExample.contextDenseValues() |
List<Output<?>> |
ParseSingleSequenceExample.contextDenseValues() |
List<Output<Long>> |
ParseSequenceExample.contextSparseIndices() |
List<Output<Long>> |
ParseSingleSequenceExample.contextSparseIndices() |
List<Output<Long>> |
ParseSequenceExample.contextSparseShapes() |
List<Output<Long>> |
ParseSingleSequenceExample.contextSparseShapes() |
List<Output<?>> |
ParseSequenceExample.contextSparseValues() |
List<Output<?>> |
ParseSingleSequenceExample.contextSparseValues() |
List<Output<?>> |
ParseExample.denseValues() |
List<Output<?>> |
ParseSingleExample.denseValues() |
List<Output<Long>> |
ParseSequenceExample.featureListDenseLengths() |
List<Output<?>> |
ParseSequenceExample.featureListDenseValues() |
List<Output<?>> |
ParseSingleSequenceExample.featureListDenseValues() |
List<Output<Long>> |
ParseSequenceExample.featureListSparseIndices() |
List<Output<Long>> |
ParseSingleSequenceExample.featureListSparseIndices() |
List<Output<Long>> |
ParseSequenceExample.featureListSparseShapes() |
List<Output<Long>> |
ParseSingleSequenceExample.featureListSparseShapes() |
List<Output<?>> |
ParseSequenceExample.featureListSparseValues() |
List<Output<?>> |
ParseSingleSequenceExample.featureListSparseValues() |
List<Output<?>> |
DecodeCsv.output()
Each tensor will have the same shape as records.
|
List<Output<Long>> |
ParseExample.sparseIndices() |
List<Output<Long>> |
ParseSingleExample.sparseIndices() |
List<Output<Long>> |
ParseExample.sparseShapes() |
List<Output<Long>> |
ParseSingleExample.sparseShapes() |
List<Output<?>> |
ParseExample.sparseValues() |
List<Output<?>> |
ParseSingleExample.sparseValues() |
Modifier and Type | Method and Description |
---|---|
Output<T> |
BandPart.asOutput() |
Output<T> |
BatchCholesky.asOutput() |
Output<T> |
BatchCholeskyGrad.asOutput() |
Output<T> |
BatchMatMul.asOutput() |
Output<T> |
BatchMatrixBandPart.asOutput() |
Output<T> |
BatchMatrixDeterminant.asOutput() |
Output<T> |
BatchMatrixDiag.asOutput() |
Output<T> |
BatchMatrixDiagPart.asOutput() |
Output<T> |
BatchMatrixInverse.asOutput() |
Output<T> |
BatchMatrixSetDiag.asOutput() |
Output<T> |
BatchMatrixSolve.asOutput() |
Output<T> |
BatchMatrixSolveLs.asOutput() |
Output<T> |
BatchMatrixTriangularSolve.asOutput() |
Output<T> |
Cholesky.asOutput() |
Output<T> |
CholeskyGrad.asOutput() |
Output<T> |
ConjugateTranspose.asOutput() |
Output<T> |
Cross.asOutput() |
Output<T> |
Det.asOutput() |
Output<T> |
Diag.asOutput() |
Output<T> |
DiagPart.asOutput() |
Output<T> |
Inv.asOutput() |
Output<Float> |
LoadAndRemapMatrix.asOutput() |
Output<T> |
MatMul.asOutput() |
Output<T> |
MatrixLogarithm.asOutput() |
Output<T> |
MatrixSolveLs.asOutput() |
Output<T> |
SetDiag.asOutput() |
Output<T> |
Solve.asOutput() |
Output<T> |
Sqrtm.asOutput() |
Output<T> |
TensorDiag.asOutput() |
Output<T> |
TensorDiagPart.asOutput() |
Output<T> |
Transpose.asOutput() |
Output<T> |
TriangularSolve.asOutput() |
Output<T> |
BandPart.band()
Rank `k` tensor of the same shape as input.
|
Output<T> |
BatchMatrixBandPart.band() |
Output<T> |
BatchMatrixDiagPart.diagonal() |
Output<T> |
DiagPart.diagonal()
The extracted diagonal(s) having shape
`diagonal.shape = input.shape[:-2] + [min(input.shape[-2:])]`.
|
Output<T> |
TensorDiagPart.diagonal()
The extracted diagonal.
|
Output<T> |
BatchSelfAdjointEig.e() |
Output<T> |
SelfAdjointEig.e()
Eigenvalues.
|
Output<T> |
LogMatrixDeterminant.logAbsDeterminant()
The logs of the absolute values of the determinants
of the N input matrices.
|
Output<Float> |
QuantizedMatMul.maxOut()
The float value that the highest quantized output value represents.
|
Output<Float> |
QuantizedMatMul.minOut()
The float value that the lowest quantized output value represents.
|
Output<V> |
QuantizedMatMul.out() |
Output<T> |
BatchCholesky.output() |
Output<T> |
BatchCholeskyGrad.output() |
Output<T> |
BatchMatMul.output()
3-D or higher with shape `[..., r_o, c_o]`.
|
Output<T> |
BatchMatrixDeterminant.output() |
Output<T> |
BatchMatrixDiag.output() |
Output<T> |
BatchMatrixInverse.output() |
Output<T> |
BatchMatrixSetDiag.output() |
Output<T> |
BatchMatrixSolve.output() |
Output<T> |
BatchMatrixSolveLs.output() |
Output<T> |
BatchMatrixTriangularSolve.output() |
Output<T> |
Cholesky.output()
Shape is `[..., M, M]`.
|
Output<T> |
CholeskyGrad.output()
Symmetrized version of `df/dA`.
|
Output<T> |
Det.output()
Shape is `[...]`.
|
Output<T> |
Diag.output()
Rank `k+1`, with `output.shape = diagonal.shape + [diagonal.shape[-1]]`.
|
Output<T> |
Inv.output()
Shape is `[..., M, M]`.
|
Output<T> |
MatrixLogarithm.output()
Shape is `[..., M, M]`.
|
Output<T> |
MatrixSolveLs.output()
Shape is `[..., N, K]`.
|
Output<T> |
SetDiag.output()
Rank `k+1`, with `output.shape = input.shape`.
|
Output<T> |
Solve.output()
Shape is `[..., M, K]`.
|
Output<T> |
Sqrtm.output()
Shape is `[..., M, M]`.
|
Output<T> |
TensorDiag.output() |
Output<T> |
TriangularSolve.output()
Shape is `[..., M, K]`.
|
Output<Float> |
LoadAndRemapMatrix.outputMatrix()
Output matrix containing existing values loaded from the
checkpoint, and with any missing values filled in from initializing_values.
|
Output<T> |
Cross.product()
Pairwise cross product of the vectors in `a` and `b`.
|
Output<T> |
MatMul.product() |
Output<T> |
Qr.q()
Orthonormal basis for range of `a`.
|
Output<T> |
Qr.r()
Triangular factor.
|
Output<T> |
BatchSvd.s() |
Output<T> |
Svd.s()
Singular values.
|
Output<T> |
LogMatrixDeterminant.sign()
The signs of the determinants of the inputs.
|
Output<T> |
BatchSvd.u() |
Output<T> |
Svd.u()
Left singular vectors.
|
Output<T> |
BatchSelfAdjointEig.v() |
Output<T> |
BatchSvd.v() |
Output<T> |
SelfAdjointEig.v()
Eigenvectors.
|
Output<T> |
Svd.v()
Right singular vectors.
|
Output<T> |
ConjugateTranspose.y() |
Output<T> |
Transpose.y() |
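To make the linalg entries concrete, here is a hedged sketch of `MatMul.product()` being built and fetched (again assuming the TF 1.x Java bindings and native runtime; `"MatMul"` is the raw op name wrapped by the generated `MatMul` class):

```java
import java.util.Arrays;

import org.tensorflow.DataType;
import org.tensorflow.Graph;
import org.tensorflow.Output;
import org.tensorflow.Session;
import org.tensorflow.Tensor;

public class MatMulSketch {
  public static void main(String[] args) {
    float[][] aData = {{1f, 2f}, {3f, 4f}};
    float[][] bData = {{5f, 6f}, {7f, 8f}};
    try (Graph g = new Graph();
         Tensor<Float> ta = Tensor.create(aData, Float.class);
         Tensor<Float> tb = Tensor.create(bData, Float.class)) {
      Output<Float> a = g.opBuilder("Const", "a")
          .setAttr("dtype", DataType.FLOAT).setAttr("value", ta).build().output(0);
      Output<Float> b = g.opBuilder("Const", "b")
          .setAttr("dtype", DataType.FLOAT).setAttr("value", tb).build().output(0);
      // MatMul.product() is output 0 of the op; [2, 2] here.
      Output<Float> product = g.opBuilder("MatMul", "mm")
          .addInput(a).addInput(b).build().output(0);
      try (Session s = new Session(g);
           Tensor<?> out = s.runner().fetch(product).run().get(0)) {
        float[][] result = out.copyTo(new float[2][2]);
        System.out.println(Arrays.deepToString(result)); // [[19.0, 22.0], [43.0, 50.0]]
      }
    }
  }
}
```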
Modifier and Type | Method and Description |
---|---|
Output<T> |
Softplus.activations() |
Output<T> |
Abs.asOutput() |
Output<T> |
AccumulateN.asOutput() |
Output<T> |
Acos.asOutput() |
Output<T> |
Acosh.asOutput() |
Output<T> |
Add.asOutput() |
Output<T> |
AddN.asOutput() |
Output<U> |
Angle.asOutput() |
Output<Boolean> |
ApproximateEqual.asOutput() |
Output<V> |
ArgMax.asOutput() |
Output<V> |
ArgMin.asOutput() |
Output<T> |
Asin.asOutput() |
Output<T> |
Asinh.asOutput() |
Output<T> |
Atan.asOutput() |
Output<T> |
Atan2.asOutput() |
Output<T> |
Atanh.asOutput() |
Output<T> |
BesselI0e.asOutput() |
Output<T> |
BesselI1e.asOutput() |
Output<T> |
Betainc.asOutput() |
Output<T> |
Bincount.asOutput() |
Output<T> |
Ceil.asOutput() |
Output<T> |
CheckNumerics.asOutput() |
Output<UInt8> |
CompareAndBitpack.asOutput() |
Output<U> |
ComplexAbs.asOutput() |
Output<T> |
Conj.asOutput() |
Output<T> |
Cos.asOutput() |
Output<T> |
Cosh.asOutput() |
Output<T> |
Cumprod.asOutput() |
Output<T> |
Cumsum.asOutput() |
Output<T> |
Digamma.asOutput() |
Output<T> |
Div.asOutput() |
Output<T> |
DivNoNan.asOutput() |
Output<Boolean> |
Equal.asOutput() |
Output<T> |
Erf.asOutput() |
Output<T> |
Erfc.asOutput() |
Output<T> |
Exp.asOutput() |
Output<T> |
Expm1.asOutput() |
Output<String> |
Fact.asOutput() |
Output<T> |
Floor.asOutput() |
Output<T> |
FloorDiv.asOutput() |
Output<T> |
FloorMod.asOutput() |
Output<Boolean> |
Greater.asOutput() |
Output<Boolean> |
GreaterEqual.asOutput() |
Output<T> |
Igamma.asOutput() |
Output<T> |
Igammac.asOutput() |
Output<T> |
IgammaGradA.asOutput() |
Output<U> |
Imag.asOutput() |
Output<T> |
InvertPermutation.asOutput() |
Output<Boolean> |
IsFinite.asOutput() |
Output<Boolean> |
IsInf.asOutput() |
Output<Boolean> |
IsNan.asOutput() |
Output<Boolean> |
Less.asOutput() |
Output<Boolean> |
LessEqual.asOutput() |
Output<T> |
Lgamma.asOutput() |
Output<T> |
Log.asOutput() |
Output<T> |
Log1p.asOutput() |
Output<Boolean> |
LogicalAnd.asOutput() |
Output<Boolean> |
LogicalNot.asOutput() |
Output<Boolean> |
LogicalOr.asOutput() |
Output<T> |
Maximum.asOutput() |
Output<T> |
Mean.asOutput() |
Output<T> |
Minimum.asOutput() |
Output<T> |
Mod.asOutput() |
Output<T> |
Mul.asOutput() |
Output<T> |
Neg.asOutput() |
Output<Boolean> |
NotEqual.asOutput() |
Output<T> |
Polygamma.asOutput() |
Output<UInt8> |
PopulationCount.asOutput() |
Output<T> |
Pow.asOutput() |
Output<U> |
Real.asOutput() |
Output<T> |
RealDiv.asOutput() |
Output<T> |
Reciprocal.asOutput() |
Output<T> |
ReciprocalGrad.asOutput() |
Output<T> |
Rint.asOutput() |
Output<T> |
Round.asOutput() |
Output<T> |
Rsqrt.asOutput() |
Output<T> |
RsqrtGrad.asOutput() |
Output<T> |
SegmentMax.asOutput() |
Output<T> |
SegmentMean.asOutput() |
Output<T> |
SegmentMin.asOutput() |
Output<T> |
SegmentProd.asOutput() |
Output<T> |
SegmentSum.asOutput() |
Output<T> |
Sigmoid.asOutput() |
Output<T> |
SigmoidGrad.asOutput() |
Output<T> |
Sign.asOutput() |
Output<T> |
Sin.asOutput() |
Output<T> |
Sinh.asOutput() |
Output<T> |
Softplus.asOutput() |
Output<T> |
SoftplusGrad.asOutput() |
Output<T> |
Sqrt.asOutput() |
Output<T> |
SqrtGrad.asOutput() |
Output<T> |
Square.asOutput() |
Output<T> |
SquaredDifference.asOutput() |
Output<T> |
Sub.asOutput() |
Output<T> |
Tan.asOutput() |
Output<T> |
Tanh.asOutput() |
Output<T> |
TanhGrad.asOutput() |
Output<T> |
TruncateDiv.asOutput() |
Output<T> |
TruncateMod.asOutput() |
Output<T> |
UnsortedSegmentMax.asOutput() |
Output<T> |
UnsortedSegmentMin.asOutput() |
Output<T> |
UnsortedSegmentProd.asOutput() |
Output<T> |
UnsortedSegmentSum.asOutput() |
Output<T> |
Xdivy.asOutput() |
Output<T> |
Xlogy.asOutput() |
Output<T> |
Zeta.asOutput() |
Output<T> |
SoftplusGrad.backprops()
The gradients: `gradients / (1 + exp(-features))`.
|
Output<T> |
Bincount.bins()
1D `Tensor` with length equal to `size`.
|
Output<String> |
Fact.fact() |
Output<Float> |
QuantizedAdd.maxZ()
The float value that the highest quantized output value represents.
|
Output<Float> |
QuantizedMul.maxZ()
The float value that the highest quantized output value represents.
|
Output<Float> |
QuantizedAdd.minZ()
The float value that the lowest quantized output value represents.
|
Output<Float> |
QuantizedMul.minZ()
The float value that the lowest quantized output value represents.
|
Output<T> |
Cumprod.out() |
Output<T> |
Cumsum.out() |
Output<U> |
Angle.output() |
Output<V> |
ArgMax.output() |
Output<V> |
ArgMin.output() |
Output<T> |
CheckNumerics.output() |
Output<UInt8> |
CompareAndBitpack.output()
The bitpacked comparisons.
|
Output<T> |
Conj.output() |
Output<U> |
Imag.output() |
Output<T> |
Mean.output()
The reduced tensor.
|
Output<U> |
Real.output() |
Output<T> |
SegmentMax.output()
Has same shape as data, except for dimension 0 which
has size `k`, the number of segments.
|
Output<T> |
SegmentMean.output()
Has same shape as data, except for dimension 0 which
has size `k`, the number of segments.
|
Output<T> |
SegmentMin.output()
Has same shape as data, except for dimension 0 which
has size `k`, the number of segments.
|
Output<T> |
SegmentProd.output()
Has same shape as data, except for dimension 0 which
has size `k`, the number of segments.
|
Output<T> |
SegmentSum.output()
Has same shape as data, except for dimension 0 which
has size `k`, the number of segments.
|
Output<T> |
UnsortedSegmentMax.output()
Has same shape as data, except for the first `segment_ids.rank`
dimensions, which are replaced with a single dimension which has size
`num_segments`.
|
Output<T> |
UnsortedSegmentMin.output()
Has same shape as data, except for the first `segment_ids.rank`
dimensions, which are replaced with a single dimension which has size
`num_segments`.
|
Output<T> |
UnsortedSegmentProd.output()
Has same shape as data, except for the first `segment_ids.rank`
dimensions, which are replaced with a single dimension which has size
`num_segments`.
|
Output<T> |
UnsortedSegmentSum.output()
Has same shape as data, except for the first `segment_ids.rank`
dimensions, which are replaced with a single dimension which has size
`num_segments`.
|
Output<T> |
AccumulateN.sum() |
Output<T> |
AddN.sum() |
Output<T> |
Abs.y() |
Output<T> |
Acos.y() |
Output<T> |
Acosh.y() |
Output<T> |
Asin.y() |
Output<T> |
Asinh.y() |
Output<T> |
Atan.y() |
Output<T> |
Atanh.y() |
Output<T> |
BesselI0e.y() |
Output<T> |
BesselI1e.y() |
Output<T> |
Ceil.y() |
Output<U> |
ComplexAbs.y() |
Output<T> |
Cos.y() |
Output<T> |
Cosh.y() |
Output<T> |
Digamma.y() |
Output<T> |
Erf.y() |
Output<T> |
Erfc.y() |
Output<T> |
Exp.y() |
Output<T> |
Expm1.y() |
Output<T> |
Floor.y() |
Output<T> |
InvertPermutation.y()
1-D.
|
Output<Boolean> |
IsFinite.y() |
Output<Boolean> |
IsInf.y() |
Output<Boolean> |
IsNan.y() |
Output<T> |
Lgamma.y() |
Output<T> |
Log.y() |
Output<T> |
Log1p.y() |
Output<Boolean> |
LogicalNot.y() |
Output<T> |
Neg.y() |
Output<UInt8> |
PopulationCount.y() |
Output<T> |
Reciprocal.y() |
Output<T> |
Rint.y() |
Output<T> |
Round.y() |
Output<T> |
Rsqrt.y() |
Output<T> |
Sigmoid.y() |
Output<T> |
Sign.y() |
Output<T> |
Sin.y() |
Output<T> |
Sinh.y() |
Output<T> |
Sqrt.y() |
Output<T> |
Square.y() |
Output<T> |
Tan.y() |
Output<T> |
Tanh.y() |
Output<T> |
Add.z() |
Output<Boolean> |
ApproximateEqual.z() |
Output<T> |
Atan2.z() |
Output<T> |
Betainc.z() |
Output<T> |
Div.z() |
Output<T> |
DivNoNan.z() |
Output<Boolean> |
Equal.z() |
Output<T> |
FloorDiv.z() |
Output<T> |
FloorMod.z() |
Output<Boolean> |
Greater.z() |
Output<Boolean> |
GreaterEqual.z() |
Output<T> |
Igamma.z() |
Output<T> |
Igammac.z() |
Output<T> |
IgammaGradA.z() |
Output<Boolean> |
Less.z() |
Output<Boolean> |
LessEqual.z() |
Output<Boolean> |
LogicalAnd.z() |
Output<Boolean> |
LogicalOr.z() |
Output<T> |
Maximum.z() |
Output<T> |
Minimum.z() |
Output<T> |
Mod.z() |
Output<T> |
Mul.z() |
Output<Boolean> |
NotEqual.z() |
Output<T> |
Polygamma.z() |
Output<T> |
Pow.z() |
Output<V> |
QuantizedAdd.z() |
Output<V> |
QuantizedMul.z() |
Output<T> |
RealDiv.z() |
Output<T> |
ReciprocalGrad.z() |
Output<T> |
RsqrtGrad.z() |
Output<T> |
SigmoidGrad.z() |
Output<T> |
SqrtGrad.z() |
Output<T> |
SquaredDifference.z() |
Output<T> |
Sub.z() |
Output<T> |
TanhGrad.z() |
Output<T> |
TruncateDiv.z() |
Output<T> |
TruncateMod.z() |
Output<T> |
Xdivy.z() |
Output<T> |
Xlogy.z() |
Output<T> |
Zeta.z() |
Modifier and Type | Method and Description |
---|---|
Output<T> |
Elu.activations() |
Output<T> |
LeakyRelu.activations() |
Output<U> |
QuantizedRelu.activations()
Has the same output shape as "features".
|
Output<U> |
QuantizedRelu6.activations()
Has the same output shape as "features".
|
Output<U> |
QuantizedReluX.activations()
Has the same output shape as "features".
|
Output<T> |
Relu.activations() |
Output<T> |
Relu6.activations() |
Output<T> |
Selu.activations() |
Output<T> |
Softsign.activations() |
Output<U> |
MaxPoolWithArgmax.argmax()
4-D.
|
Output<T> |
AvgPool.asOutput() |
Output<T> |
AvgPool3d.asOutput() |
Output<T> |
AvgPool3dGrad.asOutput() |
Output<T> |
AvgPoolGrad.asOutput() |
Output<T> |
BatchNormWithGlobalNormalization.asOutput() |
Output<T> |
BiasAdd.asOutput() |
Output<T> |
BiasAddGrad.asOutput() |
Output<T> |
Conv2d.asOutput() |
Output<T> |
Conv2dBackpropFilter.asOutput() |
Output<T> |
Conv2dBackpropInput.asOutput() |
Output<T> |
Conv3d.asOutput() |
Output<T> |
Conv3dBackpropFilter.asOutput() |
Output<U> |
Conv3dBackpropInput.asOutput() |
Output<T> |
CudnnRnnCanonicalToParams.asOutput() |
Output<U> |
CudnnRnnParamsSize.asOutput() |
Output<T> |
DataFormatDimMap.asOutput() |
Output<T> |
DataFormatVecPermute.asOutput() |
Output<T> |
DepthToSpace.asOutput() |
Output<T> |
DepthwiseConv2dNative.asOutput() |
Output<T> |
DepthwiseConv2dNativeBackpropFilter.asOutput() |
Output<T> |
DepthwiseConv2dNativeBackpropInput.asOutput() |
Output<T> |
Dilation2d.asOutput() |
Output<T> |
Dilation2dBackpropFilter.asOutput() |
Output<T> |
Dilation2dBackpropInput.asOutput() |
Output<T> |
Elu.asOutput() |
Output<T> |
EluGrad.asOutput() |
Output<T> |
FractionalAvgPoolGrad.asOutput() |
Output<T> |
FractionalMaxPoolGrad.asOutput() |
Output<T> |
FusedPadConv2d.asOutput() |
Output<T> |
FusedResizeAndPadConv2d.asOutput() |
Output<Boolean> |
InTopK.asOutput() |
Output<T> |
InvGrad.asOutput() |
Output<T> |
L2Loss.asOutput() |
Output<T> |
LeakyRelu.asOutput() |
Output<T> |
LocalResponseNormalization.asOutput() |
Output<T> |
LocalResponseNormalizationGrad.asOutput() |
Output<T> |
LogSoftmax.asOutput() |
Output<T> |
MaxPool.asOutput() |
Output<T> |
MaxPool3d.asOutput() |
Output<U> |
MaxPool3dGrad.asOutput() |
Output<T> |
MaxPool3dGradGrad.asOutput() |
Output<T> |
MaxPoolGrad.asOutput() |
Output<T> |
MaxPoolGradGrad.asOutput() |
Output<T> |
MaxPoolGradGradWithArgmax.asOutput() |
Output<T> |
MaxPoolGradWithArgmax.asOutput() |
Output<T> |
NthElement.asOutput() |
Output<T> |
Relu.asOutput() |
Output<T> |
Relu6.asOutput() |
Output<T> |
Relu6Grad.asOutput() |
Output<T> |
ReluGrad.asOutput() |
Output<T> |
Selu.asOutput() |
Output<T> |
SeluGrad.asOutput() |
Output<T> |
Softmax.asOutput() |
Output<T> |
Softsign.asOutput() |
Output<T> |
SoftsignGrad.asOutput() |
Output<T> |
SpaceToBatch.asOutput() |
Output<T> |
SpaceToDepth.asOutput() |
Output<T> |
SoftmaxCrossEntropyWithLogits.backprop()
backpropagated gradients (batch_size x num_classes matrix).
|
Output<T> |
SparseSoftmaxCrossEntropyWithLogits.backprop()
backpropagated gradients (batch_size x num_classes matrix).
|
Output<T> |
EluGrad.backprops()
The gradients: `gradients * (outputs + 1)` if outputs < 0,
`gradients` otherwise.
|
Output<T> |
Relu6Grad.backprops()
The gradients:
`gradients * (features > 0) * (features < 6)`.
|
Output<T> |
ReluGrad.backprops()
`gradients * (features > 0)`.
|
Output<T> |
SeluGrad.backprops()
The gradients: `gradients * (outputs + scale * alpha)`
if outputs < 0, `scale * gradients` otherwise.
|
Output<T> |
SoftsignGrad.backprops()
The gradients: `gradients / (1 + abs(features)) ** 2`.
|
Output<U> |
FusedBatchNorm.batchMean()
A 1D Tensor for the computed batch mean, to be used by TensorFlow
to compute the running mean.
|
Output<U> |
FusedBatchNorm.batchVariance()
A 1D Tensor for the computed batch variance, to be used by
TensorFlow to compute the running variance.
|
Output<Long> |
FractionalAvgPool.colPoolingSequence()
column pooling sequence, needed to calculate gradient.
|
Output<Long> |
FractionalMaxPool.colPoolingSequence()
column pooling sequence, needed to calculate gradient.
|
Output<T> |
BatchNormWithGlobalNormalizationGrad.db()
1D backprop tensor for beta.
|
Output<Long> |
CtcGreedyDecoder.decodedIndices()
Indices matrix, size `(total_decoded_outputs x 2)`, of a `SparseTensor`.
|
Output<Long> |
CtcGreedyDecoder.decodedShape()
Shape vector, size `(2)`, of the decoded SparseTensor.
|
Output<Long> |
CtcGreedyDecoder.decodedValues()
Values vector, size `(total_decoded_outputs)`, of a `SparseTensor`.
|
Output<T> |
BatchNormWithGlobalNormalizationGrad.dg()
1D backprop tensor for gamma.
|
Output<T> |
BatchNormWithGlobalNormalizationGrad.dm()
1D backprop tensor for mean.
|
Output<T> |
BatchNormWithGlobalNormalizationGrad.dv()
1D backprop tensor for variance.
|
Output<T> |
BatchNormWithGlobalNormalizationGrad.dx()
4D backprop tensor for input.
|
Output<T> |
Dilation2dBackpropFilter.filterBackprop()
3-D with shape `[filter_height, filter_width, depth]`.
|
Output<T> |
CtcLoss.gradient()
The gradient of `loss`.
|
Output<?> |
CudnnRnn.hostReserved() |
Output<Long> |
ComputeAccidentalHits.ids()
A vector of IDs of positions in sampled_candidates that match a true_label
for the row with the corresponding index in indices.
|
Output<T> |
Dilation2dBackpropInput.inBackprop()
4-D with shape `[batch, in_height, in_width, depth]`.
|
Output<Integer> |
ComputeAccidentalHits.indices()
A vector of indices corresponding to rows of true_candidates.
|
Output<Integer> |
TopK.indices()
The indices of `values` within the last dimension of `input`.
|
Output<T> |
CudnnRnnBackprop.inputBackprop() |
Output<T> |
CudnnRnnBackprop.inputCBackprop() |
Output<T> |
CudnnRnnBackprop.inputHBackprop() |
Output<T> |
CtcBeamSearchDecoder.logProbability()
A matrix, shaped: `(batch_size x top_paths)`.
|
Output<T> |
CtcGreedyDecoder.logProbability()
Matrix, size `(batch_size x 1)`, containing sequence
log-probabilities.
|
Output<T> |
LogSoftmax.logsoftmax()
Same shape as `logits`.
|
Output<T> |
CtcLoss.loss()
A vector (batch) containing log-probabilities.
|
Output<T> |
SoftmaxCrossEntropyWithLogits.loss()
Per example loss (batch_size vector).
|
Output<T> |
SparseSoftmaxCrossEntropyWithLogits.loss()
Per example loss (batch_size vector).
|
Output<Float> |
QuantizedRelu.maxActivations()
The float value that the highest quantized value represents.
|
Output<Float> |
QuantizedRelu6.maxActivations()
The float value that the highest quantized value represents.
|
Output<Float> |
QuantizedReluX.maxActivations()
The float value that the highest quantized value represents.
|
Output<Float> |
QuantizedBiasAdd.maxOut()
The float value that the highest quantized output value represents.
|
Output<Float> |
QuantizedAvgPool.maxOutput()
The float value that the highest quantized output value represents.
|
Output<Float> |
QuantizedConv2d.maxOutput()
The float value that the highest quantized output value represents.
|
Output<Float> |
QuantizedMaxPool.maxOutput()
The float value that the highest quantized output value represents.
|
Output<Float> |
QuantizedRelu.minActivations()
The float value that the lowest quantized value represents.
|
Output<Float> |
QuantizedRelu6.minActivations()
The float value that the lowest quantized value represents.
|
Output<Float> |
QuantizedReluX.minActivations()
The float value that the lowest quantized value represents.
|
Output<Float> |
QuantizedBiasAdd.minOut()
The float value that the lowest quantized output value represents.
|
Output<Float> |
QuantizedAvgPool.minOutput()
The float value that the lowest quantized output value represents.
|
Output<Float> |
QuantizedConv2d.minOutput()
The float value that the lowest quantized output value represents.
|
Output<Float> |
QuantizedMaxPool.minOutput()
The float value that the lowest quantized output value represents.
|
Output<U> |
FusedBatchNormGrad.offsetBackprop()
A 1D Tensor for the gradient with respect to offset.
|
Output<T> |
AvgPool.output()
The average pooled output tensor.
|
Output<T> |
AvgPool3d.output()
The average pooled output tensor.
|
Output<T> |
AvgPool3dGrad.output()
The backprop for input.
|
Output<T> |
AvgPoolGrad.output()
4-D.
|
Output<T> |
BiasAdd.output()
Broadcasted sum of `value` and `bias`.
|
Output<T> |
BiasAddGrad.output()
1-D with size the feature dimension of `out_backprop`.
|
Output<T> |
Conv2d.output()
A 4-D tensor.
|
Output<T> |
Conv2dBackpropFilter.output()
4-D with shape
`[filter_height, filter_width, in_channels, out_channels]`.
|
Output<T> |
Conv2dBackpropInput.output()
4-D with shape `[batch, in_height, in_width, in_channels]`.
|
Output<T> |
Conv3d.output() |
Output<T> |
Conv3dBackpropFilter.output() |
Output<U> |
Conv3dBackpropInput.output() |
Output<T> |
CudnnRnn.output() |
Output<T> |
DepthToSpace.output() |
Output<T> |
DepthwiseConv2dNative.output() |
Output<T> |
DepthwiseConv2dNativeBackpropFilter.output()
4-D with shape
`[filter_height, filter_width, in_channels, out_channels]`.
|
Output<T> |
DepthwiseConv2dNativeBackpropInput.output()
4-D with shape according to `data_format`.
|
Output<T> |
Dilation2d.output()
4-D with shape `[batch, out_height, out_width, depth]`.
|
Output<T> |
FractionalAvgPool.output()
output tensor after fractional avg pooling.
|
Output<T> |
FractionalAvgPoolGrad.output()
4-D.
|
Output<T> |
FractionalMaxPool.output()
output tensor after fractional max pooling.
|
Output<T> |
FractionalMaxPoolGrad.output()
4-D.
|
Output<T> |
FusedPadConv2d.output() |
Output<T> |
FusedResizeAndPadConv2d.output() |
Output<T> |
L2Loss.output()
0-D.
|
Output<T> |
LocalResponseNormalization.output() |
Output<T> |
LocalResponseNormalizationGrad.output()
The gradients for LRN.
|
Output<T> |
MaxPool.output()
The max pooled output tensor.
|
Output<T> |
MaxPool3d.output()
The max pooled output tensor.
|
Output<U> |
MaxPool3dGrad.output() |
Output<T> |
MaxPool3dGradGrad.output()
Gradients of gradients w.r.t. the input to `max_pool`.
|
Output<T> |
MaxPoolGrad.output()
Gradients w.r.t. the input to `max_pool`.
|
Output<T> |
MaxPoolGradGrad.output()
Gradients of gradients w.r.t. the input to `max_pool`.
|
Output<T> |
MaxPoolGradGradWithArgmax.output()
Gradients of gradients w.r.t. the input of `max_pool`.
|
Output<T> |
MaxPoolGradWithArgmax.output()
Gradients w.r.t. the input of `max_pool`.
|
Output<T> |
MaxPoolWithArgmax.output()
The max pooled output tensor.
|
Output<T> |
QuantizedAvgPool.output() |
Output<V> |
QuantizedBiasAdd.output() |
Output<V> |
QuantizedConv2d.output() |
Output<T> |
QuantizedMaxPool.output() |
Output<T> |
SpaceToBatch.output() |
Output<T> |
SpaceToDepth.output() |
Output<T> |
CudnnRnn.outputC() |
Output<T> |
CudnnRnn.outputH() |
Output<T> |
CudnnRnnCanonicalToParams.params() |
Output<T> |
CudnnRnnBackprop.paramsBackprop() |
Output<U> |
CudnnRnnParamsSize.paramsSize() |
Output<Boolean> |
InTopK.precision()
Computed precision at `k` as a `bool Tensor`.
|
Output<T> |
CudnnRnn.reserveSpace() |
Output<U> |
FusedBatchNorm.reserveSpace1()
A 1D Tensor for the computed batch mean, to be reused
in the gradient computation.
|
Output<U> |
FusedBatchNorm.reserveSpace2()
A 1D Tensor for the computed batch variance (inverted variance
in the cuDNN case), to be reused in the gradient computation.
|
Output<U> |
FusedBatchNormGrad.reserveSpace3()
Unused placeholder to match the mean input in FusedBatchNorm.
|
Output<U> |
FusedBatchNormGrad.reserveSpace4()
Unused placeholder to match the variance input
in FusedBatchNorm.
|
Output<T> |
BatchNormWithGlobalNormalization.result() |
Output<U> |
QuantizedBatchNormWithGlobalNormalization.result() |
Output<Float> |
QuantizedBatchNormWithGlobalNormalization.resultMax() |
Output<Float> |
QuantizedBatchNormWithGlobalNormalization.resultMin() |
Output<Long> |
FractionalAvgPool.rowPoolingSequence()
row pooling sequence, needed to calculate gradient.
|
Output<Long> |
FractionalMaxPool.rowPoolingSequence()
row pooling sequence, needed to calculate gradient.
|
Output<Long> |
FixedUnigramCandidateSampler.sampledCandidates()
A vector of length num_sampled, in which each element is
the ID of a sampled candidate.
|
Output<Long> |
LearnedUnigramCandidateSampler.sampledCandidates()
A vector of length num_sampled, in which each element is
the ID of a sampled candidate.
|
Output<Float> |
FixedUnigramCandidateSampler.sampledExpectedCount()
A vector of length num_sampled, for each sampled
candidate representing the number of times the candidate is expected
to occur in a batch of sampled candidates.
|
Output<Float> |
LearnedUnigramCandidateSampler.sampledExpectedCount()
A vector of length num_sampled, for each sampled
candidate representing the number of times the candidate is expected
to occur in a batch of sampled candidates.
|
Output<U> |
FusedBatchNormGrad.scaleBackprop()
A 1D Tensor for the gradient with respect to scale.
|
Output<T> |
Softmax.softmax()
Same shape as `logits`.
|
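The description "Same shape as `logits`" holds because softmax is applied independently along the last dimension, mapping each slice to a probability vector of the same length. A numerically stable plain-Java sketch for one 1-D slice (illustrative only; `SoftmaxSketch` is a made-up name, not part of the TensorFlow API):

```java
// Numerically stable softmax over a single 1-D slice of logits.
// The result has the same shape (length) as the input slice.
public class SoftmaxSketch {
    static double[] softmax(double[] logits) {
        // Subtract the max before exponentiating to avoid overflow.
        double max = Double.NEGATIVE_INFINITY;
        for (double v : logits) max = Math.max(max, v);
        double[] out = new double[logits.length];
        double sum = 0;
        for (int i = 0; i < logits.length; i++) {
            out[i] = Math.exp(logits[i] - max);
            sum += out[i];
        }
        for (int i = 0; i < out.length; i++) out[i] /= sum;
        return out;
    }

    public static void main(String[] args) {
        double[] p = softmax(new double[]{1.0, 2.0, 3.0});
        System.out.println(p.length); // 3, same shape as logits
    }
}
```

`LogSoftmax.logsoftmax()` above carries the same shape guarantee, computing `log(softmax(logits))` in one stable step.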
Output<Float> |
FixedUnigramCandidateSampler.trueExpectedCount()
A batch_size * num_true matrix, representing
the number of times each candidate is expected to occur in a batch
of sampled candidates.
|
Output<Float> |
LearnedUnigramCandidateSampler.trueExpectedCount()
A batch_size * num_true matrix, representing
the number of times each candidate is expected to occur in a batch
of sampled candidates.
|
Output<T> |
NthElement.values()
The `n`-th order statistic along each last dimensional slice.
|
Output<T> |
TopK.values()
The `k` largest elements along each last dimensional slice.
|
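TopK.values() above yields the `k` largest elements along each last-dimensional slice, and NthElement.values() the `n`-th order statistic. A minimal plain-Java sketch of top-k selection for a single slice (sorting a copy for clarity; a real implementation would use a partial sort or heap):

```java
import java.util.Arrays;

public final class TopKSketch {
    // Returns the k largest elements of one slice, in descending order.
    public static double[] topK(double[] slice, int k) {
        double[] sorted = slice.clone();
        Arrays.sort(sorted);                        // ascending order
        double[] out = new double[k];
        for (int i = 0; i < k; i++) {
            out[i] = sorted[sorted.length - 1 - i]; // take from the top
        }
        return out;
    }
}
```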
Output<Float> |
ComputeAccidentalHits.weights()
A vector of the same length as indices and ids, in which each element
is -FLOAT_MAX.
|
Output<T> |
FusedBatchNormGrad.xBackprop()
A 4D Tensor for the gradient with respect to x.
|
Output<T> |
DataFormatDimMap.y()
A Tensor with each element as a dimension index in destination data format.
|
Output<T> |
DataFormatVecPermute.y()
Vector of size 4 or Tensor of shape (4, 2) in destination data format.
|
Output<T> |
FusedBatchNorm.y()
A 4D Tensor for output data.
|
Output<T> |
QuantizedInstanceNorm.y()
A 4D Tensor.
|
Output<Float> |
QuantizedInstanceNorm.yMax()
The value represented by the highest quantized output.
|
Output<Float> |
QuantizedInstanceNorm.yMin()
The value represented by the lowest quantized output.
|
Output<T> |
InvGrad.z() |
Modifier and Type | Method and Description |
---|---|
List<Output<T>> |
CudnnRnnParamsToCanonical.biases() |
List<Output<Long>> |
CtcBeamSearchDecoder.decodedIndices()
A list (length: top_paths) of indices matrices.
|
List<Output<Long>> |
CtcBeamSearchDecoder.decodedShape()
A list (length: top_paths) of shape vectors.
|
List<Output<Long>> |
CtcBeamSearchDecoder.decodedValues()
A list (length: top_paths) of values vectors.
|
List<Output<T>> |
CudnnRnnParamsToCanonical.weights() |
Modifier and Type | Method and Description |
---|---|
Output<Float> |
Dequantize.asOutput() |
Output<Float> |
FakeQuantWithMinMaxArgs.asOutput() |
Output<Float> |
FakeQuantWithMinMaxArgsGradient.asOutput() |
Output<Float> |
FakeQuantWithMinMaxVars.asOutput() |
Output<Float> |
FakeQuantWithMinMaxVarsPerChannel.asOutput() |
Output<T> |
QuantizeAndDequantize.asOutput() |
Output<Float> |
FakeQuantWithMinMaxArgsGradient.backprops()
Backpropagated gradients below the FakeQuantWithMinMaxArgs operation:
`gradients * (inputs >= min && inputs <= max)`.
|
Output<Float> |
FakeQuantWithMinMaxVarsGradient.backpropsWrtInput()
Backpropagated gradients w.r.t. inputs.
|
Output<Float> |
FakeQuantWithMinMaxVarsPerChannelGradient.backpropsWrtInput()
Backpropagated gradients w.r.t. inputs.
|
Output<Float> |
FakeQuantWithMinMaxVarsGradient.backpropWrtMax()
Backpropagated gradients w.r.t. the max parameter.
|
Output<Float> |
FakeQuantWithMinMaxVarsPerChannelGradient.backpropWrtMax()
Backpropagated gradients w.r.t. the max parameter.
|
Output<Float> |
FakeQuantWithMinMaxVarsGradient.backpropWrtMin()
Backpropagated gradients w.r.t. the min parameter.
|
Output<Float> |
FakeQuantWithMinMaxVarsPerChannelGradient.backpropWrtMin()
Backpropagated gradients w.r.t. the min parameter.
|
Output<Float> |
Dequantize.output() |
Output<T> |
Quantize.output()
The quantized data produced from the float input.
|
Output<T> |
QuantizeAndDequantize.output() |
Output<U> |
QuantizeDownAndShrinkRange.output() |
Output<U> |
Requantize.output() |
Output<Float> |
Quantize.outputMax()
The actual maximum scalar value used for the output.
|
Output<Float> |
QuantizeDownAndShrinkRange.outputMax()
The float value that the maximum quantized output value represents.
|
Output<Float> |
RequantizationRange.outputMax()
The computed max output.
|
Output<Float> |
Requantize.outputMax()
The requested_output_max value is copied into this output.
|
Output<Float> |
Quantize.outputMin()
The actual minimum scalar value used for the output.
|
Output<Float> |
QuantizeDownAndShrinkRange.outputMin()
The float value that the minimum quantized output value represents.
|
Output<Float> |
RequantizationRange.outputMin()
The computed min output.
|
Output<Float> |
Requantize.outputMin()
The requested_output_min value is copied into this output.
|
Output<Float> |
FakeQuantWithMinMaxArgs.outputs() |
Output<Float> |
FakeQuantWithMinMaxVars.outputs() |
Output<Float> |
FakeQuantWithMinMaxVarsPerChannel.outputs() |
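The FakeQuantWithMinMaxArgsGradient.backprops() entry above states the gradient rule `gradients * (inputs >= min && inputs <= max)`: gradients pass through only where the input fell inside the quantization range, and are zeroed where the fake-quant clamped. A plain-Java sketch of that rule (no TensorFlow dependency):

```java
public final class FakeQuantGradSketch {
    // Passes each gradient through iff the matching input lies in [min, max].
    public static double[] backprops(double[] gradients, double[] inputs,
                                     double min, double max) {
        double[] out = new double[gradients.length];
        for (int i = 0; i < gradients.length; i++) {
            boolean inRange = inputs[i] >= min && inputs[i] <= max;
            out[i] = inRange ? gradients[i] : 0.0; // clamped inputs get zero gradient
        }
        return out;
    }
}
```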
Modifier and Type | Method and Description |
---|---|
Output<U> |
Multinomial.asOutput() |
Output<U> |
ParameterizedTruncatedNormal.asOutput() |
Output<U> |
RandomGamma.asOutput() |
Output<T> |
RandomGammaGrad.asOutput() |
Output<V> |
RandomPoisson.asOutput() |
Output<T> |
RandomShuffle.asOutput() |
Output<U> |
RandomStandardNormal.asOutput() |
Output<U> |
RandomUniform.asOutput() |
Output<U> |
RandomUniformInt.asOutput() |
Output<String> |
RecordInput.asOutput() |
Output<V> |
StatelessMultinomial.asOutput() |
Output<V> |
StatelessRandomNormal.asOutput() |
Output<V> |
StatelessRandomUniform.asOutput() |
Output<V> |
StatelessRandomUniformInt.asOutput() |
Output<V> |
StatelessTruncatedNormal.asOutput() |
Output<U> |
TruncatedNormal.asOutput() |
Output<U> |
Multinomial.output()
2-D Tensor with shape `[batch_size, num_samples]`.
|
Output<U> |
ParameterizedTruncatedNormal.output()
A matrix of shape num_batches x samples_per_batch, filled with random
truncated normal values using the parameters for each row.
|
Output<U> |
RandomGamma.output()
A tensor with shape `shape + shape(alpha)`.
|
Output<T> |
RandomGammaGrad.output() |
Output<V> |
RandomPoisson.output()
A tensor with shape `shape + shape(rate)`.
|
Output<T> |
RandomShuffle.output()
A tensor of same shape and type as `value`, shuffled along its first
dimension.
|
Output<U> |
RandomStandardNormal.output()
A tensor of the specified shape filled with random normal values.
|
Output<U> |
RandomUniform.output()
A tensor of the specified shape filled with uniform random values.
|
Output<U> |
RandomUniformInt.output()
A tensor of the specified shape filled with uniform random integers.
|
Output<V> |
StatelessMultinomial.output()
2-D Tensor with shape `[batch_size, num_samples]`.
|
Output<V> |
StatelessRandomNormal.output()
Random values with specified shape.
|
Output<V> |
StatelessRandomUniform.output()
Random values with specified shape.
|
Output<V> |
StatelessRandomUniformInt.output()
Random values with specified shape.
|
Output<V> |
StatelessTruncatedNormal.output()
Random values with specified shape.
|
Output<U> |
TruncatedNormal.output()
A tensor of the specified shape filled with random truncated normal
values.
|
Output<String> |
RecordInput.records()
A tensor of shape [batch_size].
|
Output<Long> |
AllCandidateSampler.sampledCandidates()
A vector of length num_sampled, in which each element is
the ID of a sampled candidate.
|
Output<Long> |
LogUniformCandidateSampler.sampledCandidates()
A vector of length num_sampled, in which each element is
the ID of a sampled candidate.
|
Output<Long> |
UniformCandidateSampler.sampledCandidates()
A vector of length num_sampled, in which each element is
the ID of a sampled candidate.
|
Output<Float> |
AllCandidateSampler.sampledExpectedCount()
A vector of length num_sampled, for each sampled
candidate representing the number of times the candidate is expected
to occur in a batch of sampled candidates.
|
Output<Float> |
LogUniformCandidateSampler.sampledExpectedCount()
A vector of length num_sampled, for each sampled
candidate representing the number of times the candidate is expected
to occur in a batch of sampled candidates.
|
Output<Float> |
UniformCandidateSampler.sampledExpectedCount()
A vector of length num_sampled, for each sampled
candidate representing the number of times the candidate is expected
to occur in a batch of sampled candidates.
|
Output<Float> |
AllCandidateSampler.trueExpectedCount()
A batch_size * num_true matrix, representing
the number of times each candidate is expected to occur in a batch
of sampled candidates.
|
Output<Float> |
LogUniformCandidateSampler.trueExpectedCount()
A batch_size * num_true matrix, representing
the number of times each candidate is expected to occur in a batch
of sampled candidates.
|
Output<Float> |
UniformCandidateSampler.trueExpectedCount()
A batch_size * num_true matrix, representing
the number of times each candidate is expected to occur in a batch
of sampled candidates.
|
Modifier and Type | Method and Description |
---|---|
Output<Object> |
BatchFft.asOutput() |
Output<Object> |
BatchFft2d.asOutput() |
Output<Object> |
BatchFft3d.asOutput() |
Output<Object> |
BatchIfft.asOutput() |
Output<Object> |
BatchIfft2d.asOutput() |
Output<Object> |
BatchIfft3d.asOutput() |
Output<T> |
Fft.asOutput() |
Output<T> |
Fft2d.asOutput() |
Output<T> |
Fft3d.asOutput() |
Output<T> |
Ifft.asOutput() |
Output<T> |
Ifft2d.asOutput() |
Output<T> |
Ifft3d.asOutput() |
Output<Float> |
Irfft.asOutput() |
Output<Float> |
Irfft2d.asOutput() |
Output<Float> |
Irfft3d.asOutput() |
Output<Object> |
Rfft.asOutput() |
Output<Object> |
Rfft2d.asOutput() |
Output<Object> |
Rfft3d.asOutput() |
Output<?> |
BatchFft.output() |
Output<?> |
BatchFft2d.output() |
Output<?> |
BatchFft3d.output() |
Output<?> |
BatchIfft.output() |
Output<?> |
BatchIfft2d.output() |
Output<?> |
BatchIfft3d.output() |
Output<T> |
Fft.output()
A complex tensor of the same shape as `input`.
|
Output<T> |
Fft2d.output()
A complex tensor of the same shape as `input`.
|
Output<T> |
Fft3d.output()
A complex64 tensor of the same shape as `input`.
|
Output<T> |
Ifft.output()
A complex tensor of the same shape as `input`.
|
Output<T> |
Ifft2d.output()
A complex tensor of the same shape as `input`.
|
Output<T> |
Ifft3d.output()
A complex64 tensor of the same shape as `input`.
|
Output<Float> |
Irfft.output()
A float32 tensor of the same rank as `input`.
|
Output<Float> |
Irfft2d.output()
A float32 tensor of the same rank as `input`.
|
Output<Float> |
Irfft3d.output()
A float32 tensor of the same rank as `input`.
|
Output<?> |
Rfft.output()
A complex64 tensor of the same rank as `input`.
|
Output<?> |
Rfft2d.output()
A complex64 tensor of the same rank as `input`.
|
Output<?> |
Rfft3d.output()
A complex64 tensor of the same rank as `input`.
|
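Fft.output() above returns a complex tensor of the same shape as `input`, while Rfft.output() keeps only the non-negative-frequency half. A naive O(n^2) discrete Fourier transform in plain Java illustrates the shape-preserving transform; real and imaginary parts are carried in separate arrays, and a library implementation would of course use a fast FFT algorithm instead:

```java
public final class DftSketch {
    // Naive DFT of a real signal; returns {re, im}, each the same length as x.
    public static double[][] dft(double[] x) {
        int n = x.length;
        double[] re = new double[n];
        double[] im = new double[n];
        for (int k = 0; k < n; k++) {
            for (int t = 0; t < n; t++) {
                double angle = -2.0 * Math.PI * k * t / n;
                re[k] += x[t] * Math.cos(angle); // real part of x[t] * e^{-2πi kt/n}
                im[k] += x[t] * Math.sin(angle); // imaginary part
            }
        }
        return new double[][] {re, im};
    }
}
```

For a constant signal all energy lands in the zero-frequency bin, which makes a convenient sanity check.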
Modifier and Type | Method and Description |
---|---|
Output<Long> |
AddManySparseToTensorsMap.asOutput() |
Output<Long> |
AddSparseToTensorsMap.asOutput() |
Output<String> |
SparseConditionalAccumulator.asOutput() |
Output<T> |
SparseDenseCwiseAdd.asOutput() |
Output<T> |
SparseDenseCwiseDiv.asOutput() |
Output<T> |
SparseDenseCwiseMul.asOutput() |
Output<Float> |
SparseMatMul.asOutput() |
Output<T> |
SparseReduceMax.asOutput() |
Output<T> |
SparseReduceSum.asOutput() |
Output<T> |
SparseSegmentMean.asOutput() |
Output<T> |
SparseSegmentMeanGrad.asOutput() |
Output<T> |
SparseSegmentMeanWithNumSegments.asOutput() |
Output<T> |
SparseSegmentSqrtN.asOutput() |
Output<T> |
SparseSegmentSqrtNGrad.asOutput() |
Output<T> |
SparseSegmentSqrtNWithNumSegments.asOutput() |
Output<T> |
SparseSegmentSum.asOutput() |
Output<T> |
SparseSegmentSumWithNumSegments.asOutput() |
Output<T> |
SparseSliceGrad.asOutput() |
Output<T> |
SparseSoftmax.asOutput() |
Output<U> |
SparseTensorDenseAdd.asOutput() |
Output<U> |
SparseTensorDenseMatMul.asOutput() |
Output<U> |
SparseToDense.asOutput() |
Output<T> |
SparseAddGrad.aValGrad()
1-D with shape `[nnz(A)]`.
|
Output<T> |
SparseAddGrad.bValGrad()
1-D with shape `[nnz(B)]`.
|
Output<T> |
SparseFillEmptyRowsGrad.dDefaultValue()
0-D.
|
Output<U> |
SparseToDense.dense()
Dense output tensor of shape `output_shape`.
|
Output<T> |
SparseFillEmptyRowsGrad.dValues()
1-D.
|
Output<Boolean> |
SparseFillEmptyRows.emptyRowIndicator()
1-D.
|
Output<String> |
SparseConditionalAccumulator.handle()
The handle to the accumulator.
|
Output<Long> |
SparseAccumulatorTakeGradient.indices()
Indices of the average of the accumulated sparse gradients.
|
Output<T> |
SparseDenseCwiseAdd.output()
1-D.
|
Output<T> |
SparseDenseCwiseDiv.output()
1-D.
|
Output<T> |
SparseDenseCwiseMul.output()
1-D.
|
Output<T> |
SparseReduceMax.output()
`R-K`-D.
|
Output<T> |
SparseReduceSum.output()
`R-K`-D.
|
Output<T> |
SparseSegmentMean.output()
Has same shape as data, except for dimension 0 which
has size `k`, the number of segments.
|
Output<T> |
SparseSegmentMeanGrad.output() |
Output<T> |
SparseSegmentMeanWithNumSegments.output()
Has same shape as data, except for dimension 0 which has size
`num_segments`.
|
Output<T> |
SparseSegmentSqrtN.output()
Has same shape as data, except for dimension 0 which
has size `k`, the number of segments.
|
Output<T> |
SparseSegmentSqrtNGrad.output() |
Output<T> |
SparseSegmentSqrtNWithNumSegments.output()
Has same shape as data, except for dimension 0 which
has size `num_segments`.
|
Output<T> |
SparseSegmentSum.output()
Has same shape as data, except for dimension 0 which
has size `k`, the number of segments.
|
Output<T> |
SparseSegmentSumWithNumSegments.output()
Has same shape as data, except for dimension 0 which
has size `num_segments`.
|
Output<T> |
SparseSoftmax.output()
1-D.
|
Output<U> |
SparseTensorDenseAdd.output() |
Output<Long> |
SparseConcat.outputIndices()
2-D.
|
Output<Long> |
SparseCross.outputIndices()
2-D.
|
Output<Long> |
SparseFillEmptyRows.outputIndices() |
Output<Long> |
SparseReduceMaxSparse.outputIndices() |
Output<Long> |
SparseReduceSumSparse.outputIndices() |
Output<Long> |
SparseReorder.outputIndices()
2-D.
|
Output<Long> |
SparseReshape.outputIndices()
2-D.
|
Output<Long> |
SparseSlice.outputIndices() |
Output<Long> |
SparseSparseMaximum.outputIndices()
2-D.
|
Output<Long> |
SparseSparseMinimum.outputIndices()
2-D.
|
Output<Long> |
SparseConcat.outputShape()
1-D.
|
Output<Long> |
SparseCross.outputShape()
1-D.
|
Output<Long> |
SparseReduceMaxSparse.outputShape() |
Output<Long> |
SparseReduceSumSparse.outputShape() |
Output<Long> |
SparseReshape.outputShape()
1-D.
|
Output<Long> |
SparseSlice.outputShape()
A list of 1-D tensors representing the shape of the output sparse
tensors.
|
Output<T> |
SparseConcat.outputValues()
1-D.
|
Output<T> |
SparseCross.outputValues()
1-D.
|
Output<T> |
SparseFillEmptyRows.outputValues()
1-D.
|
Output<T> |
SparseReduceMaxSparse.outputValues() |
Output<T> |
SparseReduceSumSparse.outputValues() |
Output<T> |
SparseReorder.outputValues()
1-D.
|
Output<T> |
SparseSlice.outputValues()
A list of 1-D tensors representing the values of the output sparse
tensors.
|
Output<T> |
SparseSparseMaximum.outputValues()
1-D.
|
Output<T> |
SparseSparseMinimum.outputValues()
1-D.
|
Output<Float> |
SparseMatMul.product() |
Output<U> |
SparseTensorDenseMatMul.product() |
Output<Long> |
DenseToDenseSetOperation.resultIndices()
2D indices of a `SparseTensor`.
|
Output<Long> |
DenseToSparseSetOperation.resultIndices()
2D indices of a `SparseTensor`.
|
Output<Long> |
SparseToSparseSetOperation.resultIndices()
2D indices of a `SparseTensor`.
|
Output<Long> |
DenseToDenseSetOperation.resultShape()
1D `Tensor` shape of a `SparseTensor`.
|
Output<Long> |
DenseToSparseSetOperation.resultShape()
1D `Tensor` shape of a `SparseTensor`.
|
Output<Long> |
SparseToSparseSetOperation.resultShape()
1D `Tensor` shape of a `SparseTensor`.
|
Output<T> |
DenseToDenseSetOperation.resultValues()
1D values of a `SparseTensor`.
|
Output<T> |
DenseToSparseSetOperation.resultValues()
1D values of a `SparseTensor`.
|
Output<T> |
SparseToSparseSetOperation.resultValues()
1D values of a `SparseTensor`.
|
Output<Long> |
SparseFillEmptyRows.reverseIndexMap()
1-D.
|
Output<Long> |
SparseAccumulatorTakeGradient.shape()
Shape of the average of the accumulated sparse gradients.
|
Output<Long> |
AddSparseToTensorsMap.sparseHandle()
0-D.
|
Output<Long> |
AddManySparseToTensorsMap.sparseHandles()
1-D.
|
Output<Long> |
DeserializeSparse.sparseIndices() |
Output<Long> |
TakeManySparseFromTensorsMap.sparseIndices()
2-D.
|
Output<Long> |
DeserializeSparse.sparseShape() |
Output<Long> |
TakeManySparseFromTensorsMap.sparseShape()
1-D.
|
Output<U> |
DeserializeSparse.sparseValues() |
Output<T> |
TakeManySparseFromTensorsMap.sparseValues()
1-D.
|
Output<Long> |
SparseAdd.sumIndices() |
Output<Long> |
SparseAdd.sumShape() |
Output<T> |
SparseAdd.sumValues() |
Output<T> |
SparseSliceGrad.valGrad()
1-D.
|
Output<T> |
SparseAccumulatorTakeGradient.values()
Values of the average of the accumulated sparse gradients.
|
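SparseToDense.dense() above materializes a sparse tensor into a dense tensor of shape `output_shape`: positions listed in the sparse indices receive the matching values, and every other position gets a default value. A plain-Java sketch of the 2-D case (class and parameter names here are illustrative, not the library API):

```java
public final class SparseToDenseSketch {
    // indices: nnz x 2 positions; values: nnz entries; shape: {rows, cols}.
    public static double[][] toDense(long[][] indices, double[] values,
                                     long[] shape, double defaultValue) {
        double[][] dense = new double[(int) shape[0]][(int) shape[1]];
        for (double[] row : dense) {
            java.util.Arrays.fill(row, defaultValue); // unlisted positions get the default
        }
        for (int i = 0; i < values.length; i++) {
            dense[(int) indices[i][0]][(int) indices[i][1]] = values[i];
        }
        return dense;
    }
}
```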
Modifier and Type | Method and Description |
---|---|
List<Output<Long>> |
SparseSplit.outputIndices() |
List<Output<Long>> |
SparseSplit.outputShape()
A list of 1-D tensors representing the shape of the output sparse
tensors.
|
List<Output<T>> |
SparseSplit.outputValues()
A list of 1-D tensors representing the values of the output sparse
tensors.
|
Modifier and Type | Method and Description |
---|---|
Output<String> | Join.asOutput() |
Output<String> | ReduceJoin.asOutput() |
Output<Boolean> | RegexFullMatch.asOutput() |
Output<String> | RegexReplace.asOutput() |
Output<Boolean> | StaticRegexFullMatch.asOutput() |
Output<String> | StaticRegexReplace.asOutput() |
Output<String> | StringFormat.asOutput() |
Output<Integer> | StringLength.asOutput() |
Output<String> | Strip.asOutput() |
Output<String> | Substr.asOutput() |
Output<Long> | ToHashBucket.asOutput() |
Output<Long> | ToHashBucketFast.asOutput() |
Output<Long> | ToHashBucketStrong.asOutput() |
Output<T> | ToNumber.asOutput() |
Output<Integer> | UnicodeScript.asOutput() |
Output<String> | UnicodeTranscode.asOutput() |
Output<Long> | UnicodeDecodeWithOffsets.charToByteStarts() A 1D int64 Tensor containing the byte index in the input string where each character in `char_values` starts. |
Output<Integer> | UnicodeDecodeWithOffsets.charValues() A 1D int32 Tensor containing the decoded codepoints. |
Output<Long> | StringSplit.indices() |
Output<String> | Join.output() |
Output<String> | ReduceJoin.output() Has shape equal to that of the input with reduced dimensions removed or set to `1` depending on `keep_dims`. |
Output<Boolean> | RegexFullMatch.output() A bool tensor with the same shape as `input`. |
Output<String> | RegexReplace.output() The text after applying pattern match and rewrite substitution. |
Output<Boolean> | StaticRegexFullMatch.output() A bool tensor with the same shape as `input`. |
Output<String> | StaticRegexReplace.output() The text after applying pattern and rewrite. |
Output<String> | StringFormat.output() The resulting string scalar. |
Output<Integer> | StringLength.output() Integer tensor that has the same shape as `input`. |
Output<String> | Strip.output() A string `Tensor` of the same shape as the input. |
Output<String> | Substr.output() Tensor of substrings. |
Output<Long> | ToHashBucket.output() A Tensor of the same shape as the input `string_tensor`. |
Output<Long> | ToHashBucketFast.output() A Tensor of the same shape as the input `string_tensor`. |
Output<Long> | ToHashBucketStrong.output() A Tensor of the same shape as the input `string_tensor`. |
Output<T> | ToNumber.output() A Tensor of the same shape as the input `string_tensor`. |
Output<Integer> | UnicodeScript.output() A Tensor of int32 script codes corresponding to each input code point. |
Output<String> | UnicodeTranscode.output() A string tensor containing unicode text encoded using `output_encoding`. |
Output<T> | UnicodeDecodeWithOffsets.rowSplits() A 1D int32 tensor containing the row splits. |
Output<Long> | StringSplit.shape() |
Output<String> | StringSplit.values() |
Modifier and Type | Method and Description |
---|---|
Output<String> | AudioSummary.asOutput() |
Output<String> | HistogramSummary.asOutput() |
Output<String> | ImageSummary.asOutput() |
Output<String> | MergeSummary.asOutput() |
Output<String> | ScalarSummary.asOutput() |
Output<String> | StatsAggregatorSummary.asOutput() |
Output<Object> | SummaryWriter.asOutput() |
Output<String> | TensorSummary.asOutput() |
Output<String> | AudioSummary.summary() Scalar. |
Output<String> | HistogramSummary.summary() Scalar. |
Output<String> | ImageSummary.summary() Scalar. |
Output<String> | MergeSummary.summary() Scalar. |
Output<String> | ScalarSummary.summary() Scalar. |
Output<String> | StatsAggregatorSummary.summary() |
Output<String> | TensorSummary.summary() |
Output<?> | SummaryWriter.writer() |
Modifier and Type | Method and Description |
---|---|
Output<Integer> | AccumulatorNumAccumulated.asOutput() |
Output<T> | AccumulatorTakeGradient.asOutput() |
Output<T> | ApplyAdadelta.asOutput() |
Output<T> | ApplyAdagrad.asOutput() |
Output<T> | ApplyAdagradDa.asOutput() |
Output<T> | ApplyAdam.asOutput() |
Output<T> | ApplyAdaMax.asOutput() |
Output<T> | ApplyAddSign.asOutput() |
Output<T> | ApplyCenteredRmsProp.asOutput() |
Output<T> | ApplyFtrl.asOutput() |
Output<T> | ApplyGradientDescent.asOutput() |
Output<T> | ApplyMomentum.asOutput() |
Output<T> | ApplyPowerSign.asOutput() |
Output<T> | ApplyProximalAdagrad.asOutput() |
Output<T> | ApplyProximalGradientDescent.asOutput() |
Output<T> | ApplyRmsProp.asOutput() |
Output<String> | ConditionalAccumulator.asOutput() |
Output<T> | PreventGradient.asOutput() |
Output<T> | RestoreSlice.asOutput() |
Output<Long> | SdcaFprint.asOutput() |
Output<T> | SparseApplyAdadelta.asOutput() |
Output<T> | SparseApplyAdagrad.asOutput() |
Output<T> | SparseApplyAdagradDa.asOutput() |
Output<T> | SparseApplyCenteredRmsProp.asOutput() |
Output<T> | SparseApplyFtrl.asOutput() |
Output<T> | SparseApplyMomentum.asOutput() |
Output<T> | SparseApplyProximalAdagrad.asOutput() |
Output<T> | SparseApplyProximalGradientDescent.asOutput() |
Output<T> | SparseApplyRmsProp.asOutput() |
Output<T> | TileGrad.asOutput() |
Output<T> | AccumulatorTakeGradient.average() The average of the accumulated gradients. |
Output<String> | ConditionalAccumulator.handle() The handle to the accumulator. |
Output<Integer> | AccumulatorNumAccumulated.numAccumulated() The number of gradients aggregated in the given accumulator. |
Output<Integer> | GenerateVocabRemapping.numPresent() Number of new vocab entries found in old vocab. |
Output<T> | ApplyAdadelta.out() Same as "var". |
Output<T> | ApplyAdagrad.out() Same as "var". |
Output<T> | ApplyAdagradDa.out() Same as "var". |
Output<T> | ApplyAdam.out() Same as "var". |
Output<T> | ApplyAdaMax.out() Same as "var". |
Output<T> | ApplyAddSign.out() Same as "var". |
Output<T> | ApplyCenteredRmsProp.out() Same as "var". |
Output<T> | ApplyFtrl.out() Same as "var". |
Output<T> | ApplyGradientDescent.out() Same as "var". |
Output<T> | ApplyMomentum.out() Same as "var". |
Output<T> | ApplyPowerSign.out() Same as "var". |
Output<T> | ApplyProximalAdagrad.out() Same as "var". |
Output<T> | ApplyProximalGradientDescent.out() Same as "var". |
Output<T> | ApplyRmsProp.out() Same as "var". |
Output<T> | SparseApplyAdadelta.out() Same as "var". |
Output<T> | SparseApplyAdagrad.out() Same as "var". |
Output<T> | SparseApplyAdagradDa.out() Same as "var". |
Output<T> | SparseApplyCenteredRmsProp.out() Same as "var". |
Output<T> | SparseApplyFtrl.out() Same as "var". |
Output<T> | SparseApplyMomentum.out() Same as "var". |
Output<T> | SparseApplyProximalAdagrad.out() Same as "var". |
Output<T> | SparseApplyProximalGradientDescent.out() Same as "var". |
Output<T> | SparseApplyRmsProp.out() Same as "var". |
Output<Float> | SdcaOptimizer.outExampleStateData() A list of vectors containing the updated example state data. |
Output<T> | PreventGradient.output() The same input tensor. |
Output<Long> | SdcaFprint.output() An (N, 2)-shaped matrix where N is the number of elements in the input vector. |
Output<T> | TileGrad.output() |
Output<Long> | GenerateVocabRemapping.remapping() A Tensor of length num_new_vocab where the element at index i is equal to the old ID that maps to the new ID i. |
Output<T> | RestoreSlice.tensor() The restored tensor. |
Modifier and Type | Method and Description |
---|---|
List<Output<Float>> | SdcaOptimizer.outDeltaDenseWeights() A list of vectors where the values are the delta weights associated with a dense feature group. |
List<Output<Float>> | SdcaOptimizer.outDeltaSparseWeights() A list of vectors where each value is the delta weights associated with a sparse feature group. |
List<Output<?>> | Restore.tensors() Shape {N}. |
Copyright © 2022. All rights reserved.