_SupplementBase |
|
AbstractTensor |
|
AcceleratorHooksInterface |
|
ActivityTypeSet |
|
ActivityTypeSet.Iterator |
|
Adagrad |
|
AdagradOptions |
|
AdagradParamState |
|
Adam |
|
AdamOptions |
|
AdamParamState |
|
AdamW |
|
AdamWOptions |
|
AdamWParamState |
|
AdaptiveAvgPool1dImpl |
Applies adaptive avgpool over a 1-D input.
|
AdaptiveAvgPool1dImplBase |
Base class for all (dimension-specialized) adaptive avgpool modules.
|
AdaptiveAvgPool1dImplCloneable |
|
AdaptiveAvgPool1dOptions |
Options for a D-dimensional adaptive avgpool module.
|
AdaptiveAvgPool2dImpl |
Applies adaptive avgpool over a 2-D input.
|
AdaptiveAvgPool2dImplBase |
|
AdaptiveAvgPool2dImplCloneable |
|
AdaptiveAvgPool2dOptions |
|
AdaptiveAvgPool3dImpl |
Applies adaptive avgpool over a 3-D input.
|
AdaptiveAvgPool3dImplBase |
|
AdaptiveAvgPool3dImplCloneable |
|
AdaptiveAvgPool3dOptions |
|
AdaptiveLogSoftmaxWithLossImpl |
Efficient softmax approximation, as described in "Efficient softmax
approximation for GPUs" by Edouard Grave, Armand Joulin, Moustapha Cissé,
David Grangier, and Hervé Jégou.
|
AdaptiveLogSoftmaxWithLossImplCloneable |
|
AdaptiveLogSoftmaxWithLossOptions |
Options for the AdaptiveLogSoftmaxWithLoss module.
|
AdaptiveMaxPool1dImpl |
Applies adaptive maxpool over a 1-D input.
|
AdaptiveMaxPool1dImplBase |
Base class for all (dimension-specialized) adaptive maxpool modules.
|
AdaptiveMaxPool1dImplCloneable |
|
AdaptiveMaxPool1dOptions |
Options for a D-dimensional adaptive maxpool module.
|
AdaptiveMaxPool2dImpl |
Applies adaptive maxpool over a 2-D input.
|
AdaptiveMaxPool2dImplBase |
|
AdaptiveMaxPool2dImplCloneable |
|
AdaptiveMaxPool2dOptions |
|
AdaptiveMaxPool3dImpl |
Applies adaptive maxpool over a 3-D input.
|
AdaptiveMaxPool3dImplBase |
|
AdaptiveMaxPool3dImplCloneable |
|
AdaptiveMaxPool3dOptions |
|
AliasDb |
|
AliasInfo |
Data structure to hold aliasing information for an Argument .
|
AliasInfoOptional |
|
AliasTypeSetOptional |
|
AllgatherOptions |
|
Allocator |
|
AllreduceCoalescedOptions |
|
AllreduceOptions |
|
AllToAllOptions |
|
AlphaDropoutFuncOptions |
Options for torch::nn::functional::alpha_dropout .
|
AlphaDropoutImpl |
Applies Alpha Dropout over the input.
|
AlphaDropoutImplBase |
|
AlphaDropoutImplCloneable |
|
AnomalyMetadata |
|
AnomalyMode |
|
AnyClassType |
|
AnyClassTypePtr |
|
AnyEnumType |
|
AnyEnumTypePtr |
|
AnyListType |
|
AnyListTypePtr |
|
AnyModule |
Stores a type erased Module .
|
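AnyModule usage, as a minimal sketch: it assumes the default torch::Tensor return type of forward(); the layer sizes are illustrative.

    #include <torch/torch.h>
    #include <iostream>

    int main() {
      // Wrap a concrete module in a type-erased AnyModule.
      torch::nn::AnyModule any(torch::nn::Linear(3, 4));
      // forward() defaults to returning a torch::Tensor.
      torch::Tensor out = any.forward(torch::ones({2, 3}));
      std::cout << out.sizes() << std::endl;  // [2, 4]
    }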
AnyModuleVector |
|
AnyModuleVector.Iterator |
|
AnyTupleType |
|
AnyTupleTypePtr |
|
AnyType |
|
AnyTypePtr |
|
AnyValue |
An implementation of std::any which stores
a type erased object, whose concrete value can be retrieved at runtime by
checking if the typeid() of a requested type matches the typeid() of
the object stored.
|
AOTIModelContainerRunner |
|
AOTIModelContainerRunnerCpu |
|
Apply |
|
ApproximateClockToUnixTimeConverter |
|
ApproximateClockToUnixTimeConverter.UnixAndApproximateTimePair |
|
ArchiveWriter |
|
Argument |
|
ArgumentArrayRef |
ArrayRef - Represent a constant reference to an array (0 or more elements
consecutively in memory), i.e.
|
ArgumentDef |
The templated inference code creates ArgumentDef instead of Argument ,
because that can be constructed at compile time and has a much smaller
binary size than having calls to Argument constructors in the template.
|
ArgumentDefArrayRef |
|
ArgumentInfo |
|
ArgumentSpec |
|
ArgumentSpecCreator |
|
ArgumentSpecExecutionPlanMap |
|
ArgumentSpecExecutionPlanMap.Iterator |
|
ASMoutput |
The output of a single invocation of an AdaptiveLogSoftmaxWithLoss
module's forward() method.
|
Assert |
|
Assign |
|
AssignList |
|
AssignListIterator |
|
AssignListMaybe |
|
Attribute |
|
attribute_iterator |
|
attribute_list |
|
AttributeList |
|
AttributeListIterator |
|
AttributePolicy |
|
AttributeValue |
|
AugAssign |
|
AugAssignKind |
|
AutoDispatchBelowADInplaceOrView |
|
AutoDispatchBelowAutograd |
|
AutoDispatchSkipFunctionalize |
|
AutoFwGradMode |
|
AutogradCompilerCall |
|
AutogradContext |
Context to save information during forward that can be accessed in
backward in custom autograd operations (see torch::autograd::Function
for details).
|
AutogradMetaFactory |
|
AutogradMetaFactoryRegisterer |
|
AutogradMetaInterface |
|
AutoGradMode |
|
AutogradState |
|
AutoNonVariableTypeMode |
|
AvgPool1dImpl |
Applies avgpool over a 1-D input.
|
AvgPool1dImplBase |
Base class for all (dimension-specialized) avgpool modules.
|
AvgPool1dImplCloneable |
|
AvgPool1dOptions |
Options for a D-dimensional avgpool module.
|
AvgPool2dImpl |
Applies avgpool over a 2-D input.
|
AvgPool2dImplBase |
|
AvgPool2dImplCloneable |
|
AvgPool2dOptions |
|
AvgPool3dImpl |
Applies avgpool over a 3-D input.
|
AvgPool3dImplBase |
|
AvgPool3dImplCloneable |
|
AvgPool3dOptions |
|
Await |
|
AwaitSingleElementType |
|
AwaitType |
|
BackendMeta |
This structure is intended to hold additional metadata of the specific device
backend.
|
BackendMetaPtr |
|
Backtrace |
Interface for a value that is computed on first access.
|
BarrierOptions |
|
BatchNorm1dImpl |
Applies the BatchNorm1d function.
|
BatchNorm1dImplBase |
Base class for all (dimension-specialized) batchnorm modules.
|
BatchNorm1dImplBaseBase |
Base class for all (dimension-specialized) batchnorm and instancenorm
modules.
|
BatchNorm1dImplCloneable |
|
BatchNorm2dImpl |
Applies the BatchNorm2d function.
|
BatchNorm2dImplBase |
|
BatchNorm2dImplBaseBase |
|
BatchNorm2dImplCloneable |
|
BatchNorm3dImpl |
Applies the BatchNorm3d function.
|
BatchNorm3dImplBase |
|
BatchNorm3dImplBaseBase |
|
BatchNorm3dImplCloneable |
|
BatchNormFuncOptions |
Options for torch::nn::functional::batch_norm .
|
BatchNormOptions |
Options for the BatchNorm module.
|
BatchSize |
A wrapper around a batch size value, which implements the
CustomBatchRequest interface.
|
BatchSizeOptional |
|
BatchSizeSampler |
|
BCELossImpl |
Creates a criterion that measures the Binary Cross Entropy
between the target and the output.
|
BCELossImplCloneable |
|
BCELossOptions |
Options for the BCELoss module.
|
BCEWithLogitsLossImpl |
This loss combines a Sigmoid layer and the BCELoss in one single
class.
|
BCEWithLogitsLossImplCloneable |
|
BCEWithLogitsLossOptions |
Options for the BCEWithLogitsLoss module.
|
BFloat16 |
|
BFloat16.from_bits_t |
|
BFloat16ArrayRef |
|
BilinearImpl |
Applies a bilinear transformation with optional bias.
|
BilinearImplCloneable |
|
BilinearOptions |
Options for the Bilinear module.
|
BinOp |
|
bits16 |
bits16 is an uninterpreted dtype of a tensor with 16 bits, without any
semantics defined.
|
bits1x8 |
bits1x8 is an uninterpreted dtype of a tensor with 1 bit (packed to byte
boundary), without any semantics defined.
|
bits2x4 |
bits2x4 is an uninterpreted dtype of a tensor with 2 bits (packed to byte
boundary), without any semantics defined.
|
bits4x2 |
bits4x2 is an uninterpreted dtype of a tensor with 4 bits (packed to byte
boundary), without any semantics defined.
|
bits8 |
bits8 is an uninterpreted dtype of a tensor with 8 bits, without any
semantics defined.
|
bitset |
This is a simple bitset class with sizeof(long long int) bits.
|
Blob |
Blob is a general container that hosts a typed pointer.
|
Block |
|
BlockArrayRef |
|
BlockWrap |
|
BoolArrayRef |
|
BooleanElementReference |
|
BooleanList |
|
BooleanListIterator |
|
BoolOptional |
|
BoolType |
|
BoolTypePtr |
|
BoolVector |
|
BoolVector.Iterator |
|
BoolVectorOptional |
|
Break |
|
BroadcastOptions |
|
BucketAccumulator |
|
buffer_iterator |
|
buffer_list |
|
BufferPolicy |
|
BuiltinFunction |
|
BuiltinModule |
|
ByteArrayRef |
|
ByteOptional |
|
BytePointerPair |
|
BytePointerPairOptional |
|
BytePointerVector |
|
BytePointerVector.Iterator |
|
ByteVector |
|
ByteVector.Iterator |
|
C10dLogger |
|
C10dLoggingData |
|
C10FlagParser |
|
CacheKey |
|
CacheKeyBuffer |
|
Call |
|
CapsuleType |
|
CapsuleTypePtr |
|
CastValue |
|
CELUImpl |
Applies celu over a given input.
|
CELUImplCloneable |
|
CELUOptions |
Options for the CELU module.
|
ChunkBatchDataset |
|
ChunkBatchSharedBatchDataset |
|
ChunkBatchSharedTensorBatchDataset |
|
ChunkDataReader |
Interface for chunk reader, which performs data chunking and reading of
entire chunks.
|
ChunkDataset |
A stateful dataset that supports hierarchical sampling and prefetching of
entire chunks.
|
ChunkDatasetOptions |
Options to configure a ChunkDataset .
|
ChunkMapBatchDataset |
|
ChunkMapDataset |
|
ChunkMapTensorBatchDataset |
|
ChunkMapTensorDataset |
|
ChunkRandomDataLoader |
A dataloader for stateful datasets.
|
ChunkRandomDataLoaderBase |
|
ChunkRandomTensorDataLoader |
|
ChunkRandomTensorDataLoaderBase |
|
ChunkRecordIterator |
|
ChunkSharedBatchDataset |
A dataset that wraps another dataset in a shared pointer and implements the
BatchDataset API, delegating all calls to the shared instance.
|
ChunkSharedTensorBatchDataset |
|
ChunkStatefulDataset |
A stateful dataset is a dataset that maintains some internal state, which
will be reset() at the beginning of each epoch.
|
ChunkStatefulTensorDataset |
|
ChunkTensorBatchDataset |
|
ChunkTensorDataReader |
|
ChunkTensorDataset |
|
ClassAttribute |
|
ClassDef |
|
ClassType |
|
ClassType.Property |
|
ClassTypePropertyOptional |
|
ClassValue |
|
ClosureValue |
|
Code |
|
CodeImpl |
|
CommHookInterface |
|
CompilationUnit |
|
CompiledNodeArgs |
|
CompileTimeEmptyString |
|
ComplexType |
|
ComplexTypePtr |
|
Compound |
|
ConstantPad1dImpl |
Applies ConstantPad over a 1-D input.
|
ConstantPad1dImplBase |
Base class for all (dimension-specialized) ConstantPad modules.
|
ConstantPad1dImplCloneable |
|
ConstantPad1dOptions |
Options for a D-dimensional ConstantPad module.
|
ConstantPad2dImpl |
Applies ConstantPad over a 2-D input.
|
ConstantPad2dImplBase |
|
ConstantPad2dImplCloneable |
|
ConstantPad2dOptions |
|
ConstantPad3dImpl |
Applies ConstantPad over a 3-D input.
|
ConstantPad3dImplBase |
|
ConstantPad3dImplCloneable |
|
ConstantPad3dOptions |
|
ConstantString |
|
ConstExpr |
|
Context |
|
Continue |
|
Conv1dFuncOptions |
Options for a D-dimensional convolution functional.
|
Conv1dImpl |
Applies convolution over a 1-D input.
|
Conv1dImplBase |
Base class for all (dimension-specialized) convolution modules.
|
Conv1dImplCloneable |
|
Conv1dOptions |
Options for a D-dimensional convolution module.
|
Conv1dPadding |
|
Conv2dFuncOptions |
|
Conv2dImpl |
Applies convolution over a 2-D input.
|
Conv2dImplBase |
|
Conv2dImplCloneable |
|
Conv2dOptions |
|
Conv2dPadding |
|
Conv3dFuncOptions |
|
Conv3dImpl |
Applies convolution over a 3-D input.
|
Conv3dImplBase |
|
Conv3dImplCloneable |
|
Conv3dOptions |
|
Conv3dPadding |
|
ConvPaddingMode |
|
ConvTranspose1dFuncOptions |
Options for a D-dimensional convolution transpose functional.
|
ConvTranspose1dImpl |
Applies the ConvTranspose1d function.
|
ConvTranspose1dImplBase |
Base class for all (dimension-specialized) convolution transpose modules.
|
ConvTranspose1dImplBaseBase |
|
ConvTranspose1dImplCloneable |
|
ConvTranspose1dOptions |
|
ConvTranspose2dFuncOptions |
|
ConvTranspose2dImpl |
Applies the ConvTranspose2d function.
|
ConvTranspose2dImplBase |
|
ConvTranspose2dImplBaseBase |
|
ConvTranspose2dImplCloneable |
|
ConvTranspose2dOptions |
|
ConvTranspose3dFuncOptions |
|
ConvTranspose3dImpl |
Applies the ConvTranspose3d function.
|
ConvTranspose3dImplBase |
|
ConvTranspose3dImplBaseBase |
|
ConvTranspose3dImplCloneable |
|
ConvTranspose3dOptions |
|
CosineEmbeddingLossImpl |
Creates a criterion that measures the loss given input tensors
input1 , input2 , and a Tensor label target with values 1 or
-1.
|
CosineEmbeddingLossImplCloneable |
|
CosineEmbeddingLossOptions |
Options for the CosineEmbeddingLoss module.
|
CosineSimilarityImpl |
Returns the cosine similarity between x_1 and x_2, computed along dim.
|
CosineSimilarityImplCloneable |
|
CosineSimilarityOptions |
Options for the CosineSimilarity module.
|
CppFunction |
Represents a C++ function that implements an operator.
|
CppSignature |
|
CppSignatureOptional |
|
CPUGeneratorImpl |
|
crc64_t |
|
CrossEntropyLossImpl |
Creates a criterion that computes cross entropy loss between input and
target.
|
CrossEntropyLossImplCloneable |
|
CrossEntropyLossOptions |
Options for the CrossEntropyLoss module.
|
CrossMapLRN2dImpl |
See the documentation for torch::nn::CrossMapLRN2dOptions class to learn
what constructor arguments are supported for this module.
|
CrossMapLRN2dImplCloneable |
|
CrossMapLRN2dOptions |
Options for the CrossMapLRN2d module.
|
CTCLossImpl |
The Connectionist Temporal Classification loss.
|
CTCLossImplCloneable |
|
CTCLossOptions |
Options for the CTCLoss module.
|
CUDAHooksArgs |
|
CUDAHooksInterface |
|
CustomBatchRequest |
A base class for custom index types.
|
CustomClassHolder |
|
DataLoaderOptions |
Options to configure a DataLoader .
|
DataPtr |
|
DataPtrVector |
|
DataPtrVector.Iterator |
|
DDPLogger |
|
DDPLoggingData |
|
DebugInfoBase |
|
DebugInfoGuard |
|
Decl |
|
Def |
|
DefMaybe |
|
DefVector |
|
DefVector.Iterator |
|
Delete |
|
DeserializationStorageContext |
|
DetailConv1dOptions |
Options for a D-dimensional convolution or convolution transpose module.
|
DetailConv2dOptions |
|
DetailConv3dOptions |
|
DetectAnomalyGuard |
A RAII guard that enables Anomaly Detection Mode.
|
Device |
Represents a compute device on which a tensor is located.
|
DeviceGuardImplInterface |
DeviceGuardImplInterface represents the virtual interface which provides
functionality to provide an RAII class for device and stream switching,
via DeviceGuard.
|
DeviceGuardImplRegistrar |
|
DeviceObjType |
|
DeviceObjTypePtr |
|
DeviceOptional |
|
DeviceTypeOptional |
|
DeviceTypeSet |
|
DeviceTypeSet.Iterator |
|
DictComp |
|
DictLiteral |
|
DictType |
|
Dimname |
|
DimnameArrayRef |
|
DimnameListOptional |
|
DimnameOptional |
|
DimnameVector |
|
DimnameVector.Iterator |
|
DimVector |
|
DimVectorInferExpandGeometryResult |
|
DimVectorOptional |
|
DisabledStr |
|
DisablePythonDispatcher |
|
DisableRecordFunctionGuard |
|
Dispatcher |
Top-level dispatch interface for dispatching via the dynamic dispatcher.
|
DispatchKeyExtractor |
An instance of DispatchKeyExtractor knows how to get a dispatch key given
a list of arguments for an operator call.
|
DispatchKeyOptional |
|
DispatchKeySet |
|
DispatchKeySet.iterator |
|
DistanceFunction |
|
DistributedBackend |
|
DistributedBackend.Options |
|
DistributedBackendOptional |
|
DistributedBackendOptions |
|
DistributedRandomSampler |
Select samples randomly.
|
DistributedSampler |
A Sampler that selects a subset of indices to sample from and defines a
sampling behavior.
|
DistributedSequentialSampler |
Select samples sequentially.
|
Dots |
|
DoubleArrayRef |
|
DoubleArrayRefOptional |
|
DoubleComplex |
|
DoubleComplexArrayRef |
|
DoubleComplexElementReference |
|
DoubleComplexList |
An object of this class stores a list of values of type T.
|
DoubleComplexListIterator |
|
DoubleElementReference |
|
DoubleExpandingArrayOptional |
|
DoubleList |
|
DoubleListIterator |
|
DoubleOptional |
|
DoubleVector |
|
DoubleVector.Iterator |
|
DoubleVectorOptional |
|
Dropout2dImpl |
Applies dropout over a 2-D input.
|
Dropout2dImplBase |
|
Dropout2dImplCloneable |
|
Dropout3dImpl |
Applies dropout over a 3-D input.
|
Dropout3dImplBase |
|
Dropout3dImplCloneable |
|
DropoutFuncOptions |
Options for torch::nn::functional::dropout .
|
DropoutImpl |
Applies dropout over a 1-D input.
|
DropoutImplBase |
|
DropoutImplCloneable |
|
DropoutOptions |
Options for the Dropout module.
|
DynamicLibrary |
|
DynamoTensorArg |
|
Edge |
Represents a particular input of a function.
|
EdgeVector |
|
EdgeVector.Iterator |
|
EllipsisIndexType |
|
ELUImpl |
Applies elu over a given input.
|
ELUImplCloneable |
|
ELUOptions |
Options for the ELU module.
|
EmbeddingBagFromPretrainedOptions |
Options for the EmbeddingBag::from_pretrained function.
|
EmbeddingBagFuncOptions |
Options for torch::nn::functional::embedding_bag .
|
EmbeddingBagImpl |
Computes sums or means of 'bags' of embeddings, without instantiating the
intermediate embeddings.
|
EmbeddingBagImplCloneable |
|
EmbeddingBagMode |
|
EmbeddingBagOptions |
Options for the EmbeddingBag module.
|
EmbeddingFromPretrainedOptions |
Options for the Embedding::from_pretrained function.
|
EmbeddingFuncOptions |
Options for torch::nn::functional::embedding .
|
EmbeddingImpl |
Performs a lookup in a fixed size embedding table.
|
EmbeddingImplCloneable |
|
EmbeddingOptions |
Options for the Embedding module.
|
EnabledStr |
|
EnableProfilingGuard |
|
EnumHolder |
|
EnumNameValue |
|
EnumNameValueArrayRef |
|
EnumType |
|
Example |
An Example from a dataset.
|
ExampleCollation |
A transformation of a batch to a new batch.
|
ExampleIterator |
|
ExampleOptional |
|
ExampleStack |
A Collation for Example<Tensor, Tensor> types that stacks all data
tensors into one tensor, and all target (label) tensors into one tensor.
|
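ExampleStack corresponds to the Stack<> transform in the C++ frontend (an assumption based on the description above); a rough sketch of where it fits in a data pipeline, assuming MNIST files are available under ./data:

    #include <torch/torch.h>
    #include <iostream>

    int main() {
      // Stack<> collates a vector of Example<Tensor, Tensor> into one Example
      // whose data and target are stacked along a new leading dimension.
      auto dataset = torch::data::datasets::MNIST("./data")
          .map(torch::data::transforms::Stack<>());
      auto loader = torch::data::make_data_loader(std::move(dataset), /*batch_size=*/64);
      for (auto& batch : *loader) {
        std::cout << batch.data.sizes() << " " << batch.target.sizes() << std::endl;
        break;  // batch.data: [64, 1, 28, 28], batch.target: [64]
      }
    }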
ExampleVector |
|
ExampleVector.Iterator |
|
ExampleVectorIterator |
|
ExampleVectorOptional |
|
ExceptionMessageValue |
|
ExceptionValue |
|
ExecutionPlan |
|
ExecutorExecutionModeOptional |
|
ExperimentalConfig |
|
Expr |
|
ExprList |
|
ExprListIterator |
|
ExprMaybe |
|
ExprStmt |
|
ExtraFilesMap |
|
ExtraFilesMap.Iterator |
|
FanModeType |
|
FeatureAlphaDropoutFuncOptions |
Options for torch::nn::functional::feature_alpha_dropout .
|
FeatureAlphaDropoutImpl |
See the documentation for torch::nn::FeatureAlphaDropoutOptions class to
learn what constructor arguments are supported for this module.
|
FeatureAlphaDropoutImplBase |
|
FeatureAlphaDropoutImplCloneable |
|
FileLineFunc |
|
FlattenImpl |
A placeholder for the Flatten operator.
See https://pytorch.org/docs/main/generated/torch.nn.Flatten.html to learn
about the exact behavior of this module.
|
FlattenImplCloneable |
|
FlattenOptions |
Options for the Flatten module.
|
Float8_e4m3fn |
|
Float8_e4m3fn.from_bits_t |
|
Float8_e4m3fnuz |
|
Float8_e4m3fnuz.from_bits_t |
|
Float8_e5m2 |
|
Float8_e5m2.from_bits_t |
|
Float8_e5m2fnuz |
|
Float8_e5m2fnuz.from_bits_t |
|
FloatArrayRef |
|
FloatComplex |
|
FloatComplexArrayRef |
|
FloatOptional |
|
FloatType |
|
FloatTypePtr |
|
FoldImpl |
Applies fold over a 3-D input.
|
FoldImplCloneable |
|
FoldOptions |
Options for the Fold module.
|
For |
|
ForceDispatchKeyGuard |
|
ForwardADLevel |
|
ForwardGrad |
|
FractionalMaxPool1dOptions |
Options for a D-dimensional fractional maxpool module.
|
FractionalMaxPool2dImpl |
Applies fractional maxpool over a 2-D input.
|
FractionalMaxPool2dImplCloneable |
|
FractionalMaxPool2dOptions |
|
FractionalMaxPool3dImpl |
Applies fractional maxpool over a 3-D input.
|
FractionalMaxPool3dImplCloneable |
|
FractionalMaxPool3dOptions |
|
FullDataLoaderOptions |
Like DataLoaderOptions , but without any unconfigured state.
|
Func |
|
Function |
|
FunctionalityOffsetAndMask |
|
FunctionCrossMapLRN2d |
To use custom autograd operations, implement a Function subclass with
static forward and backward functions:
forward can take as many arguments as you want and should return either a
variable list or a Variable.
|
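A minimal sketch of such a Function subclass; the MulConstant name and the scaling operation are illustrative, not part of the API. State is stashed on the AutogradContext described earlier in this index.

    #include <torch/torch.h>
    using namespace torch::autograd;

    // Multiplies the input by a constant; the constant itself gets no gradient.
    struct MulConstant : public Function<MulConstant> {
      static torch::Tensor forward(AutogradContext* ctx, torch::Tensor x, double constant) {
        ctx->saved_data["constant"] = constant;  // save non-tensor state for backward
        return x * constant;
      }
      static tensor_list backward(AutogradContext* ctx, tensor_list grad_outputs) {
        auto constant = ctx->saved_data["constant"].toDouble();
        // Return one gradient per forward input; an undefined Tensor marks
        // the non-differentiable double argument.
        return {grad_outputs[0] * constant, torch::Tensor()};
      }
    };

    // Usage: auto y = MulConstant::apply(x, 5.0); y.sum().backward();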
FunctionPostHook |
|
FunctionPostHookVector |
|
FunctionPostHookVector.Iterator |
|
FunctionPreHook |
|
FunctionPreHookVector |
|
FunctionPreHookVector.Iterator |
|
FunctionSchema |
|
FunctionSchemaOptional |
|
FunctionSchemaVector |
|
FunctionSchemaVector.Iterator |
|
FunctionType |
|
FunctionValue |
|
FunctionVector |
|
FunctionVector.Iterator |
|
FuncTorchTLSBase |
|
FusionStrategy |
|
Future |
|
FutureArrayRef |
|
FutureElementReference |
|
FutureList |
|
FutureListIterator |
|
FutureSingleElementType |
|
FutureType |
|
FutureVector |
|
FutureVector.Iterator |
|
GatheredContext |
|
GatherOptions |
|
GELUImpl |
Applies gelu over a given input.
|
GELUImplCloneable |
|
GELUOptions |
Options for the GELU module.
|
Generator |
Note [Acquire lock when using random generators]:
Generator and its derived classes are NOT thread-safe.
|
GeneratorImpl |
|
GeneratorOptional |
|
GeneratorType |
|
GeneratorTypePtr |
|
GenericDict |
An object of this class stores a map from Key to Value.
|
GenericDictEntryRef |
A reference to an entry in the Dict.
|
GenericDictIterator |
|
GenericElementReference |
|
GenericList |
|
GenericListIterator |
|
Global |
|
GlooDeviceVector |
|
GlooDeviceVector.Iterator |
|
GLUImpl |
Applies glu over a given input.
|
GLUImplCloneable |
|
GLUOptions |
Options for the GLU module.
|
GradBucket |
|
GradCallback |
|
GradMode |
|
Graph |
|
graph_node_list |
|
graph_node_list_iterator |
|
GraphAttr |
|
GraphExecutor |
|
GraphExecutorImplBase |
|
GraphExecutorState |
|
GraphFunction |
|
GraphFunctionCreator |
|
GraphOptimizerEnabledGuard |
|
GraphsAttr |
|
GraphVector |
|
GraphVector.Iterator |
|
GridSampleFuncOptions |
Options for torch::nn::functional::grid_sample .
|
GridSampleMode |
|
GridSamplePaddingMode |
|
GroupNormFuncOptions |
Options for torch::nn::functional::group_norm .
|
GroupNormImpl |
Applies Group Normalization over a mini-batch of inputs as described in
the paper "Group Normalization".
|
GroupNormImplCloneable |
|
GroupNormOptions |
Options for the GroupNorm module.
|
GRUCellImpl |
A gated recurrent unit (GRU) cell.
|
GRUCellImplBase |
|
GRUCellImplCloneable |
|
GRUCellOptions |
Options for the GRUCell module.
|
GRUImpl |
A multi-layer gated recurrent unit (GRU) module.
|
GRUImplBase |
|
GRUImplCloneable |
|
GRUOptions |
Options for the GRU module.
|
GumbelSoftmaxFuncOptions |
Options for torch::nn::functional::gumbel_softmax .
|
Half |
|
Half.from_bits_t |
|
HalfArrayRef |
|
HalfComplex |
|
HardshrinkImpl |
Applies the hard shrinkage function element-wise.
|
HardshrinkImplCloneable |
|
HardshrinkOptions |
Options for the Hardshrink module.
|
HardtanhImpl |
Applies the HardTanh function element-wise.
|
HardtanhImplCloneable |
|
HardtanhOptions |
Options for the Hardtanh module.
|
HashAliasedIValueMap |
|
HashAliasedIValueMap.Iterator |
|
HashAliasedIValues |
|
HashAliasedIValues.Iterator |
|
HashIdentityIValueMap |
|
HashIdentityIValueMap.Iterator |
|
HermeticPyObjectTLS |
|
HingeEmbeddingLossImpl |
Creates a criterion that measures the loss given an input tensor x and a
labels tensor y (containing 1 or -1).
|
HingeEmbeddingLossImplCloneable |
|
HingeEmbeddingLossOptions |
Options for the HingeEmbeddingLoss module.
|
HIPHooksArgs |
|
HIPHooksInterface |
|
HuberLossImpl |
Creates a criterion that uses a squared term if the absolute
element-wise error falls below delta and a delta-scaled L1 term otherwise.
|
HuberLossImplCloneable |
|
HuberLossOptions |
Options for the HuberLoss module.
|
Ident |
|
IdentityImpl |
A placeholder identity operator that is argument-insensitive.
|
IdentityImplCloneable |
|
IdentList |
|
IdentListIterator |
|
If |
|
IMethod |
|
IncludeDispatchKeyGuard |
|
InferenceMode |
|
InferredType |
|
InlinedCallStack |
|
InlinedCallStackOptional |
|
InputArchive |
A recursive representation of tensors that can be deserialized from a file
or stream.
|
InstanceNorm1dImpl |
Applies the InstanceNorm1d function.
|
InstanceNorm1dImplBase |
Base class for all (dimension-specialized) instance norm modules.
|
InstanceNorm1dImplBaseBase |
|
InstanceNorm1dImplCloneable |
|
InstanceNorm2dImpl |
Applies the InstanceNorm2d function.
|
InstanceNorm2dImplBase |
|
InstanceNorm2dImplBaseBase |
|
InstanceNorm2dImplCloneable |
|
InstanceNorm3dImpl |
Applies the InstanceNorm3d function.
|
InstanceNorm3dImplBase |
|
InstanceNorm3dImplBaseBase |
|
InstanceNorm3dImplCloneable |
|
InstanceNormFuncOptions |
Options for torch::nn::functional::instance_norm .
|
InstanceNormOptions |
Options for the InstanceNorm module.
|
Instruction |
|
IntArrayRef |
|
InterfaceType |
|
InterpolateFuncOptions |
Options for torch::nn::functional::interpolate .
|
InterpolateMode |
|
IntOptional |
|
IntPair |
|
IntSizedSmallVectorBase |
This is all the stuff common to all SmallVectors.
|
IntType |
|
IntTypePtr |
|
IPUHooksArgs |
|
IPUHooksInterface |
|
IStreamAdapter |
|
IValue |
IValue (Interpreter Value) is a tagged union over the types
supported by the TorchScript interpreter.
|
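A brief sketch of boxing values into and out of IValue; the concrete values are illustrative.

    #include <torch/torch.h>

    void ivalue_demo() {
      c10::IValue i(42);                    // tagged as an integer
      c10::IValue t(torch::ones({2, 2}));   // tagged as a Tensor
      if (i.isInt()) {
        int64_t v = i.toInt();              // unbox back to the concrete type
      }
      torch::Tensor unboxed = t.toTensor();
      c10::List<int64_t> list({1, 2, 3});
      c10::IValue l(list);                  // container types are supported too
    }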
IValue.CompIdentityIValues |
|
IValue.HashIdentityIValue |
|
IValueArrayRef |
|
IValueOptional |
|
IValueOptionalVector |
|
IValueOptionalVector.Iterator |
|
IValueSupplier |
|
IValueVector |
|
IValueVector.Iterator |
|
IValueVectorConsumer |
|
JavaBatchDataset |
|
JavaDataset |
Abstract class for stateless datasets to be subclassed by Java user code.
|
JavaDatasetBase |
|
JavaDistributedRandomDataLoader |
|
JavaDistributedRandomDataLoaderBase |
|
JavaDistributedRandomTensorDataLoader |
|
JavaDistributedRandomTensorDataLoaderBase |
|
JavaDistributedSequentialDataLoader |
|
JavaDistributedSequentialDataLoaderBase |
|
JavaDistributedSequentialTensorDataLoader |
|
JavaDistributedSequentialTensorDataLoaderBase |
|
JavaRandomDataLoader |
|
JavaRandomDataLoaderBase |
|
JavaRandomTensorDataLoader |
|
JavaRandomTensorDataLoaderBase |
|
JavaSequentialDataLoader |
|
JavaSequentialDataLoaderBase |
|
JavaSequentialTensorDataLoader |
|
JavaSequentialTensorDataLoaderBase |
|
JavaStatefulBatchDataset |
|
JavaStatefulDataLoader |
|
JavaStatefulDataLoaderBase |
|
JavaStatefulDataset |
Abstract class for stateful datasets to be subclassed by Java user code.
|
JavaStatefulDatasetBase |
|
JavaStatefulTensorBatchDataset |
|
JavaStatefulTensorDataLoader |
|
JavaStatefulTensorDataLoaderBase |
|
JavaStatefulTensorDataset |
|
JavaStatefulTensorDatasetBase |
|
JavaStreamBatchDataset |
|
JavaStreamDataLoader |
|
JavaStreamDataLoaderBase |
|
JavaStreamDataset |
Abstract class for stateless stream datasets to be subclassed by Java user code.
|
JavaStreamTensorBatchDataset |
|
JavaStreamTensorDataLoader |
|
JavaStreamTensorDataLoaderBase |
|
JavaStreamTensorDataset |
|
JavaTensorBatchDataset |
|
JavaTensorDataset |
|
JavaTensorDatasetBase |
|
JitModule |
|
JitModuleApplyFunction |
|
JitNode |
|
JitNodeVector |
|
JitNodeVector.Iterator |
|
JitNodeWrap |
|
JitObject |
|
JitObject.Property |
|
JitString |
|
kArea |
|
kBatchMean |
|
kBicubic |
|
kBilinear |
|
kBorder |
|
kCircular |
|
kConstant |
|
kConv1D |
|
kConv2D |
|
kConv3D |
|
kConvTranspose1D |
|
kConvTranspose2D |
|
kConvTranspose3D |
|
KernelFunction |
KernelFunction is similar to std::function but stores a kernel function.
|
kFanIn |
|
kFanOut |
|
kGELU |
|
kGRU |
|
KLDivLossImpl |
The Kullback-Leibler divergence loss measure.
See https://pytorch.org/docs/main/nn.html#torch.nn.KLDivLoss to learn
about the exact behavior of this module.
|
KLDivLossImplCloneable |
|
KLDivLossOptions |
Options for the KLDivLoss module.
|
KLDivLossReduction |
|
kLeakyReLU |
|
kLinear |
|
kLSTM |
|
kMax |
|
kMean |
|
kMish |
|
kNearest |
|
kNearestExact |
|
kNone |
|
kReflect |
|
kReflection |
|
kReLU |
|
kReplicate |
|
kRNN_RELU |
|
kRNN_TANH |
|
kSame |
|
kSigmoid |
|
kSiLU |
|
kSum |
|
kTanh |
|
kTrilinear |
|
kValid |
|
kZeros |
|
L1LossImpl |
Creates a criterion that measures the mean absolute error (MAE) between each
element in the input x and the target y.
|
L1LossImplCloneable |
|
L1LossOptions |
Options for the L1Loss module.
|
LayerNormFuncOptions |
Options for torch::nn::functional::layer_norm .
|
LayerNormImpl |
Applies Layer Normalization over a mini-batch of inputs as described in
the paper "Layer Normalization".
|
LayerNormImplCloneable |
|
LayerNormOptions |
Options for the LayerNorm module.
|
LayoutEnumerationType |
|
LayoutOptional |
|
LayoutType |
|
LayoutTypePtr |
|
LBFGS |
|
LBFGSOptions |
|
LBFGSParamState |
|
LeakyReLUImpl |
Applies the LeakyReLU function element-wise.
|
LeakyReLUImplCloneable |
|
LeakyReLUOptions |
Options for the LeakyReLU module.
|
LegacyTensorConstructor |
|
Library |
This object provides the API for defining operators and providing
implementations at dispatch keys.
|
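A Library instance is typically obtained through the TORCH_LIBRARY / TORCH_LIBRARY_IMPL macros; a sketch with a made-up myops::myadd operator:

    #include <ATen/ATen.h>
    #include <torch/library.h>

    at::Tensor myadd_cpu(const at::Tensor& a, const at::Tensor& b) {
      return a + b;
    }

    // Declare the operator schema in the "myops" namespace.
    TORCH_LIBRARY(myops, m) {
      m.def("myadd(Tensor a, Tensor b) -> Tensor");
    }

    // Provide a kernel for the CPU dispatch key.
    TORCH_LIBRARY_IMPL(myops, CPU, m) {
      m.impl("myadd", myadd_cpu);
    }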
LiftedIValueArg |
|
LiftedIValueArgs |
|
LinearImpl |
Applies a linear transformation with optional bias.
|
LinearImplCloneable |
|
LinearOptions |
Options for the Linear module.
|
ListComp |
|
ListLiteral |
|
ListSingleElementType |
|
ListType |
|
LocalDispatchKeySet |
|
LocalResponseNormImpl |
Applies local response normalization over an input signal composed
of several input planes, where channels occupy the second dimension.
|
LocalResponseNormImplCloneable |
|
LocalResponseNormOptions |
Options for the LocalResponseNorm module.
|
Logger |
|
LoggerOptional |
|
LogSigmoidImpl |
Applies the LogSigmoid function element-wise.
|
LogSigmoidImplCloneable |
|
LogSoftmaxFuncOptions |
Options for torch::nn::functional::log_softmax .
|
LogSoftmaxImpl |
Applies the LogSoftmax function element-wise.
|
LogSoftmaxImplCloneable |
|
LogSoftmaxOptions |
Options for the LogSoftmax module.
|
LongArrayRef |
|
LongArrayRefOptional |
|
LongArrayRefVector |
|
LongArrayRefVector.Iterator |
|
LongElementReference |
|
LongExpandingArrayOptional |
|
LongList |
|
LongListIterator |
|
LongOptional |
|
LongOptionalArrayRef |
|
LongOptionalVector |
|
LongOptionalVector.Iterator |
|
LongSmallVectorBase |
|
LongSmallVectorCommon |
|
LongSmallVectorImpl |
|
LongVaryingShape |
|
LongVector |
|
LongVector.Iterator |
|
LongVectorOptional |
|
LossClosure |
|
LossReduction |
|
LPPool1dImpl |
Applies the LPPool1d function element-wise.
|
LPPool1dImplBase |
Base class for all (dimension-specialized) lppool modules.
|
LPPool1dImplCloneable |
|
LPPool1dOptions |
Options for a D-dimensional lppool module.
|
LPPool2dImpl |
Applies the LPPool2d function element-wise.
|
LPPool2dImplBase |
|
LPPool2dImplCloneable |
|
LPPool2dOptions |
|
LPPool3dImpl |
Applies the LPPool3d function element-wise.
|
LPPool3dImplBase |
|
LPPool3dImplCloneable |
|
LPPool3dOptions |
|
LRScheduler |
|
LSTMCellImpl |
A long short-term memory (LSTM) cell.
|
LSTMCellImplBase |
|
LSTMCellImplCloneable |
|
LSTMCellOptions |
Options for the LSTMCell module.
|
LSTMImpl |
A multi-layer long short-term memory (LSTM) module.
|
LSTMImplBase |
|
LSTMImplCloneable |
|
LSTMOptions |
Options for the LSTM module.
|
MagicMethod |
|
MAIAHooksArgs |
|
MAIAHooksInterface |
|
MarginRankingLossImpl |
Creates a criterion that measures the loss given inputs x1 and x2 (two 1D
mini-batch Tensors) and a label 1D mini-batch tensor y (containing 1 or -1).
|
MarginRankingLossImplCloneable |
|
MarginRankingLossOptions |
Options for the MarginRankingLoss module.
|
MatchedSchema |
|
MatchTypeReturn |
|
MaxPool1dImpl |
Applies maxpool over a 1-D input.
|
MaxPool1dImplBase |
Base class for all (dimension-specialized) maxpool modules.
|
MaxPool1dImplCloneable |
|
MaxPool1dOptions |
Options for a D-dimensional maxpool module.
|
MaxPool2dImpl |
Applies maxpool over a 2-D input.
|
MaxPool2dImplBase |
|
MaxPool2dImplCloneable |
|
MaxPool2dOptions |
|
MaxPool3dImpl |
Applies maxpool over a 3-D input.
|
MaxPool3dImplBase |
|
MaxPool3dImplCloneable |
|
MaxPool3dOptions |
|
MaxUnpool1dFuncOptions |
Options for a D-dimensional maxunpool functional.
|
MaxUnpool1dImpl |
Applies maxunpool over a 1-D input.
|
MaxUnpool1dImplBase |
Base class for all (dimension-specialized) maxunpool modules.
|
MaxUnpool1dImplCloneable |
|
MaxUnpool1dOptions |
Options for a D-dimensional maxunpool module.
|
MaxUnpool2dFuncOptions |
|
MaxUnpool2dImpl |
Applies maxunpool over a 2-D input.
|
MaxUnpool2dImplBase |
|
MaxUnpool2dImplCloneable |
|
MaxUnpool2dOptions |
|
MaxUnpool3dFuncOptions |
|
MaxUnpool3dImpl |
Applies maxunpool over a 3-D input.
|
MaxUnpool3dImplBase |
|
MaxUnpool3dImplCloneable |
|
MaxUnpool3dOptions |
|
MemCopyFunction |
|
MemoryFormatOptional |
|
MemoryFormattEnumerationType |
|
MemoryFormatType |
|
MemoryReportingInfoBase |
|
MetaBase |
|
MetadataLogger |
|
Method |
|
MethodOptional |
|
MethodValue |
|
MishImpl |
Applies mish over a given input.
|
MishImplCloneable |
|
MNIST |
The MNIST dataset.
|
MNISTBatchDataset |
A dataset that can yield data only in batches.
|
MNISTDataset |
A dataset that can yield data in batches, or as individual examples.
|
MNISTMapBatchDataset |
|
MNISTMapDataset |
A MapDataset is a dataset that applies a transform to a source dataset.
|
MNISTRandomDataLoader |
A dataloader for stateless datasets.
|
MNISTRandomDataLoaderBase |
|
MobileCode |
|
Module |
The base class for all modules in PyTorch.
|
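A small sketch of subclassing Module in the C++ frontend; the Net name and layer sizes are arbitrary.

    #include <torch/torch.h>

    struct Net : torch::nn::Module {
      Net() {
        // register_module exposes submodules to parameters(), to(), save(), etc.
        fc1 = register_module("fc1", torch::nn::Linear(784, 64));
        fc2 = register_module("fc2", torch::nn::Linear(64, 10));
      }
      torch::Tensor forward(torch::Tensor x) {
        x = torch::relu(fc1->forward(x));
        return torch::log_softmax(fc2->forward(x), /*dim=*/1);
      }
      torch::nn::Linear fc1{nullptr}, fc2{nullptr};
    };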
module_iterator |
|
module_list |
|
ModuleApplyFunction |
|
ModuleDictImpl |
An OrderedDict of Modules that registers its elements by their keys.
|
ModuleDictImplCloneable |
The clone() method in the base Module class does not have knowledge of
the concrete runtime type of its subclasses.
|
ModuleInstanceInfo |
ModuleInstanceInfo is a structure to include the module type and instance
name.
|
ModuleInstanceInfoOptional |
|
ModuleListImpl |
A list of Modules that registers its elements.
|
ModuleListImplCloneable |
|
ModulePolicy |
|
MPSHooksArgs |
|
MPSHooksInterface |
|
MSELossImpl |
Creates a criterion that measures the mean squared error (squared L2 norm)
between each element in the input x and the target y.
|
MSELossImplCloneable |
|
MSELossOptions |
Options for the MSELoss module.
|
mt19937_data_pod |
mt19937_data_pod is used to get POD data in and out
of mt19937_engine.
|
mt19937_engine |
|
MTIAHooksInterface |
|
MultiheadAttentionForwardFuncOptions |
Options for torch::nn::functional::multi_head_attention_forward
|
MultiheadAttentionImpl |
Applies the MultiheadAttention function element-wise.
|
MultiheadAttentionImplCloneable |
|
MultiheadAttentionOptions |
Options for the MultiheadAttention module.
|
MultiLabelMarginLossImpl |
Creates a criterion that optimizes a multi-class multi-classification
hinge loss (margin-based loss) between input x (a 2D mini-batch Tensor)
and output y (a 2D Tensor of target class indices).
|
MultiLabelMarginLossImplCloneable |
|
MultiLabelMarginLossOptions |
Options for the MultiLabelMarginLoss module.
|
MultiLabelSoftMarginLossImpl |
Creates a criterion that optimizes a multi-label one-versus-all
loss based on max-entropy, between input x and target y of size (N, C).
|
MultiLabelSoftMarginLossImplCloneable |
|
MultiLabelSoftMarginLossOptions |
Options for the MultiLabelSoftMarginLoss module.
|
MultiMarginLossImpl |
Creates a criterion that optimizes a multi-class classification hinge
loss (margin-based loss) between input x (a 2D mini-batch Tensor) and
output y (a 1D tensor of target class indices, where 0 <= y <= x.size(1)-1).
|
MultiMarginLossImplCloneable |
|
MultiMarginLossOptions |
Options for the MultiMarginLoss module.
|
MzZipReaderIterWrapper |
|
named_attribute_iterator |
|
named_attribute_list |
|
named_buffer_iterator |
|
named_buffer_list |
|
named_module_iterator |
|
named_module_list |
|
named_parameter_iterator |
|
named_parameter_list |
|
NamedIValue |
|
NamedJitModule |
|
NamedModuleApplyFunction |
|
NamedSharedModuleApplyFunction |
|
NamedTensor |
|
NamedTensorMeta |
|
NamedTensorMetaInterface |
|
NamedTupleConstructor |
|
NamedType |
|
NamedValue |
A value with optional extra name and location information.
|
NamedValueArrayRef |
|
NamedValueOptional |
|
NameMangler |
Utility to mangle qualified names in order to make them unique.
|
NamesMode |
|
NativeResolver |
|
NCCLPreMulSumSupplement |
|
NestedTensorImpl |
|
NLLLossImpl |
The negative log likelihood loss.
|
NLLLossImplCloneable |
|
NLLLossOptions |
Options for the NLLLoss module.
|
Node |
|
Node.undefined_input |
|
NodeCall |
|
NodeCalls |
|
NodeNodeCallMap |
|
NodeNodeCallMap.Iterator |
|
NodeSet |
|
NodeSet.Iterator |
|
NoGradGuard |
|
NoNamesGuard |
|
NoneType |
|
NoneTypePtr |
|
Nonlinearity |
|
NormalizeFuncOptions |
Options for torch::nn::functional::normalize .
|
NoTarget |
|
NoTF32Guard |
|
NumberType |
|
NumberTypePtr |
|
Obj |
|
ObjLoader |
|
OperandInfo |
|
Operation |
|
OperationCreator |
|
Operator |
|
OperatorHandle |
This is a handle to an operator schema registered with the dispatcher.
|
OperatorHandleOptional |
|
OperatorKernel |
Inherit from OperatorKernel to implement a c10 kernel.
|
OperatorName |
|
OperatorNameOptional |
|
OperatorSet |
|
OperatorVector |
|
OperatorVector.Iterator |
|
OpRegistrationListener |
Implement this interface and register your instance with the dispatcher
to get notified when operators are registered or deregistered with
the dispatcher.
|
Optimizer |
|
OptimizerCloneableAdagradOptions |
|
OptimizerCloneableAdagradParamState |
|
OptimizerCloneableAdamOptions |
|
OptimizerCloneableAdamParamState |
|
OptimizerCloneableAdamWOptions |
|
OptimizerCloneableAdamWParamState |
|
OptimizerCloneableLBFGSOptions |
|
OptimizerCloneableLBFGSParamState |
|
OptimizerCloneableRMSpropOptions |
|
OptimizerCloneableRMSpropParamState |
|
OptimizerCloneableSGDOptions |
|
OptimizerCloneableSGDParamState |
|
OptimizerOptions |
|
OptimizerParamGroup |
Stores parameters in the param_group and stores a pointer to the
OptimizerOptions.
|
OptimizerParamGroupVector |
|
OptimizerParamGroupVector.Iterator |
|
OptimizerParamState |
|
OptionalDeviceGuard |
An OptionalDeviceGuard is an RAII class that sets a device to some value on
initialization, and resets the device to its original value on destruction.
|
OptionalStreamGuard |
An OptionalStreamGuard is an RAII class that sets a device to some value on
initialization, and resets the device to its original value on destruction.
|
OptionalType |
|
OutputArchive |
|
PackedSequence |
Holds the data and list of batch_sizes of a packed sequence.
|
PaddingMode |
|
PadFuncOptions |
Options for torch::nn::functional::pad .
|
PairwiseDistanceImpl |
Returns the batchwise pairwise distance between vectors v_1 and v_2 using
the p-norm.
|
PairwiseDistanceImplCloneable |
|
PairwiseDistanceOptions |
Options for the PairwiseDistance module.
|
Param |
|
parameter_iterator |
|
parameter_list |
|
ParameterDictImpl |
|
ParameterDictImplCloneable |
|
ParameterListImpl |
|
ParameterListImplCloneable |
|
ParameterPolicy |
|
ParamList |
|
ParamListIterator |
|
Pass |
|
Pickler |
|
PickleReader |
|
PickleWriter |
|
PixelShuffleImpl |
Rearranges elements in a tensor of shape (*, C x r^2, H, W) to a tensor of
shape (*, C, H x r, W x r), where r is an upscale factor.
|
PixelShuffleImplCloneable |
|
PixelShuffleOptions |
Options for the PixelShuffle module.
|
PixelUnshuffleImpl |
Reverses the PixelShuffle operation by rearranging elements in a tensor of
shape (*, C, H x r, W x r) to a tensor of shape (*, C x r^2, H, W), where r
is a downscale factor.
|
PixelUnshuffleImplCloneable |
|
PixelUnshuffleOptions |
Options for the PixelUnshuffle module.
|
PlacementConsumer |
|
PlacementCopier |
|
PlacementDeleteContext |
|
PODLocalDispatchKeySet |
|
PointerConsumer |
|
PointerPair |
|
PointerPairOptional |
|
PointerSupplier |
|
PoissonNLLLossImpl |
Negative log likelihood loss with Poisson distribution of target.
|
PoissonNLLLossImplCloneable |
|
PoissonNLLLossOptions |
Options for the PoissonNLLLoss module.
|
PostAccumulateGradHook |
|
PrefixStore |
|
PReLUImpl |
Applies the PReLU function element-wise.
|
PReLUImplCloneable |
|
PReLUOptions |
Options for the PReLU module.
|
pretty_tree |
|
PrintValue |
|
PrivateUse1HooksArgs |
|
PrivateUse1HooksInterface |
|
ProcessGroup |
|
ProcessGroup.Options |
|
ProcessGroupCppCommHookInterface |
|
ProcessGroupGloo |
|
ProcessGroupGloo.AsyncWork |
|
ProcessGroupGloo.GlooStore |
|
ProcessGroupGloo.Options |
|
ProcessGroupGloo.RecvWork |
|
ProcessGroupGloo.SendWork |
|
ProfileIValueOp |
|
ProfilerConfig |
|
Property |
|
PropertyList |
|
PropertyListIterator |
|
PropertyListMaybe |
|
PropertyVector |
|
PropertyVector.Iterator |
|
PyInterpreter |
|
PyInterpreterVTable |
|
PyObject_TorchDispatchMode |
|
PyObject_TorchDispatchModeOptional |
|
PyObjectHolder |
|
PyObjectType |
|
PyObjectTypePtr |
|
PythonDispatcherTLS |
|
PythonOp |
|
PythonTorchFunctionTLS |
|
PyTorchStreamReader |
|
QEngineVector |
|
QEngineVector.Iterator |
|
qint32 |
qint32 is for signed 32-bit quantized Tensors.
|
qint8 |
This is the data type for quantized Tensors.
|
QSchemeType |
|
QSchemeTypePtr |
|
QTensorImpl |
|
QualifiedName |
|
QualifiedNameOptional |
|
Quantizer |
Quantizer is the class for storing all the information
that's necessary to perform quantize and dequantize
operation.
|
QuantizerType |
|
QuantizerTypePtr |
|
quint2x4 |
quint2x4 is for unsigned 2-bit quantized Tensors that are packed to byte
boundary.
|
quint4x2 |
quint4x2 is for unsigned 4-bit quantized Tensors that are packed to byte
boundary.
|
quint8 |
quint8 is for unsigned 8-bit quantized Tensors.
|
Raise |
|
RandomSampler |
A Sampler that returns random indices.
|
RangeValue |
|
ReadAdapterInterface |
|
ReadAdapterInterfaceVector |
|
ReadAdapterInterfaceVector.Iterator |
|
Reader |
|
RecordFunction |
|
RecordFunctionCallbacksEntry |
|
RecordFunctionGuard |
|
RecordFunctionHandleIntList |
|
RecordFunctionHandleIntList.Iterator |
|
RecordFunctionHandleIntPair |
|
RecordFunctionTLS |
|
ReduceLROnPlateauScheduler |
|
ReduceOp |
|
ReduceOptions |
|
Reducer |
|
ReduceScatterOptions |
|
ReflectionPad1dImpl |
Applies ReflectionPad over a 1-D input.
|
ReflectionPad1dImplBase |
Base class for all (dimension-specialized) ReflectionPad modules.
|
ReflectionPad1dImplCloneable |
|
ReflectionPad1dOptions |
Options for a D-dimensional ReflectionPad module.
|
ReflectionPad2dImpl |
Applies ReflectionPad over a 2-D input.
|
ReflectionPad2dImplBase |
|
ReflectionPad2dImplCloneable |
|
ReflectionPad2dOptions |
|
ReflectionPad3dImpl |
Applies ReflectionPad over a 3-D input.
|
ReflectionPad3dImplBase |
|
ReflectionPad3dImplCloneable |
|
ReflectionPad3dOptions |
|
RegisterOperators |
An instance of this class handles the registration for one or more operators.
|
RegistrationHandleRAII |
|
ReLU6Impl |
Applies the ReLU6 function element-wise.
|
ReLU6ImplCloneable |
|
ReLU6Options |
Options for the ReLU6 module.
|
ReLUImpl |
Applies the ReLU function element-wise.
|
ReLUImplCloneable |
|
ReLUOptions |
Options for the ReLU module.
|
ReplicationPad1dImpl |
Applies ReplicationPad over a 1-D input.
|
ReplicationPad1dImplBase |
Base class for all (dimension-specialized) ReplicationPad modules.
|
ReplicationPad1dImplCloneable |
|
ReplicationPad1dOptions |
Options for a D-dimensional ReplicationPad module.
|
ReplicationPad2dImpl |
Applies ReplicationPad over a 2-D input.
|
ReplicationPad2dImplBase |
|
ReplicationPad2dImplCloneable |
|
ReplicationPad2dOptions |
|
ReplicationPad3dImpl |
Applies ReplicationPad over a 3-D input.
|
ReplicationPad3dImplBase |
|
ReplicationPad3dImplCloneable |
|
ReplicationPad3dOptions |
|
Resolver |
Represents an "outer environment" in which we an look up names and return
a corresponding SugaredValue.
|
ResolverVector |
|
ResolverVector.Iterator |
|
Result |
|
Return |
|
RMSprop |
|
RMSpropOptions |
|
RMSpropParamState |
|
RNNBaseMode |
|
RNNCellImpl |
An Elman RNN cell with tanh or ReLU non-linearity.
|
RNNCellImplBase |
Base class for all RNNCell implementations (intended for code sharing).
|
RNNCellImplCloneable |
|
RNNCellOptions |
Options for the RNNCell module.
|
RNNCellOptionsBase |
Common options for RNNCell, LSTMCell and GRUCell modules.
|
RNNImpl |
A multi-layer Elman RNN module with Tanh or ReLU activation.
|
RNNImplBase |
Base class for all RNN implementations (intended for code sharing).
|
RNNImplCloneable |
|
RNNNonlinearity |
|
RNNOptions |
Options for the RNN module.
|
RNNOptionsBase |
Common options for RNN, LSTM and GRU modules.
|
ROCmBackwardPassGuard |
|
RRefInterface |
|
RRefSingleElementType |
|
RRefType |
|
RReLUFuncOptions |
Options for torch::nn::functional::rrelu .
|
RReLUImpl |
Applies the RReLU function element-wise.
|
RReLUImplCloneable |
|
RReLUOptions |
Options for the RReLU module.
|
SafePyHandle |
|
SafePyObject |
|
SafePyObjectOptional |
|
Sampler |
A Sampler is an object that yields an index with which to access a
dataset.
|
SavedTensorDefaultHooks |
|
SavedTensorDefaultHooksTLS |
|
SavedVariableHooks |
|
Scalar |
Scalar represents a 0-dimensional tensor which contains a single element.
|
ScalarArrayRef |
|
ScalarOptional |
|
ScalarTypeArrayRef |
|
ScalarTypeEnumerationType |
|
ScalarTypeOptional |
|
ScalarTypeType |
|
ScalarTypeTypePtr |
|
ScalarTypeVector |
|
ScalarTypeVector.Iterator |
|
ScatterOptions |
|
SchemaArgument |
Structure used to represent arguments or returns for a schema.
|
SchemaInfo |
FunctionSchema wrapper that publicizes argument-value-specific operator
behavior (mutation, aliasing, special cases, etc.).
|
Scope |
|
ScopeOptional |
|
ScriptTypeParser |
Parses expressions in our typed AST format (TreeView) into types and
typenames.
|
Select |
|
Self |
|
SELUImpl |
Applies the selu function element-wise.
|
SELUImplCloneable |
|
SELUOptions |
Options for the SELU module.
|
SequentialImpl |
A list of Modules that acts as a Module itself.
|
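For illustration, a Sequential that chains a few of the layer modules listed in this index and forwards its input through each child in order; the sizes are arbitrary.

    #include <torch/torch.h>

    int main() {
      torch::nn::Sequential seq(
          torch::nn::Linear(10, 5),
          torch::nn::ReLU(),
          torch::nn::Linear(5, 1));
      torch::Tensor y = seq->forward(torch::randn({4, 10}));  // shape [4, 1]
    }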
SequentialImplCloneable |
|
SequentialSampler |
A Sampler that returns indices sequentially.
|
SGD |
|
SGDOptions |
|
SGDParamState |
|
ShapeSymbol |
|
ShapeSymbolVector |
|
ShapeSymbolVector.Iterator |
|
ShapeSymbolVectorOptional |
|
SharedClassTypeVector |
|
SharedClassTypeVector.Iterator |
|
SharedModuleApplyFunction |
|
SharedModuleVector |
|
SharedModuleVector.Iterator |
|
SharedParserData |
|
SharedSugaredValueVector |
|
SharedSugaredValueVector.Iterator |
|
SharedType |
|
ShortArrayRef |
|
ShortSet |
|
ShortSet.Iterator |
|
SigmoidImpl |
Applies sigmoid over a given input.
|
SigmoidImplCloneable |
|
SiLUImpl |
Applies silu over a given input.
|
SiLUImplCloneable |
|
SimpleSelf |
|
SimpleValue |
|
SingletonTypePtr |
|
SizeInput |
|
SizesAndStrides |
|
SizeTArrayRef |
|
SizeTMatchedSchemaPair |
|
SizeTOptional |
|
SizeTStringMap |
|
SizeTStringMap.Iterator |
|
SizeTSupplier |
|
SizeTVector |
|
SizeTVector.Iterator |
|
SizeTVectorOptional |
|
SizeTVectorVector |
|
SizeTVectorVector.Iterator |
|
Slice |
|
SliceExpr |
|
SliceValue |
|
SlotCursor |
|
SmoothL1LossImpl |
Creates a criterion that uses a squared term if the absolute
element-wise error falls below beta and an L1 term otherwise.
|
SmoothL1LossImplCloneable |
|
SmoothL1LossOptions |
Options for the SmoothL1Loss module.
|
SoftMarginLossImpl |
Creates a criterion that optimizes a two-class classification
logistic loss between input tensor x and target tensor y
(containing 1 or -1).
|
SoftMarginLossImplCloneable |
|
SoftMarginLossOptions |
Options for the SoftMarginLoss module.
|
Softmax2dImpl |
Applies the Softmax2d function element-wise.
|
Softmax2dImplCloneable |
|
SoftmaxFuncOptions |
Options for torch::nn::functional::softmax .
|
SoftmaxImpl |
Applies the Softmax function.
|
SoftmaxImplCloneable |
|
SoftmaxOptions |
Options for the Softmax module.
|
SoftminFuncOptions |
Options for torch::nn::functional::softmin .
|
SoftminImpl |
Applies the Softmin function element-wise.
|
SoftminImplCloneable |
|
SoftminOptions |
Options for the Softmin module.
|
SoftplusImpl |
Applies softplus over a given input.
|
SoftplusImplCloneable |
|
SoftplusOptions |
Options for the Softplus module.
|
SoftshrinkImpl |
Applies the soft shrinkage function element-wise.
|
SoftshrinkImplCloneable |
|
SoftshrinkOptions |
Options for the Softshrink module.
|
SoftsignImpl |
Applies Softsign over a given input.
|
SoftsignImplCloneable |
|
Source |
|
SourceLocation |
Represents a location in source code (for debugging).
|
SourceRange |
|
SourceRangeHasher |
|
SourceRangeOptional |
|
SourceRangeUnpickler |
|
SpecialFormValue |
|
SplitUntil32Bit |
A container-like struct that acts as if it contains splits of a
TensorIterator that can use 32-bit indexing.
|
SplitUntil32Bit.iterator |
|
StackEntry |
|
StackTraceFetcher |
|
Starred |
|
StashTorchDispatchModeGuard |
|
StashTorchDispatchStackGuard |
|
StepLR |
|
Stmt |
|
StmtList |
|
StmtListIterator |
|
Storage |
|
Storage.unsafe_borrow_t |
|
Storage.use_byte_size_t |
|
StorageImpl |
|
StorageImpl.use_byte_size_t |
|
StorageType |
|
StorageTypePtr |
|
Store |
|
StoreTimeoutGuard |
|
Stream |
A stream is a software mechanism used to synchronize launched kernels
without requiring explicit synchronizations between kernels.
|
StreamData3 |
|
StreamObjType |
|
StreamObjTypePtr |
|
StreamOptional |
|
StreamSampler |
A sampler for (potentially infinite) streams of data.
|
Stride |
|
StrideArrayRef |
|
StrideOptional |
|
StrideVaryingShape |
|
StrideVector |
|
StrideVector.Iterator |
|
StrideVectorOptional |
|
StringAnyModuleDict |
|
StringAnyModuleDictItem |
|
StringAnyModuleDictItemVector |
|
StringAnyModuleDictItemVector.Iterator |
|
StringAnyModulePair |
|
StringAnyModuleVector |
|
StringArrayRef |
|
StringBoolMap |
|
StringBoolMap.Iterator |
|
StringConsumer |
|
StringCordView |
|
StringCordView.Iterator |
|
StringGenericListDict |
|
StringGenericListDictIterator |
|
StringIntMap |
|
StringIntMap.Iterator |
|
StringIValueMap |
|
StringIValueMap.Iterator |
|
StringLiteral |
|
StringLongMap |
|
StringLongMap.Iterator |
|
StringLongVector |
|
StringMapper |
|
StringOptional |
|
StringPair |
|
StringScalarTypeMap |
|
StringScalarTypeMap.Iterator |
|
StringSet |
|
StringSet.Iterator |
|
StringSharedModuleDict |
|
StringSharedModuleDictItem |
|
StringSharedModuleDictItemVector |
|
StringSharedModuleDictItemVector.Iterator |
|
StringSharedModulePair |
|
StringSharedModuleVector |
|
StringSizeTMap |
|
StringSizeTMap.Iterator |
|
StringStringMap |
|
StringStringMap.Iterator |
|
StringSupplier |
|
StringTensorDict |
An ordered dictionary implementation, akin to Python's OrderedDict .
|
StringTensorDictItem |
|
StringTensorDictItemVector |
|
StringTensorDictItemVector.Iterator |
|
StringTensorMap |
|
StringTensorMap.Iterator |
|
StringTensorPair |
|
StringTensorVector |
|
StringType |
|
StringTypePtr |
|
StringValueMap |
|
StringValueMap.Iterator |
|
StringVector |
|
StringVector.Iterator |
|
StringVectorOptional |
|
StringViewOptional |
|
StringViewVector |
|
StringViewVector.Iterator |
|
StringViewVectorOptional |
|
StrongTypePtr |
|
Subscript |
|
SugaredEnumClass |
|
SugaredTupleValue |
|
SugaredValue |
|
SwapSavedVariables |
|
Symbol |
|
SymbolArrayRef |
|
SymbolicShape |
|
SymbolicShapeMeta |
|
SymbolSet |
|
SymbolSet.Iterator |
|
SymbolVector |
|
SymbolVector.Iterator |
|
SymBool |
|
SymBoolType |
|
SymDimVector |
This is a 'vector' (really, a variable-sized array), optimized
for the case when the array is small.
|
SymDimVectorOptional |
|
SymFloat |
|
SymFloatType |
|
SymInt |
|
SymIntArrayRef |
|
SymIntArrayRefOptional |
|
SymIntOptional |
|
SymIntOptionalVector |
|
SymIntOptionalVector.Iterator |
|
SymIntSmallVectorBase |
SmallVectorTemplateBase - This is where we put
method implementations that are designed to work with non-trivial T's.
|
SymIntSmallVectorCommon |
This is the part of SmallVectorTemplateBase which does not depend on whether
the type T is a POD.
|
SymIntSmallVectorImpl |
This class consists of common code factored out of the SmallVector class to
reduce code duplication based on the SmallVector 'N' template parameter.
|
SymIntType |
|
SymIntVector |
|
SymIntVector.Iterator |
|
SymNode |
|
SymNodeArrayRef |
|
SymNodeVector |
|
SymNodeVector.Iterator |
|
T_DataPtrSizeT_T |
|
T_IntInt_T |
|
T_LongLong_T |
|
T_PackedSequenceT_TensorTensor_T_T |
|
T_PackedSequenceTensor_T |
|
T_PyObject_TorchDispatchModeTorchDispatchModeKey_T |
|
T_SafePyObjectTorchDispatchModeKey_T |
|
T_SizeTVectorVectorSizeTVector_T |
|
T_StringSizeTSizeT_T |
|
T_StringSizeTSizeT_TOptional |
|
T_TensorMaybeOwnedTensorMaybeOwned_T |
|
T_TensorMaybeOwnedTensorMaybeOwnedTensorMaybeOwned_T |
|
T_TensorT_TensorTensor_T_T |
|
T_TensorTensor_T |
|
T_TensorTensor_TOptional |
|
T_TensorTensorDoubleLong_T |
|
T_TensorTensorTensor_T |
|
T_TensorTensorTensorTensor_T |
|
T_TensorTensorTensorTensorTensor_T |
|
T_TensorTensorTensorTensorTensorTensorTensor_T |
|
T_TensorTensorTensorTensorVector_T |
|
T_TensorTensorVector_T |
|
T_TensorTensorVectorTensorVector_T |
|
T_TypePtrLong_T |
|
T_TypePtrLong_TOptional |
|
TagArrayRef |
|
TagVector |
|
TagVector.Iterator |
|
TanhImpl |
Applies Tanh over a given input.
|
TanhImplCloneable |
|
TanhshrinkImpl |
Applies Tanhshrink over a given input.
|
TanhshrinkImplCloneable |
|
Tensor |
|
TensorArg |
|
TensorArgArrayRef |
|
TensorArgs |
|
TensorArrayRef |
|
TensorArrayRefOptional |
|
TensorBase |
|
TensorBaseMaybeOwned |
|
TensorBatchDataset |
|
TensorCastValue |
|
TensorDataset |
A dataset of tensors.
|
TensorDatasetBase |
|
TensorDeque |
|
TensorDeque.Iterator |
|
TensorElementReference |
|
TensorExample |
|
TensorExampleCollation |
|
TensorExampleIterator |
|
TensorExampleOptional |
|
TensorExampleStack |
A Collation for Example<Tensor, NoTarget> types that stacks all data
tensors into one tensor.
|
TensorExampleVector |
|
TensorExampleVector.Iterator |
|
TensorExampleVectorIterator |
|
TensorExampleVectorOptional |
|
TensorGeometry |
|
TensorGeometryArg |
|
TensorIdGetter |
|
TensorImpl |
The low-level representation of a tensor, which contains a pointer
to a storage (which contains the actual data) and metadata (e.g., sizes and
strides) describing this particular view of the data as a tensor.
|
TensorImpl.LongIdentity |
|
TensorImpl.SymIntIdentity |
|
TensorImplSet |
|
TensorImplSet.Iterator |
|
TensorImplVector |
|
TensorImplVector.Iterator |
|
TensorIndex |
|
TensorIndexArrayRef |
|
TensorIndexVector |
|
TensorIndexVector.Iterator |
|
TensorIterator |
|
TensorIteratorBase |
|
TensorIteratorConfig |
|
TensorList |
|
TensorListIterator |
|
TensorMaker |
Provides a fluent API to construct tensors from external data.
|
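A hedged sketch of wrapping external memory without copying: from_blob is the simple entry point, and at::for_blob yields the fluent TensorMaker (the buffer must outlive the tensors).

    #include <torch/torch.h>
    #include <vector>

    void wrap_external_buffer() {
      std::vector<float> data = {1, 2, 3, 4, 5, 6};
      // Simple path: wrap the buffer as a 2x3 float tensor without copying.
      torch::Tensor a = torch::from_blob(data.data(), {2, 3}, torch::kFloat32);
      // Fluent path through the TensorMaker returned by at::for_blob.
      torch::Tensor b = at::for_blob(data.data(), {2, 3})
                            .options(torch::TensorOptions().dtype(torch::kFloat32))
                            .make_tensor();
    }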
TensorMapper |
|
TensorMaybeOwned |
A smart pointer around either a borrowed or owned T.
|
TensorName |
|
TensorNames |
|
TensorOptional |
|
TensorOptionalArrayRef |
|
TensorOptionalElementReference |
|
TensorOptionalList |
|
TensorOptionalListIterator |
|
TensorOptionalVector |
|
TensorOptionalVector.Iterator |
|
TensorOptions |
A class to encapsulate construction axes of a Tensor.
|
TensorTensorDict |
|
TensorTensorDictIterator |
|
TensorTensorHook |
|
TensorTensorRefHook |
|
TensorType |
|
TensorVector |
|
TensorVector.Iterator |
|
TensorVectorOptional |
|
TernaryIf |
|
ThreadIdGuard |
|
ThreadLocalDebugInfo |
|
ThreadLocalPythonObjects |
|
ThreadLocalState |
|
ThreadLocalStateGuard |
|
ThresholdImpl |
Applies the Threshold function element-wise.
|
ThresholdImplCloneable |
|
ThresholdOptions |
Options for the Threshold module.
|
Timer |
|
Token |
|
TorchDispatchModeTLS |
|
TraceableFunction |
See Node::is_traceable() for definition.
|
TraceState |
|
TransformerActivation |
|
TransformerDecoderImpl |
TransformerDecoder is a stack of N decoder layers.
|
TransformerDecoderImplCloneable |
|
TransformerDecoderLayerImpl |
TransformerDecoderLayer is made up of self-attention, multi-head attention,
and a feedforward network.
|
TransformerDecoderLayerImplCloneable |
|
TransformerDecoderLayerOptions |
Options for the TransformerDecoderLayer module.
|
TransformerDecoderOptions |
Options for the TransformerDecoder module.
|
TransformerEncoderImpl |
TransformerEncoder module.
|
TransformerEncoderImplCloneable |
|
TransformerEncoderLayerImpl |
TransformerEncoderLayer module.
|
TransformerEncoderLayerImplCloneable |
|
TransformerEncoderLayerOptions |
Options for the TransformerEncoderLayer module.
|
TransformerEncoderOptions |
Options for the TransformerEncoder module.
|
TransformerImpl |
A transformer model.
|
TransformerImplCloneable |
|
TransformerOptions |
Options for the Transformer module.
|
Tree |
|
TreeStringMap |
|
TreeStringMap.Iterator |
|
TreeView |
|
TripletMarginLossImpl |
Creates a criterion that measures the triplet loss given input tensors x1,
x2, and x3, and a margin with a value greater than 0.
|
TripletMarginLossImplCloneable |
|
TripletMarginLossOptions |
Options for the TripletMarginLoss module.
|
TripletMarginWithDistanceLossImpl |
Creates a criterion that measures the triplet loss given input tensors a,
p, and n (representing anchor, positive, and negative examples,
respectively), and a nonnegative, real-valued function ("distance function")
used to compute the relationships between the anchor and positive example
("positive distance") and the anchor and negative example ("negative
distance").
|
TripletMarginWithDistanceLossImplCloneable |
|
TripletMarginWithDistanceLossOptions |
Options for the TripletMarginWithDistanceLoss module.
|
Tuple |
|
TupleElements |
|
TupleLiteral |
|
TupleType |
|
Type |
|
type_index |
|
Type.TypePtr |
|
TypeArrayRef |
|
TypeEnv |
|
TypeEnv.Iterator |
|
TypeIdentifier |
A type id is a unique id for a given C++ type.
|
TypeMapper |
|
TypeMeta |
TypeMeta is a thin class that allows us to store the type of a container such
as a blob, or the data type of a tensor, with a unique run-time id.
|
TypeMetaOptional |
|
TypeParser |
|
TypePrinter |
|
TypePtrOptional |
|
TypeRenamer |
|
TypeResolver |
|
TypeSupplier |
|
TypeVector |
|
TypeVector.Iterator |
|
UnaryOp |
|
UndefinedTensorImpl |
|
UnflattenImpl |
A placeholder for the Unflatten operator.
See https://pytorch.org/docs/main/generated/torch.nn.Unflatten.html to
learn about the exact behavior of this module.
|
UnflattenImplCloneable |
|
UnflattenOptions |
Options for the Unflatten module.
|
UnfoldImpl |
Applies unfold over a 4-D input.
|
UnfoldImplCloneable |
|
UnfoldOptions |
Options for the Unfold module.
|
UnionType |
|
UniqueVoidPtr |
|
Unpickler |
|
UpsampleImpl |
Upsamples given multi-channel 1D (temporal), 2D (spatial), or 3D
(volumetric) data.
|
UpsampleImplCloneable |
|
UpsampleMode |
|
UpsampleOptions |
Options for the Upsample module.
|
Use |
|
Value |
|
ValueArrayRef |
|
ValueMapper |
|
ValueOptional |
|
ValueValueMap |
|
ValueValueMap.Iterator |
|
ValueVector |
|
ValueVector.Iterator |
|
ValueWrap |
|
Var |
|
VariableHooksInterface |
|
VariableInfo |
|
VariableVersion |
|
VarMaybe |
|
VoidTensorHook |
|
WarnAlways |
|
Warning |
|
Warning.DeprecationWarning |
|
Warning.UserWarning |
|
WarningHandler |
|
WarningHandlerGuard |
|
WarningVariant |
|
WeakIValue |
|
WeakOrStrongCompilationUnit |
|
WeakOrStrongTypePtr |
|
WeakStorageVector |
|
WeakStorageVector.Iterator |
|
WeakStorageVectorOptional |
|
WeakTypePtr |
|
While |
|
With |
|
WithItem |
|
WithItemList |
|
WithItemListIterator |
|
Work |
|
WorkInfo |
|
WorkInfoConsumer |
|
WriteableTensorData |
|
XPUHooksArgs |
|
XPUHooksInterface |
|
ZeroPad1dImpl |
|
ZeroPad1dImplBase |
Base class for all (dimension-specialized) ZeroPad modules.
|
ZeroPad1dImplCloneable |
|
ZeroPad1dOptions |
|
ZeroPad2dImpl |
|
ZeroPad2dImplBase |
|
ZeroPad2dImplCloneable |
|
ZeroPad2dOptions |
|
ZeroPad3dImpl |
|
ZeroPad3dImplBase |
|
ZeroPad3dImplCloneable |
|
ZeroPad3dOptions |
|