Class | Description |
---|---|
AccuracyParameter | |
Arena | |
Arena.Destruct_Pointer | |
ArenaOptions | |
ArenaOptions.Block_alloc_long | |
ArenaOptions.Block_dealloc_Pointer_long | |
ArenaStringPtr | |
ArgMaxParameter | |
AuxillaryParseTableField | |
AuxillaryParseTableField.enum_aux | |
AuxillaryParseTableField.EnumValidator | |
AuxillaryParseTableField.map_aux | |
AuxillaryParseTableField.map_aux.Parse_map_CodedInputStream_Pointer | |
AuxillaryParseTableField.message_aux | |
AuxillaryParseTableField.string_aux | |
BatchNormParameter | |
BiasParameter | |
BlobProto | |
BlobProtoVector | |
BlobShape | |
BoolVector | |
BoolVector.Iterator | |
BoolVectorVector | |
BoolVectorVector.Iterator | |
BoundedZCIS | |
Caffe | |
Caffe.RNG | |
CelMapReflectionFriend | |
CodedInputStream | |
CodedOutputStream | |
CommandLineInterface | |
CompareHelper | |
CompareMapKey | |
ConcatParameter | |
ContrastiveLossParameter | |
ConvolutionParameter | |
CPUTimer | |
CropParameter | |
Cursor | |
DataParameter | |
Datum | |
DatumBlockingQueue | |
DatumVector | |
DatumVector.Iterator | |
DB | |
DebugStringOptions | |
Descriptor | |
Descriptor.ExtensionRange | |
Descriptor.ReservedRange | |
DescriptorBuilder | |
DescriptorDatabase | |
DescriptorPool | |
DescriptorPool.ErrorCollector | |
DescriptorProto | |
DescriptorProto_ExtensionRange | |
DescriptorTest | |
DoubleAbsValLayer | |
DoubleAccuracyLayer | |
DoubleAdaDeltaSolver | |
DoubleAdaGradSolver | |
DoubleAdamSolver | |
DoubleArgMaxLayer | |
DoubleBaseConvolutionLayer | |
DoubleBaseDataLayer | |
DoubleBasePrefetchingDataLayer | |
DoubleBatch | |
DoubleBatchNormLayer | |
DoubleBatchReindexLayer | |
DoubleBilinearFiller | |
DoubleBlob | |
DoubleBlobSharedVector | |
DoubleBlobSharedVector.Iterator | |
DoubleBlobVector | |
DoubleBlobVector.Iterator | |
DoubleBlobVectorVector | |
DoubleBlobVectorVector.Iterator | |
DoubleBNLLLayer | |
DoubleCallbackVector | |
DoubleCallbackVector.Iterator | |
DoubleConcatLayer | |
DoubleConstantFiller | |
DoubleContrastiveLossLayer | |
DoubleConvolutionLayer | |
DoubleCropLayer | |
DoubleCuDNNConvolutionLayer | |
DoubleCuDNNLCNLayer | |
DoubleCuDNNLRNLayer | |
DoubleCuDNNPoolingLayer | |
DoubleCuDNNReLULayer | |
DoubleCuDNNSigmoidLayer | |
DoubleCuDNNSoftmaxLayer | |
DoubleCuDNNTanHLayer | |
DoubleDataLayer | |
DoubleDataTransformer | |
DoubleDeconvolutionLayer | |
DoubleDropoutLayer | |
DoubleDummyDataLayer | |
DoubleEltwiseLayer | |
DoubleEmbedLayer | |
DoubleEuclideanLossLayer | |
DoubleExpLayer | |
DoubleFiller | |
DoubleFilterLayer | |
DoubleFlattenLayer | |
DoubleGaussianFiller | |
DoubleHDF5DataLayer | |
DoubleHDF5OutputLayer | |
DoubleHingeLossLayer | |
DoubleIm2colLayer | |
DoubleImageDataLayer | |
DoubleInfogainLossLayer | |
DoubleInnerProductLayer | |
DoubleInputLayer | |
DoubleLayer | |
DoubleLayerRegisterer | |
DoubleLayerRegisterer.Creator_LayerParameter | |
DoubleLayerRegistry | |
DoubleLayerRegistry.Creator | |
DoubleLayerSharedVector | |
DoubleLayerSharedVector.Iterator | |
DoubleLossLayer | |
DoubleLRNLayer | |
DoubleLSTMLayer | |
DoubleMemoryDataLayer | |
DoubleMSRAFiller | |
DoubleMultinomialLogisticLossLayer | |
DoubleMVNLayer | |
DoubleNesterovSolver | |
DoubleNet | |
DoubleNet.Callback | |
DoubleNetSharedVector | |
DoubleNetSharedVector.Iterator | |
DoubleNeuronLayer | |
DoubleParameterLayer | |
DoublePoolingLayer | |
DoublePositiveUnitballFiller | |
DoublePowerLayer | |
DoublePReLULayer | |
DoubleRecurrentLayer | |
DoubleReductionLayer | |
DoubleRegistry | |
DoubleRegistry.Iterator | |
DoubleReLULayer | |
DoubleReshapeLayer | |
DoubleRMSPropSolver | |
DoubleRNNLayer | |
DoubleSGDSolver | |
DoubleSigmoidCrossEntropyLossLayer | |
DoubleSigmoidLayer | |
DoubleSilenceLayer | |
DoubleSliceLayer | |
DoubleSoftmaxLayer | |
DoubleSoftmaxWithLossLayer | |
DoubleSolver | |
DoubleSolver.Callback | |
DoubleSolverRegisterer | |
DoubleSolverRegisterer.Creator_SolverParameter | |
DoubleSolverRegistry | |
DoubleSolverRegistry.Creator | |
DoubleSplitLayer | |
DoubleSPPLayer | |
DoubleTanHLayer | |
DoubleThresholdLayer | |
DoubleTileLayer | |
DoubleUniformFiller | |
DoubleWindowDataLayer | |
DoubleXavierFiller | |
DropoutParameter | |
DummyDataParameter | |
EltwiseParameter | |
ELUParameter | |
EmbedParameter | |
EnumDescriptor | |
EnumDescriptor.ReservedRange | |
EnumDescriptorProto | |
EnumOptions | |
EnumValueDescriptor | |
EnumValueDescriptorProto | |
EnumValueOptions | |
ExpParameter | |
ExtensionRangeOptions | |
F_Pointer | |
FatalException | |
FieldDescriptor | |
FieldDescriptorProto | |
FieldDescriptorVector | |
FieldDescriptorVector.Iterator | |
FieldOptions | |
FileDescriptor | |
FileDescriptorProto | |
FileDescriptorTables | |
FileOptions | |
FillerParameter | |
FlattenParameter | |
FloatAbsValLayer | Computes $y = \lvert x \rvert$. |
FloatAccuracyLayer | Computes the classification accuracy for a one-of-many classification task. |
FloatAdaDeltaSolver | |
FloatAdaGradSolver | |
FloatAdamSolver | AdamSolver, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments. |
FloatArgMaxLayer | Computes the index of the $K$ max values for each datum across all dimensions $(C \times H \times W)$. |
FloatBaseConvolutionLayer | Abstract base class that factors out the BLAS code common to ConvolutionLayer and DeconvolutionLayer. |
FloatBaseDataLayer | Provides a base for data layers that feed blobs to the Net. |
FloatBasePrefetchingDataLayer | |
FloatBatch | |
FloatBatchNormLayer | Normalizes the input to have zero mean and/or unit (1) variance across the batch. |
FloatBatchReindexLayer | Indexes into the input blob along its first axis. |
FloatBilinearFiller | Fills a Blob with coefficients for bilinear interpolation. |
FloatBlob | A wrapper around SyncedMemory holders serving as the basic computational unit through which Layers, Nets, and Solvers interact. |
FloatBlobSharedVector | |
FloatBlobSharedVector.Iterator | |
FloatBlobVector | |
FloatBlobVector.Iterator | |
FloatBlobVectorVector | |
FloatBlobVectorVector.Iterator | |
FloatBNLLLayer | Computes $y = x + \log(1 + \exp(-x))$ if $x > 0$; $y = \log(1 + \exp(x))$ otherwise. |
FloatCallbackVector | |
FloatCallbackVector.Iterator | |
FloatConcatLayer | Takes at least two Blobs and concatenates them along either the num or channel dimension, outputting the result. |
FloatConstantFiller | Fills a Blob with constant values $x = 0$. |
FloatContrastiveLossLayer | Computes the contrastive loss $E = \frac{1}{2N} \sum_{n=1}^N y\, d^2 + (1-y) \max(margin - d, 0)^2$, where $d = \lVert a_n - b_n \rVert_2$. |
FloatConvolutionLayer | Convolves the input image with a bank of learned filters, and (optionally) adds biases. |
FloatCropLayer | Takes a Blob and crops it to the shape specified by the second input Blob, across all dimensions after the specified axis. |
FloatCuDNNConvolutionLayer | |
FloatCuDNNLCNLayer | |
FloatCuDNNLRNLayer | |
FloatCuDNNPoolingLayer | |
FloatCuDNNReLULayer | cuDNN acceleration of ReLULayer. |
FloatCuDNNSigmoidLayer | cuDNN acceleration of SigmoidLayer. |
FloatCuDNNSoftmaxLayer | cuDNN implementation of SoftmaxLayer. |
FloatCuDNNTanHLayer | cuDNN acceleration of TanHLayer. |
FloatDataLayer | |
FloatDataTransformer | Applies common transformations to the input data, such as scaling, mirroring, and subtracting the image mean. |
FloatDeconvolutionLayer | Convolves the input with a bank of learned filters, and (optionally) adds biases, treating filters and convolution parameters in the opposite sense as ConvolutionLayer. |
FloatDropoutLayer | During training only, sets a random portion of $x$ to 0, adjusting the rest of the vector magnitude accordingly. |
FloatDummyDataLayer | Provides data to the Net generated by a Filler. |
FloatEltwiseLayer | Computes elementwise operations, such as product and sum, along multiple input Blobs. |
FloatEmbedLayer | A layer for learning "embeddings" of one-hot vector input. |
FloatEuclideanLossLayer | Computes the Euclidean (L2) loss $E = \frac{1}{2N} \sum_{n=1}^N \lVert \hat{y}_n - y_n \rVert_2^2$ for real-valued regression tasks. |
FloatExpLayer | Computes $y = \gamma^{\alpha x + \beta}$, as specified by the scale $\alpha$, shift $\beta$, and base $\gamma$. |
FloatFiller | Fills a Blob with constant or randomly-generated data. |
FloatFilterLayer | Takes two or more Blobs, interprets the last Blob as a selector, and filters the remaining Blobs accordingly with the selector data (0 means the corresponding item is filtered out; non-zero means it stays). |
FloatFlattenLayer | Reshapes the input Blob into flat vectors. |
FloatGaussianFiller | Fills a Blob with Gaussian-distributed values $x = a$. |
FloatHDF5DataLayer | Provides data to the Net from HDF5 files. |
FloatHDF5OutputLayer | Writes blobs to disk as HDF5 files. |
FloatHingeLossLayer | Computes the hinge loss for a one-of-many classification task. |
FloatIm2colLayer | A helper for image operations that rearranges image regions into column vectors. |
FloatImageDataLayer | Provides data to the Net from image files. |
FloatInfogainLossLayer | A generalization of MultinomialLogisticLossLayer that takes an "information gain" (infogain) matrix specifying the "value" of all label pairs. |
FloatInnerProductLayer | Also known as a "fully-connected" layer; computes an inner product with a set of learned weights, and (optionally) adds biases. |
FloatInputLayer | Provides data to the Net by assigning tops directly. |
FloatLayer | An interface for the units of computation which can be composed into a Net. |
FloatLayerRegisterer | |
FloatLayerRegisterer.Creator_LayerParameter | |
FloatLayerRegistry | |
FloatLayerRegistry.Creator | |
FloatLayerSharedVector | |
FloatLayerSharedVector.Iterator | |
FloatLossLayer | An interface for Layers that take two Blobs as input -- usually (1) predictions and (2) ground-truth labels -- and output a singleton Blob representing the loss. |
FloatLRNLayer | Normalizes the input in a local region across or within feature maps. |
FloatLSTMLayer | Processes sequential inputs using a "Long Short-Term Memory" (LSTM) [1] style recurrent neural network (RNN). |
FloatMemoryDataLayer | Provides data to the Net from memory. |
FloatMSRAFiller | Fills a Blob with values $x \sim N(0, \sigma^2)$ where $\sigma^2$ is set inversely proportional to the number of incoming nodes, outgoing nodes, or their average. |
FloatMultinomialLogisticLossLayer | Computes the multinomial logistic loss for a one-of-many classification task, directly taking a predicted probability distribution as input. |
FloatMVNLayer | Normalizes the input to have zero mean and/or unit (1) variance. |
FloatNesterovSolver | |
FloatNet | Connects Layers together into a directed acyclic graph (DAG) specified by a NetParameter. |
FloatNet.Callback | |
FloatNetSharedVector | |
FloatNetSharedVector.Iterator | |
FloatNeuronLayer | An interface for layers that take one blob as input ($x$) and produce one equally-sized blob as output ($y$), where each element of the output depends only on the corresponding input element. |
FloatParameterLayer | |
FloatPoolingLayer | Pools the input image by taking the max, average, etc. |
FloatPositiveUnitballFiller | Fills a Blob with values $x \in [0, 1]$ such that $\forall i \; \sum_j x_{ij} = 1$. |
FloatPowerLayer | Computes $y = (\alpha x + \beta)^\gamma$, as specified by the scale $\alpha$, shift $\beta$, and power $\gamma$. |
FloatPReLULayer | Parameterized Rectified Linear Unit non-linearity $y_i = \max(0, x_i) + a_i \min(0, x_i)$. |
FloatRecurrentLayer | An abstract class for implementing recurrent behavior inside of an unrolled network. |
FloatReductionLayer | Computes "reductions" -- operations that return a scalar output Blob for an input Blob of arbitrary size, such as the sum, absolute sum, and sum of squares. |
FloatRegistry | |
FloatRegistry.Iterator | |
FloatReLULayer | Rectified Linear Unit non-linearity $y = \max(0, x)$. |
FloatReshapeLayer | |
FloatRMSPropSolver | |
FloatRNNLayer | Processes time-varying inputs using a simple recurrent neural network (RNN). |
FloatSGDSolver | Optimizes the parameters of a Net using stochastic gradient descent (SGD) with momentum. |
FloatSigmoidCrossEntropyLossLayer | Computes the cross-entropy (logistic) loss $E = \frac{-1}{N} \sum_{n=1}^N \left[ p_n \log \hat{p}_n + (1 - p_n) \log(1 - \hat{p}_n) \right]$, often used for predicting targets interpreted as probabilities. |
FloatSigmoidLayer | Sigmoid function non-linearity $y = (1 + \exp(-x))^{-1}$, a classic choice in neural networks. |
FloatSilenceLayer | Ignores bottom blobs while producing no top blobs. |
FloatSliceLayer | Takes a Blob and slices it along either the num or channel dimension, outputting multiple sliced Blob results. |
FloatSoftmaxLayer | Computes the softmax function. |
FloatSoftmaxWithLossLayer | Computes the multinomial logistic loss for a one-of-many classification task, passing real-valued predictions through a softmax to get a probability distribution over classes. |
FloatSolver | An interface for classes that perform optimization on Nets. |
FloatSolver.Callback | |
FloatSolverRegisterer | |
FloatSolverRegisterer.Creator_SolverParameter | |
FloatSolverRegistry | |
FloatSolverRegistry.Creator | |
FloatSplitLayer | Creates a "split" path in the network by copying the bottom Blob into multiple top Blobs to be used by multiple consuming layers. |
FloatSPPLayer | Does spatial pyramid pooling on the input image by taking the max, average, etc. |
FloatTanHLayer | TanH hyperbolic tangent non-linearity $y = \frac{\exp(2x) - 1}{\exp(2x) + 1}$, popular in auto-encoders. |
FloatThresholdLayer | Tests whether the input exceeds a threshold: outputs 1 for inputs above the threshold; 0 otherwise. |
FloatTileLayer | Copies a Blob along specified dimensions. |
FloatUniformFiller | Fills a Blob with uniformly distributed values $x \sim U(a, b)$. |
FloatWindowDataLayer | Provides data to the Net from windows of image files, specified by a window data file. |
FloatXavierFiller | Fills a Blob with values $x \sim U(-a, +a)$ where $a$ is set inversely proportional to the number of incoming nodes, outgoing nodes, or their average. |
Func | |
GeneratedMessageReflection | |
HDF5DataParameter | |
HDF5OutputParameter | |
HingeLossParameter | |
ImageDataParameter | |
InfogainLossParameter | |
InnerProductParameter | |
InputParameter | |
InternalThread | Virtual class encapsulating boost::thread for use in a base class; a child class acquires the ability to run a single thread by reimplementing the virtual function InternalThreadEntry. |
LayerParameter | |
LazyDescriptor | |
LazyField | |
LevelDB | |
LevelDBCursor | |
LevelDBTransaction | |
LMDB | |
LMDBCursor | |
LMDBTransaction | |
LogParameter | |
LongLongPair | |
LossParameter | |
LRNParameter | |
MapFieldBase | |
MapFieldPrinterHelper | |
MapFieldReflectionTest | |
MapIterator | |
MapKey | |
MapKeySorter | |
MapReflectionFriend | |
MapReflectionTester | |
MapValueRef | |
MemoryDataParameter | |
Message | |
MessageFactory | |
MessageLite | |
MessageOptions | |
Metadata | |
MethodDescriptor | |
MethodDescriptorProto | |
MethodOptions | |
MVNParameter | |
NetParameter | |
NetState | |
NetStateRule | |
OneofDescriptor | |
OneofDescriptorProto | |
OneofOptions | |
ParameterParameter | |
ParamSpec | |
ParseTable | |
ParseTableField | |
PoolingParameter | |
PowerParameter | |
PReLUParameter | |
Printer | |
PythonParameter | |
RecurrentParameter | |
ReductionParameter | |
Reflection | |
ReflectionAccessor | |
ReflectionOps | |
ReLUParameter | |
RepeatedFieldAccessor | |
RepeatedPtrFieldBase | |
ReshapeParameter | |
ScaleParameter | |
ServiceDescriptor | |
ServiceDescriptorProto | |
ServiceOptions | |
SigmoidParameter | |
SliceParameter | |
SoftmaxParameter | |
SolverParameter | |
SolverState | |
SourceCodeInfo | |
SourceLocation | |
SPPParameter | |
StringIntMap | |
StringIntMap.Iterator | |
StringPiece | |
StringVector | |
StringVector.Iterator | |
Symbol | |
SyncedMemory | Manages memory allocation and synchronization between the host (CPU) and device (GPU). |
TanHParameter | |
thread | Forward-declares boost::thread instead of including boost/thread.hpp, to avoid boost/NVCC issues (#1009, #1010) on OSX. |
ThresholdParameter | |
TileParameter | |
Timer | |
Transaction | |
TransformationParameter | |
UninterpretedOption | |
UnknownField | |
UnknownFieldSet | |
V0LayerParameter | |
V1LayerParameter | |
WeakFieldMap | |
WindowDataParameter | |
WireFormat | |
WireFormatLite | |
ZeroCopyInputStream | |
ZeroCopyOutputStream | |
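Several of the elementwise layer and loss formulas listed in the table can be sanity-checked with a short sketch. The following is a minimal NumPy illustration of those formulas only; the function names and signatures are ours, not part of the Caffe API or its bindings:

```python
import numpy as np

# Illustrative NumPy versions of formulas from the table above.

def absval(x):                      # FloatAbsValLayer: y = |x|
    return np.abs(x)

def relu(x):                        # FloatReLULayer: y = max(0, x)
    return np.maximum(0.0, x)

def prelu(x, a):                    # FloatPReLULayer: y_i = max(0, x_i) + a_i * min(0, x_i)
    return np.maximum(0.0, x) + a * np.minimum(0.0, x)

def sigmoid(x):                     # FloatSigmoidLayer: y = (1 + exp(-x))^-1
    return 1.0 / (1.0 + np.exp(-x))

def bnll(x):
    # FloatBNLLLayer: y = x + log(1 + exp(-x)) if x > 0, else log(1 + exp(x)).
    # Both branches equal log(1 + exp(x)); the split avoids overflow for large x.
    return np.where(x > 0, x, 0.0) + np.log1p(np.exp(-np.abs(x)))

def power(x, alpha, beta, gamma):   # FloatPowerLayer: y = (alpha*x + beta)^gamma
    return (alpha * x + beta) ** gamma

def euclidean_loss(y_hat, y):
    # FloatEuclideanLossLayer: E = 1/(2N) * sum_n ||y_hat_n - y_n||_2^2
    return np.sum((y_hat - y) ** 2) / (2.0 * y.shape[0])

def sigmoid_xent_loss(p_hat, p):
    # FloatSigmoidCrossEntropyLossLayer:
    # E = -1/N * sum_n [p_n*log(p_hat_n) + (1 - p_n)*log(1 - p_hat_n)]
    return -np.sum(p * np.log(p_hat) + (1 - p) * np.log(1 - p_hat)) / p.shape[0]
```

In the actual library these computations are performed by the corresponding `Float*`/`Double*` layer classes on Blob data; the sketch only mirrors the documented math.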
Copyright © 2022. All rights reserved.