Package | Description |
---|---|
org.bytedeco.tensorrt.global | |
org.bytedeco.tensorrt.nvinfer | |
org.bytedeco.tensorrt.nvparsers | |
Modifier and Type | Method and Description |
---|---|
nvinfer.DataType | nvinfer.DataType.intern() |
static nvinfer.DataType | nvinfer.DataType.valueOf(String name) Returns the enum constant of this type with the specified name. |
static nvinfer.DataType[] | nvinfer.DataType.values() Returns an array containing the constants of this enum type, in the order they are declared. |
Modifier and Type | Method and Description |
---|---|
static boolean | nvinfer.transposeSubBuffers(Pointer input, nvinfer.DataType type, int num, int height, int width) Deprecated. Deprecated in TensorRT 8.0. Warning: this file will be removed in TensorRT 10.0. |
Modifier and Type | Method and Description |
---|---|
nvinfer.DataType | ICudaEngine.getBindingDataType(int bindingIndex) Deprecated. Deprecated in TensorRT 8.5. Superseded by getTensorDataType(). |
nvinfer.DataType | VCudaEngine.getBindingDataType(int bindingIndex) |
nvinfer.DataType | INormalizationLayer.getComputePrecision() Get the compute precision of this layer. |
nvinfer.DataType | VNormalizationLayer.getComputePrecision() |
nvinfer.DataType | IAlgorithmIOInfo.getDataType() Return the DataType of the input/output of the algorithm. |
nvinfer.DataType | VAlgorithmIOInfo.getDataType() |
nvinfer.DataType | IPluginV2Ext.getOutputDataType(int index, int[] inputTypes, int nbInputs) Deprecated. |
nvinfer.DataType | IPluginV2Ext.getOutputDataType(int index, IntPointer inputTypes, int nbInputs) Deprecated. Return the DataType of the plugin output at the requested index. |
nvinfer.DataType | ILayer.getOutputType(int index) Get the output type of this layer. |
nvinfer.DataType | VLayer.getOutputType(int index) |
nvinfer.DataType | ILayer.getPrecision() Get the computational precision of this layer. |
nvinfer.DataType | VLayer.getPrecision() |
nvinfer.DataType | ICudaEngine.getTensorDataType(String tensorName) Determine the required data type for a buffer from its tensor name. |
nvinfer.DataType | VCudaEngine.getTensorDataType(String tensorName) |
nvinfer.DataType | ICastLayer.getToType() Return the cast layer output type. |
nvinfer.DataType | VCastLayer.getToType() |
nvinfer.DataType | ITensor.getType() Get the data type of a tensor. |
nvinfer.DataType | VTensor.getType() |
nvinfer.DataType | Weights.type() The type of the weights. |
nvinfer.DataType | PluginTensorDesc.type() Warning: DataType::kBOOL and DataType::kUINT8 are not supported. |
nvinfer.DataType | VHostMemory.type() |
nvinfer.DataType | IHostMemory.type() The type of the memory that was allocated. |
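As a quick illustration of the type-query methods above, the name-based `getTensorDataType` (which supersedes the deprecated, index-based `getBindingDataType`) can be wrapped like this. This is a minimal sketch, not a complete program: the `ICudaEngine` instance and the tensor name are assumed to come from a deserialized engine elsewhere.

```java
import org.bytedeco.tensorrt.global.nvinfer;
import org.bytedeco.tensorrt.nvinfer.ICudaEngine;

public class TensorTypeQuery {
    // Pure check on the enum value itself.
    static boolean isFloat(nvinfer.DataType type) {
        return type == nvinfer.DataType.kFLOAT;
    }

    // Looks up the tensor's type by name; getTensorDataType supersedes the
    // binding-index based getBindingDataType (deprecated in TensorRT 8.5).
    static boolean isFloatTensor(ICudaEngine engine, String tensorName) {
        return isFloat(engine.getTensorDataType(tensorName));
    }
}
```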
Modifier and Type | Method and Description |
---|---|
ICastLayer | INetworkDefinition.addCast(ITensor input, nvinfer.DataType toType) Add a cast layer. |
ICastLayer | VNetworkDefinition.addCast(ITensor input, nvinfer.DataType toType) |
ITensor | INetworkDefinition.addInput(String name, nvinfer.DataType type, Dims32 dimensions) Add an input tensor to the network. |
ITensor | VNetworkDefinition.addInput(String name, nvinfer.DataType type, Dims32 dimensions) |
void | IPluginV2.configureWithFormat(Dims32 inputDims, int nbInputs, Dims32 outputDims, int nbOutputs, nvinfer.DataType type, nvinfer.TensorFormat format, int maxBatchSize) Deprecated. Configure the layer. |
void | INormalizationLayer.setComputePrecision(nvinfer.DataType type) Set the compute precision of this layer. |
void | VNormalizationLayer.setComputePrecision(nvinfer.DataType type) |
void | ILayer.setOutputType(int index, nvinfer.DataType dataType) Set the output type of this layer. Setting the output type constrains TensorRT to choose implementations which generate output data with the given type. |
void | VLayer.setOutputType(int index, nvinfer.DataType dataType) |
void | ILayer.setPrecision(nvinfer.DataType dataType) Set the computational precision of this layer. Setting the precision allows TensorRT to choose an implementation which runs at this computational precision. |
void | VLayer.setPrecision(nvinfer.DataType dataType) |
void | ICastLayer.setToType(nvinfer.DataType toType) Set the cast layer output type. |
void | VCastLayer.setToType(nvinfer.DataType toType) |
void | ITensor.setType(nvinfer.DataType type) Set the data type of a tensor. |
void | VTensor.setType(nvinfer.DataType type) |
boolean | IPluginV2.supportsFormat(nvinfer.DataType type, nvinfer.TensorFormat format) Deprecated. Check format support. |
Weights | Weights.type(nvinfer.DataType setter) |
PluginTensorDesc | PluginTensorDesc.type(nvinfer.DataType setter) |
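The add/set methods above compose in the usual network-building pattern: declare an input with a DataType, then constrain or convert types per layer. A minimal sketch, assuming an `INetworkDefinition` already created elsewhere (via an `IBuilder`); the tensor name and shape below are illustrative only.

```java
import org.bytedeco.tensorrt.global.nvinfer;
import org.bytedeco.tensorrt.nvinfer.Dims32;
import org.bytedeco.tensorrt.nvinfer.ICastLayer;
import org.bytedeco.tensorrt.nvinfer.INetworkDefinition;
import org.bytedeco.tensorrt.nvinfer.ITensor;

public class CastExample {
    // Adds a hypothetical FP32 input named "x" and casts it to FP16.
    // The 1x3x224x224 shape is a placeholder, not taken from the docs.
    static ITensor addHalfInput(INetworkDefinition network) {
        Dims32 dims = new Dims32();
        dims.nbDims(4);
        dims.d(0, 1).d(1, 3).d(2, 224).d(3, 224);

        ITensor input = network.addInput("x", nvinfer.DataType.kFLOAT, dims);
        // getToType()/setToType() would read or change the cast target.
        ICastLayer cast = network.addCast(input, nvinfer.DataType.kHALF);
        return cast.getOutput(0);
    }
}
```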
Modifier and Type | Method and Description |
---|---|
nvinfer.DataType | IBinaryProtoBlob.getDataType() |
Modifier and Type | Method and Description |
---|---|
boolean | IUffParser.parse(String file, INetworkDefinition network, nvinfer.DataType weightsType) Parse a UFF file. |
IBlobNameToTensor | ICaffeParser.parse(String deploy, String model, INetworkDefinition network, nvinfer.DataType weightType) Parse a prototxt file and a binaryproto Caffe model to extract network definition and weights associated with the network, respectively. |
boolean | IUffParser.parseBuffer(String buffer, long size, INetworkDefinition network, nvinfer.DataType weightsType) Parse a UFF buffer, useful if the file already lives in memory. |
IBlobNameToTensor | ICaffeParser.parseBuffers(byte[] deployBuffer, long deployLength, byte[] modelBuffer, long modelLength, INetworkDefinition network, nvinfer.DataType weightType) |
IBlobNameToTensor | ICaffeParser.parseBuffers(ByteBuffer deployBuffer, long deployLength, ByteBuffer modelBuffer, long modelLength, INetworkDefinition network, nvinfer.DataType weightType) |
IBlobNameToTensor | ICaffeParser.parseBuffers(BytePointer deployBuffer, long deployLength, BytePointer modelBuffer, long modelLength, INetworkDefinition network, nvinfer.DataType weightType) Parse a deploy prototxt and a binaryproto Caffe model from memory buffers to extract network definition and weights associated with the network, respectively. |
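For the legacy Caffe path, the `parse` overload above takes the weight DataType directly. A hedged sketch, assuming `createCaffeParser()` from `org.bytedeco.tensorrt.global.nvparsers` and an existing network definition; the file paths are placeholders.

```java
import org.bytedeco.tensorrt.global.nvinfer;
import org.bytedeco.tensorrt.nvinfer.INetworkDefinition;
import org.bytedeco.tensorrt.nvparsers.IBlobNameToTensor;
import org.bytedeco.tensorrt.nvparsers.ICaffeParser;
import static org.bytedeco.tensorrt.global.nvparsers.createCaffeParser;

public class CaffeImport {
    // Parses placeholder model files into an existing network definition,
    // asking the parser to store weights as FP32.
    static IBlobNameToTensor importCaffe(INetworkDefinition network) {
        ICaffeParser parser = createCaffeParser();
        return parser.parse("model.prototxt", "model.caffemodel",
                network, nvinfer.DataType.kFLOAT);
    }
}
```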
Copyright © 2024. All rights reserved.