Package | Description
---|---
`org.tensorflow.lite` | Defines classes to load and execute TensorFlow Lite models.
Modifier and Type | Method and Description
---|---
`Interpreter.Options` | `Interpreter.Options.addDelegate(Delegate delegate)`<br>Adds a `Delegate` to be applied during interpreter creation.
`Interpreter.Options` | `Interpreter.Options.setAllowBufferHandleOutput(boolean allow)`<br>Advanced: Set if buffer handle output is allowed.
`Interpreter.Options` | `Interpreter.Options.setAllowFp16PrecisionForFp32(boolean allow)`<br>Sets whether to allow float16 precision for FP32 calculation when possible.
`Interpreter.Options` | `Interpreter.Options.setNumThreads(int numThreads)`<br>Sets the number of threads to be used for ops that support multi-threading.
`Interpreter.Options` | `Interpreter.Options.setUseNNAPI(boolean useNNAPI)`<br>Sets whether to use NN API (if available) for op execution.
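As the return type column shows, each setter returns the `Interpreter.Options` instance, so the configuration calls above can be chained. A minimal sketch of combining them before interpreter creation (the model path here is a placeholder, not from this reference):

```java
import org.tensorflow.lite.Interpreter;

import java.io.File;

public class OptionsExample {
    public static void main(String[] args) {
        // Each setter returns the Options instance, so calls chain.
        Interpreter.Options options = new Interpreter.Options()
                .setNumThreads(4)                     // ops that support multi-threading
                .setUseNNAPI(true)                    // use NN API if available
                .setAllowFp16PrecisionForFp32(true);  // allow relaxed FP32 precision

        // "model.tflite" is a placeholder path for illustration only.
        try (Interpreter interpreter = new Interpreter(new File("model.tflite"), options)) {
            // interpreter.run(input, output);
        }
    }
}
```

Options must be set before the `Interpreter` is constructed; they are applied during interpreter creation.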
Constructor and Description
---
`Interpreter(@NonNull ByteBuffer byteBuffer, Interpreter.Options options)`<br>Initializes an `Interpreter` with a `ByteBuffer` of a model file and a set of custom `Interpreter.Options`.
`Interpreter(@NonNull File modelFile, Interpreter.Options options)`<br>Initializes an `Interpreter` with a `File` of a model file and a set of custom `Interpreter.Options`.
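A common way to obtain the `ByteBuffer` for the first constructor is to memory-map the model file with standard `java.nio`. A minimal sketch (the helper name and file path are illustrative, not part of this API):

```java
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class ModelLoader {
    // Memory-maps a file into a read-only MappedByteBuffer, a form the
    // Interpreter(ByteBuffer, Interpreter.Options) constructor accepts.
    static MappedByteBuffer mapFile(String path) throws Exception {
        try (RandomAccessFile file = new RandomAccessFile(path, "r");
             FileChannel channel = file.getChannel()) {
            return channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());
        }
    }

    public static void main(String[] args) throws Exception {
        // "model.tflite" would then be passed on to the constructor, e.g.:
        // Interpreter interpreter = new Interpreter(mapFile("model.tflite"), options);
        MappedByteBuffer buffer = mapFile(args[0]);
        System.out.println(buffer.capacity());
    }
}
```

Mapping avoids copying the whole model into heap memory before handing it to the interpreter.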
Copyright © 2022. All rights reserved.