@Namespace(value="torch::autograd") @NoOffset @Properties(inherit=torch.class) public class Node extends Pointer
| Modifier and Type | Class and Description |
|---|---|
| static class | Node.undefined_input |

Nested classes/interfaces inherited from class org.bytedeco.javacpp.Pointer: Pointer.CustomDeallocator, Pointer.Deallocator, Pointer.NativeDeallocator, Pointer.ReferenceCounter

| Constructor and Description |
|---|
| Node(Pointer p) Pointer cast constructor. |

| Modifier and Type | Method and Description |
|---|---|
| int | add_input_metadata(Node.undefined_input u) Adds a placeholder for an input that will not be used. |
| int | add_input_metadata(Tensor t) |
| int | add_input_metadata(TensorOptions options, SymIntArrayRef shape, boolean is_tensor_subclass) Adds the type and shape metadata for a new input. |
| void | add_next_edge(Edge edge) |
| long | add_post_hook(FunctionPostHook post_hook) |
| void | add_pre_hook(FunctionPreHook pre_hook) |
| void | add_retains_grad_hook(FunctionPreHook pre_hook, int output_idx) |
| void | add_tensor_pre_hook(FunctionPreHook pre_hook) |
| TensorVector | apply_with_saved(TensorVector inputs, SwapSavedVariables saved) |
| TensorVector | apply(TensorVector inputs) Evaluates the function on the given inputs and returns the result of the function call. |
| void | assign_parent() |
| void | clear_input_metadata() |
| void | compiled_args(CompiledNodeArgs args) |
| boolean | del_post_hook(long key) |
| Node | getptr() Nodes are neither copyable nor moveable. |
| boolean | is_traceable() Returns true if this function is traceable. |
| AnomalyMetadata | metadata() Returns the anomaly metadata stored for this Node. |
| BytePointer | name() Returns the name of the dynamic type of the function, for debugging. |
| Edge | next_edge(long index) |
| EdgeVector | next_edges() |
| int | num_inputs() |
| int | num_outputs() |
| boolean | passes_state_transparently() A Node is said to pass state transparently to backward if the state consists only of (Saved)Variables and only non-variable objects that parameterize the operation in some way that defines the graph structure AND the backward function is traceable. |
| FunctionPreHook | pop_retains_grad_hook(int output_idx) |
| FunctionPostHookVector | post_hooks() |
| FunctionPreHookVector | pre_hooks() |
| Pointer | pyobj() Returns the PyObject stored for this Node (for Python interaction). |
| void | release_variables() Releases saved variables if the operation won't be reused. |
| long | sequence_nr() NOTE [ Sequence Number ]: the sequence_nr has two main usages in autograd; 1) it helps determine the node's execution priority in the engine. |
| void | set_next_edge(long index, Edge edge) |
| void | set_next_edges(EdgeVector next_edges) |
| void | set_pyobj(Pointer pyobj) Sets the PyObject stored for this Node (for Python interaction). |
| boolean | should_compute_output(long output_edge_index) should_compute_output should only be used during graph construction and takes into account only requires_grad information; task_should_compute_output should only be called during the backward pass (unless called directly through grad_fn) and takes into account the current graph task. |
| StreamOptional | stream(byte device_type) |
| StreamOptional | stream(torch.DeviceType device_type) Note: a function's stream (for a given device type) is the stream of the first element of its input buffer on a device of that type. |
| boolean | task_should_compute_output(long output_edge_index) Same as should_compute_output, but also checks whether this edge is needed within the current graph task. |
| PostAccumulateGradHook | tensor_post_acc_grad_hooks() |
| FunctionPreHookVector | tensor_pre_hooks() |
| long | thread_id() Id of the thread that created this Node. |
| long | topological_nr() |
| void | update_topological_nr(Edge edge) |
| void | will_release_variables() Called before an apply if release_variables() is going to be called. |
Methods inherited from class org.bytedeco.javacpp.Pointer: address, asBuffer, asByteBuffer, availablePhysicalBytes, calloc, capacity, close, deallocate, deallocateReferences, deallocator, equals, fill, formatBytes, free, getDirectBufferAddress, getPointer, hashCode, interruptDeallocatorThread, isNull, limit, malloc, maxBytes, maxPhysicalBytes, memchr, memcmp, memcpy, memmove, memset, offsetAddress, offsetof, parseBytes, physicalBytes, physicalBytesInaccurate, position, put, realloc, referenceCount, releaseReference, retainReference, setNull, sizeof, toString, totalBytes, totalCount, totalPhysicalBytes, withDeallocator, zero
public Node(Pointer p)
Pointer cast constructor. See also: Pointer(Pointer)

@SharedPtr public Node getptr()
Nodes are neither copyable nor moveable.
@Name(value="operator ()") @Cast(value={"","std::vector<torch::Tensor>"}) @StdMove public TensorVector apply(@Cast(value={"","std::vector<torch::Tensor>"}) @StdMove TensorVector inputs)
Evaluates the function on the given inputs and returns the result of the function call.
@Cast(value="uint32_t") @NoException(value=true) public int add_input_metadata(@Const @ByRef TensorOptions options, @ByVal SymIntArrayRef shape, @Cast(value="bool") boolean is_tensor_subclass)
@Cast(value="uint32_t") @NoException(value=true) public int add_input_metadata(@Const @ByRef Tensor t)
@Cast(value="uint32_t") @NoException(value=true) public int add_input_metadata(@ByVal Node.undefined_input u)
@Cast(value="uint32_t") @NoException(value=true) public int num_inputs()
@ByVal public StreamOptional stream(torch.DeviceType device_type)
@ByVal public StreamOptional stream(@Cast(value="c10::DeviceType") byte device_type)
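The "first element of the input buffer on that device type determines the function's stream" rule noted in the summary can be sketched as a small plain-Java model. All names here (Input, streamId) are hypothetical stand-ins, not the binding's real types:

```java
import java.util.List;
import java.util.Optional;

public class FunctionStreamDemo {
    // Hypothetical stand-in for a device-tagged input and the stream it carries.
    record Input(String deviceType, int streamId) {}

    // Mirrors Node.stream(device_type): scan the input buffer in order and
    // return the stream of the first input that lives on the given device type.
    static Optional<Integer> stream(List<Input> inputs, String deviceType) {
        return inputs.stream()
                .filter(in -> in.deviceType().equals(deviceType))
                .findFirst()
                .map(Input::streamId);
    }

    public static void main(String[] args) {
        List<Input> buffer = List.of(
                new Input("cpu", 0), new Input("cuda", 7), new Input("cuda", 3));
        System.out.println(stream(buffer, "cuda")); // Optional[7]: first CUDA input wins
        System.out.println(stream(buffer, "mps"));  // Optional.empty: no input on that device
    }
}
```

The return type models the binding's StreamOptional: the result is empty when no input lives on the requested device type.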
public void clear_input_metadata()
public void set_next_edges(@Cast(value={"","std::vector<torch::autograd::Edge>"}) @StdMove EdgeVector next_edges)
@Const @ByRef @NoException(value=true) public Edge next_edge(@Cast(value="size_t") long index)
@ByRef @NoException(value=true) public EdgeVector next_edges()
@Cast(value="uint32_t") @NoException(value=true) public int num_outputs()
@Cast(value="uint64_t") @NoException(value=true) public long sequence_nr()
NOTE [ Sequence Number ]
The sequence_nr has two main usages in autograd:
1) Helps determine the node's execution priority in the engine.
2) The sequence number of this Node is paired with the thread_id it was created in, as a unique identifier used by the profiler to annotate recorded events. The purpose of this is to help users (and possibly programs) interpreting the profiler's output correlate backward nodes with their forward ops. We need both sequence_nr and thread_id to identify a node because sequence_nr is thread-local, i.e., it starts counting up from zero in each new thread.

@Cast(value="uint64_t") @NoException(value=true) public long topological_nr()
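The thread-local numbering described for sequence_nr can be illustrated with a plain-Java model. This is a conceptual sketch of why the (sequence_nr, thread_id) pair is needed, not the binding's internals:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class SequenceNrDemo {
    // Thread-local counter: each thread numbers its nodes from zero,
    // mirroring the thread-local sequence_nr described above.
    static final ThreadLocal<long[]> COUNTER = ThreadLocal.withInitial(() -> new long[] {0});

    // A node identity as the profiler sees it: the sequence number alone is
    // ambiguous across threads, so it is paired with the creating thread's id.
    record NodeId(long sequenceNr, long threadId) {}

    static NodeId newNode() {
        return new NodeId(COUNTER.get()[0]++, Thread.currentThread().getId());
    }

    // Creates two nodes on each of two threads and returns all four ids.
    static List<NodeId> run() throws InterruptedException {
        List<NodeId> ids = new CopyOnWriteArrayList<>();
        Runnable work = () -> { ids.add(newNode()); ids.add(newNode()); };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();
        return ids;
    }

    public static void main(String[] args) throws InterruptedException {
        List<NodeId> ids = run();
        // Both threads started counting at zero, so sequence_nr 0 appears twice...
        System.out.println(ids.stream().filter(id -> id.sequenceNr() == 0).count()); // 2
        // ...and only the (sequence_nr, thread_id) pair identifies a node uniquely.
        System.out.println(ids.stream().distinct().count()); // 4
    }
}
```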
public void assign_parent()
@Cast(value="uint64_t") @NoException(value=true) public long thread_id()
@StdString public BytePointer name()
@Cast(value="bool") public boolean should_compute_output(@Cast(value="size_t") long output_edge_index)
The difference between the functions should_compute_output and task_should_compute_output:
- should_compute_output should only be used during graph construction and takes into account only requires_grad information.
- task_should_compute_output should only be called during the backward pass (unless called directly through grad_fn) and takes into account the current graph task. Specifically, the autograd engine trims unnecessary edges when inputs are specified, and during backward untrimmed nodes left on the graph can/should check task_should_compute_output to see if any outgoing edges have been trimmed by the engine. If that is the case, gradient computation wrt those edges can be omitted.
Returns true if the particular output edge is active, and that particular output of this function should be computed.

@Cast(value="bool") public boolean task_should_compute_output(@Cast(value="size_t") long output_edge_index)
Same as the above should_compute_output function but will also check whether this edge is needed within the current graph task.

@Cast(value="PyObject*") @NoException(value=true) public Pointer pyobj()
Returns the PyObject stored for this Node (for Python interaction).

@NoException(value=true) public void set_pyobj(@Cast(value="PyObject*") Pointer pyobj)
Sets the PyObject stored for this Node (for Python interaction).

@NoException(value=true) public AnomalyMetadata metadata()
Returns the anomaly metadata stored for this Node. If none exist, creates a new empty one.

@Cast(value="uintptr_t") public long add_post_hook(@UniquePtr @ByRef(value=true) FunctionPostHook post_hook)
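The should_compute_output / task_should_compute_output distinction can be modeled in a few lines of plain Java. This is a conceptual sketch with hypothetical names, not the engine's real data structures:

```java
import java.util.Set;

public class GraphTaskDemo {
    // should_compute_output: graph-construction-time check, based only on
    // whether the consumer of that output edge requires grad.
    static boolean shouldComputeOutput(boolean[] requiresGrad, int edge) {
        return requiresGrad[edge];
    }

    // task_should_compute_output: backward-time check that additionally
    // consults the current graph task, which may have trimmed edges when
    // gradients of specific inputs were requested.
    static boolean taskShouldComputeOutput(
            boolean[] requiresGrad, Set<Integer> taskNeededEdges, int edge) {
        return shouldComputeOutput(requiresGrad, edge) && taskNeededEdges.contains(edge);
    }

    public static void main(String[] args) {
        boolean[] requiresGrad = {true, true, false};
        // The engine was asked for gradients of specific inputs, so the
        // current task only keeps edge 0; edge 1 has been trimmed.
        Set<Integer> taskNeededEdges = Set.of(0);

        System.out.println(shouldComputeOutput(requiresGrad, 1));                      // true
        System.out.println(taskShouldComputeOutput(requiresGrad, taskNeededEdges, 1)); // false
    }
}
```

Edge 1 passes the construction-time check but fails the task-time check, so gradient computation with respect to that edge can be omitted.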
@Cast(value="bool") public boolean del_post_hook(@Cast(value="const uintptr_t") long key)
@ByRef @NoException(value=true) public FunctionPostHookVector post_hooks()
public void add_pre_hook(@UniquePtr @ByRef(value=true) FunctionPreHook pre_hook)
public void add_tensor_pre_hook(@UniquePtr @ByRef(value=true) FunctionPreHook pre_hook)
public void add_retains_grad_hook(@UniquePtr @ByRef(value=true) FunctionPreHook pre_hook, int output_idx)
@UniquePtr @ByVal public FunctionPreHook pop_retains_grad_hook(int output_idx)
@ByRef @NoException(value=true) public FunctionPreHookVector pre_hooks()
@ByRef @NoException(value=true) public FunctionPreHookVector tensor_pre_hooks()
@UniquePtr @NoException(value=true) public PostAccumulateGradHook tensor_post_acc_grad_hooks()
public void release_variables()
Releases saved variables if the operation won't be reused.
public void will_release_variables()
Called before an apply if release_variables() is going to be called. Allows larger ops like InterpreterAutogradFunction to incrementally release variables as they run.

@Cast(value="bool") public boolean is_traceable()
Returns true if this function is traceable. An op is traceable if all operations performed within apply() are performed on autograd Variables (i.e. apply mostly instantiates and applies other functions).

@Cast(value="bool") public boolean passes_state_transparently()
A Node is said to pass state transparently to backward if the state consists only of (Saved)Variables and only non-variable objects that parameterize the operation in some way that defines the graph structure AND the backward function is traceable. In particular, parametrization MUST NOT depend on the data of any Variable.
TODO: it might be possible to handle cases where backward is non-traceable but state passing could be considered transparent. This will probably depend on saved_variable_list being mutable.
NOTE: this value matters only if is_traceable() returns false.

public void compiled_args(@ByRef CompiledNodeArgs args)
@Cast(value={"","std::vector<torch::Tensor>"}) @StdMove public TensorVector apply_with_saved(@Cast(value={"","std::vector<torch::Tensor>"}) @StdMove TensorVector inputs, @ByRef SwapSavedVariables saved)
Copyright © 2024. All rights reserved.