Data type of the tensor
Number of nonzero entries in the tensor
Total number of entries (including zero entries) in the tensor
Number of sparse dimensions
Dense strides of the tensor
Adds a second tensor, which can be either a sparse or a dense tensor.
This is not supported on the WebGL backend yet.
Align the shapes of this tensor and the given tensor according to the broadcasting rules: https://github.com/onnx/onnx/blob/master/docs/Broadcasting.md
Tensor of which the shapes should be aligned
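The linked ONNX broadcasting rules follow the NumPy convention: shapes are right-aligned, missing leading dimensions are treated as 1, and two dimensions are compatible when they are equal or one of them is 1. A plain sketch of that rule (the function name here is illustrative, not the library's API):

```typescript
// Sketch of ONNX/NumPy-style shape alignment (assumed semantics;
// `alignShapes` is a hypothetical stand-in for the method above).
function alignShapes(a: number[], b: number[]): number[] {
  const rank = Math.max(a.length, b.length);
  // Right-align both shapes by padding with leading 1s
  const pa = Array(rank - a.length).fill(1).concat(a);
  const pb = Array(rank - b.length).fill(1).concat(b);
  return pa.map((da, i) => {
    const db = pb[i];
    if (da !== db && da !== 1 && db !== 1) {
      throw new Error(`Shapes not broadcastable at axis ${i}: ${da} vs ${db}`);
    }
    return Math.max(da, db);
  });
}
```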
Performs average pooling over the spatial dimensions of this tensor with shape [N,C,D1,D2,...]
Size of the average pooling kernel for each spatial dimension
Padding of the input, specified as [startpad_D1,startpad_D2,...,startpad_DN,endpad_D1,endpad_D2,...]. The padding value is 0. Defaults to 0 for all axes
Stride size of the average pooling kernel. Defaults to 1 for all axes
Whether padded values should be included in the average (or masked out). Defaults to false
Not implemented yet
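The average-pooling semantics above can be illustrated on a single 1D spatial axis. This is a plain-array sketch; the helper name and option layout are hypothetical, not the library's API:

```typescript
// Illustrative 1D average pooling. pads = [startPad, endPad];
// padded values count as 0 in the sum, and `includePad` decides
// whether they also count in the divisor.
function averagePool1d(
  input: number[], kernel: number,
  pads: [number, number] = [0, 0], stride = 1,
  includePad = false
): number[] {
  const [p0, p1] = pads;
  const outLen = Math.floor((input.length + p0 + p1 - kernel) / stride) + 1;
  const out: number[] = [];
  for (let o = 0; o < outLen; o++) {
    let sum = 0, count = 0;
    for (let k = 0; k < kernel; k++) {
      const i = o * stride - p0 + k; // index into the unpadded input
      if (i >= 0 && i < input.length) { sum += input[i]; count++; }
    }
    out.push(sum / (includePad ? kernel : count));
  }
  return out;
}
```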
Compares this tensor to another tensor.
Tensor to compare to
Optional maximum difference between the tensors. If not specified, the tensors have to be exactly equal
Creates a new sparse tensor with the same sparsity shape and the given value everywhere.
Constant value to set at every position
Convolves this tensor with the specified kernel.
This tensor should have shape [N,C,D1,D2,...] where D1,D2,... are the spatial dimensions.
Behaves according to https://github.com/onnx/onnx/blob/master/docs/Operators.md#Conv
Convolution kernel with shape [M,C/G,K1,K2] where G is the group parameter
Optional bias to add to the result with shape [M]
Per axis dilations for the spatial dimension. Defaults to 1 for all axes
Group parameter
Padding to add to the input for each spatial dimension. Defaults to 0 for all axes
Convolution stride for each spatial dimension. Defaults to 1 for all axes
Optional activation to apply. Defaults to the identity (so no activation)
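Reduced to one input channel, one output channel and one spatial axis, the convolution described above (a cross-correlation, as in the linked ONNX Conv) can be sketched as follows. The names and option object are illustrative only:

```typescript
// Minimal 1D convolution sketch, single channel, group = 1.
function conv1d(
  input: number[], kernel: number[],
  { pad = 0, stride = 1, dilation = 1 } = {}
): number[] {
  const span = dilation * (kernel.length - 1) + 1; // effective kernel width
  const outLen = Math.floor((input.length + 2 * pad - span) / stride) + 1;
  const out: number[] = [];
  for (let o = 0; o < outLen; o++) {
    let acc = 0;
    for (let k = 0; k < kernel.length; k++) {
      const i = o * stride - pad + k * dilation;
      if (i >= 0 && i < input.length) acc += input[i] * kernel[k];
    }
    out.push(acc);
  }
  return out;
}
```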
Calculates the transposed convolution.
This tensor should have shape [N,C,D1,D2,...] where D1,D2,... are the spatial dimensions.
Convolution kernel with shape [M,C/G,K1,K2] where G is the group parameter
Per axis dilations for the spatial dimension. Defaults to 1 for all axes
Group parameter
Padding to add to the input for each spatial dimension. Defaults to 0 for all axes
Convolution stride for each spatial dimension. Defaults to 1 for all axes
Not implemented yet
Divides this tensor element-wise by a second tensor, which can be either a sparse or a dense tensor. The same restrictions as for SparseTensor.add apply!
Not implemented yet
Calculates the general matrix product. https://en.wikipedia.org/wiki/Basic_Linear_Algebra_Subprograms#Level_3
A and B can have batch dimensions. Their last two dimensions should correspond to the dimensions for the matrix product
Second matrix for the matrix product
If the last two dimensions of a are transposed. Defaults to false
If the last two dimensions of b are transposed. Defaults to false
Alpha parameter. Defaults to 1.0
Optional tensor to add to the result.
Beta parameter, only used if c is specified. Defaults to 1.0
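Putting the parameters above together, gemm computes alpha * op(A) * op(B) + beta * C, where op transposes the last two dimensions when the corresponding flag is set. A plain 2D sketch (ignoring batch dimensions; names are hypothetical):

```typescript
// 2D GEMM sketch: Y = alpha * op(A) * op(B) + beta * C.
type Mat = number[][];
const t = (m: Mat): Mat => m[0].map((_, j) => m.map(row => row[j]));
function gemm(a: Mat, b: Mat, aT = false, bT = false,
              alpha = 1.0, c?: Mat, beta = 1.0): Mat {
  const A = aT ? t(a) : a;
  const B = bT ? t(b) : b;
  return A.map((row, i) =>
    B[0].map((_, j) => {
      let acc = 0;
      for (let k = 0; k < row.length; k++) acc += row[k] * B[k][j];
      return alpha * acc + (c ? beta * c[i][j] : 0); // beta only matters if c is given
    })
  );
}
```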
Dense part of the shape of the tensor, i.e. the last D values of the shape, where D is the number of dense dimensions.
Sparse part of the shape of the tensor, i.e. the first S values of the shape, where S is the number of sparse dimensions.
Calculates the matrix product. This tensor should have shape [M,N]
Two cases are supported for sparse tensors:
Dense matrix to multiply with. Should have shape [N,O]
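One common case, a sparse [M,N] matrix times a dense [N,O] matrix, can be sketched over the COO layout described on this page (indices of shape [NNZ, 2], values of shape [NNZ]). This is a plain-array stand-in, not the library API:

```typescript
// Sparse [M,N] (COO) times dense [N,O]: each non-zero a[i][k]
// contributes values[nz] * dense[k][j] to out[i][j].
function spMatMul(
  indices: [number, number][], values: number[],
  m: number, dense: number[][]
): number[][] {
  const o = dense[0].length;
  const out = Array.from({ length: m }, () => Array(o).fill(0));
  indices.forEach(([i, k], nz) => {
    for (let j = 0; j < o; j++) out[i][j] += values[nz] * dense[k][j];
  });
  return out;
}
```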
Takes the maximum over specified axis/axes.
One or multiple axes to take the maximum over. If not specified this will be all axes
Whether the maximum axes will be kept with size 1
Takes the minimum over specified axis/axes.
One or multiple axes to take the minimum over. If not specified this will be all axes
Whether the minimum axes will be kept with size 1
Multiplies this tensor element-wise by a second tensor, which can be either a sparse or a dense tensor. The same restrictions as for SparseTensor.add apply!
Pads the input according to the padding mode. The input has shape [D1,D2,...]
Padding size for each axis of the input, specified as [startpad_D1,startpad_D2,...,startpad_DN,endpad_D1,endpad_D2,...]
Padding mode. One of 'constant', 'edge', 'reflect'. Defaults to 'constant'
Value for constant padding. Defaults to 0.0
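The three padding modes can be illustrated on a 1D input (hypothetical helper; 'reflect' mirrors around the boundary without repeating the edge value, as is conventional):

```typescript
// 1D sketch of 'constant', 'edge' and 'reflect' padding.
function pad1d(x: number[], start: number, end: number,
               mode: 'constant' | 'edge' | 'reflect' = 'constant',
               value = 0.0): number[] {
  const n = x.length;
  const pick = (i: number): number => {
    if (i >= 0 && i < n) return x[i];
    if (mode === 'constant') return value;
    if (mode === 'edge') return x[Math.min(Math.max(i, 0), n - 1)];
    // reflect: mirror around the first/last element
    const r = i < 0 ? -i : 2 * (n - 1) - i;
    return x[r];
  };
  return Array.from({ length: start + n + end }, (_, j) => pick(j - start));
}
```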
Takes the product over specified axis/axes.
One or multiple axes to take the product over. If not specified this will be all axes
Whether the product axes will be kept with size 1
Takes the log of the sum over the specified axis/axes.
This is equal to a.sum(axes, keepDims).log() but faster.
Note that this can only be called on tensors with a float data type (float64, float32, float16)
One or multiple axes to sum over. If not specified this will sum over all axes
Whether the summation axes will be kept with size 1
Takes the log of the sum of the exp over the specified axis/axes.
This is equal to a.exp().sum(axes, keepDims).log() but faster.
Note that this can only be called on tensors with a float data type (float64, float32, float16)
One or multiple axes to sum over. If not specified this will sum over all axes
Whether the summation axes will be kept with size 1
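The reduction above, sketched on a flat array. The max-shift below is a standard numerical-stability trick for log-sum-exp; it is an implementation detail added here, not something stated on this page:

```typescript
// log(sum(exp(x))) over a flat array, equivalent to
// a.exp().sum().log() but shifted by the maximum so the
// exponentials cannot overflow; the result is unchanged.
function logSumExp(xs: number[]): number {
  const m = Math.max(...xs);
  return m + Math.log(xs.reduce((s, x) => s + Math.exp(x - m), 0));
}
```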
Takes the mean over the specified axis/axes.
This is equal to a.sum(axes, keepDims).divide(sumSize) (where sumSize is the number of entries in the summation axes) but faster.
One or multiple axes to take the mean over. If not specified this will take the mean over all axes
Whether the mean axes will be kept with size 1
Takes the mean over the specified axis/axes with the entries of the tensor squared.
This is equal to a.multiply(a).sum(axes, keepDims).divide(sumSize) (where sumSize is the number of entries in the summation axes) but faster.
One or multiple axes to take the mean over. If not specified this will take the mean over all axes
Whether the mean axes will be kept with size 1
Reshape the tensor to the specified shape
At most one value in the shape can be -1, which will be replaced by the inferred size for this dimension.
New shape of the tensor
Whether the tensor values should be copied. Only has an effect on GPU tensors
Not implemented yet
Takes a slice of the tensor along the specified axes.
Start of the slice for each axis
End of the slice for each axis - Exclusive (the end index will not be included in the slice)
Axes to slice. Defaults to all axes
Not implemented yet
Takes the softmax along the given axis https://en.wikipedia.org/wiki/Softmax_function
Note that this can only be called on tensors with a float data type (float64, float32, float16)
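The softmax from the linked definition, sketched over a flat array. The max-shift is the usual numerical-stability measure and an implementation detail, not something this page specifies:

```typescript
// softmax(x)_i = exp(x_i) / sum_j exp(x_j), computed with a
// max-shift so large inputs do not overflow exp().
function softmax(xs: number[]): number[] {
  const m = Math.max(...xs);
  const e = xs.map(x => Math.exp(x - m));
  const s = e.reduce((a, b) => a + b, 0);
  return e.map(v => v / s);
}
```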
Subtracts a second tensor, which can be either a sparse or a dense tensor. The same restrictions as for SparseTensor.add apply!
Sums over sparse and/or dense dimensions according to the specified axes.
Sums over the specified axis/axes with the entries of the tensor squared.
This is equal to a.multiply(a).sum(axes, keepDims) but faster.
One or multiple axes to sum over. If not specified this will sum over all axes
Whether the summation axes will be kept with size 1
Transposes the tensor according to the given permutation
Permutation for the axes. Default is the reverse axis order
Not implemented yet
Not implemented yet
Creates a sparse tensor with zero dense dimensions from a dense CPU tensor.
Creates a new sparse tensor in coordinate format. The tensor has a number of sparse dimensions and optionally a number of dense dimensions. The shape of a sparse tensor can thus be decomposed into [...S, ...D], where S is the shape of the sparse dimensions and D the shape of the dense dimensions. By default the number of dense dimensions is zero.
The values tensor holds all non-zero values and has shape [NNZ, ...D] where NNZ is the number of non-zero entries. The indices tensor holds the location of all non-zero entries of the tensor and has shape [NNZ, |S|] (where |S| is the number of sparse dimensions).
Note that all entries whose index is not specified are implicitly zero. This does not, however, mean that they can become non-zero through element-wise operations. Instead, element-wise operations maintain the sparsity pattern; otherwise many operations would create effectively dense tensors (e.g. exp()), or would simply not be well defined (e.g. log()).
If you want to create a sparse tensor, equivalent to the following CPU tensor:
you collect the indices, where the value is nonzero:
and the corresponding values:
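Putting the steps above together, an assumed illustration with a small dense matrix, its non-zero coordinates (shape [NNZ, 2]) and values (shape [NNZ]); the matrix itself is made up for the example:

```typescript
// A dense 3x3 matrix with three non-zero entries...
const dense = [
  [0, 1, 0],
  [2, 0, 0],
  [0, 0, 3],
];
// ...decomposed into COO indices and values by scanning for non-zeros.
const indices: number[][] = [];
const values: number[] = [];
dense.forEach((row, i) => row.forEach((v, j) => {
  if (v !== 0) { indices.push([i, j]); values.push(v); }
}));
// indices = [[0, 1], [1, 0], [2, 2]], values = [1, 2, 3]
```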