
Class SparseTensor<DTpe>

Type parameters

  • DTpe: DType = "float32"


Constructors

constructor

  • new SparseTensor<DTpe>(values: Tensor<DTpe>, indices: Tensor<"uint32">, shape: readonly number[], denseDims?: number): SparseTensor<DTpe>
  • Creates a new sparse tensor in coordinate format. The tensor has a number of sparse dimensions and optionally a number of dense dimensions. The shape of a sparse tensor can thus be decomposed into [...S, ...D], where S is the shape of the sparse dimensions and D the shape of the dense dimensions. By default the number of dense dimensions is zero.

    The values tensor holds all non-zero values and has shape [NNZ, ...D] where NNZ is the number of non-zero entries. The indices tensor holds the location of all non-zero entries of the tensor and has shape [NNZ, |S|] (where |S| is the number of sparse dimensions).

    Note that all indices that are not specified are implicitly zero. This does not, however, mean that they become non-zero under certain element-wise operations; instead, element-wise operations maintain the sparsity pattern. Otherwise, many operations would create effectively dense tensors (e.g. exp()), or would simply not be well defined (e.g. log()).

    example

    If you want to create a sparse tensor, equivalent to the following CPU tensor:

    const a = new CPUTensor([3,3],[1,0,0,0,2,0,0,3,4]);
    

    you collect the indices where the value is nonzero:

    const indices = [
     0,0,  // Corresponds to value 1
     1,1,  // Corresponds to value 2
     2,1,  // Corresponds to value 3
     2,2   // Corresponds to value 4
    ];
    const indiceTensor = new CPUTensor([4, 2], indices, 'uint32');
    

    and the corresponding values:

    const values = [1,2,3,4];
    const valueTensor = new CPUTensor([4],values);
    
    const sparseTensor = new SparseTensor(valueTensor, indiceTensor, [3,3]);
    

    Type parameters

    • DTpe: DType = "float32"

    Parameters

    • values: Tensor<DTpe>
    • indices: Tensor<"uint32">
    • shape: readonly number[]
    • denseDims: number = 0

    Returns SparseTensor<DTpe>
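To make the [NNZ, |S|] index layout concrete, here is a minimal, self-contained sketch (a hypothetical helper, not the library's implementation) that materializes a COO tensor with zero dense dimensions back into a flat row-major dense array:

```typescript
// Hypothetical illustration of the COO layout described above.
// `indices` is stored row-major with shape [NNZ, rank], `values` has
// shape [NNZ]; every entry not listed in `indices` is implicitly zero.
function cooToDense(
  shape: readonly number[],
  indices: readonly number[],
  values: readonly number[]
): number[] {
  const size = shape.reduce((a, b) => a * b, 1);
  const dense = new Array<number>(size).fill(0);
  const rank = shape.length;
  for (let i = 0; i < values.length; i++) {
    // Fold the multi-index of entry i into a flat row-major offset
    let offset = 0;
    for (let d = 0; d < rank; d++) {
      offset = offset * shape[d] + indices[i * rank + d];
    }
    dense[offset] = values[i];
  }
  return dense;
}

// The constructor example above, materialized back to dense form:
const dense = cooToDense([3, 3], [0, 0, 1, 1, 2, 1, 2, 2], [1, 2, 3, 4]);
// dense is [1, 0, 0, 0, 2, 0, 0, 3, 4]
```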

Properties

denseDims

denseDims: number = 0

dtype

dtype: DTpe

Data type of the tensor

indices

indices: Tensor<"uint32">

nnz

nnz: number

Number of nonzero entries in the tensor

shape

shape: readonly number[]

size

size: number

Total number of entries (including zero entries) in the tensor

sparseDims

sparseDims: number

Number of sparse dimensions

strides

strides: number[]

Dense strides of the tensor

values

values: Tensor<DTpe>

Methods

abs

acos

acosh

add

  • add(tensor: Tensor<DTpe>, alpha?: number, beta?: number): Tensor<DTpe>
  • Adds a second tensor, which can either be a sparse or a dense tensor:

    • If the second tensor is a dense tensor, it is assumed to have a rank at most equal to the number of dense dimensions of the first tensor. If this is not the case, entries in the second tensor that are zero in the first tensor are simply ignored! This also means that broadcasting on the first tensor is only supported over the dense dimensions!
    • If the second tensor is a sparse tensor, it is assumed that the first and second tensor have exactly the same sparsity pattern!

    This is not supported on the WebGL backend yet.

    Parameters

    • tensor: Tensor<DTpe>
    • Optional alpha: number
    • Optional beta: number

    Returns Tensor<DTpe>
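Since both operands must share the same sparsity pattern in the sparse case, the addition reduces to combining the value arrays. A minimal sketch under that assumption (a hypothetical helper, not the library's code):

```typescript
// Sparse + sparse addition assuming identical sparsity patterns:
// computes alpha * a + beta * b entry-wise over the value arrays only,
// so the result keeps the same indices tensor.
function addSparseSameSparsity(
  aValues: readonly number[],
  bValues: readonly number[],
  alpha = 1,
  beta = 1
): number[] {
  if (aValues.length !== bValues.length) {
    throw new Error('sparsity patterns must match (unequal NNZ)');
  }
  return aValues.map((v, i) => alpha * v + beta * bValues[i]);
}
```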

addMultiplyScalar

  • addMultiplyScalar(factor: number, add: number): Tensor<DTpe>

addScalar

  • addScalar(value: number): Tensor<DTpe>

add_impl

  • add_impl(th: Tensor<DTpe>, tensor: Tensor<DTpe>, resultShape: readonly number[], alpha: number, beta: number): Tensor<DTpe>

alignShapes

  • alignShapes(shape1: readonly number[], shape2: readonly number[]): readonly number[][]
  • Parameters

    • shape1: readonly number[]
    • shape2: readonly number[]

    Returns readonly number[][]

alignTensor

  • alignTensor(tensor: Tensor<DTpe>): (readonly number[] | Tensor<DTpe>)[] | (any[] | Tensor<DTpe>)[]

asin

asinh

atan

atanh

averagePool

  • averagePool(kernelShape: number[], pads?: number[], strides?: number[], includePad?: boolean): Tensor<DTpe>
  • Performs average pooling over the spatial dimensions of this tensor with shape [N,C,D1,D2,..]

    Parameters

    • kernelShape: number[]

      Size of the average pooling dimension

    • Optional pads: number[]

      Padding of the input specified as [startpad_D1,startpad_D2,...,startpad_DN,endpad_D1,endpad_D2,...] Padding value will be 0. Defaults to 0 for all axes

    • Optional strides: number[]

      Stride size of the average pooling kernel. Defaults to 1 for all axes

    • Optional includePad: boolean

      Whether padded values should be included in the average (or masked out). Defaults to false

    Returns Tensor<DTpe>

Protected averagePool_impl

  • averagePool_impl(kernelShape: number[], pads: number[], strides: number[], includePad: boolean): Tensor<DTpe>

cast

  • cast<DTpe2>(dtype: DTpe2): Tensor<DTpe2>

ceil

clip

  • clip(min?: number, max?: number): Tensor<DTpe>

clipBackward

  • clipBackward(grad: Tensor<DTpe>, min?: number, max?: number): Tensor<DTpe>

compare

  • compare(tensor: Tensor<DTpe>, epsilon?: number): Promise<boolean>
  • Compares this tensor to another tensor.

    example
    const a = new CPUTensor([2,2], [1,2,3,4]);
    const b = new CPUTensor([2,2], [1.1,2.1,2.9,4.05]);
    const c = new CPUTensor([4], [1,2,3,4]);
    a.compare(b, 0.5).then(equal => {
     //equal will be true
    });
    
    a.compare(b).then(equal => {
     //equal will be false
    });
    
    a.compare(c).then(equal => {
     //equal will be false since the shapes of the tensors do not match
    });
    

    Parameters

    • tensor: Tensor<DTpe>

      Tensor to compare to

    • Optional epsilon: number

      Optional maximum difference between the tensors. If not specified the tensors have to be exactly equal

    Returns Promise<boolean>

concat

  • Concatenate the two tensors along the given axis

    Note that at the moment, only concatenation along sparse dimensions is supported!

    Parameters

    • tensor: Tensor<DTpe>
    • axis: number

    Returns Tensor<DTpe>
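The offset bookkeeping behind sparse concatenation can be sketched as follows (a self-contained 2-D illustration, not the library's implementation): values are appended, and the concatenation-axis index of the second tensor's entries is shifted by the first tensor's extent along that axis.

```typescript
// Concatenate two 2-D COO tensors along sparse axis 0. Assumes both
// tensors have the same extent along axis 1 (not checked here).
function concatCooAxis0(
  aShape: readonly [number, number], aIdx: readonly number[], aVal: readonly number[],
  bShape: readonly [number, number], bIdx: readonly number[], bVal: readonly number[]
): { shape: [number, number]; indices: number[]; values: number[] } {
  const indices = [...aIdx];
  for (let i = 0; i < bVal.length; i++) {
    // Shift the axis-0 index of b's entries past the end of a
    indices.push(bIdx[2 * i] + aShape[0], bIdx[2 * i + 1]);
  }
  return {
    shape: [aShape[0] + bShape[0], aShape[1]],
    indices,
    values: [...aVal, ...bVal],
  };
}
```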

constantLike

  • constantLike(value: number): Tensor<DTpe>

conv

  • conv(kernel: Tensor<DTpe>, bias?: Tensor<DTpe>, dilations?: number[], group?: number, pads?: number[], strides?: number[], activation?: "id" | "relu" | "relu6"): Tensor<DTpe>
  • Convolves this tensor with the specified kernel.

    This tensor should have shape [N,C,D1,D2,...] where D1,D2,... are the spatial dimensions.

    Behaves according to https://github.com/onnx/onnx/blob/master/docs/Operators.md#Conv

    Parameters

    • kernel: Tensor<DTpe>

      Convolution kernel with shape [M,C/G,K1,K2] where G is the group parameter

    • Optional bias: Tensor<DTpe>

      Optional bias to add to the result with shape [M]

    • Optional dilations: number[]

      Per axis dilations for the spatial dimension. Defaults to 1 for all axes

    • Optional group: number

      Group parameter

    • Optional pads: number[]

      Padding to add to the input for each spatial dimension. Defaults to 0 for all axes

    • Optional strides: number[]

      Convolution stride for each spatial dimension. Defaults to 1 for all axes

    • Optional activation: "id" | "relu" | "relu6"

      Optional activation to apply. Defaults to the identity (so no activation)

    Returns Tensor<DTpe>

convTranspose

  • convTranspose(kernel: Tensor<DTpe>, dilations?: number[], group?: number, pads?: number[], strides?: number[]): Tensor<DTpe>
  • Calculates the transpose convolution

    This tensor should have shape [N,C,D1,D2,...] where D1,D2,... are the spatial dimensions.

    Parameters

    • kernel: Tensor<DTpe>

      Convolution kernel with shape [M,C/G,K1,K2] where G is the group parameter

    • Optional dilations: number[]

      Per axis dilations for the spatial dimension. Defaults to 1 for all axes

    • Optional group: number

      Group parameter

    • Optional pads: number[]

      Padding to add to the input for each spatial dimension. Defaults to 0 for all axes

    • Optional strides: number[]

      Convolution stride for each spatial dimension. Defaults to 1 for all axes

    Returns Tensor<DTpe>

Protected convTranspose_impl

  • convTranspose_impl(kernel: Tensor<DTpe>, dilations: number[], group: number, pads: number[], strides: number[]): Tensor<DTpe>

Protected conv_impl

  • conv_impl(kernel: Tensor<DTpe>, dilations: number[], group: number, pads: number[], strides: number[], activation: Activation, bias?: Tensor<DTpe>): Tensor<DTpe>

copy

cos

cosh

delete

  • delete(): void

divide

  • divide(tensor: Tensor<DTpe>, alpha?: number): Tensor<DTpe>

divide_impl

  • divide_impl(th: Tensor<DTpe>, tensor: Tensor<DTpe>, resultShape: readonly number[], alpha: number): Tensor<DTpe>

exp

expand

  • expand(shape: readonly number[]): Tensor<DTpe>

flatten

  • flatten(axis?: number): Tensor<DTpe>

floor

gather

gemm

  • gemm(b: Tensor<DTpe>, aTranspose?: boolean, bTranspose?: boolean, alpha?: number, c?: Tensor<DTpe>, beta?: number): Tensor<DTpe>
  • A and B can have batch dimensions. Their last two dimensions should correspond to the dimensions for the matrix product

    Parameters

    • b: Tensor<DTpe>

      Second matrix for the matrix product

    • Optional aTranspose: boolean

      If the last two dimensions of a are transposed. Defaults to false

    • Optional bTranspose: boolean

      If the last two dimensions of b are transposed. Defaults to false

    • Optional alpha: number

      Alpha parameter. Defaults to 1.0

    • Optional c: Tensor<DTpe>

      Optional tensor to add to the result.

    • Optional beta: number

      Beta parameter, only used if c is specified. Defaults to 1.0

    Returns Tensor<DTpe>
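For plain 2-D inputs (no batch dimensions), the gemm contract above amounts to alpha * op(A) @ op(B) + beta * C, where op optionally transposes the last two dimensions. A reference sketch (a hypothetical helper, not the library's kernel):

```typescript
// 2-D gemm: alpha * op(a) @ op(b) + beta * c, with op = optional transpose.
function gemm2d(
  a: number[][], b: number[][],
  aT = false, bT = false, alpha = 1,
  c?: number[][], beta = 1
): number[][] {
  // Transpose a matrix represented as an array of rows
  const t = (x: number[][]) => x[0].map((_, i) => x.map(row => row[i]));
  const A = aT ? t(a) : a;
  const B = bT ? t(b) : b;
  const m = A.length, k = A[0].length, o = B[0].length;
  const out: number[][] = [];
  for (let i = 0; i < m; i++) {
    out.push([]);
    for (let j = 0; j < o; j++) {
      let sum = 0;
      for (let p = 0; p < k; p++) sum += A[i][p] * B[p][j];
      // beta only contributes when c is given, matching the docs above
      out[i].push(alpha * sum + (c ? beta * c[i][j] : 0));
    }
  }
  return out;
}
```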

gemm_impl

  • gemm_impl(b: Tensor<DTpe>, aTranspose: boolean, bTranspose: boolean, alpha: number, beta: number, C?: Tensor<DTpe>): Tensor<DTpe>

Protected getAxes

  • getAxes(axes?: number | number[]): number[]

getDenseShape

  • getDenseShape(): readonly number[]
  • Dense part of the shape of the tensor, i.e. the last D values of the shape, where D is the number of dense dimensions.

    Returns readonly number[]

getShape

  • getShape(): readonly number[]

getSparseShape

  • getSparseShape(): readonly number[]
  • Sparse part of the shape of the tensor, i.e. the first S values of the shape, where S is the number of sparse dimensions.

    Returns readonly number[]

getValues

  • getValues(): Promise<TensorValues[DTpe]>

hardSigmoid

  • hardSigmoid(alpha: number, beta: number): Tensor<DTpe>

log

matMul

  • Calculates the matrix product. This tensor should have shape [M,N]

    Two cases are supported for sparse tensors:

    • If this tensor has one sparse dimension, the resulting tensor is a sparse tensor with the same number of non-zero entries
    • If this tensor has two sparse dimensions, the resulting tensor is dense

    Right now only sparse-dense matrix multiplication is supported:

    • On all backends if the sparse tensor has one sparse dimension
    • On CPU/WASM only if the sparse tensor has two sparse dimensions

    result

    Tensor with shape [M,O]

    Parameters

    • tensor: Tensor<DTpe>

      Dense matrix to multiply with. Should have shape [N,O]

    Returns Tensor<DTpe>
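The sparse-dense case can be sketched as follows (a self-contained illustration, not the library's kernel): each non-zero A[row, col] scatters its value times B[col, :] into the corresponding output row.

```typescript
// COO sparse [M, N] times dense row-major [N, O] -> dense [M, O].
function spmmCoo(
  m: number, n: number, o: number,
  indices: readonly number[], // [NNZ, 2], row-major index pairs
  values: readonly number[],  // [NNZ]
  b: readonly number[]        // dense [N, O], row-major
): number[] {
  const out = new Array<number>(m * o).fill(0);
  for (let i = 0; i < values.length; i++) {
    const row = indices[2 * i];
    const col = indices[2 * i + 1];
    const v = values[i];
    // Non-zero A[row, col] contributes v * B[col, :] to out[row, :]
    for (let k = 0; k < o; k++) {
      out[row * o + k] += v * b[col * o + k];
    }
  }
  return out;
}
```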

max

  • max(axes?: number | number[], keepDims?: boolean): Tensor<DTpe>
  • Takes the maximum over specified axis/axes.

    example
    const a = new CPUTensor([2,3], [1,2,3,4,5,6]);
    
    a.max(); //Will be [6]
    a.max(0); //Will be [4,5,6]
    a.max(1); //Will be [3,6]
    a.max(0, true); //Will be [[4,5,6]]
    

    Parameters

    • Optional axes: number | number[]

      One or multiple axes to take the maximum over. If not specified this will be all axes

    • Optional keepDims: boolean

      Whether the maximum axes will be kept with size 1

    Returns Tensor<DTpe>

Protected max_impl

  • max_impl(axes: number[], keepDims: boolean): Tensor<DTpe>

min

  • min(axes?: number | number[], keepDims?: boolean): Tensor<DTpe>
  • Takes the minimum over specified axis/axes.

    example
    const a = new CPUTensor([2,3], [1,2,3,4,5,6]);
    
    a.min(); //Will be [1]
    a.min(0); //Will be [1,2,3]
    a.min(1); //Will be [1,4]
    a.min(0, true); //Will be [[1,2,3]]
    

    Parameters

    • Optional axes: number | number[]

      One or multiple axes to take the minimum over. If not specified this will be all axes

    • Optional keepDims: boolean

      Whether the minimum axes will be kept with size 1

    Returns Tensor<DTpe>

Protected min_impl

  • min_impl(axes: number[], keepDims: boolean): Tensor<DTpe>

multiply

  • multiply(tensor: Tensor<DTpe>, alpha?: number): Tensor<DTpe>

multiplyScalar

  • multiplyScalar(value: number): Tensor<DTpe>

multiply_impl

  • multiply_impl(th: Tensor<DTpe>, tensor: Tensor<DTpe>, resultShape: readonly number[], alpha: number): Tensor<DTpe>

negate

normalize

pad

  • pad(pads: number[], mode?: "constant" | "reflect" | "edge", value?: number): Tensor<DTpe>
  • Pads the input according to the padding mode. The input has shape [D1,D2,..]

    example
    const a = new CPUTensor([2,2],[1,2,3,4]);
    a.pad([1,1,1,1],'constant',5);
    //Result will be:
    // [[5,5,5,5],
    //  [5,1,2,5],
    //  [5,3,4,5],
    //  [5,5,5,5]]
    a.pad([1,1,1,1],'edge');
    //Result will be:
    // [[1,1,2,2],
    //  [1,1,2,2],
    //  [3,3,4,4],
    //  [3,3,4,4]]
    
    a.pad([2,2,2,2],'reflect');
    //Result will be:
    // [[4,3,3,4,4,3],
    //  [2,1,1,2,2,1],
    //  [2,1,1,2,2,1],
    //  [4,3,3,4,4,3],
    //  [4,3,3,4,4,3],
    //  [2,1,1,2,2,1]]
    

    Parameters

    • pads: number[]

      Padding size of each input. Specified as [startpad_D1,startpad_D2,...,startpad_DN,endpad_D1,endpad_D2,...]

    • Optional mode: "constant" | "reflect" | "edge"

      Padding mode. One of 'constant', 'edge', 'reflect'. Defaults to 'constant'

    • Optional value: number

      Value for constant padding. Defaults to 0.0

    Returns Tensor<DTpe>

Protected pad_impl

  • pad_impl(pads: number[], mode: PadMode, value: number): Tensor<DTpe>

power

  • Takes the position-wise power. Supports broadcasting

    example
    const a = new CPUTensor([2,2],[5,6,7,8]);
    const b = new CPUTensor([2,2],[2,3,2,3]);
    const c = new CPUTensor([1],[2]);
    
    a.power(b);
    //Will be
    // [[25,216],
    //  [49,512]]
    
    a.power(c);
    //Will be
    // [[25,36],
    //  [49,64]]
    

    Parameters

    Returns Tensor<DTpe>

powerScalar

  • powerScalar(power: number, factor: number): Tensor<DTpe>

power_impl

  • power_impl(th: Tensor<DTpe>, tensor: Tensor<DTpe>, resultShape: readonly number[]): Tensor<DTpe>

product

  • product(axes?: number | number[], keepDims?: boolean): Tensor<DTpe>
  • Takes the product over specified axis/axes.

    example
    const a = new CPUTensor([2,3], [1,2,3,4,5,6]);
    
    a.product(); //Will be [720]
    a.product(0); //Will be [4,10,18]
    a.product(1); //Will be [6,120]
    a.product(0, true); //Will be [[4,10,18]]
    

    Parameters

    • Optional axes: number | number[]

      One or multiple axes to take the product over. If not specified this will be all axes

    • Optional keepDims: boolean

      Whether the product axes will be kept with size 1

    Returns Tensor<DTpe>

Protected product_impl

  • product_impl(axes: number[], keepDims: boolean): Tensor<DTpe>

reduceLogSum

  • reduceLogSum(axes?: number | number[], keepDims?: boolean): Tensor<DTpe>
  • Takes the log of the sum over the specified axis/axes. This is equal to a.sum(axes, keepDims).log() but faster.

    Note that this can only be called on tensors with a float data type (float64, float32, float16)

    Parameters

    • Optional axes: number | number[]

      One or multiple axes to sum over. If not specified this will sum over all axes

    • Optional keepDims: boolean

      Whether the summation axes will be kept with size 1

    Returns Tensor<DTpe>

reduceLogSumExp

  • reduceLogSumExp(axes?: number | number[], keepDims?: boolean): Tensor<DTpe>
  • Takes the log of the sum of the exponentials over the specified axis/axes. This is equal to a.exp().sum(axes, keepDims).log() but faster.

    Note that this can only be called on tensors with a float data type (float64, float32, float16)

    Parameters

    • Optional axes: number | number[]

      One or multiple axes to sum over. If not specified this will sum over all axes

    • Optional keepDims: boolean

      Whether the summation axes will be kept with size 1

    Returns Tensor<DTpe>
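In practice this reduction is usually computed in the numerically stable form max + log(sum(exp(x - max))) rather than literally log(sum(exp(x))), which overflows for large inputs. A minimal sketch of the stable formulation (not necessarily how this library implements it):

```typescript
// Numerically stable log-sum-exp over a flat array of numbers.
function logSumExp(xs: readonly number[]): number {
  const m = Math.max(...xs);
  // Subtracting the max keeps every exponent <= 0, avoiding overflow
  const s = xs.reduce((acc, x) => acc + Math.exp(x - m), 0);
  return m + Math.log(s);
}
```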

Protected reduceLogSumExp_impl

  • reduceLogSumExp_impl(axes: number[], keepDims: boolean): Tensor<DTpe>

Protected reduceLogSum_impl

  • reduceLogSum_impl(axes: number[], keepDims: boolean): Tensor<DTpe>

reduceMean

  • reduceMean(axes?: number | number[], keepDims?: boolean): Tensor<DTpe>
  • Takes the mean over the specified axis/axes. This is equal to a.sum(axes, keepDims).divide(sumSize) (where sumSize is the number of entries in the summation axes) but faster.

    Parameters

    • Optional axes: number | number[]

      One or multiple axes to take the mean over. If not specified this will take the mean over all axes

    • Optional keepDims: boolean

      Whether the mean axes will be kept with size 1

    Returns Tensor<DTpe>

reduceMeanSquare

  • reduceMeanSquare(axes?: number | number[], keepDims?: boolean): Tensor<DTpe>
  • Takes the mean over the specified axis/axes with the entries of the tensor squared. This is equal to a.multiply(a).sum(axes, keepDims).divide(sumSize) (where sumSize is the number of entries in the summation axes) but faster.

    Parameters

    • Optional axes: number | number[]

      One or multiple axes to take the mean over. If not specified this will take the mean over all axes

    • Optional keepDims: boolean

      Whether the mean axes will be kept with size 1

    Returns Tensor<DTpe>

Protected reduceMeanSquare_impl

  • reduceMeanSquare_impl(axes: number[], keepDims: boolean): Tensor<DTpe>

Protected reduceMean_impl

  • reduceMean_impl(axes: number[], keepDims: boolean): Tensor<DTpe>

repeat

  • repeat(repeats: number[]): Tensor<DTpe>

reshape

  • reshape(shape: readonly number[], copy?: boolean): Tensor<DTpe>
  • Reshape the tensor to the specified shape

    At most one value in the shape can be -1, which will be replaced by the inferred size for this dimension.

    Parameters

    • shape: readonly number[]

      New shape of the tensor

    • Optional copy: boolean

      Whether the tensor values should be copied. Only has an effect on GPU tensors

    Returns Tensor<DTpe>
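The -1 inference rule can be sketched as follows (a hypothetical helper, not the library's code): the single -1 is replaced by the total size divided by the product of the remaining dimensions.

```typescript
// Resolve an at-most-one -1 placeholder in a requested reshape.
function inferReshape(size: number, shape: readonly number[]): number[] {
  const negatives = shape.filter(d => d === -1).length;
  if (negatives > 1) throw new Error('at most one dimension may be -1');
  // Product of all explicitly given dimensions
  const known = shape.reduce((p, d) => (d === -1 ? p : p * d), 1);
  if (negatives === 0) return [...shape];
  if (size % known !== 0) throw new Error('cannot infer dimension');
  return shape.map(d => (d === -1 ? size / known : d));
}
```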

Protected reshape_impl

  • reshape_impl(shape: readonly number[], copy: boolean): Tensor<DTpe>

round

setValues

  • setValues(values: Tensor<DTpe>, starts: number[]): Tensor<DTpe>

sigmoid

sign

sin

singleConstant

  • singleConstant(value: number): Tensor<DTpe>

sinh

slice

  • slice(starts: number[], ends: number[], axes?: number[], steps?: number[]): Tensor<DTpe>
  • Takes a slice of the tensor along the specified axes.

    example
    const a = new CPUTensor([2,2],[5,6,7,8]);
    
    a.slice([0],[1],[0]);
    //Will be
    // [[5,6]]
    
    a.slice([0],[1],[1]);
    //Will be
    // [[5],
    //  [7]]
    

    Parameters

    • starts: number[]

      Start of the slice for each axis

    • ends: number[]

      End of the slice for each axis - Exclusive (the end index will not be included in the slice)

    • Optional axes: number[]

      Axes to slice. Defaults to all axes

    • Optional steps: number[]

    Returns Tensor<DTpe>

Protected slice_impl

  • slice_impl(starts: number[], ends: number[], axes: number[], steps: number[]): Tensor<DTpe>

softmax

  • softmax(axis: number): Tensor<DTpe>

sqrt

squeeze

subtract

  • subtract(tensor: Tensor<DTpe>, alpha?: number, beta?: number): Tensor<DTpe>

subtract_impl

  • subtract_impl(th: Tensor<DTpe>, tensor: Tensor<DTpe>, resultShape: readonly number[], alpha: number, beta: number): Tensor<DTpe>

sum

  • sum(axes?: number | number[], keepDims?: boolean): Tensor<DTpe>
  • Sums over sparse and/or dense dimensions according to the specified axes.

    • If summing only over dense dimensions, all backends are supported.
    • If summing over sparse dimensions, only CPU/WebGL are supported

    Parameters

    • Optional axes: number | number[]
    • Optional keepDims: boolean

    Returns Tensor<DTpe>
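Summing over a sparse dimension amounts to accumulating the non-zero values into buckets keyed by the surviving indices. A self-contained 2-D sketch (not the library's implementation):

```typescript
// Sum a 2-D COO tensor over one sparse axis, producing a dense vector
// over the surviving axis. Entries not present contribute zero.
function sparseSumAxis(
  shape: readonly [number, number],
  indices: readonly number[], // [NNZ, 2], row-major index pairs
  values: readonly number[],
  axis: 0 | 1
): number[] {
  const keptAxis = axis === 0 ? 1 : 0;
  const out = new Array<number>(shape[keptAxis]).fill(0);
  for (let i = 0; i < values.length; i++) {
    // Bucket each non-zero value by its index along the kept axis
    out[indices[2 * i + keptAxis]] += values[i];
  }
  return out;
}
```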

sumSquare

  • sumSquare(axes?: number | number[], keepDims?: boolean): Tensor<DTpe>
  • Sums over the specified axis/axes with the entries of the tensor squared. This is equal to a.multiply(a).sum(axes, keepDims) but faster

    Parameters

    • Optional axes: number | number[]

      One or multiple axes to sum over. If not specified this will sum over all axes

    • Optional keepDims: boolean

      Whether the summation axes will be kept with size 1

    Returns Tensor<DTpe>

Protected sumSquare_impl

  • sumSquare_impl(axes: number[], keepDims: boolean): Tensor<DTpe>

Protected sum_impl

  • sum_impl(axes: number[], keepDims: boolean): Tensor<DTpe>

tan

tanh

transpose

  • transpose(permutation?: number[]): Tensor<DTpe>
  • Transposes the tensor according to the given permutation

    example
    const a = new CPUTensor([2,2],[5,6,7,8]);
    
    a.transpose();
    //Will be
    // [[5,7],
    //  [6,8]]
    

    Parameters

    • Optional permutation: number[]

      Permutation for the axes. Default is the reverse axis order

    Returns Tensor<DTpe>

Protected transpose_impl

  • transpose_impl(permutation: number[]): Tensor<DTpe>

upsample

  • upsample(scales: number[]): Tensor<DTpe>

Static fromDense

  • Creates a sparse tensor with zero dense dimensions from a dense CPU tensor.

    example
    const denseTensor = new CPUTensor([3,3],[1,0,0,0,2,0,0,3,4]);
    
    const sparseTensor = SparseTensor.fromDense(denseTensor);
    console.log(sparseTensor.nnz); // Will log '4'
    console.log(sparseTensor.sparseDims); // Will log '2'
    

    Type parameters

    Parameters

    Returns SparseTensor<DTpe>
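The conversion can be sketched as a scan over the dense data that collects the index pair and value of every non-zero entry (a self-contained 2-D illustration, not the library's code):

```typescript
// Dense row-major [rows, cols] data -> COO indices/values arrays.
function denseToCoo(
  rows: number, cols: number, data: readonly number[]
): { indices: number[]; values: number[] } {
  const indices: number[] = [];
  const values: number[] = [];
  for (let r = 0; r < rows; r++) {
    for (let c = 0; c < cols; c++) {
      const v = data[r * cols + c];
      if (v !== 0) {
        // Record the [row, col] pair and the non-zero value
        indices.push(r, c);
        values.push(v);
      }
    }
  }
  return { indices, values };
}
```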

Generated using TypeDoc