
Class Variable<DTpe>

Tensor that also has a gradient associated with it. When noGrad is false, a dynamic computation graph will be built on this variable.

Once backward is called on a scalar variable (e.g. a variable with shape [1]), the gradients for all variables in the graph will be computed
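The dynamic graph described above can be illustrated with a minimal scalar autograd sketch. This is not the library's implementation; ScalarVar and its mul method are hypothetical stand-ins for Variable and its tensor operations, and the recorded closures play the role of backEdge:

```typescript
// Minimal scalar autograd sketch: each operation records a back edge so
// that backward() can apply the chain rule. Real implementations reduce
// over a topologically sorted graph; this recursive version assumes a tree.
class ScalarVar {
  grad = 0;
  constructor(
    public value: number,
    private backEdge?: (outGrad: number) => void
  ) {}

  mul(other: ScalarVar): ScalarVar {
    return new ScalarVar(this.value * other.value, (g) => {
      this.grad += g * other.value; // d(out)/d(this)  = other.value
      other.grad += g * this.value; // d(out)/d(other) = this.value
      this.backEdge?.(g * other.value); // propagate further down the graph
      other.backEdge?.(g * this.value);
    });
  }

  backward(): void {
    // Seed the output gradient with 1, as for a scalar variable.
    this.backEdge?.(1);
  }
}

const x = new ScalarVar(3);
const y = new ScalarVar(4);
const z = x.mul(y);
z.backward(); // x.grad is now 4, y.grad is now 3
```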

Type parameters

  • DTpe: DType = "float32"

Hierarchy

Implements

  • VariableI<DTpe>

Index

Constructors

constructor

Properties

Optional backEdge

backEdge: undefined | BackwardOp<DTpe>

dtype

dtype: DTpe

Data type of the tensor

Optional grad

grad: undefined | Tensor<DTpe>

noGrad

noGrad: boolean

value

value: Tensor<DTpe>

Methods

abs

acos

acosh

add

  • add(tensor: Tensor<DTpe>, alpha?: number, beta?: number): Tensor<DTpe>
  • Adds two tensors. Supports broadcasting

    example
    const a = new CPUTensor([2,2],[1,2,3,4]);
    const b = new CPUTensor([2,2],[5,6,7,8]);
    const c = new CPUTensor([1],[2]);
    
    a.add(b);
    //Will be
    // [[6,8],
    //  [10,12]]
    
    a.add(c);
    //Will be
    // [[3,4],
    //  [5,6]]
    

    Parameters

    • tensor: Tensor<DTpe>
    • Optional alpha: number
    • Optional beta: number

    Returns Tensor<DTpe>

addMultiplyScalar

  • addMultiplyScalar(factor: number, add: number): Tensor<DTpe>

addScalar

  • addScalar(value: number): Tensor<DTpe>

add_impl

  • add_impl(th: Tensor<DTpe>, tensor: Tensor<DTpe>, resultShape: readonly number[], alpha: number, beta: number): Tensor<DTpe>

alignShapes

  • alignShapes(shape1: readonly number[], shape2: readonly number[]): readonly number[][]
  • Parameters

    • shape1: readonly number[]
    • shape2: readonly number[]

    Returns readonly number[][]
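A plausible sketch of what shape alignment for broadcasting does (the actual alignShapes may differ): the shorter shape is left-padded with 1s, NumPy-style, and incompatible axes are rejected. alignShapesSketch is a hypothetical name:

```typescript
// Hypothetical sketch of NumPy-style shape alignment for broadcasting:
// the shorter shape is left-padded with 1s so both shapes have equal rank.
function alignShapesSketch(
  shape1: readonly number[],
  shape2: readonly number[]
): number[][] {
  const rank = Math.max(shape1.length, shape2.length);
  const pad = (s: readonly number[]) =>
    [...Array(rank - s.length).fill(1), ...s];
  const [a, b] = [pad(shape1), pad(shape2)];
  // Each axis must match, or one of the two sizes must be 1.
  for (let i = 0; i < rank; i++) {
    if (a[i] !== b[i] && a[i] !== 1 && b[i] !== 1) {
      throw new Error(`Shapes not broadcastable at axis ${i}`);
    }
  }
  return [a, b];
}

// alignShapesSketch([2, 2], [1]) → [[2, 2], [1, 1]]
```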

alignTensor

  • alignTensor(tensor: Tensor<DTpe>): (readonly number[] | Tensor<DTpe>)[] | (any[] | Tensor<DTpe>)[]

asin

asinh

atan

atanh

averagePool

  • averagePool(kernelShape: number[], pads?: number[], strides?: number[], includePad?: boolean): Tensor<DTpe>
  • Performs average pooling over the spatial dimensions of this tensor with shape [N,C,D1,D2,..]

    Parameters

    • kernelShape: number[]

      Size of the average pooling dimension

    • Optional pads: number[]

      Padding of the input specified as [startpad_D1,startpad_D2,...,startpad_DN,endpad_D1,endpad_D2,...] Padding value will be 0. Defaults to 0 for all axes

    • Optional strides: number[]

      Stride size of the average pooling kernel. Defaults to 1 for all axes

    • Optional includePad: boolean

      Whether padded values should be included in the average (or masked out). Defaults to false

    Returns Tensor<DTpe>
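To make the pooling parameters concrete, here is a hedged 1-D sketch of the semantics described above, including the includePad distinction. averagePool1d is a hypothetical helper (single axis, symmetric zero padding), not the library's implementation:

```typescript
// Illustrative 1-D average pooling over one spatial axis.
function averagePool1d(
  input: number[],
  kernel: number,
  pad = 0,      // same padding at start and end, padding value 0
  stride = 1,
  includePad = false
): number[] {
  const padded = [...Array(pad).fill(0), ...input, ...Array(pad).fill(0)];
  const out: number[] = [];
  for (let start = 0; start + kernel <= padded.length; start += stride) {
    let sum = 0;
    let count = 0;
    for (let k = 0; k < kernel; k++) {
      const idx = start + k;
      sum += padded[idx];
      // Masked-out padded positions do not count toward the divisor.
      const isPad = idx < pad || idx >= pad + input.length;
      if (includePad || !isPad) count++;
    }
    out.push(sum / count);
  }
  return out;
}

// averagePool1d([1, 2, 3, 4], 2) → [1.5, 2.5, 3.5]
```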

Protected averagePool_impl

  • averagePool_impl(kernelShape: number[], pads: number[], strides: number[], includePad: boolean): Tensor<DTpe>

backward

  • backward(grad?: Tensor<DTpe>): boolean
  • Performs a backward pass and returns whether the grad is needed or can be deleted

    Parameters

    • Optional grad: Tensor<DTpe>

    Returns boolean

cast

  • cast<DTpe2>(dtype: DTpe2): Tensor<DTpe2>

ceil

clip

  • clip(min?: number, max?: number): Tensor<DTpe>

clipBackward

  • clipBackward(grad: Tensor<DTpe>, min?: number, max?: number): Tensor<DTpe>

compare

  • compare(tensor: Tensor<DTpe>, epsilon?: number): Promise<boolean>
  • Compares this tensor to another tensor.

    example
    const a = new CPUTensor([2,2], [1,2,3,4]);
    const b = new CPUTensor([2,2], [1.1,2.1,2.9,4.05]);
    const c = new CPUTensor([4], [1,2,3,4]);
    a.compare(b, 0.5).then(equal => {
     //equal will be true
    });
    
    a.compare(b).then(equal => {
     //equal will be false
    });
    
    a.compare(c).then(equal => {
     //equal will be false since the shapes of the tensors do not match
    });
    

    Parameters

    • tensor: Tensor<DTpe>

      Tensor to compare to

    • Optional epsilon: number

      Optional maximum difference between the tensors. If not specified the tensors have to be exactly equal

    Returns Promise<boolean>
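The comparison rules above (exact shape match, optional per-element tolerance) can be sketched on flat arrays. tensorsEqual is a hypothetical helper, not the library's code:

```typescript
// Sketch of the comparison semantics: shapes must match exactly, and
// every element pair must differ by at most epsilon (0 = exact equality).
function tensorsEqual(
  aShape: number[], aData: number[],
  bShape: number[], bData: number[],
  epsilon = 0
): boolean {
  if (aShape.length !== bShape.length) return false;
  if (aShape.some((d, i) => d !== bShape[i])) return false;
  return aData.every((v, i) => Math.abs(v - bData[i]) <= epsilon);
}
```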

concat

constantLike

  • constantLike(value: number): Tensor<DTpe>

conv

  • conv(kernel: Tensor<DTpe>, bias?: Tensor<DTpe>, dilations?: number[], group?: number, pads?: number[], strides?: number[], activation?: "id" | "relu" | "relu6"): Tensor<DTpe>
  • Convolves this tensor with the specified kernel.

    This tensor should have shape [N,C,D1,D2,...] where D1,D2,... are the spatial dimensions.

    Behaves according to https://github.com/onnx/onnx/blob/master/docs/Operators.md#Conv

    Parameters

    • kernel: Tensor<DTpe>

      Convolution kernel with shape [M,C/G,K1,K2] where G is the group parameter

    • Optional bias: Tensor<DTpe>

      Optional bias to add to the result with shape [M]

    • Optional dilations: number[]

      Per axis dilations for the spatial dimension. Defaults to 1 for all axes

    • Optional group: number

      Group parameter

    • Optional pads: number[]

      Padding to add to the input for each spatial dimension. Defaults to 0 for all axes

    • Optional strides: number[]

      Convolution stride for each spatial dimension. Defaults to 1 for all axes

    • Optional activation: "id" | "relu" | "relu6"

      Optional activation to apply. Defaults to the identity (so no activation)

    Returns Tensor<DTpe>
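The ONNX Conv semantics referenced above determine each spatial output size from the input size, kernel size, padding, dilation, and stride. A small sketch of that formula (convOutputSize is a hypothetical name):

```typescript
// Each spatial output size of a convolution:
// out = floor((in + padStart + padEnd - dilation * (kernel - 1) - 1) / stride) + 1
function convOutputSize(
  inSize: number,
  kernel: number,
  padStart = 0,
  padEnd = 0,
  stride = 1,
  dilation = 1
): number {
  return Math.floor(
    (inSize + padStart + padEnd - dilation * (kernel - 1) - 1) / stride
  ) + 1;
}

// A 3-wide kernel with padding 1 and stride 1 preserves the spatial size:
// convOutputSize(32, 3, 1, 1) → 32
```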

convTranspose

  • convTranspose(kernel: Tensor<DTpe>, dilations?: number[], group?: number, pads?: number[], strides?: number[]): Tensor<DTpe>
  • Calculates the transpose convolution

    This tensor should have shape [N,C,D1,D2,...] where D1,D2,... are the spatial dimensions.

    Parameters

    • kernel: Tensor<DTpe>

      Convolution kernel with shape [M,C/G,K1,K2] where G is the group parameter

    • Optional dilations: number[]

      Per axis dilations for the spatial dimension. Defaults to 1 for all axes

    • Optional group: number

      Group parameter

    • Optional pads: number[]

      Padding to add to the input for each spatial dimension. Defaults to 0 for all axes

    • Optional strides: number[]

      Convolution stride for each spatial dimension. Defaults to 1 for all axes

    Returns Tensor<DTpe>

Protected convTranspose_impl

  • convTranspose_impl(kernel: Tensor<DTpe>, dilations: number[], group: number, pads: number[], strides: number[]): Tensor<DTpe>

Protected conv_impl

  • conv_impl(kernel: Tensor<DTpe>, dilations: number[], group: number, pads: number[], strides: number[], activation: Activation, bias?: Tensor<DTpe>): Tensor<DTpe>

copy

cos

cosh

delete

  • delete(): void

divide

  • divide(tensor: Tensor<DTpe>, alpha?: number): Tensor<DTpe>
  • Divides two tensors. Supports broadcasting

    example
    const a = new CPUTensor([2,2],[5,6,7,8]);
    const b = new CPUTensor([2,2],[1,2,3,4]);
    const c = new CPUTensor([1],[2]);
    
    a.divide(b);
    //Will be
    // [[5,3],
    //  [2.333,2]]
    
    a.divide(c);
    //Will be
    // [[2.5,3],
    //  [3.5,4]]
    

    Parameters

    • tensor: Tensor<DTpe>
    • Optional alpha: number

    Returns Tensor<DTpe>

divide_impl

  • divide_impl(th: Tensor<DTpe>, tensor: Tensor<DTpe>, resultShape: readonly number[], alpha: number): Tensor<DTpe>

exp

expand

  • expand(shape: readonly number[]): Tensor<DTpe>

flatten

  • flatten(axis?: number): Tensor<DTpe>

floor

gather

gemm

  • gemm(b: Tensor<DTpe>, aTranspose?: boolean, bTranspose?: boolean, alpha?: number, c?: Tensor<DTpe>, beta?: number): Tensor<DTpe>
  • Computes the matrix product of this tensor (A) and b (B). A and B can have batch dimensions; their last two dimensions should correspond to the dimensions for the matrix product

    Parameters

    • b: Tensor<DTpe>

      Second matrix for the matrix product

    • Optional aTranspose: boolean

      If the last two dimensions of a are transposed. Defaults to false

    • Optional bTranspose: boolean

      If the last two dimensions of b are transposed. Defaults to false

    • Optional alpha: number

      Alpha parameter. Defaults to 1.0

    • Optional c: Tensor<DTpe>

      Optional tensor to add to the result.

    • Optional beta: number

      Beta parameter, only used if c is specified. Defaults to 1.0

    Returns Tensor<DTpe>
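The gemm semantics can be sketched for the plain 2-D case as result = alpha * op(A) * op(B) + beta * C, with op optionally transposing the last two dimensions; batch dimensions are omitted here, and gemm2d/transpose2d are hypothetical names:

```typescript
// 2-D sketch of gemm: result = alpha * op(A) * op(B) + beta * C.
function gemm2d(
  a: number[][], b: number[][],
  aTranspose = false, bTranspose = false,
  alpha = 1.0, c?: number[][], beta = 1.0
): number[][] {
  const at = aTranspose ? transpose2d(a) : a;
  const bt = bTranspose ? transpose2d(b) : b;
  const [m, k, n] = [at.length, at[0].length, bt[0].length];
  const out: number[][] = [];
  for (let i = 0; i < m; i++) {
    out.push([]);
    for (let j = 0; j < n; j++) {
      let sum = 0;
      for (let l = 0; l < k; l++) sum += at[i][l] * bt[l][j];
      // beta only matters when c is given, matching the description above.
      out[i].push(alpha * sum + (c ? beta * c[i][j] : 0));
    }
  }
  return out;
}

function transpose2d(x: number[][]): number[][] {
  return x[0].map((_, j) => x.map((row) => row[j]));
}

// gemm2d([[1, 2], [3, 4]], [[5, 6], [7, 8]]) → [[19, 22], [43, 50]]
```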

gemm_impl

  • gemm_impl(b: Tensor<DTpe>, aTranspose: boolean, bTranspose: boolean, alpha: number, beta: number, C?: Tensor<DTpe>): Tensor<DTpe>

Protected getAxes

  • getAxes(axes?: number | number[]): number[]

getShape

  • getShape(): readonly number[]

getValues

  • getValues(): Promise<TensorValues[DTpe]>

hardSigmoid

  • hardSigmoid(alpha: number, beta: number): Tensor<DTpe>

isLeaf

  • isLeaf(): boolean

log

matMul

max

  • max(axes?: number | number[], keepDims?: boolean): Tensor<DTpe>
  • Takes the maximum over specified axis/axes.

    example
    const a = new CPUTensor([2,3], [1,2,3,4,5,6]);
    
    a.max(); //Will be [6]
    a.max(0); //Will be [4,5,6]
    a.max(1); //Will be [3,6]
    a.max(0, true); //Will be [[4,5,6]]
    

    Parameters

    • Optional axes: number | number[]

      One or multiple axes to take the maximum over. If not specified this will be all axes

    • Optional keepDims: boolean

      Whether the maximum axes will be kept with size 1

    Returns Tensor<DTpe>
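The reduction semantics shared by max, min, sum, and product can be sketched for a 2-D tensor; only the combiner changes between them. maxOverAxis is a hypothetical helper illustrating the keepDims behaviour:

```typescript
// Reduce a 2-D tensor over one axis with keepDims, using max as the
// combiner; min, sum, and product follow the same pattern.
function maxOverAxis(
  x: number[][],
  axis: 0 | 1,
  keepDims = false
): number[] | number[][] {
  let result: number[];
  if (axis === 0) {
    // Reduce rows: one maximum per column.
    result = x[0].map((_, j) => Math.max(...x.map((row) => row[j])));
  } else {
    // Reduce columns: one maximum per row.
    result = x.map((row) => Math.max(...row));
  }
  // keepDims retains the reduced axis with size 1.
  if (!keepDims) return result;
  return axis === 0 ? [result] : result.map((v) => [v]);
}

// For a = [[1, 2, 3], [4, 5, 6]] (the example above):
// maxOverAxis(a, 0)       → [4, 5, 6]
// maxOverAxis(a, 0, true) → [[4, 5, 6]]
```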

Protected max_impl

  • max_impl(axes: number[], keepDims: boolean): Tensor<DTpe>

min

  • min(axes?: number | number[], keepDims?: boolean): Tensor<DTpe>
  • Takes the minimum over specified axis/axes.

    example
    const a = new CPUTensor([2,3], [1,2,3,4,5,6]);
    
    a.min(); //Will be [1]
    a.min(0); //Will be [1,2,3]
    a.min(1); //Will be [1,4]
    a.min(0, true); //Will be [[1,2,3]]
    

    Parameters

    • Optional axes: number | number[]

      One or multiple axes to take the minimum over. If not specified this will be all axes

    • Optional keepDims: boolean

      Whether the minimum axes will be kept with size 1

    Returns Tensor<DTpe>

Protected min_impl

  • min_impl(axes: number[], keepDims: boolean): Tensor<DTpe>

multiply

  • multiply(tensor: Tensor<DTpe>, alpha?: number): Tensor<DTpe>
  • Multiplies two tensors. Supports broadcasting

    example
    const a = new CPUTensor([2,2],[1,2,3,4]);
    const b = new CPUTensor([2,2],[5,6,7,8]);
    const c = new CPUTensor([1],[2]);
    
    a.multiply(b);
    //Will be
    // [[5,12],
    //  [21,32]]
    
    a.multiply(c);
    //Will be
    // [[2,4],
    //  [6,8]]
    

    Parameters

    • tensor: Tensor<DTpe>
    • Optional alpha: number

    Returns Tensor<DTpe>

multiplyScalar

  • multiplyScalar(value: number): Tensor<DTpe>

multiply_impl

  • multiply_impl(th: Tensor<DTpe>, tensor: Tensor<DTpe>, resultShape: readonly number[], alpha: number): Tensor<DTpe>

negate

normalize

pad

  • pad(pads: number[], mode?: "constant" | "reflect" | "edge", value?: number): Tensor<DTpe>
  • Pads the input according to the padding mode. The input has shape [D1,D2,..]

    example
    const a = new CPUTensor([2,2],[1,2,3,4]);
    a.pad([1,1,1,1],'constant',5);
    //Result will be:
    // [[5,5,5,5],
    //  [5,1,2,5],
    //  [5,3,4,5],
    //  [5,5,5,5]]
    a.pad([1,1,1,1],'edge');
    //Result will be:
    // [[1,1,2,2],
    //  [1,1,2,2],
    //  [3,3,4,4],
    //  [3,3,4,4]]
    
    a.pad([2,2,2,2],'reflect');
    //Result will be:
    // [[4,3,3,4,4,3],
    //  [2,1,1,2,2,1],
    //  [2,1,1,2,2,1],
    //  [4,3,3,4,4,3],
    //  [4,3,3,4,4,3],
    //  [2,1,1,2,2,1]]
    

    Parameters

    • pads: number[]

      Padding size of each input. Specified as [startpad_D1,startpad_D2,...,startpad_DN,endpad_D1,endpad_D2,...]

    • Optional mode: "constant" | "reflect" | "edge"

      Padding mode. One of 'constant', 'edge', 'reflect'. Defaults to 'constant'

    • Optional value: number

      Value for constant padding. Defaults to 0.0

    Returns Tensor<DTpe>
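The three padding modes can be sketched in one dimension. pad1d is a hypothetical helper; its reflect branch mirrors including the edge value, matching the 2-D example above:

```typescript
// 1-D sketch of the constant / edge / reflect padding modes.
function pad1d(
  x: number[],
  padStart: number,
  padEnd: number,
  mode: "constant" | "edge" | "reflect" = "constant",
  value = 0
): number[] {
  const n = x.length;
  const get = (i: number): number => {
    if (i >= 0 && i < n) return x[i];
    if (mode === "constant") return value;
    if (mode === "edge") return x[Math.min(Math.max(i, 0), n - 1)];
    // reflect: mirror at the boundaries, including the edge value.
    let j = i;
    while (j < 0 || j >= n) j = j < 0 ? -j - 1 : 2 * n - 1 - j;
    return x[j];
  };
  return Array.from({ length: padStart + n + padEnd }, (_, k) =>
    get(k - padStart)
  );
}

// pad1d([1, 2], 2, 2, "reflect") → [2, 1, 1, 2, 2, 1]
// (the row pattern of the 2-D reflect example above)
```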

Protected pad_impl

  • pad_impl(pads: number[], mode: PadMode, value: number): Tensor<DTpe>

power

  • power(tensor: Tensor<DTpe>): Tensor<DTpe>
  • Takes the element-wise power. Supports broadcasting

    example
    const a = new CPUTensor([2,2],[5,6,7,8]);
    const b = new CPUTensor([2,2],[2,3,2,3]);
    const c = new CPUTensor([1],[2]);
    
    a.power(b);
    //Will be
    // [[25,216],
    //  [49,512]]
    
    a.power(c);
    //Will be
    // [[25,36],
    //  [49,64]]
    

    Parameters

    • tensor: Tensor<DTpe>

    Returns Tensor<DTpe>

powerScalar

  • powerScalar(power: number, factor: number): Tensor<DTpe>

power_impl

  • power_impl(th: Tensor<DTpe>, tensor: Tensor<DTpe>, resultShape: readonly number[]): Tensor<DTpe>

product

  • product(axes?: number | number[], keepDims?: boolean): Tensor<DTpe>
  • Takes the product over specified axis/axes.

    example
    const a = new CPUTensor([2,3], [1,2,3,4,5,6]);
    
    a.product(); //Will be [720]
    a.product(0); //Will be [4,10,18]
    a.product(1); //Will be [6,120]
    a.product(0, true); //Will be [[4,10,18]]
    

    Parameters

    • Optional axes: number | number[]

      One or multiple axes to take the product over. If not specified this will be all axes

    • Optional keepDims: boolean

      Whether the product axes will be kept with size 1

    Returns Tensor<DTpe>

Protected product_impl

  • product_impl(axes: number[], keepDims: boolean): Tensor<DTpe>

reduceLogSum

  • reduceLogSum(axes?: number | number[], keepDims?: boolean): Tensor<DTpe>
  • Takes the log of the sum over the specified axis/axes. This is equal to a.sum(axes, keepDims).log() but faster.

    Note that this can only be called on tensors with a float data type (float64, float32, float16)

    Parameters

    • Optional axes: number | number[]

      One or multiple axes to sum over. If not specified this will sum over all axes

    • Optional keepDims: boolean

      Whether the summation axes will be kept with size 1

    Returns Tensor<DTpe>

reduceLogSumExp

  • reduceLogSumExp(axes?: number | number[], keepDims?: boolean): Tensor<DTpe>
  • Takes the log of the sum of the exponentials over the specified axis/axes. This is equal to a.exp().sum(axes, keepDims).log() but faster.

    Note that this can only be called on tensors with a float data type (float64, float32, float16)

    Parameters

    • Optional axes: number | number[]

      One or multiple axes to sum over. If not specified this will sum over all axes

    • Optional keepDims: boolean

      Whether the summation axes will be kept with size 1

    Returns Tensor<DTpe>
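A common way to compute log-sum-exp is to subtract the maximum first so that exp never overflows; whether the library uses this trick internally is an assumption. A sketch on a flat array (logSumExp is a hypothetical name):

```typescript
// log(sum(exp(x))) computed stably: shifting by the maximum keeps every
// exponent non-positive, so exp never overflows.
function logSumExp(values: number[]): number {
  const m = Math.max(...values);
  const sum = values.reduce((acc, v) => acc + Math.exp(v - m), 0);
  return m + Math.log(sum);
}

// Naively, exp(1000) overflows to Infinity; the shifted form stays finite:
// logSumExp([1000, 1000]) → 1000 + log(2)
```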

Protected reduceLogSumExp_impl

  • reduceLogSumExp_impl(axes: number[], keepDims: boolean): Tensor<DTpe>

Protected reduceLogSum_impl

  • reduceLogSum_impl(axes: number[], keepDims: boolean): Tensor<DTpe>

reduceMean

  • reduceMean(axes?: number | number[], keepDims?: boolean): Tensor<DTpe>
  • Takes the mean over the specified axis/axes. This is equal to a.sum(axes, keepDims).divide(sumSize) (where sumSize is the number of entries in the summation axes) but faster.

    Parameters

    • Optional axes: number | number[]

      One or multiple axes to take the mean over. If not specified this will take the mean over all axes

    • Optional keepDims: boolean

      Whether the mean axes will be kept with size 1

    Returns Tensor<DTpe>

reduceMeanSquare

  • reduceMeanSquare(axes?: number | number[], keepDims?: boolean): Tensor<DTpe>
  • Takes the mean over the specified axis/axes with the entries of the tensor squared. This is equal to a.multiply(a).sum(axes, keepDims).divide(sumSize) (where sumSize is the number of entries in the summation axes) but faster.

    Parameters

    • Optional axes: number | number[]

      One or multiple axes to take the mean over. If not specified this will take the mean over all axes

    • Optional keepDims: boolean

      Whether the mean axes will be kept with size 1

    Returns Tensor<DTpe>

Protected reduceMeanSquare_impl

  • reduceMeanSquare_impl(axes: number[], keepDims: boolean): Tensor<DTpe>

Protected reduceMean_impl

  • reduceMean_impl(axes: number[], keepDims: boolean): Tensor<DTpe>

repeat

  • repeat(repeats: number[]): Tensor<DTpe>

reshape

  • reshape(shape: readonly number[], copy?: boolean): Tensor<DTpe>
  • Reshape the tensor to the specified shape

    At most one value in the shape can be -1, which will be replaced by the inferred size for this dimension.

    Parameters

    • shape: readonly number[]

      New shape of the tensor

    • Optional copy: boolean

      Whether the tensor values should be copied. Only has an effect on GPU tensors

    Returns Tensor<DTpe>
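The -1 inference described above can be sketched as follows (inferShape is a hypothetical name): the -1 entry is replaced so that the total element count is preserved:

```typescript
// Resolve a single -1 entry in a reshape target shape.
function inferShape(
  oldShape: readonly number[],
  newShape: readonly number[]
): number[] {
  const total = oldShape.reduce((a, b) => a * b, 1);
  const known = newShape.reduce((a, b) => (b === -1 ? a : a * b), 1);
  const inferredCount = newShape.filter((d) => d === -1).length;
  if (inferredCount > 1) throw new Error("At most one -1 entry is allowed");
  if (inferredCount === 0) return [...newShape];
  if (total % known !== 0) throw new Error("Sizes do not divide evenly");
  return newShape.map((d) => (d === -1 ? total / known : d));
}

// inferShape([2, 3, 4], [4, -1]) → [4, 6]
```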

Protected reshape_impl

  • reshape_impl(shape: readonly number[], copy: boolean): Tensor<DTpe>

round

setValues

  • setValues(values: Tensor<DTpe>, starts: number[]): Tensor<DTpe>

sigmoid

sign

sin

singleConstant

  • singleConstant(value: number): Tensor<DTpe>

sinh

slice

  • slice(starts: number[], ends: number[], axes?: number[], steps?: number[]): Tensor<DTpe>
  • Takes a slice of the tensor along the specified axes.

    example
    const a = new CPUTensor([2,2],[5,6,7,8]);
    
    a.slice([0],[1],[0]);
    //Will be
    // [[5,6]]
    
    a.slice([0],[1],[1]);
    //Will be
    // [[5],
    //  [7]]
    

    Parameters

    • starts: number[]

      Start of the slice for each axis

    • ends: number[]

      End of the slice for each axis - Exclusive (the end index will not be included in the slice)

    • Optional axes: number[]

      Axes to slice. Defaults to all axes

    • Optional steps: number[]

      Step size of the slice for each axis. Defaults to 1 for all axes

    Returns Tensor<DTpe>

Protected slice_impl

  • slice_impl(starts: number[], ends: number[], axes: number[], steps: number[]): Tensor<DTpe>

softmax

  • softmax(axis: number): Tensor<DTpe>

sqrt

squeeze

subtract

  • subtract(tensor: Tensor<DTpe>, alpha?: number, beta?: number): Tensor<DTpe>
  • Subtracts two tensors. Supports broadcasting

    example
    const a = new CPUTensor([2,2],[5,6,7,8]);
    const b = new CPUTensor([2,2],[1,2,3,4]);
    const c = new CPUTensor([1],[2]);
    
    a.subtract(b);
    //Will be
    // [[4,4],
    //  [4,4]]
    
    a.subtract(c);
    //Will be
    // [[3,4],
    //  [5,6]]
    

    Parameters

    • tensor: Tensor<DTpe>
    • Optional alpha: number
    • Optional beta: number

    Returns Tensor<DTpe>

subtract_impl

  • subtract_impl(th: Tensor<DTpe>, tensor: Tensor<DTpe>, resultShape: readonly number[], alpha: number, beta: number): Tensor<DTpe>

sum

  • sum(axes?: number | number[], keepDims?: boolean): Tensor<DTpe>
  • Sums over the specified axis/axes.

    example
    const a = new CPUTensor([2,3], [1,2,3,4,5,6]);
    
    a.sum(); //Will be [21]
    a.sum(0); //Will be [5,7,9]
    a.sum(1); //Will be [6,15]
    a.sum(0, true); //Will be [[5,7,9]]
    

    Parameters

    • Optional axes: number | number[]

      One or multiple axes to sum over. If not specified this will sum over all axes

    • Optional keepDims: boolean

      Whether the summation axes will be kept with size 1

    Returns Tensor<DTpe>

sumSquare

  • sumSquare(axes?: number | number[], keepDims?: boolean): Tensor<DTpe>
  • Sums over the specified axis/axes with the entries of the tensor squared. This is equal to a.multiply(a).sum(axes, keepDims) but faster

    Parameters

    • Optional axes: number | number[]

      One or multiple axes to sum over. If not specified this will sum over all axes

    • Optional keepDims: boolean

      Whether the summation axes will be kept with size 1

    Returns Tensor<DTpe>

Protected sumSquare_impl

  • sumSquare_impl(axes: number[], keepDims: boolean): Tensor<DTpe>

Protected sum_impl

  • sum_impl(axes: number[], keepDims: boolean): Tensor<DTpe>

tan

tanh

transpose

  • transpose(permutation?: number[]): Tensor<DTpe>
  • Transposes the tensor according to the given permutation

    example
    const a = new CPUTensor([2,2],[5,6,7,8]);
    
    a.transpose();
    //Will be
    // [[5,7],
    //  [6,8]]
    

    Parameters

    • Optional permutation: number[]

      Permutation for the axes. Default is the reverse axis order

    Returns Tensor<DTpe>

Protected transpose_impl

  • transpose_impl(permutation: number[]): Tensor<DTpe>

upsample

  • upsample(scales: number[]): Tensor<DTpe>

Static create

Static fromData

Generated using TypeDoc