

Constructors

constructor

Properties

backend

backend: Backend = 'CPU'

Private constants

constants: Constants = ...

Private defaultReady

defaultReady: number[] = ...

Private inputSet

inputSet: Set<string> = ...

inputs

inputs: IValueInfoProto[]

Private intermediaries

intermediaries: {} = ...

Type declaration

  • [name: string]: Intermediary

mode

mode: Mode = 'train'

Private modelProto

modelProto: ModelProto

Private noConvertConstants

noConvertConstants: Set<string>

Private noConvertNodes

noConvertNodes: Set<number>

Private nodeIdCounter

nodeIdCounter: number = 10000

Private nodeIds

nodeIds: number[] = ...

Private nodes

nodes: {} = ...

Type declaration

  • [id: number]: OnnxNode

outputs

outputs: string[]

Private precision

precision: 16 | 32

Private version

version: number

Methods

delete

  • delete(): void
  • Deletes the model

    This will release the memory/framebuffers (depending on the backend)

    Returns void

forward

  • forward(inputs: Tensor<any>[], wait?: number): Promise<Tensor<any>[]>
  • Do a forward pass for the specified inputs

    Parameters

    • inputs: Tensor<any>[]
    • Optional wait: number

      Number of milliseconds to wait between each layer. This is especially useful if your model is complex and you don't want it to block your whole application.

    Returns Promise<Tensor<any>[]>
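    The `wait` option can be pictured as a cooperative yield between layers. Below is a minimal sketch of that idea in plain TypeScript; `Layer`, `runLayers`, and `sleep` are illustrative stand-ins, not part of this API:

    ```typescript
    // Toy layer: a function from one tensor-like value to another.
    type Layer = (x: number[]) => number[];

    const sleep = (ms: number) => new Promise<void>((res) => setTimeout(res, ms));

    async function runLayers(input: number[], layers: Layer[], wait?: number): Promise<number[]> {
      let current = input;
      for (const layer of layers) {
        current = layer(current);
        if (wait !== undefined) {
          // Yield back to the event loop between layers so a complex
          // model does not block the whole application.
          await sleep(wait);
        }
      }
      return current;
    }
    ```

    Leaving `wait` undefined runs the layers back to back with no scheduling pauses.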

Protected getInputsToNode

  • getInputsToNode(node: OnnxNode, intermediaryRes: {}): { inputs: Tensor<any>[]; toDelete: string[] }
  • Parameters

    • node: OnnxNode
    • intermediaryRes: {}
      • [name: string]: IntermediaryRes

    Returns { inputs: Tensor<any>[]; toDelete: string[] }

    • inputs: Tensor<any>[]
    • toDelete: string[]
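    As a rough illustration of what a helper with this shape might do (all names below are stand-ins, and the use-count bookkeeping is an assumption, not the library's actual logic): it looks up each input name in the intermediary results, and names with no remaining consumers are flagged for deletion.

    ```typescript
    // Illustrative only: tensors are modeled as number[], and an invented
    // use-count map decides which intermediaries can be freed after this
    // node has consumed them.
    function gatherInputs(
      inputNames: string[],
      intermediaryRes: { [name: string]: number[] },
      usesLeft: { [name: string]: number }
    ): { inputs: number[][]; toDelete: string[] } {
      const inputs: number[][] = [];
      const toDelete: string[] = [];
      for (const name of inputNames) {
        inputs.push(intermediaryRes[name]);
        usesLeft[name] -= 1;
        if (usesLeft[name] === 0) {
          // No other node will read this intermediary again.
          toDelete.push(name);
        }
      }
      return { inputs, toDelete };
    }
    ```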

getNodeWithInput

  • getNodeWithInput(output: string): undefined | number

getNodeWithOutput

  • getNodeWithOutput(output: string): undefined | number

getNodes

  • getNodes(): {}

getParameters

getSubModules

Private initNodes

  • initNodes(modelProto: ModelProto): void

Protected initializeForward

  • initializeForward(inputs: Tensor<any>[], intermediaryRes: {}, nodes: {}, nodesReady: number[]): void
  • Parameters

    • inputs: Tensor<any>[]
    • intermediaryRes: {}
      • [name: string]: IntermediaryRes
    • nodes: {}
      • [id: number]: { variableInputs: number }
        • variableInputs: number
    • nodesReady: number[]

    Returns void

Private initializer

  • initializer(initializer: ITensorProto[]): void

Private insertNode

  • insertNode(node: OnnxNode): void

optimize

  • optimize(): void

Protected propagateResults

  • propagateResults(node: OnnxNode, intermediaryRes: {}, outputs: Tensor<any>[], nodes: {}, nodesReady: number[]): void
  • Parameters

    • node: OnnxNode
    • intermediaryRes: {}
      • [name: string]: IntermediaryRes
    • outputs: Tensor<any>[]
    • nodes: {}
      • [id: number]: { variableInputs: number }
        • variableInputs: number
    • nodesReady: number[]

    Returns void
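    `initializeForward` and `propagateResults` share the `nodes`/`nodesReady` bookkeeping in their signatures. One plausible reading of that scheme (names invented for illustration): every node counts outstanding inputs in `variableInputs`, and whenever a result becomes available, the counter of each consuming node is decremented; nodes that reach zero join the ready list.

    ```typescript
    // Sketch of readiness bookkeeping: decrement the pending-input counter
    // of each consumer of a freshly produced result, and enqueue consumers
    // whose counter hits zero. This is an assumption about the scheme, not
    // the library's actual implementation.
    function markInputReady(
      nodes: { [id: number]: { variableInputs: number } },
      nodesReady: number[],
      consumerIds: number[]
    ): void {
      for (const id of consumerIds) {
        nodes[id].variableInputs -= 1;
        if (nodes[id].variableInputs === 0) {
          nodesReady.push(id);
        }
      }
    }
    ```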

prune

  • prune(intermediariesToDelete?: string[]): void
  • Parameters

    • Optional intermediariesToDelete: string[]

    Returns void

Private pruneIntermediaries

  • pruneIntermediaries(intermediariesToDelete?: string[]): Set<number>
  • Parameters

    • Optional intermediariesToDelete: string[]

    Returns Set<number>

Private removeNode

  • removeNode(nodeId: number, preserveIntermediaries: Set<string>): string[]
  • Parameters

    • nodeId: number
    • preserveIntermediaries: Set<string>

    Returns string[]

resolveConstant

  • resolveConstant(name: string): undefined | Tensor<any>

toBackend

  • toBackend(backend: Backend): Promise<void>

toCPU

  • toCPU(): Promise<void>

toGPU

  • toGPU(): Promise<void>

toWASM

  • toWASM(): Promise<void>
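The four backend methods above form one surface: `toCPU`, `toGPU`, and `toWASM` can be read as shorthands for `toBackend` with a fixed target. A toy class sketching that relationship (`ToyModel` is an invention for illustration; the real class does far more than set a field):

```typescript
type Backend = 'CPU' | 'GPU' | 'WASM';

class ToyModel {
  // Matches the documented default above.
  backend: Backend = 'CPU';

  async toBackend(backend: Backend): Promise<void> {
    // A real implementation would also convert constants and
    // intermediaries to the target backend here.
    this.backend = backend;
  }

  toCPU(): Promise<void> { return this.toBackend('CPU'); }
  toGPU(): Promise<void> { return this.toBackend('GPU'); }
  toWASM(): Promise<void> { return this.toBackend('WASM'); }
}
```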

Generated using TypeDoc