Second round cherry-pick to rel-1.9.0 (#9062)
* Adding async fetching for webgl backend (#8951)

* Adding async fetching for webgl backend

* fix PR comments and CI failure.

* fixing a bug

* adding a flag

* Enable linking in exception throwing support library when build onnxruntime wasm. (#8973)

* Enable linking in exception throwing support library when build onnxruntime webassembly containing onnxruntime-extensions.

* Add flag in build.py to enable linking exceptions throwing library.

* Update onnxruntime-extensions document and bind custom_ops build flag with use_extensions.

* Update doc.

* Update cgmanifest.json.

Co-authored-by: Zuwei Zhao <[email protected]>

* Remove document text from error message in a couple of ops (#9003)

* do not add pkg wheel entry to the index html file if it already exists (#9004)

* do not add pkg wheel entry to the index html file if it already exists

* [js/web] fix ort web e2e test (#9025)

* Fix cmake POWER10 detection

Recent commit 60c98a8 changed variable mlas_common_srcs which affects
POWER10 detection.

* Fix Where op type reduction processing (#9033)

* Update type reduction script to track Where Op's second input type.

* Clean up op_kernel_type_control.h includes.

* Use more maintainable include.

* Fix ROCm wheels CI pipeline break by installing latest protobuf from source (#9047)

* install protobuf from source

* fix rm command in Dockerfile

* fix options on rm command

* fix cd into protobuf source directory

* try again

* remove strip step

* debug list the files

* ls on /usr

* more debug

* more debug

* adjust LD_LIBRARY_PATH

* try remove protobuf before ORT build

* [js/web] a bugfix and add tests for wasm proxy worker (#9048)

* [js/web] add tests for wasm proxy worker

* fix script src override

* Set onnxruntime_DISABLE_RTTI to default OFF (#9049)

Co-authored-by: Du Li <[email protected]>
Co-authored-by: Zuwei Zhao <[email protected]>
Co-authored-by: Zuwei Zhao <[email protected]>
Co-authored-by: Hariharan Seshadri <[email protected]>
Co-authored-by: liqun Fu <[email protected]>
Co-authored-by: Yulong Wang <[email protected]>
Co-authored-by: Rajalakshmi Srinivasaraghavan <[email protected]>
Co-authored-by: Edward Chen <[email protected]>
Co-authored-by: Suffian Khan <[email protected]>
Co-authored-by: Changming Sun <[email protected]>
11 people authored Sep 16, 2021
1 parent f202cf3 commit 83dc225
Showing 30 changed files with 295 additions and 43 deletions.
2 changes: 1 addition & 1 deletion cgmanifests/submodules/cgmanifest.json
@@ -374,7 +374,7 @@
"component": {
"type": "git",
"git": {
"commitHash": "97ec95075187b3e6bfbe2820572f9edc93e6ac5b",
"commitHash": "d4b2aff0c890ae38bad87c20f5731333db2a2cc1",
"repositoryUrl": "https://github.com/microsoft/onnxruntime-extensions.git"
},
"comments": "git submodule at cmake/external/onnxruntime-extensions"
3 changes: 2 additions & 1 deletion cmake/CMakeLists.txt
@@ -110,7 +110,7 @@ option(onnxruntime_USE_ROCM "Build with AMD GPU support" OFF)
option(onnxruntime_DISABLE_CONTRIB_OPS "Disable contrib ops" OFF)
option(onnxruntime_DISABLE_ML_OPS "Disable traditional ML ops" OFF)
option(onnxruntime_DISABLE_SPARSE_TENSORS "Disable sparse tensors data types" OFF)
-cmake_dependent_option(onnxruntime_DISABLE_RTTI "Disable RTTI" ON "NOT onnxruntime_ENABLE_PYTHON" OFF)
+cmake_dependent_option(onnxruntime_DISABLE_RTTI "Disable RTTI" ON "onnxruntime_MINIMAL_BUILD;NOT onnxruntime_ENABLE_PYTHON" OFF)
# For now onnxruntime_DISABLE_EXCEPTIONS will only work with onnxruntime_MINIMAL_BUILD, more changes (ONNX, non-CPU EP, ...) are required to run this standalone
option(onnxruntime_DISABLE_EXCEPTIONS "Disable exception handling. Requires onnxruntime_MINIMAL_BUILD currently." OFF)
option(onnxruntime_MINIMAL_BUILD "Exclude as much as possible from the build. Support ORT format models. No support for ONNX format models." OFF)
@@ -148,6 +148,7 @@ option(onnxruntime_USE_MPI "Build with MPI support" OFF)
option(onnxruntime_BUILD_WEBASSEMBLY "Enable this option to create WebAssembly byte codes" OFF)
option(onnxruntime_ENABLE_WEBASSEMBLY_THREADS "Enable this option to create WebAssembly byte codes with multi-threads support" OFF)
option(onnxruntime_ENABLE_WEBASSEMBLY_EXCEPTION_CATCHING "Enable this option to turn on exception catching" OFF)
+option(onnxruntime_ENABLE_WEBASSEMBLY_EXCEPTION_THROWING "Enable this option to turn on exception throwing even if the build disabled exceptions support" OFF)
option(onnxruntime_ENABLE_WEBASSEMBLY_DEBUG_INFO "Enable this option to turn on DWARF format debug info" OFF)

# Enable bitcode for iOS
2 changes: 1 addition & 1 deletion cmake/external/onnxruntime-extensions
Submodule onnxruntime-extensions updated 53 files
+1 −0 .gitignore
+7 −0 CMakeLists.txt
+7 −3 cmake/noexcep_ops.cmake
+19 −3 includes/ocos.h
+1 −1 onnxruntime_extensions/_cuops.py
+8 −1 operators/math/math.cc
+2 −2 operators/math/segement_extraction.cc
+1 −1 operators/math/segment_extraction.hpp
+8 −8 operators/math/segment_sum.cc
+1 −1 operators/math/segment_sum.hpp
+0 −9 operators/text/kernels.h
+1 −1 operators/text/op_equal.hpp
+4 −4 operators/text/op_equal_impl.hpp
+7 −7 operators/text/op_ragged_tensor.cc
+1 −1 operators/text/op_ragged_tensor.hpp
+5 −5 operators/text/re2_strings/string_regex_replace.cc
+1 −1 operators/text/re2_strings/string_regex_replace.hpp
+8 −8 operators/text/re2_strings/string_regex_split.cc
+1 −1 operators/text/re2_strings/string_regex_split.hpp
+0 −0 operators/text/re2_strings/string_regex_split_re.hpp
+1 −1 operators/text/string_concat.cc
+1 −1 operators/text/string_concat.hpp
+10 −2 operators/text/string_ecmaregex_replace.cc
+3 −2 operators/text/string_ecmaregex_replace.hpp
+8 −2 operators/text/string_ecmaregex_split.cc
+3 −1 operators/text/string_ecmaregex_split.hpp
+6 −6 operators/text/string_hash.cc
+1 −1 operators/text/string_hash.hpp
+4 −4 operators/text/string_join.cc
+1 −1 operators/text/string_join.hpp
+1 −1 operators/text/string_length.hpp
+1 −1 operators/text/string_lower.hpp
+5 −5 operators/text/string_split.cc
+1 −1 operators/text/string_split.hpp
+6 −6 operators/text/string_to_vector.cc
+43 −0 operators/text/text.cc
+4 −4 operators/text/vector_to_string.cc
+3 −3 operators/tokenizer/basic_tokenizer.cc
+75 −1 operators/tokenizer/bert_tokenizer.cc
+20 −0 operators/tokenizer/bert_tokenizer.hpp
+74 −21 operators/tokenizer/bert_tokenizer_decoder.cc
+6 −1 operators/tokenizer/bert_tokenizer_decoder.hpp
+4 −4 operators/tokenizer/blingfire_sentencebreaker.cc
+9 −9 operators/tokenizer/gpt2_tokenizer.cc
+1 −1 operators/tokenizer/tokenizers.cc
+5 −5 operators/tokenizer/wordpiece_tokenizer.cc
+5 −84 shared/ortcustomops.cc
+13 −1 test/static_test/test_strings.cc
+76 −0 test/static_test/test_tokenizer.cc
+27 −2 test/test_bert_tokenizer_decoder.py
+9 −4 test/test_segment_extraction.py
+204 −0 test/test_string_ecma_regex.py
+11 −6 tools/gen_selectedops.py
2 changes: 1 addition & 1 deletion cmake/onnxruntime_mlas.cmake
@@ -299,7 +299,7 @@ else()
HAS_P10_RUNTIME
)
if (HAS_P10_RUNTIME)
-set_source_files_properties(${mlas_common_srcs} PROPERTIES COMPILE_FLAGS "-DPOWER10")
+set_source_files_properties(${MLAS_SRC_DIR}/platform.cpp PROPERTIES COMPILE_FLAGS "-DPOWER10")
endif()
set(mlas_platform_srcs_power10
${MLAS_SRC_DIR}/power/SgemmKernelPOWER10.cpp
5 changes: 5 additions & 0 deletions cmake/onnxruntime_webassembly.cmake
@@ -61,6 +61,11 @@ else()
set_property(TARGET onnxruntime_webassembly APPEND_STRING PROPERTY LINK_FLAGS " -s ASSERTIONS=0 -s SAFE_HEAP=0 -s STACK_OVERFLOW_CHECK=0 -s DEMANGLE_SUPPORT=0")
endif()

+# Set a link flag to enable exception throwing support; this overrides the default behavior of disabling exception throwing when exceptions are disabled.
+if (onnxruntime_ENABLE_WEBASSEMBLY_EXCEPTION_THROWING)
+set_property(TARGET onnxruntime_webassembly APPEND_STRING PROPERTY LINK_FLAGS " -s DISABLE_EXCEPTION_THROWING=0")
+endif()

if (onnxruntime_ENABLE_WEBASSEMBLY_THREADS)
if (onnxruntime_ENABLE_WEBASSEMBLY_SIMD)
set_property(TARGET onnxruntime_webassembly APPEND_STRING PROPERTY LINK_FLAGS " -s EXPORT_NAME=ortWasmSimdThreaded -s USE_PTHREADS=1")
9 changes: 8 additions & 1 deletion docs/onnxruntime_extensions.md
@@ -31,7 +31,14 @@ If your model contains operators from onnxruntime-extensions, please add argument
You could even manually edit the **required_operators.config** if you know the custom operators required and don't want to build the shared library.

### Build and Disable Exceptions
You could add argument `--disable_exceptions` to disable exceptions in both onnxruntime and onnxruntime-extensions. However, if the custom operators you used in onnxruntime-extensions (such as BlingFireTokenizer) use c++ exceptions, then you cannot disable it.
You could add argument `--disable_exceptions` to disable exceptions in both onnxruntime and onnxruntime-extensions.

However, if the custom operators you use from onnxruntime-extensions (such as BlingFireTokenizer) rely on C++ exceptions, you will also need to add argument `--enable_wasm_exception_throwing_override` so that **Emscripten** links in its exception throwing support library. If this argument is not set, Emscripten will fail with linking errors.

### Example Build Command
```console
D:\onnxruntime> build.bat --config Release --build_wasm --enable_wasm_threads --enable_wasm_simd --skip_tests --disable_exceptions --disable_wasm_exception_catching --enable_wasm_exception_throwing_override --disable_rtti --use_extensions --parallel --minimal_build custom_ops --include_ops_by_config D:\required_operators.config
```

## E2E Example using Custom Operators
A common NLP task typically involves several steps: pre-processing, the DL model, and post-processing. It can be efficient and productive to convert the pre/post-processing code into an ONNX model as well, since an ONNX graph is a computation graph and can, in principle, represent most program logic.
4 changes: 4 additions & 0 deletions js/common/lib/env.ts
@@ -60,6 +60,10 @@ export declare namespace Env {
* Set or get the packed texture mode
*/
pack?: boolean;
/**
* Set or get whether to enable async download.
*/
async?: boolean;
}
}

13 changes: 12 additions & 1 deletion js/web/lib/onnxjs/backends/backend-webgl.ts
@@ -46,6 +46,13 @@ export class WebGLBackend implements Backend {
env.webgl.pack = value;
}

get async(): boolean|undefined {
return env.webgl.async;
}
set async(value: boolean|undefined) {
env.webgl.async = value;
}

initialize(): boolean {
try {
this.glContext = createWebGLContext(this.contextId);
@@ -58,13 +65,17 @@ export class WebGLBackend implements Backend {
if (typeof this.pack !== 'boolean') {
this.pack = false;
}
if (typeof this.async !== 'boolean') {
this.async = false;
}

Logger.setWithEnv(env);

Logger.verbose(
'WebGLBackend',
`Created WebGLContext: ${typeof this.glContext} with matmulMaxBatchSize: ${
this.matmulMaxBatchSize}; textureCacheMode: ${this.textureCacheMode}; pack: ${this.pack}.`);
this.matmulMaxBatchSize}; textureCacheMode: ${this.textureCacheMode}; pack: ${this.pack}; async: ${
this.async}.`);
return true;
} catch (e) {
Logger.warning('WebGLBackend', `Unable to initialize WebGLBackend. ${e}`);
14 changes: 12 additions & 2 deletions js/web/lib/onnxjs/backends/webgl/inference-handler.ts
@@ -245,8 +245,8 @@ export class WebGLInferenceHandler implements InferenceHandler {
...layout,
tensor: tensor ||
new Tensor(
layout.unpackedShape, dataType, (_id: Tensor.Id) => this.readTexture(textureData), undefined,
undefined, tensorId),
layout.unpackedShape, dataType, (_id: Tensor.Id) => this.readTexture(textureData),
async (_id: Tensor.Id) => this.readTextureAsync(textureData), undefined, tensorId),
texture
};
this.setTextureData(textureData.tensor.dataId, textureData, layout.isPacked);
@@ -287,6 +287,16 @@
return this.session.textureManager.readTexture(textureData, textureData.tensor.type, textureData.channels);
}

async readTextureAsync(textureData: TextureData): Promise<Tensor.NumberType> {
if (textureData.isPacked) {
return this.readTextureAsync(this.unpack(textureData));
}
if (!this.session.backend.glContext.isFloat32DownloadSupported) {
return this.session.textureManager.readUint8TextureAsFloat(encodeAsUint8(this, textureData));
}
return this.session.textureManager.readTextureAsync(textureData, textureData.tensor.type, textureData.channels);
}

pack(input: TextureData): TextureData {
const outputTextureData = this.executeProgram(createPackProgramInfoLoader(this, input.tensor), [input.tensor]);
return outputTextureData;
24 changes: 24 additions & 0 deletions js/web/lib/onnxjs/backends/webgl/texture-manager.ts
@@ -27,6 +27,7 @@ export class TextureManager {
private readonly inUseTextures: Map<string, WebGLTexture[]>;
private readonly idleTextures: Map<string, WebGLTexture[]>;
private readonly textureLookup: Map<WebGLTexture, string>;
private readonly pendingRead: Map<Tensor.Id, Array<(arr: Tensor.NumberType) => void>> = new Map();

constructor(
public glContext: WebGLContext, public layoutStrategy: TextureLayoutStrategy, public profiler: Readonly<Profiler>,
@@ -89,6 +90,29 @@
return this.toTensorData(dataType, data);
});
}
async readTextureAsync(td: TextureData, dataType: Tensor.DataType, channels?: number): Promise<Tensor.NumberType> {
const dataId = td.tensor.dataId;
if (!channels) {
channels = 1;
}
if (this.pendingRead.has(dataId)) {
const subscribers = this.pendingRead.get(dataId);
return new Promise<Tensor.NumberType>(resolve => subscribers?.push(resolve));
}
return this.profiler.event('backend', 'TextureManager.readTextureAsync', async () => {
this.pendingRead.set(dataId, []);
const dataSize = td.shape.reduce((a, b) => a * b) * channels!;
// add a fence waiting for the data to be ready
await this.glContext.createAndWaitForFence();
const data = this.glContext.readTexture(
td.texture, td.width, td.height, dataSize, this.toEncoderType(dataType), channels!);
const tensorData = this.toTensorData(dataType, data);
const subscribers = this.pendingRead.get(dataId);
this.pendingRead.delete(dataId);
subscribers?.forEach(resolve => resolve(tensorData));
return tensorData;
});
}
readUint8TextureAsFloat(td: TextureData): Float32Array {
return this.profiler.event('backend', 'TextureManager.readUint8TextureAsFloat', () => {
const dataSize = td.shape.reduce((a, b) => a * b);
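The `readTextureAsync` addition above deduplicates concurrent downloads of the same tensor through the `pendingRead` map: the first caller performs the GPU read, and any caller that arrives while it is in flight subscribes and is resolved with the same result. A minimal sketch of that pattern, with hypothetical names (`DedupReader`, `slowRead`) and no WebGL dependency:

```typescript
// Sketch of the pendingRead deduplication pattern: the first read for an id
// does the expensive work; concurrent reads for the same id subscribe to the
// in-flight result instead of issuing a second read.
class DedupReader {
  private pending = new Map<number, Array<(v: Float32Array) => void>>();

  async read(id: number, slowRead: () => Promise<Float32Array>): Promise<Float32Array> {
    const subscribers = this.pending.get(id);
    if (subscribers) {
      // A read for this id is already in flight; wait for its result.
      return new Promise(resolve => subscribers.push(resolve));
    }
    this.pending.set(id, []);
    const data = await slowRead();
    // Resolve everyone who subscribed while the read was in flight.
    const waiters = this.pending.get(id)!;
    this.pending.delete(id);
    waiters.forEach(resolve => resolve(data));
    return data;
  }
}
```

Because `pending.set` runs before the first `await`, a second caller entering synchronously (or at any point before the download completes) always finds the in-flight entry and never triggers a duplicate texture read.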
74 changes: 74 additions & 0 deletions js/web/lib/onnxjs/backends/webgl/webgl-context.ts
@@ -7,6 +7,26 @@ import * as DataEncoders from './texture-data-encoder';
import {DataEncoder, Encoder} from './texture-data-encoder';
import {repeatedTry} from './utils';

export interface FenceContext {
query: WebGLSync|null;
isFencePassed(): boolean;
}

type PollItem = {
isDoneFn: () => boolean; resolveFn: () => void;
};

export function linearSearchLastTrue(arr: Array<() => boolean>): number {
let i = 0;
for (; i < arr.length; ++i) {
const isDone = arr[i]();
if (!isDone) {
break;
}
}
return i - 1;
}
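`linearSearchLastTrue` returns the index of the last predicate in the leading run of predicates that return true, or -1 if the first one already fails. That is exactly what the fence-polling code needs, since fences signal in submission order. A quick usage sketch:

```typescript
// Index of the last element in the leading run of passing predicates,
// or -1 if the very first predicate fails.
function linearSearchLastTrue(arr: Array<() => boolean>): number {
  let i = 0;
  for (; i < arr.length; ++i) {
    if (!arr[i]()) {
      break;
    }
  }
  return i - 1;
}

// Fences signal in submission order, so the leading run of passed fences
// can be resolved and dropped from the queue in one pass.
linearSearchLastTrue([() => true, () => true, () => false]);  // 1
linearSearchLastTrue([() => false, () => true]);              // -1
linearSearchLastTrue([() => true]);                           // 0
```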

/**
* Abstraction and wrapper around WebGLRenderingContext and its operations
*/
@@ -128,6 +148,7 @@ export class WebGLContext {
// unbind FB
return encoder.decode(buffer, dataSize);
}

isFramebufferReady(): boolean {
// TODO: Implement logic to check if the framebuffer is ready
return true;
@@ -535,4 +556,57 @@
${shaderSource}`);
await repeatedTry(() => this.isTimerResultAvailable(query));
return this.getTimerResult(query);
}

public async createAndWaitForFence(): Promise<void> {
const fenceContext = this.createFence(this.gl);
return this.pollFence(fenceContext);
}

private createFence(gl: WebGLRenderingContext): FenceContext {
let isFencePassed: () => boolean;
const gl2 = gl as WebGL2RenderingContext;
const query = gl2.fenceSync(gl2.SYNC_GPU_COMMANDS_COMPLETE, 0);
gl.flush();
if (query === null) {
isFencePassed = () => true;
} else {
isFencePassed = () => {
const status = gl2.clientWaitSync(query, 0, 0);
return status === gl2.ALREADY_SIGNALED || status === gl2.CONDITION_SATISFIED;
};
}
return {query, isFencePassed};
}

async pollFence(fenceContext: FenceContext) {
return new Promise<void>(resolve => {
void this.addItemToPoll(() => fenceContext.isFencePassed(), () => resolve());
});
}

private itemsToPoll: PollItem[] = [];

pollItems(): void {
// Find the last query that has finished.
const index = linearSearchLastTrue(this.itemsToPoll.map(x => x.isDoneFn));
for (let i = 0; i <= index; ++i) {
const {resolveFn} = this.itemsToPoll[i];
resolveFn();
}
this.itemsToPoll = this.itemsToPoll.slice(index + 1);
}

private async addItemToPoll(isDoneFn: () => boolean, resolveFn: () => void) {
this.itemsToPoll.push({isDoneFn, resolveFn});
if (this.itemsToPoll.length > 1) {
// We already have a running loop that polls.
return;
}
// Start a new loop that polls.
await repeatedTry(() => {
this.pollItems();
// End the loop if no more items to poll.
return this.itemsToPoll.length === 0;
});
}
}
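The poll queue above keeps at most one loop alive: the first `addItemToPoll` call starts it, later calls merely enqueue, and the loop exits once every pending fence has resolved. A standalone sketch of the same idea, using `setTimeout` in place of `repeatedTry` (names here are illustrative, not from the source):

```typescript
type PollItem = { isDone: () => boolean; resolve: () => void };

// Single shared polling loop: each tick resolves the longest finished prefix
// of the queue (preserving FIFO order) and stops once the queue drains.
class Poller {
  private items: PollItem[] = [];

  whenDone(isDone: () => boolean): Promise<void> {
    return new Promise<void>(resolve => {
      this.items.push({ isDone, resolve });
      if (this.items.length > 1) {
        return;  // a polling loop is already running
      }
      const tick = () => {
        let i = 0;
        while (i < this.items.length && this.items[i].isDone()) {
          this.items[i].resolve();
          ++i;
        }
        this.items = this.items.slice(i);
        if (this.items.length > 0) {
          setTimeout(tick, 0);  // keep polling until the queue is empty
        }
      };
      setTimeout(tick, 0);
    });
  }
}
```

The single-loop design matters: starting one timer per fence would scale poorly when many reads are pending, whereas one loop amortizes the polling cost across all of them.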
19 changes: 12 additions & 7 deletions js/web/lib/onnxjs/execution-plan.ts
@@ -134,15 +134,20 @@ export class ExecutionPlan {
}

const output: Tensor[] = [];
this.graph.getOutputIndices().forEach((outputIndex) => {
const thisValue = this._values[outputIndex];
if (thisValue === undefined) {
for (let i = 0; i < this.graph.getOutputIndices().length; i++) {
const outputIndex = this.graph.getOutputIndices()[i];
const outputTensor = this._values[outputIndex];
if (outputTensor === undefined) {
throw new Error(`required output [${outputIndex}] does not have value`);
}
// eslint-disable-next-line no-unused-expressions
thisValue.data;
output.push(thisValue);
});
if (outputIndex === 0) {
await outputTensor.getData();
} else {
// eslint-disable-next-line no-unused-expressions
outputTensor.data;
}
output.push(outputTensor);
}
Logger.verbose('ExecPlan', 'disposing of inferenceHandler');
inferenceHandler.dispose();
return output;
3 changes: 0 additions & 3 deletions js/web/lib/onnxjs/tensor.ts
@@ -131,9 +131,6 @@ export class Tensor {
* get the underlying tensor data asynchronously
*/
async getData(): Promise<TensorData> {
-// TBD: This function is designed for usage when any backend data provider offers a way to retrieve data in an
-// asynchronous way. should implement this function when enabling webgl async read data.
-
if (this.cache === undefined) {
this.cache = await this.asyncDataProvider!(this.dataId);
}
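With the TBD comment removed, `getData` is now live: it fetches from the async data provider on first call and caches the result, so subsequent calls reuse the buffer. A reduced sketch of that cache-on-first-read behavior (class and field names are illustrative):

```typescript
// Cache-on-first-read: the async provider runs once per tensor; later
// getData calls return the cached buffer without touching the backend.
class LazyTensorData {
  private cache?: Float32Array;

  constructor(private readonly provider: () => Promise<Float32Array>) {}

  async getData(): Promise<Float32Array> {
    if (this.cache === undefined) {
      this.cache = await this.provider();
    }
    return this.cache;
  }
}
```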
2 changes: 1 addition & 1 deletion js/web/lib/wasm/proxy-wrapper.ts
@@ -79,7 +79,7 @@ const onProxyWorkerMessage = (ev: MessageEvent<OrtWasmMessage>): void => {
}
};

-const scriptSrc = isProxy() ? (document?.currentScript as HTMLScriptElement)?.src : undefined;
+const scriptSrc = typeof document !== 'undefined' ? (document?.currentScript as HTMLScriptElement)?.src : undefined;

export const initWasm = async(): Promise<void> => {
if (isProxy()) {
5 changes: 4 additions & 1 deletion js/web/script/test-runner-cli-args.ts
@@ -294,7 +294,10 @@ function parseWebglFlags(args: minimist.ParsedArgs): Env.WebGLFlags {
if (pack !== undefined && typeof pack !== 'boolean') {
throw new Error('Flag "webgl-texture-pack-mode" is invalid');
}

const async = args['webgl-async'];
if (async !== undefined && typeof async !== 'boolean') {
throw new Error('Flag "webgl-async" is invalid');
}
return {contextId, matmulMaxBatchSize, textureCacheMode, pack};
}

10 changes: 10 additions & 0 deletions js/web/test/e2e/browser-test-wasm-no-threads-proxy.js
@@ -0,0 +1,10 @@
// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License.

'use strict';

it('Browser E2E testing - WebAssembly backend (no threads, proxy)', async function () {
ort.env.wasm.numThreads = 1;
ort.env.wasm.proxy = true;
await testFunction(ort, { executionProviders: ['wasm'] });
});
9 changes: 9 additions & 0 deletions js/web/test/e2e/browser-test-wasm-proxy.js
@@ -0,0 +1,9 @@
// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License.

'use strict';

it('Browser E2E testing - WebAssembly backend (proxy)', async function () {
ort.env.wasm.proxy = true;
await testFunction(ort, { executionProviders: ['wasm'] });
});
6 changes: 3 additions & 3 deletions js/web/test/e2e/karma.conf.js
@@ -19,7 +19,7 @@ module.exports = function (config) {
config.set({
frameworks: ['mocha'],
files: [
-{ pattern: distPrefix + 'ort.js' },
+{ pattern: distPrefix + 'ort.min.js' },
{ pattern: './common.js' },
{ pattern: TEST_MAIN },
{ pattern: './node_modules/onnxruntime-web/dist/*.wasm', included: false, nocache: true },
@@ -42,11 +42,11 @@
browsers: [],
customLaunchers: {
Chrome_default: {
-base: 'Chrome',
+base: 'ChromeHeadless',
chromeDataDir: USER_DATA
},
Chrome_no_threads: {
-base: 'Chrome',
+base: 'ChromeHeadless',
chromeDataDir: USER_DATA,
// TODO: no-thread flags
}
2 changes: 2 additions & 0 deletions js/web/test/e2e/run.js
@@ -86,6 +86,8 @@ async function testAllBrowserCases({ hostInKarma }) {
await runKarma({ hostInKarma, main: './browser-test-webgl.js', browser: 'Chrome_default' });
await runKarma({ hostInKarma, main: './browser-test-wasm.js', browser: 'Chrome_default' });
await runKarma({ hostInKarma, main: './browser-test-wasm-no-threads.js', browser: 'Chrome_default' });
await runKarma({ hostInKarma, main: './browser-test-wasm-proxy.js', browser: 'Chrome_default' });
await runKarma({ hostInKarma, main: './browser-test-wasm-no-threads-proxy.js', browser: 'Chrome_default' });
await runKarma({ hostInKarma, main: './browser-test-wasm-path-override-filename.js', browser: 'Chrome_default' });
await runKarma({ hostInKarma, main: './browser-test-wasm-path-override-prefix.js', browser: 'Chrome_default' });
}
3 changes: 3 additions & 0 deletions js/web/test/test-main.ts
@@ -39,6 +39,9 @@ if (options.globalEnvFlags) {
if (flags.webgl?.pack !== undefined) {
ort.env.webgl.pack = flags.webgl.pack;
}
if (flags.webgl?.async !== undefined) {
ort.env.webgl.async = flags.webgl.async;
}
if (flags.wasm?.numThreads !== undefined) {
ort.env.wasm.numThreads = flags.wasm.numThreads;
}