loadFrozenModel does not work with local files

Submitted by 只愿长相守 on 2019-12-10 11:34:15

Question


Need help with async/await.

I'm currently studying https://github.com/tensorflow/tfjs-converter,

and I'm stumped at this part of the code (loading my Python-converted, saved JS model for use in the browser):

import * as tf from '@tensorflow/tfjs';
import {loadFrozenModel} from '@tensorflow/tfjs-converter';

/*1st model loader*/
const MODEL_URL = './model/web_model.pb';
const WEIGHTS_URL = '.model/weights_manifest.json';
const model = await loadFrozenModel(MODEL_URL, WEIGHTS_URL);

/*2nd model execution in browser*/
const cat = document.getElementById('cat');
model.execute({input: tf.fromPixels(cat)});

I noticed it's using ES6 (import/export) and ES2017 (async/await), so I've used Babel with babel-preset-env, babel-polyfill, and babel-plugin-transform-runtime. I used webpack but switched over to Parcel as my bundler (as suggested by the TensorFlow.js devs). With both bundlers I kept getting an error saying the await should be wrapped in an async function, so I wrapped the first part of the code in an async function, hoping to get a Promise:

async function loadMod() {
  const MODEL_URL = './model/web_model.pb';
  const WEIGHTS_URL = '.model/weights_manifest.json';
  const model = await loadFrozenModel(MODEL_URL, WEIGHTS_URL);
}

loadMod();

Now both bundlers say that 'await is a reserved word'. The VS Code ESLint extension says that loadMod() returns a Promise<void> (so did the promise fail or get rejected?). Am I wrong to reference the JavaScript model files using a relative path? Do I have to serve the ML model from the cloud? Can't it be loaded from a relative local path?
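
(A minimal debugging sketch, assuming the loadMod function above: chaining .catch onto the returned promise logs whatever made loadFrozenModel reject, instead of leaving an unhandled rejection.)

 // hypothetical debugging aid: surface the rejection reason
 loadMod()
   .then(() => console.log('model loaded'))
   .catch(err => console.error('model failed to load:', err));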

Any suggestions would be much appreciated. Thanks!


回答1:


You are trying to use this function:

tf.loadFrozenModel(MODEL_FILE_URL, WEIGHT_MANIFEST_FILE_URL)

Your code also has a syntax error: if you use the keyword 'await', you must do so inside an async function, such as below:

async function run() {

  /* 1st: model loading */
  const MODEL_URL = './model/web_model.pb';
  const WEIGHTS_URL = './model/weights_manifest.json'; // note: './model', not '.model'
  const model = await loadFrozenModel(MODEL_URL, WEIGHTS_URL);

  /* 2nd: model execution in the browser */
  const cat = document.getElementById('cat');
  model.execute({input: tf.fromPixels(cat)});

}
run();



Answer 2:


tf.loadFrozenModel uses fetch under the hood. fetch retrieves files over HTTP from a server; it cannot read local files directly from disk unless they are served by a server. See this answer for more.
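
As a quick illustration (the path here is the one from the question, and the call only succeeds when a server is actually serving it; from a page opened via file:// the same call typically rejects with a network error rather than a clean HTTP status):

 // sketch: fetch only succeeds when an HTTP server serves this path
 fetch('./model/web_model.pb')
   .then(res => {
     if (!res.ok) throw new Error('HTTP ' + res.status);
     return res.arrayBuffer();
   })
   .then(buf => console.log('fetched', buf.byteLength, 'bytes'))
   .catch(err => console.error('fetch failed:', err));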

For loadFrozenModel to work with local files, those files need to be served by a server. One can use http-server to serve the model topology and its weights:

 # install the http-server module globally
 npm install http-server -g

 # cd to the repository containing the files, then launch the server to
 # serve the model topology and weights as static files
 # (-c1 sets the cache time to one second; --cors enables cross-origin headers)
 http-server -c1 --cors .

 // load the model in a js script, now over HTTP
 (async () => {
   ...
   const model = await tf.loadFrozenModel(
     'http://localhost:8080/tensorflowjs_model.pb',
     'http://localhost:8080/weights_manifest.json'
   );
 })();
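
Once http-server is running, the files that previously sat next to the page are reachable over HTTP (e.g. http://localhost:8080/model/web_model.pb if served from the project root), so the relative paths passed to loadFrozenModel can simply be replaced with those URLs.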


Source: https://stackoverflow.com/questions/50295650/loadfrozenmodel-does-not-work-with-local-files
