loadFrozenModel does not work with local files

Submitted by 耗尽温柔 on 2019-12-07 00:08:31
gang yin

You are trying to use this function:

tf.loadFrozenModel(MODEL_FILE_URL, WEIGHT_MANIFEST_FILE_URL)

and your code has a syntax error: the `await` keyword is only valid inside an `async` function, such as the one below:

async function run () {

  /* 1st: load the model */
  const MODEL_URL = './model/web_model.pb';
  const WEIGHTS_URL = './model/weights_manifest.json';
  const model = await tf.loadFrozenModel(MODEL_URL, WEIGHTS_URL);

  /* 2nd: execute the model in the browser */
  const cat = document.getElementById('cat');
  model.execute({input: tf.fromPixels(cat)});

}
run();
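To see why `await` needs a surrounding `async` function, here is a minimal sketch with a stand-in loader (`fakeLoadModel` is a hypothetical placeholder for `tf.loadFrozenModel`; any function returning a Promise behaves the same way):

```javascript
// Hypothetical stand-in for tf.loadFrozenModel: it returns a Promise,
// just like the real loader does.
function fakeLoadModel(url) {
  return Promise.resolve({ source: url });
}

// 'await' is a syntax error at the top level of a classic script;
// wrapping it in an async function makes it valid.
async function run() {
  const model = await fakeLoadModel('./model/web_model.pb');
  return model.source;
}

run().then(source => console.log(source));
```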

tf.loadFrozenModel uses fetch under the hood. fetch retrieves files over HTTP(S); it cannot read local files directly from disk unless they are served by a server.
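As a rough illustration (`isFetchable` is a hypothetical helper, not part of any library): when the page itself was opened from disk, a relative model path resolves against a file:// base, which the browser's fetch will not retrieve.

```javascript
// Hypothetical check: would a relative model path resolve to a URL
// that the browser's fetch can actually retrieve?
function isFetchable(modelPath, pageUrl) {
  const resolved = new URL(modelPath, pageUrl);
  return resolved.protocol === 'http:' || resolved.protocol === 'https:';
}

// Page opened directly from disk: the path resolves to a file:// URL.
console.log(isFetchable('./model/web_model.pb', 'file:///home/me/index.html')); // false

// Same page served over HTTP: the path resolves to http://localhost:8080/...
console.log(isFetchable('./model/web_model.pb', 'http://localhost:8080/index.html')); // true
```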

For loadFrozenModel to work with local files, those files need to be served by a server. One can use http-server to serve the model topology and its weights.

 # install the http-server module
 npm install http-server -g

 # cd to the directory containing the files, then launch the server
 # to serve the static files of the model topology and weights
 http-server -c1 --cors .

 // load the model in a js script
 (async () => {
   ...
   const model = await tf.loadFrozenModel('http://localhost:8080/tensorflowjs_model.pb', 'http://localhost:8080/weights_manifest.json');
 })();