According to my testing, Facebook's crawlers do not render client-side templates the way a browser does.
I want to avoid running a webserver and building HTML files just for Open Graph.
When you think about it, it should be clear why this does not work.
The Facebook crawler downloads the HTML exactly as the server serves it. Like most crawlers, it will not execute any JavaScript, both for security reasons and for speed (they do not have the time to execute JavaScript on their servers).
There is no way around this. If you want the crawler to index your page, you need to serve it the content you want it to read directly in the HTML.
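For illustration, a minimal sketch of serving the Open Graph tags directly in the server response, assuming a Node/Express server; the getArticle lookup is hypothetical:

```typescript
import express from "express";

const app = express();

// Hypothetical lookup; replace with your own data source.
function getArticle(id: string) {
  return {
    title: "Example title",
    description: "Example description",
    image: "https://example.com/image.png",
  };
}

app.get("/article/:id", (req, res) => {
  const article = getArticle(req.params.id);
  // The Open Graph tags are part of the HTML the server sends,
  // so the crawler sees them without executing any JavaScript.
  res.send(`<!DOCTYPE html>
<html>
  <head>
    <meta property="og:title" content="${article.title}" />
    <meta property="og:description" content="${article.description}" />
    <meta property="og:image" content="${article.image}" />
  </head>
  <body>
    <div id="app"></div>
    <script src="/app.js"></script>
  </body>
</html>`);
});

app.listen(3000);
```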
Tip: You could use something like phantom.js to render your pages on the server side and serve the result to the crawlers.
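A rough sketch of that idea; since PhantomJS is no longer maintained, this uses Puppeteer (a headless-Chrome library, not mentioned above) to load the page, let the client-side templates run, and capture the resulting HTML. The URL is a placeholder:

```typescript
import puppeteer from "puppeteer";

// Render a client-side page in a headless browser and return the final HTML,
// which can then be cached and served to crawlers.
async function prerender(url: string): Promise<string> {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  // Wait until network activity settles so the client-side templates have rendered.
  await page.goto(url, { waitUntil: "networkidle0" });
  const html = await page.content();
  await browser.close();
  return html;
}

prerender("https://example.com/article/123").then((html) => console.log(html));
```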
Use the ?_escaped_fragment_ method along with a prerendering service. Facebook respects the same crawlable AJAX specification as Google; see https://developers.google.com/webmasters/ajax-crawling/docs/specification
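Under that specification, a crawler requesting https://example.com/#!/article/123 actually fetches https://example.com/?_escaped_fragment_=/article/123. A minimal sketch of handling that, again assuming Express; getSnapshot is a hypothetical stand-in for a snapshot cache that a prerender service would fill:

```typescript
import express from "express";

const app = express();

// Crawlers following the AJAX crawling spec send the hash fragment
// as the _escaped_fragment_ query parameter.
app.use((req, res, next) => {
  const fragment = req.query._escaped_fragment_;
  if (typeof fragment === "string") {
    // Serve a prerendered HTML snapshot for the requested fragment.
    res.send(getSnapshot(fragment));
  } else {
    next();
  }
});

// Hypothetical snapshot store; a prerender service would fill this role.
function getSnapshot(fragment: string): string {
  return `<html><head><meta property="og:title" content="Snapshot for ${fragment}" /></head></html>`;
}

app.listen(3000);
```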
The solution is basically to use server-side user-agent detection to recognize when a social media crawler arrives and serve it prerendered HTML instead of the client-side app.
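For example, a rough Express middleware sketch; the user-agent list is not exhaustive, and renderForCrawler is a hypothetical stand-in for your prerendering step:

```typescript
import express from "express";

const app = express();

// User-agent substrings of common social media crawlers (not exhaustive).
const CRAWLER_PATTERN = /facebookexternalhit|Facebot|Twitterbot|LinkedInBot/i;

app.use((req, res, next) => {
  const userAgent = req.headers["user-agent"] ?? "";
  if (CRAWLER_PATTERN.test(userAgent)) {
    // Crawlers get a prerendered snapshot with the meta tags already in place.
    res.send(renderForCrawler(req.originalUrl));
  } else {
    // Regular visitors get the normal client-side app.
    next();
  }
});

// Hypothetical; in practice this would be backed by a prerendering step
// like the headless-browser sketch above.
function renderForCrawler(url: string): string {
  return `<html><head><meta property="og:url" content="https://example.com${url}" /></head></html>`;
}

app.listen(3000);
```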