Reading plain HTML is much faster than executing JavaScript, waiting for it to finish, and then working out how the resulting page is laid out. I think that's the main reason.
Another might be that crawling is fully automated, so parsing a static page is simpler and more reliable. With JavaScript, the content of the page might change from one moment to the next, leaving the crawler "confused".
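To make that concrete, here's a minimal sketch (in Python, my choice for illustration) of what a static crawler effectively does: it parses the markup and extracts text, but never runs anything inside `<script>` tags, so content that a browser would inject with JavaScript simply doesn't exist for it.

```python
from html.parser import HTMLParser

# A toy "crawler" that, like classic search-engine bots, only reads
# the static markup and never executes scripts.
class TextExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_script = False
        self.text = []

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            self.in_script = True

    def handle_endtag(self, tag):
        if tag == "script":
            self.in_script = False

    def handle_data(self, data):
        # Skip script bodies and whitespace-only runs.
        if not self.in_script and data.strip():
            self.text.append(data.strip())

page = """
<html><body>
  <h1>Static headline</h1>
  <div id="feed"></div>
  <script>
    // A browser would run this, but the crawler never does.
    document.getElementById("feed").innerHTML = "Dynamic story";
  </script>
</body></html>
"""

parser = TextExtractor()
parser.feed(page)
print(parser.text)  # the JS-inserted "Dynamic story" never shows up
```

A crawler like this indexes only "Static headline"; anything the script would have written into the page is invisible to it.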
Given that search engines have not implemented this yet, I don't think it will come in the near future.