Just running the jobs in the background does not scale: if you are fetching 10000 URLs, you probably only want a limited number (say 100) running in parallel at any time. GNU Parallel is made for exactly that:
seq 10000 | parallel -j100 wget https://www.example.com/page{}.html
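To see what the -j flag is doing for you, here is a minimal hand-rolled sketch of the same throttling in plain bash (assuming bash 4.3+ for wait -n); sleep stands in for the real wget call, and the function name and counts are illustrative, not part of GNU Parallel:

```shell
#!/usr/bin/env bash
# Sketch: cap concurrent background jobs by hand -- the bookkeeping
# that GNU Parallel's -j flag automates. 'sleep' stands in for wget.
throttled_run() {
  local max_jobs=3
  for i in $(seq 1 10); do
    # Block until a slot frees up before launching another job.
    while (( $(jobs -rp | wc -l) >= max_jobs )); do
      wait -n          # bash 4.3+: wait for any single job to exit
    done
    { sleep 0.1; echo "fetched page $i"; } &
  done
  wait                 # drain the remaining jobs
}
throttled_run
```

GNU Parallel handles all of this for you, plus output serialization, retries, and remote execution, which is why the one-liner above is preferable in practice.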
See the man page for more examples:
http://www.gnu.org/software/parallel/man.html#example__download_10_images_for_each_of_the_past_30_days