I am assuming you are using Hive on Spark, or Spark SQL with a Hive context. If that is the case, you should run ANALYZE TABLE in Hive.
ANALYZE TABLE <...> typically needs to run after the table is created, or whenever there are significant inserts or changes. You can do this at the end of your load step itself if it is an MR or Spark job.
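A minimal sketch of what that looks like in HiveQL (the database, table, and partition names here are placeholders, so substitute your own):

    -- Table/partition level statistics (row counts, data size, etc.)
    ANALYZE TABLE sales_db.orders PARTITION (ds='2016-01-01') COMPUTE STATISTICS;

    -- Column level statistics, which help the optimizer choose better join/filter plans
    ANALYZE TABLE sales_db.orders PARTITION (ds='2016-01-01') COMPUTE STATISTICS FOR COLUMNS;

If the table is not partitioned, just drop the PARTITION clause.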
At the time of analysis, if you are using Hive on Spark, please also use the configurations in the link below. You can set them at the session level for each query. I have used the parameters from this link https://cwiki.apache.org/confluence/display/Hive/Hive+on+Spark%3A+Getting+Started in production and they work fine.
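For reference, these are the kind of session-level settings that page covers; the property names come from the wiki, but the values below are only placeholders you would tune for your own cluster:

    set hive.execution.engine=spark;
    set spark.master=yarn-cluster;
    set spark.eventLog.enabled=true;
    -- the event log directory must already exist
    set spark.eventLog.dir=/tmp/spark-events;
    set spark.executor.memory=512m;
    set spark.serializer=org.apache.spark.serializer.KryoSerializer;

Run these in the same session (beeline or the Hive CLI) before the ANALYZE statement so the job picks them up.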