Incremental Learning using MaxSMT

Posted by 一笑奈何 on 2020-07-23 10:58:09

Question


Can we use the previous solution of a MaxSMT solver (optimize) in an incremental way in z3? Also, is there any way to print out the soft assertions on the optimizer?


Answer 1:


If you are asking whether it is technically possible to run either z3 or OptiMathSAT incrementally on a MaxSMT problem, the answer is yes (use the API).
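For instance, with z3's Python API, the Optimize object supports push/pop scopes for incremental use, and its sexpr() dump includes the assert-soft commands, which is one way to print the soft assertions. A minimal sketch (the variables, weights, and the 'goal' id are made up for illustration):

```python
from z3 import Int, Optimize, sat

x, y = Int('x'), Int('y')

opt = Optimize()
opt.add(x >= 0, y >= 0)                    # hard constraints
opt.add_soft(x <= 2, weight=1, id='goal')  # soft clauses sharing one id
opt.add_soft(y <= 2, weight=1, id='goal')  # form a single MaxSMT goal

opt.push()                                 # open an incremental scope
opt.add(x + y >= 10)                       # hard constraint for this call only
if opt.check() == sat:
    print(opt.model())
opt.pop()                                  # backtrack and solve a variant

if opt.check() == sat:
    print(opt.model())

print(opt.sexpr())                         # dump includes the assert-soft lines
```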

All soft clauses that share the same id at the moment a check-sat is performed are considered part of the same MaxSMT goal. In essence, the OMT solver collects the set of soft clauses belonging to a MaxSMT goal lazily. This holds for both z3 and OptiMathSAT.
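A hypothetical illustration of this lazy behavior with z3py: soft clauses added between two check-sat calls under the same id simply enlarge the goal seen by the next call.

```python
from z3 import Bool, Not, And, Optimize

a, b, c = Bool('a'), Bool('b'), Bool('c')

opt = Optimize()
opt.add(Not(And(a, b, c)))             # hard: not all three can hold
opt.add_soft(a, weight=1, id='goal')
opt.add_soft(b, weight=1, id='goal')
opt.check()                            # MaxSMT over {a, b}

opt.add_soft(c, weight=1, id='goal')
opt.check()                            # the same goal now ranges over {a, b, c}
```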

An optimal solution found at an early stage may be far from the optimal solutions found at later stages of the iterative process.

When dealing with a MaxSMT problem, the ability of an OMT solver to reuse learned clauses across incremental calls may depend on the optimization algorithm that is being used.

I see two possible cases:

  • If a core-based MaxSMT engine is used, exploring formulations of the problem at increasing levels of complexity may help identify a tractable subset of the original problem. Note, however, that lemmas involving soft constraints learned at previous iterations may not be useful at later stages (in fact, the OMT solver discards all of these clauses and re-computes them if necessary).

  • If a sat-based MaxSMT engine is used, the benefit of splitting the problem into smaller chunks is not clear to me, other than focusing the search on particular groups of (possibly related) soft clauses. The OMT solver could be given all soft constraints at once, together with a hard timeout, and it would still be able to yield a partial optimal solution when the alarm fires (see the sketch after this list). T-lemmas involving the cost function are not useful across incremental calls because the cost function changes: in the best case, the OMT solver discards them; in the worst case, they remain in the environment and clutter the search without affecting the solution.

    I admit that it is a bit hard to predict the performance of the OMT solver, because both approaches introduce overhead. On the one hand, there is the overhead of the incremental calls themselves and of restarting the optimization process from scratch multiple times. On the other hand, there is the overhead of performing BCP over a much larger set of soft clauses. I would guess that the balance tips in favor of the incremental approach for sufficiently large sets of soft clauses. [This would be an interesting subject of investigation; I would love to read a paper about it!]
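A sketch of the "all at once, with a timeout" approach in z3py (the problem below is made up for illustration; the timeout parameter is in milliseconds, and on timeout check() typically returns unknown rather than sat):

```python
from z3 import Int, Sum, Optimize

xs = [Int('x%d' % i) for i in range(200)]

opt = Optimize()
opt.add(Sum(xs) <= 50)                              # hard budget constraint
for i, x in enumerate(xs):
    opt.add(x >= 0)
    opt.add_soft(x >= 1, weight=i + 1, id='goal')   # all soft clauses at once

opt.set('timeout', 5000)    # hard timeout in milliseconds
result = opt.check()        # sat if finished in time, typically unknown otherwise
print(result)
print(opt.statistics())     # progress made before the alarm fired
```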



Source: https://stackoverflow.com/questions/59288428/incremental-learning-using-maxsmt
