Inference engines vs Decision trees [closed]

Submitted by 别来无恙 on 2019-12-05 05:13:43

Forward chaining inference engines support specifications in full first-order logic (translated to if-then rules), while decision trees can only narrow a set down to a specific subset. If you're using both to determine, say, what car a user wants, then in first-order logic you can say (CHR syntax; <=> replaces the LHS by the RHS):

user_likes_color(C), available_color(C) <=> car_color(C).

in addition to all the rules that determine the brand/type of car the user wants, and the inference engine will pick the color as well as the other attributes.
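To make the firing behavior of such a rule concrete, here is a minimal forward-chaining sketch in plain Python (a hypothetical toy engine, not real CHR): whenever both `user_likes_color(C)` and `available_color(C)` are present as facts, the rule replaces them with `car_color(C)`, as the `<=>` indicates.

```python
def forward_chain(facts):
    """Apply the color rule until no more matches are found.

    The rule user_likes_color(C), available_color(C) <=> car_color(C)
    removes both matched facts and asserts the conclusion.
    """
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for c in {arg for pred, arg in facts if pred == "user_likes_color"}:
            if ("available_color", c) in facts:
                # LHS facts are consumed, RHS fact is added
                facts.discard(("user_likes_color", c))
                facts.discard(("available_color", c))
                facts.add(("car_color", c))
                changed = True
                break
    return facts

result = forward_chain([
    ("user_likes_color", "red"),
    ("user_likes_color", "green"),
    ("available_color", "red"),
    ("available_color", "blue"),
])
# car_color("red") is derived; the unmatched "green" and "blue" facts remain
```

A real engine would use a Rete-style matcher rather than this brute-force loop, but the point stands: the rule about color composes automatically with whatever other rules determine the brand and type.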

With decision trees, you'd have to set up an extra tree for the color. That's fine as long as color doesn't interact with other properties, but once it does, you're stuck: you may have to replicate the entire tree for every color, and wherever a color conflicts with some other property you'd have to modify that copy of the tree as well.

(I admit color is a very stupid example, but I hope it gets the idea across.)

I must admit I have not used inference engines or decision trees in practice. In my view, you should use decision trees if you want to learn from a given training set and then predict outcomes. For example, suppose you have a data set recording whether you went out for a barbecue given the weather conditions (wind, temperature, rain, ...). From that data set you can build a decision tree. The nice thing about decision trees is that you can use pruning to avoid overfitting and therefore avoid modeling noise.
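In practice you would reach for a library such as scikit-learn for this, but as a stdlib-only sketch, a one-level decision tree (a decision stump) learned from a made-up barbecue data set might look like the following. The feature names, values, and labels are all hypothetical illustrations, not from any real data set.

```python
from collections import Counter

# Hypothetical training set: weather features -> went out for a barbecue?
data = [
    ({"rain": "yes", "wind": "strong", "temp": "cold"}, False),
    ({"rain": "yes", "wind": "weak",   "temp": "warm"}, False),
    ({"rain": "no",  "wind": "strong", "temp": "warm"}, True),
    ({"rain": "no",  "wind": "weak",   "temp": "warm"}, True),
    ({"rain": "no",  "wind": "weak",   "temp": "cold"}, True),
]

def train_stump(data):
    """Learn a one-level decision tree: pick the feature whose split
    best predicts the training labels by majority vote per value."""
    best = None
    for feature in data[0][0]:
        # collect the labels seen for each value of this feature
        by_value = {}
        for features, label in data:
            by_value.setdefault(features[feature], []).append(label)
        # predict the majority label for each value
        rule = {v: Counter(ls).most_common(1)[0][0] for v, ls in by_value.items()}
        correct = sum(rule[f[feature]] == label for f, label in data)
        if best is None or correct > best[2]:
            best = (feature, rule, correct)
    feature, rule, _ = best
    return lambda f: rule.get(f[feature])

predict = train_stump(data)
```

On this toy data the stump splits on `rain`, which already classifies the training set perfectly. A full decision-tree learner would recurse on each branch, and pruning would then cut back branches that only fit noise.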

I think inference engines are better than decision trees if you have specific rules that you can use for reasoning. Larsmans has already provided a good example.

I hope that helps.
