I am using a scikit extra trees classifier:
from sklearn.ensemble import ExtraTreesClassifier

model = ExtraTreesClassifier(n_estimators=10000, n_jobs=-1, random_state=0)
Once the model is fitted, how can I tell which features contributed most to a particular prediction?
The paper "Why Should I Trust You?": Explaining the Predictions of Any Classifier was submitted 9 days after this question, providing an algorithm for a general solution to this problem! :-)
In short, it is called LIME for "local interpretable model-agnostic explanations", and works by fitting a simpler, local model around the prediction(s) you want to understand.
What's more, they have made a Python implementation (https://github.com/marcotcr/lime) with pretty detailed examples of how to use it with sklearn. For instance, this one is on a two-class random forest problem on text data, and this one is on continuous and categorical features. All of them are linked from the README on GitHub.
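To give a rough idea of what using it looks like on tabular data like yours, here is a minimal sketch of my own (not one of their examples), assuming I remember the library's tabular API correctly. The iris dataset, the smaller n_estimators, the instance X[0], num_features=4 and the explainer settings are all illustrative choices, not anything from the question or the paper; the only thing LIME really needs from your side is a predict_proba-style function.

from sklearn.datasets import load_iris
from sklearn.ensemble import ExtraTreesClassifier
from lime.lime_tabular import LimeTabularExplainer

# Illustrative setup: iris instead of the asker's data, fewer trees for speed.
data = load_iris()
X, y = data.data, data.target
model = ExtraTreesClassifier(n_estimators=100, n_jobs=-1, random_state=0)
model.fit(X, y)

# LIME perturbs the chosen instance, queries the model on the perturbed
# samples, and fits a weighted linear model locally around that instance.
explainer = LimeTabularExplainer(
    X,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    discretize_continuous=True,
)

predicted = int(model.predict(X[:1])[0])      # explain the predicted class
explanation = explainer.explain_instance(
    X[0],                   # the single prediction to explain
    model.predict_proba,    # LIME only needs a probability function
    labels=[predicted],
    num_features=4,
)
print(explanation.as_list(label=predicted))   # (feature condition, local weight) pairs

The printed list is the local explanation: which feature conditions pushed this particular prediction up or down, according to the simple surrogate model fitted around that point.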
The authors had a very productive 2016 in this area, so if you like reading papers, here's a starter: