Mining Interpretable Rules from Classification Models

As data scientists, we come across classification problems all the time. Ensemble learning techniques like bagging and boosting typically give us very high classification performance, but such models are complex and hard to interpret. To make sure everything is working as expected, and to better understand the prediction logic and results, it becomes necessary… Read More »
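
The full post presumably walks through the rule-mining procedure itself; as a rough illustration of the idea only, here is a minimal sketch (my own, not necessarily the post's method) using scikit-learn: a complex random forest is approximated by a shallow surrogate decision tree, whose if/then rules can be read directly.

```python
# A minimal sketch (assumed approach, not the post's exact method):
# approximate an opaque ensemble with a shallow surrogate tree and
# print its human-readable decision rules.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X, y = data.data, data.target

# Accurate but hard-to-interpret ensemble
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Surrogate: a small tree trained to mimic the forest's predictions
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, forest.predict(X))

# If/then rules approximating the ensemble's behaviour
print(export_text(surrogate, feature_names=list(data.feature_names)))
```

The surrogate's rules only approximate the ensemble, so in practice one would also check how closely its predictions track the forest's before trusting the extracted rules.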