NO.PZ202212020200001502
Question:
Build a decision tree for this problem.
Options:
Explanation:
Both features are binary, so there is no need to determine a threshold as there would be for a continuous variable. The first step is to calculate the entropy that would result if the split were made on each of the two features.
Examining the Car_owner feature first: among owners (feature = 1), two made a claim while four did not, giving an entropy for this subset of −(2/6)log2(2/6) − (4/6)log2(4/6) ≈ 0.918.
Among non-car owners (feature = 0), two made a claim and two did not, giving an entropy of 1. The weighted entropy for splitting on car ownership is therefore (6/10) × 0.918 + (4/10) × 1 = 0.951,
and, since the entropy before any split (4 claims and 6 no-claims out of 10) is −(4/10)log2(4/10) − (6/10)log2(6/10) ≈ 0.971, the information gain is 0.971 − 0.951 = 0.020.
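The Car_owner calculation above can be checked with a short Python sketch (the claim counts come from the worked example; the `entropy` helper and variable names are my own):

```python
from math import log2

def entropy(counts):
    """Shannon entropy (in bits) of a list of class counts."""
    total = sum(counts)
    return -sum(c / total * log2(c / total) for c in counts if c > 0)

# Car_owner = 1: 2 claims, 4 no-claims; Car_owner = 0: 2 claims, 2 no-claims
h_owner = entropy([2, 4])                              # ≈ 0.918
h_non_owner = entropy([2, 2])                          # = 1.0
weighted = 6 / 10 * h_owner + 4 / 10 * h_non_owner     # ≈ 0.951

h_parent = entropy([4, 6])   # 4 claims, 6 no-claims overall ≈ 0.971
gain = h_parent - weighted   # ≈ 0.020
print(round(h_parent, 3), round(weighted, 3), round(gain, 3))
```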
We repeat this process, calculating the entropy that would result if the split were made on the College_degree feature. Doing so gives a weighted entropy of 0.551 and an information gain of 0.971 − 0.551 = 0.420. Because the information gain is maximized (equivalently, the weighted entropy is minimized) when the sample is first split by College_degree, this feature becomes the root node of the decision tree.
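The same check for College_degree can be sketched as follows (counts inferred from the explanation: all four degree holders made no claims, so the six non-holders account for the 4 claims and the remaining 2 no-claims):

```python
from math import log2

def entropy(counts):
    """Shannon entropy (in bits) of a list of class counts."""
    total = sum(counts)
    return -sum(c / total * log2(c / total) for c in counts if c > 0)

h_parent = entropy([4, 6])   # ≈ 0.971

# College_degree = 1: 0 claims, 4 no-claims -> pure subset, entropy 0
# College_degree = 0: 4 claims, 2 no-claims
weighted_degree = 4 / 10 * entropy([4]) + 6 / 10 * entropy([4, 2])  # ≈ 0.551
gain_degree = h_parent - weighted_degree                            # ≈ 0.420

# Compare with the Car_owner split from before: 0.420 > 0.020,
# so College_degree is chosen as the root node
weighted_car = 6 / 10 * entropy([2, 4]) + 4 / 10 * entropy([2, 2])  # ≈ 0.951
print(round(weighted_degree, 3), round(gain_degree, 3))
```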
For policyholders with a college degree (feature = 1), the split is already pure: all four of them made no claims (in other words, nobody with a college degree made a claim). This means no further splits are required along this branch. The other branch (feature = 0) is then split using the Car_owner feature, which is the only one remaining.
The tree structure is therefore: College_degree at the root; the degree = 1 branch ends in a "no claim" leaf, while the degree = 0 branch is split further by Car_owner.
(Student follow-up:) I read the answer but still don't understand it.