Insurance companies invest a lot of time and money into optimizing their pricing and accurately estimating the likelihood that customers will make a claim. In many countries, it is a legal requirement to have car insurance in order to drive a vehicle on public roads, so the market is very large!
Knowing all of this, On the Road car insurance has requested your services in building a model to predict whether a customer will make a claim on their insurance during the policy period. As they have very little expertise and infrastructure for deploying and monitoring machine learning models, they’ve asked you to use simple Logistic Regression, identifying the single feature that results in the best-performing model, as measured by accuracy.
They have supplied you with their customer data as a CSV file called `car_insurance.csv`, along with a table (below) detailing the column names and descriptions.
1 The dataset description
Column | Description |
---|---|
id | Unique client identifier |
age | Client's age |
gender | Client's gender |
driving_experience | Years the client has been driving |
education | Client's level of education |
income | Client's income level |
credit_score | Client's credit score (between zero and one) |
vehicle_ownership | Client's vehicle ownership status |
vehicle_year | Year of vehicle registration |
married | Client's marital status |
children | Client's number of children |
postal_code | Client's postal code |
annual_mileage | Number of miles driven by the client each year |
vehicle_type | Type of car |
speeding_violations | Total number of speeding violations received by the client |
duis | Number of times the client has been caught driving under the influence of alcohol |
past_accidents | Total number of previous accidents the client has been involved in |
outcome | Whether the client made a claim on their car insurance (response variable) |
2 Objectives
The Head of Data at On the Road car insurance has asked for your support as they venture into the world of machine learning! They would like you to start by investigating their customer data and cleaning it in preparation for modeling. Once that is complete, they would like you to tell them which feature produces the best accuracy for predicting whether a customer will make a car insurance claim. Specifically, they have set the following tasks:
- Investigate and clean the data so that there are no missing values, and remove the “id” column.
- Find the feature with the best predictive performance for a car insurance claim (“outcome”) by creating simple Logistic Regression models (each with a single feature) and assessing their accuracy.
- Create a data frame called best_feature_df, containing columns named “best_feature” and “best_accuracy” with the name of the feature with the highest accuracy, and the respective accuracy score.
3 Libraries used
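Based on the functions called throughout the analysis (`read_csv()`, the `%>%` pipe, `glue()`, `tibble()`, `inner_join()`), the following packages are assumed to be loaded:

```r
library(readr)  # read_csv() for loading the data
library(dplyr)  # %>%, select(), mutate(), inner_join(), arrange()
library(tibble) # tibble() for the score and correlation tables
library(glue)   # glue() for building one formula string per feature
```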
4 Cleaning the data
- Reading and inspecting the data: running `summary()` over the missing-value flags (`is.na()`) indicates that `credit_score` and `annual_mileage` have null values.
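A minimal sketch of this step, with the packages from the libraries section loaded and assuming `car_insurance.csv` sits in the working directory; the `id` column is dropped up front, per the brief, since a unique identifier carries no predictive signal:

```r
cars <- read_csv("car_insurance.csv") %>%
  select(-id)        # unique identifier, removed per the brief
summary(is.na(cars)) # flags the columns that contain missing values
```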
```
 age             gender          race            driving_experience
 Mode :logical   Mode :logical   Mode :logical   Mode :logical
 FALSE:10000     FALSE:10000     FALSE:10000     FALSE:10000

 education       income          credit_score    vehicle_ownership
 Mode :logical   Mode :logical   Mode :logical   Mode :logical
 FALSE:10000     FALSE:10000     FALSE:9018      FALSE:10000
                                 TRUE :982

 vehicle_year    married         children        postal_code
 Mode :logical   Mode :logical   Mode :logical   Mode :logical
 FALSE:10000     FALSE:10000     FALSE:10000     FALSE:10000

 annual_mileage  vehicle_type    speeding_violations  duis
 Mode :logical   Mode :logical   Mode :logical        Mode :logical
 FALSE:9043      FALSE:10000     FALSE:10000          FALSE:10000
 TRUE :957

 past_accidents  outcome
 Mode :logical   Mode :logical
 FALSE:10000     FALSE:10000
```
- Replacing missing data: replacing missing values with the mean of each column is preferable to deleting rows, since the two affected columns are each missing roughly 10% of their values (982 and 957 out of 10,000) and we can't afford to discard that much of the dataset.
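A sketch of the imputation, assuming `cars` from the previous step; the cleaned copy is named `clean_data`, which the modeling code below expects:

```r
clean_data <- cars %>%
  mutate(
    # Fill each gap with the mean of the observed values in that column
    credit_score   = if_else(is.na(credit_score),
                             mean(credit_score, na.rm = TRUE), credit_score),
    annual_mileage = if_else(is.na(annual_mileage),
                             mean(annual_mileage, na.rm = TRUE), annual_mileage)
  )
```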
5 Preparing the Logistic Regression models
- Extracting important values (the vector of feature names and the `outcome` column) to avoid long lines of code, as sketched after this list.
- Using the `glue()` function to reference each column by its name (a string) when building the model formula, and to facilitate joining the accuracy score of each feature into the `features_scores` tibble.
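The extraction step might look like this sketch, assuming `clean_data` from the cleaning section; `features_scores` starts with one row per candidate feature and an empty score column:

```r
features <- setdiff(names(clean_data), "outcome")  # every candidate predictor
outcome  <- clean_data$outcome                     # response, extracted once
features_scores <- tibble(features, accuracy_score = NA_real_)
```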
```r
for (col in features) {
  # One single-feature logistic regression per candidate column
  model <- glm(as.formula(glue("outcome ~ {col}")),
               data = clean_data, family = "binomial")
  # Round fitted probabilities to hard 0/1 class predictions
  predictions <- round(fitted(model))
  # Accuracy: proportion of predictions matching the observed outcome
  accuracy_score <- length(which(predictions == outcome)) / length(outcome)
  # Store the score in the row for this feature
  features_scores[which(features_scores$features == col),
                  "accuracy_score"] <- accuracy_score
}
```
- The results are in.
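A quick way to eyeball the filled-in scores, assuming the loop above has run:

```r
features_scores %>% arrange(desc(accuracy_score))
```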
6 Checking the results and extracting them
- The feature with the best predictive performance for a car insurance claim is `driving_experience`, with an accuracy score of 0.7771.
```r
# Pick the feature whose model achieved the highest accuracy
best_feature  <- features_scores$features[which.max(features_scores$accuracy_score)]
best_accuracy <- max(features_scores$accuracy_score)
best_feature_df <- data.frame(best_feature, best_accuracy)
best_feature_df
```
An initial test to discover which features are more influential is to compute the correlation of each one with the `outcome` column (we will compare it with the results from the models). It reveals that `driving_experience`, `age`, and `income` are negatively correlated with the outcome, so higher values are associated with not making a claim, while `annual_mileage` and, surprisingly, `gender` are positively correlated with it, so higher values are associated with a claim being made.
```r
# Correlation of every feature with the response; cor() returns a
# one-column matrix here, so flatten it to a plain numeric vector
cor_table <- tibble(features,
                    cor = as.numeric(cor(subset(clean_data, select = -outcome),
                                         outcome)))
head(cor_table)
```
Comparing the results shows that although correlation correctly flags the first five features as the ones with the higher accuracy scores, it is not a reliable guide for ranking the remaining features.
```r
# Join correlations with accuracy scores and rank by accuracy
full_table <- cor_table %>%
  inner_join(features_scores, by = "features") %>%
  arrange(desc(accuracy_score))
full_table
```