Assignment Task

Goal

This assignment aims to build experiences for students to clean the dataset, split the data into training and test sets, train usable predictive models, and explain the outputs. A small part of the discovery and research component is included in the assignment to expand the students’ skill set.

Task

The dataset contains messy values because it was collected from the real world. Your tasks are to clean the data and create the predictive models according to the instructions, answering the questions listed below. The source file is “data_2024.csv”. The report should be prepared using the template and should answer the questions. A table of contents is not required.

Data Cleaning

You must follow the instructions to clean and split the given data set into training and test sets. Remember, a well-prepared split is the foundation for model training and testing. It is estimated that you will need around 30 nodes for data cleaning and partitioning before sending the partitioned data into the predictive models. Suggested nodes include “File Reader,” “Column Filter,” “Rule-based Row Filter,” “String Manipulation,” “Math Formula,” “Math Formula (Multi Column),” “Rule Engine,” “Missing Value,” “Shuffle,” “Numerical Binner,” “Feature Selection Loop Start (1:1),” and “Partitioning.” You may see a warning sign on the “Missing Value” node stating, “The current settings use missing value handling methods that cannot be represented in PMML 4.2.” This is normal; you can ignore it because we are not using PMML in the assignment.
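
The workflow itself must be built from the KNIME nodes listed above. Purely as a hedged illustration of the overall shape of the flow (read the file, clean the data, shuffle with the fixed seed, then take the first 80% of rows as the training set), a minimal pandas sketch is shown below. The file name, the seed 9214, and the 80/20 linear split come from the assignment; the Python code itself is only an analogue and is not part of the required workflow.

```python
# Illustrative pandas analogue of the KNIME flow (File Reader -> cleaning ->
# Shuffle -> Partitioning with linear sampling). Not a substitute for the nodes.
import pandas as pd

SEED = 9214  # fixed random seed required by the assignment

df = pd.read_csv("data_2024.csv")

# ... data cleaning steps go here (see the Questions section) ...

# Shuffle with the fixed seed, then take the first 80% of rows as the training
# set and the remaining 20% as the test set (linear sampling after shuffling).
df = df.sample(frac=1.0, random_state=SEED).reset_index(drop=True)
cut = int(len(df) * 0.8)
train_df, test_df = df.iloc[:cut], df.iloc[cut:]
print(train_df.shape, test_df.shape)
```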

Naïve Bayes Model

After partitioning the cleaned data into training and test sets, build a Naïve Bayes classifier to predict “Credit_Score.”
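
As a rough illustration only, the sketch below shows an equivalent training and scoring step with scikit-learn’s GaussianNB. It assumes the train_df/test_df splits from the earlier sketch and that missing values have already been handled; note that GaussianNB covers numeric attributes only, whereas KNIME’s Naïve Bayes Learner also models nominal attributes, so the two are not equivalent.

```python
# Rough scikit-learn analogue of the Naive Bayes training/scoring step.
# Assumes train_df/test_df from the earlier sketch and no remaining missing values.
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import confusion_matrix, accuracy_score

# Use only the numeric predictors; KNIME's learner would also use nominal ones.
X_train = train_df.drop(columns=["Credit_Score"]).select_dtypes("number")
X_test = test_df[X_train.columns]
y_train, y_test = train_df["Credit_Score"], test_df["Credit_Score"]

nb = GaussianNB()
nb.fit(X_train, y_train)
pred = nb.predict(X_test)
print(confusion_matrix(y_test, pred))
print("Accuracy:", accuracy_score(y_test, pred))
```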

Random Forest Model

After partitioning the cleaned data into training and test sets, build a random forest classifier to predict “Credit_Score.”
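
Again purely as an illustration, a scikit-learn sketch of this step is given below. scikit-learn’s RandomForestClassifier does not offer the information gain ratio criterion used in KNIME, so “entropy” is used here as the nearest built-in option; the fixed seed 9214 comes from the assignment, and the X_train/X_test/y_train/y_test arrays are assumed to be those from the Naïve Bayes sketch.

```python
# Rough scikit-learn analogue of the random forest step. KNIME's information
# gain ratio criterion is not available here; "entropy" is the closest option.
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report

rf = RandomForestClassifier(criterion="entropy", random_state=9214)
rf.fit(X_train, y_train)  # X_train/y_train from the Naive Bayes sketch
print(classification_report(y_test, rf.predict(X_test)))
```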

Questions

  1. Follow the instructions to clean the data and answer the questions. If any node used in the workflow has a random seed, set the seed to 9214 to fix the random state.
  2. Our goal is to predict the credit score from the given data. There is/are one (or multiple) attribute(s) which is/are significantly irrelevant to this goal. Pick the most irrelevant attribute and give a persuasive rationale for excluding it. The excluded attribute(s) is ______, and the reason for removing it is ______.
  3. After removing the selected attribute(s), let’s start to remove tuples containing missing values. Remove tuples only if any of the attributes listed below have missing values: “Month,” “Age,”
  4. Check the “Age” attribute and eliminate any symbols that are not numbers so that the data is restored to the usual number format. Moreover, drop the tuples whose “Age” value is lower than or equal to 0 or greater than 120. List the node(s) (in sequence) and the corresponding command(s) used in this process. (An illustrative pandas sketch of this and several of the later cleaning steps is given after this question list.)
  5. Remove the non-numerical symbol in the “Annual_Income” column and convert it to the double format. List the node(s) (in sequence) and the corresponding command(s) used in this process.
  6. Convert the ______ value in the “Occupation” attribute to Null. Please note that Null is different from an empty string. Remove the non-numerical symbol in “Num_of_Loan” and convert it to the integer data type. Take absolute values of the attributes “Num_Bank_Accounts” and “Num_Credit_Card.” Set values to 0 for the “Num_of_Loan” attribute if the original values are negative. Remove the non-numerical symbol in “Num_of_Delayed_payment” and convert it into the integer format. Set the “Credit_Mix” value to “Unknow” if the original value is “_”. Remove the non-numerical symbol in “Outstanding_Debt” and convert it into the double format. List the node(s) (in sequence) and the corresponding command(s) used in this process.
  7. Convert the “Credit_History_Age” to a count of months and store it in the integer format. For example, if the original value from a tuple is “22 Years and 1 Months”, the value will be 265 after the conversion (22 * 12 + 1 = 265). Store the converted result in a new attribute called “Total_CHA.” List the node(s) (in sequence) and the corresponding command(s) used in this process.
  8. Remove the non-numerical symbol in the “Amount_invested_monthly” attribute and convert it to the double format. Set the value to “Unknow” if the original value in the “Payment_Behaviour” attribute starts with ______. Remove the non-numerical symbol in “Monthly_Balance” and convert it to the double format. Convert “Changed_Credit_Limit” into the double format. List the node(s) (in sequence) and the corresponding command(s) used in this process.
  9. Use the “Missing Value” node with “Next Value*” to replace missing values in all string-type attributes, and “Previous Value*” in the same node to replace missing values in all numerical attributes. If the value of “Monthly_Balance” is negative, replace the value with 0. You can ignore the warning shown by the “Missing Value” node stating, “The current settings use missing value handling methods that cannot be represented in PMML 4.2.” We are not using PMML in this unit. Screenshot the pop-up window with the correct settings.
  10. Simplify the “Type_of_Loan” attribute. If the original content has more than one type separated by a comma, keep only the first part. Otherwise, keep the full description if there is no comma included. For example, “Auto Loan, Credit-Builder Loan, Personal Loan, and Home Equity Loan” will become “Auto Loan”, “Credit-Builder Loan” will still be “Credit-Builder Loan”, and “Not Specified, Auto Loan, and Student Loan” will become “Not Specified” after the process. List the node(s) (in sequence) and the corresponding command(s) used in this process.
  11. Bin the “Changed_Credit_Limit” attribute into six bins with the ranges [−∞, −2.0), [−2.0, 0), [0, 4.0), [4.0, 6.0), [6.0, 7.5), and [7.5, ∞), and put the result into a new attribute called “Changed_Credit_Limit_binned”. Screenshot the pop-up window with the correct settings of your binner.
  12. Remove all temporarily created or useless attributes. Use the “Feature Selection Loop Start (1:1)” node to select the features. The class label should be excluded from the features in the feature selection node. The Genetic Algorithm is specified as the feature selection strategy with the default population size and maximum number of generations. Again, 9214 should be used as the static random seed. After selecting features, shuffle the data with seed 9214. The data should be partitioned by “Linear sampling”, with 80% of the data in the training set and 20% in the test set. How many tuples and attributes (excluding the class label) are in the training set at the end?
  13. Build a Naïve Bayes classifier using the training and test sets created in the previous task. Answer the following questions after completing the model training and test.
  14. Give a screenshot of the Naïve Bayes classifier in the KNIME workflow. You can take the screenshot starting from the partitioning node output to the scorer at the end of the Naïve Bayes classifier part.
  15. The default probability should be 0.0001, the minimum standard deviation should be 0.0001, the threshold standard deviation should be 0, and the maximum number of unique nominal values per attribute should be set to 600 in the classifier. Screenshot the setting dialogue of your Naïve Bayes Learner.
  16. Screenshot the confusion matrix and the Accuracy statistics of the test result. If the bank wants to minimise the risk of lending money to customers, the “Good” in “Credit_Score” should be the major target. Based on the current result, does the classifier perform satisfactorily?
  17. Which measurement should we look at to support your conclusion in this case?
  18. Build a random forest classifier using the training and test sets created in the previous task. Answer the following questions after completing the model training and test. Use the information gain ratio as the split criterion and 9214 as the static random seed to build the random forest model.
  19. Give a screenshot of the random forest classifier in the KNIME workflow. You can take the screenshot starting from the partitioning node output to the scorer at the end of the random forest classifier part.
  20. Screenshot the confusion matrix and the Accuracy statistics of the test result.
  21. If the bank wants to minimise the risk of lending money to customers, the “Good” in “Credit_Score” should be the major target. Compare the measurements between the random forest results and the Naïve Bayes results. Which model presents a more suitable result? Which measure should be used to make the comparison?
  22. On which class does the built random forest model perform best? What measurement(s) should we look at to find the answer?
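
As referenced in Question 4, the sketch below illustrates several of the cleaning steps (Questions 4, 5, 7, 10, and 11) in pandas. It is only a hedged analogue of the required KNIME nodes (“String Manipulation,” “Rule-based Row Filter,” “Rule Engine,” and the binner); the column names are taken from the questions, and everything else is an assumption.

```python
# Illustrative pandas versions of some cleaning steps; the assignment itself
# must be done with KNIME nodes.
import re
import numpy as np
import pandas as pd

df = pd.read_csv("data_2024.csv")

# Q4: strip non-numeric symbols from "Age", then keep 0 < Age <= 120.
df["Age"] = pd.to_numeric(
    df["Age"].astype(str).str.replace(r"[^0-9-]", "", regex=True), errors="coerce")
df = df[(df["Age"] > 0) & (df["Age"] <= 120)]

# Q5: strip non-numeric symbols from "Annual_Income" and convert to double.
df["Annual_Income"] = pd.to_numeric(
    df["Annual_Income"].astype(str).str.replace(r"[^0-9.]", "", regex=True),
    errors="coerce")

# Q7: convert "Credit_History_Age" (e.g. "22 Years and 1 Months") to a month
# count stored in "Total_CHA" (22 * 12 + 1 = 265).
def to_months(text):
    m = re.search(r"(\d+)\s*Years?\s*and\s*(\d+)\s*Months?", str(text))
    return int(m.group(1)) * 12 + int(m.group(2)) if m else None

df["Total_CHA"] = pd.to_numeric(df["Credit_History_Age"].apply(to_months)).astype("Int64")

# Q10: keep only the part of "Type_of_Loan" before the first comma.
df["Type_of_Loan"] = df["Type_of_Loan"].astype(str).str.split(",").str[0].str.strip()

# Q11: bin "Changed_Credit_Limit" into the six required left-closed ranges.
edges = [-np.inf, -2.0, 0.0, 4.0, 6.0, 7.5, np.inf]
df["Changed_Credit_Limit_binned"] = pd.cut(
    pd.to_numeric(df["Changed_Credit_Limit"], errors="coerce"),
    bins=edges, right=False)
```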