
Evaluating & Comparing Algorithms

Now that we've created a new model with GBTRegressor, it's time to compare it against our RandomForestRegressor baseline. To do this, we will compare the predictions of both models to the actual data and calculate RMSE and R^2.
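The prediction DataFrames used below, gbt_predictions and rfr_predictions, are assumed to have been created earlier by fitting each model and transforming a held-out test set. A minimal sketch of that step, assuming hypothetical DataFrames train_df and test_df that already contain a 'features' vector column:

# Assumed setup: train_df and test_df are hypothetical DataFrames with a
# 'features' vector column and the SALESCLOSEPRICE label.
from pyspark.ml.regression import GBTRegressor, RandomForestRegressor

gbt = GBTRegressor(featuresCol='features',
                   labelCol='SALESCLOSEPRICE',
                   predictionCol='Prediction_Price')
rfr = RandomForestRegressor(featuresCol='features',
                            labelCol='SALESCLOSEPRICE',
                            predictionCol='Prediction_Price')

# Fit on training data, then generate predictions on the test data
gbt_predictions = gbt.fit(train_df).transform(test_df)
rfr_predictions = rfr.fit(train_df).transform(test_df)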

This exercise is part of the course

Feature Engineering with PySpark


Exercise instructions

  • Import RegressionEvaluator from pyspark.ml.evaluation so it is available for use later.
  • Initialize RegressionEvaluator by setting labelCol to our actual data, SALESCLOSEPRICE, and predictionCol to our predicted data, Prediction_Price.
  • To calculate our metrics, call evaluate on evaluator with the prediction values preds and a dictionary with key evaluator.metricName and value rmse; do the same for the r2 metric.

Hands-on interactive exercise

Try this exercise yourself with the completed sample code below.

from pyspark.ml.evaluation import RegressionEvaluator

# Select columns to compute test error
evaluator = RegressionEvaluator(labelCol='SALESCLOSEPRICE',
                                predictionCol='Prediction_Price')

# Dictionary of model predictions to loop over
models = {'Gradient Boosted Trees': gbt_predictions,
          'Random Forest Regression': rfr_predictions}

for key, preds in models.items():
    # Create evaluation metrics, overriding the evaluator's metric per call
    rmse = evaluator.evaluate(preds, {evaluator.metricName: 'rmse'})
    r2 = evaluator.evaluate(preds, {evaluator.metricName: 'r2'})

    # Print model metrics
    print(key + ' RMSE: ' + str(rmse))
    print(key + ' R^2: ' + str(r2))
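
Passing {evaluator.metricName: 'rmse'} or {evaluator.metricName: 'r2'} to evaluate overrides the evaluator's metric for that single call, so one RegressionEvaluator instance can report both metrics inside the loop. When comparing the two models, the one with the lower RMSE and higher R^2 fits the held-out data better.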