
Evaluating & Comparing Algorithms

Now that we've created a new model with GBTRegressor, it's time to compare it against our baseline, RandomForestRegressor. To do this, we will compare the predictions of both models against the actual data and calculate RMSE and R^2.
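To make these metrics concrete, here is a minimal sketch of how RMSE and R^2 could be computed by hand on a predictions DataFrame, assuming it holds the actual column SALESCLOSEPRICE and the predicted column Prediction_Price (the function name manual_metrics is illustrative; the RegressionEvaluator used below does this work for us):

from pyspark.sql import functions as F

def manual_metrics(preds):
    # Residual for each row: actual minus predicted
    resid = F.col('SALESCLOSEPRICE') - F.col('Prediction_Price')
    # Mean of the actual sale prices, needed for the total sum of squares
    mean_actual = preds.agg(F.avg('SALESCLOSEPRICE')).first()[0]
    row = preds.agg(
        F.sqrt(F.avg(resid ** 2)).alias('rmse'),                              # root mean squared error
        F.sum(resid ** 2).alias('ss_res'),                                    # residual sum of squares
        F.sum((F.col('SALESCLOSEPRICE') - mean_actual) ** 2).alias('ss_tot')  # total sum of squares
    ).first()
    # R^2 = 1 - SS_res / SS_tot
    return row['rmse'], 1.0 - row['ss_res'] / row['ss_tot']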

This exercise is part of the course

Feature Engineering with PySpark

Exercise instructions

  • Import RegressionEvaluator from pyspark.ml.evaluation so it is available for use later.
  • Initialize RegressionEvaluator, setting labelCol to the actual data column, SALESCLOSEPRICE, and predictionCol to the predicted data column, Prediction_Price.
  • To calculate each metric, call evaluate on evaluator, passing the prediction values preds and a dictionary with key evaluator.metricName and value rmse; do the same for the r2 metric.

Hands-on interactive exercise

Try this exercise by completing this sample code.

from ____ import ____

# Select columns to compute test error
evaluator = ____(____=____,
                 ____=____)
# Dictionary of model predictions to loop over
models = {'Gradient Boosted Trees': gbt_predictions, 'Random Forest Regression': rfr_predictions}
for key, preds in models.items():
  # Create evaluation metrics
  rmse = evaluator.____(____, {____: ____})
  r2 = evaluator.____(____, {____: ____})
  
  # Print Model Metrics
  print(key + ' RMSE: ' + str(rmse))
  print(key + ' R^2: ' + str(r2))
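
For reference, one possible completion of the scaffold above, assuming gbt_predictions and rfr_predictions are test-set prediction DataFrames containing the SALESCLOSEPRICE and Prediction_Price columns:

from pyspark.ml.evaluation import RegressionEvaluator

# Select columns to compute test error
evaluator = RegressionEvaluator(labelCol='SALESCLOSEPRICE',
                                predictionCol='Prediction_Price')

# Dictionary of model predictions to loop over
models = {'Gradient Boosted Trees': gbt_predictions, 'Random Forest Regression': rfr_predictions}
for key, preds in models.items():
  # Create evaluation metrics by switching the evaluator's metricName parameter
  rmse = evaluator.evaluate(preds, {evaluator.metricName: 'rmse'})
  r2 = evaluator.evaluate(preds, {evaluator.metricName: 'r2'})

  # Print Model Metrics
  print(key + ' RMSE: ' + str(rmse))
  print(key + ' R^2: ' + str(r2))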