Sunday, April 28, 2019

pyspark random forest


Tree Methods Consulting Project - SOLUTION

You've been hired by a dog food company to try to predict why some batches of their dog food are spoiling much quicker than intended! Unfortunately, this dog food company hasn't upgraded to the latest machinery, meaning that the amounts of the five chemicals they use can vary a lot. But which chemical has the strongest effect? The company first mixes up a batch of preservative containing four preservative chemicals (A, B, C, D), and the batch is then completed with a "filler" chemical. The food scientists believe one of the A, B, C, or D preservatives is causing the problem, but they need your help to figure out which one! Use Machine Learning with a random forest (RF) to find out which feature has the most predictive power, thus finding out which chemical causes the early spoiling! So create a model and then figure out how you can decide which chemical is the problem!

  • Pres_A : Percentage of preservative A in the mix
  • Pres_B : Percentage of preservative B in the mix
  • Pres_C : Percentage of preservative C in the mix
  • Pres_D : Percentage of preservative D in the mix
  • Spoiled: Label indicating whether or not the dog food batch was spoiled.

Think carefully about what this problem is really asking you to solve. While we will use Machine Learning to solve this, it won't be with your typical train/test split workflow. If this confuses you, skip ahead to the solution code-along walkthrough!


In [46]:
# Tree Methods Example
from pyspark.sql import SparkSession
spark = SparkSession.builder.appName('dogfood').getOrCreate()
In [47]:
# Load training data
data = spark.read.csv('dog_food.csv', inferSchema=True, header=True)
In [48]:
data.printSchema()
root
 |-- A: integer (nullable = true)
 |-- B: integer (nullable = true)
 |-- C: double (nullable = true)
 |-- D: integer (nullable = true)
 |-- Spoiled: double (nullable = true)

In [49]:
data.head()
Out[49]:
Row(A=4, B=2, C=12.0, D=3, Spoiled=1.0)
In [50]:
data.describe().show()
+-------+------------------+------------------+------------------+------------------+-------------------+
|summary|                 A|                 B|                 C|                 D|            Spoiled|
+-------+------------------+------------------+------------------+------------------+-------------------+
|  count|               490|               490|               490|               490|                490|
|   mean|  5.53469387755102| 5.504081632653061| 9.126530612244897| 5.579591836734694| 0.2857142857142857|
| stddev|2.9515204234399057|2.8537966089662063|2.0555451971054275|2.8548369309982857|0.45221563164613465|
|    min|                 1|                 1|               5.0|                 1|                0.0|
|    max|                10|                10|              14.0|                10|                1.0|
+-------+------------------+------------------+------------------+------------------+-------------------+
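Before fitting any model, a quick optional sanity check (not in the original notebook) is to look at the Pearson correlation between each preservative percentage and the label; DataFrame.stat.corr gives this directly:

# Optional sanity check: correlation of each preservative with Spoiled
for c in ['A', 'B', 'C', 'D']:
    print(c, data.stat.corr(c, 'Spoiled'))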

In [51]:
# Import VectorAssembler and Vectors
from pyspark.ml.linalg import Vectors
from pyspark.ml.feature import VectorAssembler
In [52]:
data.columns
Out[52]:
['A', 'B', 'C', 'D', 'Spoiled']
In [53]:
assembler = VectorAssembler(inputCols=['A', 'B', 'C', 'D'], outputCol="features")
In [54]:
output = assembler.transform(data)
In [55]:
from pyspark.ml.classification import RandomForestClassifier, DecisionTreeClassifier
In [56]:
# Note: despite the 'rfc' name, this solution fits a single decision tree;
# the RandomForestClassifier imported above exposes the same featureImportances
rfc = DecisionTreeClassifier(labelCol='Spoiled', featuresCol='features')
In [57]:
output.printSchema()
root
 |-- A: integer (nullable = true)
 |-- B: integer (nullable = true)
 |-- C: double (nullable = true)
 |-- D: integer (nullable = true)
 |-- Spoiled: double (nullable = true)
 |-- features: vector (nullable = true)

In [58]:
final_data = output.select('features','Spoiled')
final_data.head()
Out[58]:
Row(features=DenseVector([4.0, 2.0, 12.0, 3.0]), Spoiled=1.0)
In [59]:
rfc_model = rfc.fit(final_data)
In [60]:
rfc_model.featureImportances
Out[60]:
SparseVector(4, {0: 0.0026, 1: 0.0089, 2: 0.9686, 3: 0.0199})
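The importances are indexed in the assembler's input order, so a small helper loop (an addition, not part of the original solution) prints each column name next to its score:

# Pair each importance with its column name; the order matches
# the assembler's inputCols list ['A', 'B', 'C', 'D']
for name, score in zip(assembler.getInputCols(), rfc_model.featureImportances.toArray()):
    print(name, round(float(score), 4))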

Bingo! The feature at index 2 (Chemical C) is by far the most important feature, meaning it is causing the early spoilage! This is a pretty interesting alternative use of a machine learning model!
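Since the post title mentions random forest, the RandomForestClassifier imported earlier can be fit the same way as a cross-check; this is just a sketch, and the exact importance values will vary a bit from run to run and from the single tree above:

# Cross-check with an actual random forest (importances will vary slightly)
rfc2 = RandomForestClassifier(labelCol='Spoiled', featuresCol='features', numTrees=100)
rfc2_model = rfc2.fit(final_data)
print(rfc2_model.featureImportances)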

Great Job
