
Sugarcane Yield Grade Prediction using
Random Forest with Forward Feature Selection
and Hyper-parameter Tuning

Phusanisa Charoen-Ung and Pradit Mittrapiyanuruk

Department of Computer Science, Faculty of Sciences,
Srinakharinwirot University, Bangkok, Thailand
Phusanisa.cha@g.swu.ac.th, praditm@g.swu.ac.th

Abstract. This paper presents a Random Forest (RF) based method for predicting
the sugarcane yield grade of a farmer plot. The dataset used in this work is
obtained from a set of sugarcane plots around a sugar mill in Thailand; the
training dataset and the test dataset contain 8,765 and 3,756 records,
respectively. We propose forward feature selection in conjunction with
hyper-parameter tuning for training the Random Forest classifier. The accuracy
of our method is 71.88%. We compare our method with two non-machine-learning
baselines: the first uses the actual yield of the last year as the prediction,
and the second uses the target yield of each plot as manually predicted by a
human expert. The accuracies of these baselines are 51.52% and 65.50%,
respectively. These results indicate that our proposed method can aid the
decision making of sugar mill operation planning.

Keywords: yield grade prediction, machine learning, random forest, forward
feature selection, hyper-parameter tuning.

1 Introduction

For each harvest season, sugar mills need an estimate of the sugarcane yield of
each plot for the purpose of operation planning. In each production year, the
field-surveying staff of the sugar mill collect data on each plot.
Conventionally, the field experts of the mill use their experience to estimate
the sugarcane yield of each plot based on the survey data and the historical
yield profiles of the plot and the farmer. The main disadvantage of this
human-based yield estimation is the large discrepancy between estimated yields
and actual yields.

In this work, we propose a machine learning based method for sugarcane yield
grade prediction. The yield grade of each plot is one of low-yield-volume,
medium-yield-volume, or high-yield-volume, and our goal is to predict the yield
grade of a plot. The features used in the prediction are the plot
characteristics, the sugarcane characteristics, the plot cultivation scheme, and
the rain volume.

Our proposed method is based on the Random Forest (RF) technique [7]. In the
training phase, we propose an RF model training procedure that uses forward
feature selection [8] in conjunction with hyper-parameter tuning. We implement
our method in Python (Anaconda distribution); in particular, we use the RF
implementation of Scikit-Learn [1]. The contribution of this work is a machine
learning based sugarcane yield grade prediction method for data obtained from
sugarcane plots around a sugar mill in Thailand. Our method outperforms the
predictions provided by a human expert.

The remainder of the paper is organized as follows. Related work is reviewed in
Section 2. In Section 3, we describe the data used in the prediction as well as
the details of our proposed method. We then report the prediction accuracy and
discuss the results in Section 4. Finally, conclusions are drawn in Section 5.

2 Related Work
Sugarcane yield prediction has been studied in [2], [3], [4], and [5]; these
works are directly related to ours. In [2], the authors propose a sugarcane
yield prediction method using a random forest algorithm. The features used in
that work include: (i) a biomass index computed by the Agricultural Production
Systems sIMulator (APSIM), (ii) the yields of the two previous years, (iii)
cumulative rainfall, radiation, and daily temperature ranges over two different
time periods, (iv) the Southern Oscillation Index (SOI), and (v) the 3-month
running average sea surface temperature in the Niño 3.4 region.

In [3], the authors propose a method to predict the sugar content of sugarcane
harvested in 2011-2012, where each observation refers to one block of a farm.
The authors use 53 features belonging to four groups: (i) soil physics and soil
chemistry, (ii) weather, (iii) agricultural practices, and (iv) crop-related
information. Three machine learning techniques are used in the prediction:
Support Vector Regression, Random Forest, and Regression Trees. The RReliefF
algorithm [6] is used for feature selection. The authors report that their best
method achieves a Mean Absolute Error of 2.02 kg/Mg, and that it predicts 90% of
the observations within a precision of 5.40 kg/Mg.

In [4], the authors study the effects of hyper-parameter tuning, feature
engineering, and feature selection on sugarcane yield prediction techniques. For
feature engineering, a set of features derived from the original ones is
calculated and used in the prediction, e.g., the fertilization rate of each
nutrient (N, P, and K) and the weather description over four different periods.
Feature selection is performed with the RReliefF algorithm [6]. The machine
learning techniques used in the study include Support Vector Machine (SVM),
Random Forest (RF), Regression Tree (RT), Neural Network (NN), and Boosted
Regression Trees (BRT). Hyper-parameter tuning is accomplished by grid search
with 10-fold cross-validation, using the Mean Absolute Error as the metric. The
authors evaluate 66 combinations of machine learning technique, tuning, feature
selection, and feature engineering, and report that BRT, SVM, and RF outperform
the others, with RF performing best. In [5], the authors propose a method to
solve the harvest scheduling problem for a group of sugarcane growers that
supply a mill in Thailand. A neural network is applied to predict the sugarcane
yields used in the harvest scheduling. The features include crop class, farming
skill, cultivar, soil type, the use of irrigation, the age of the cane in days
from cultivation to harvest, the average daily minimum/maximum temperature, the
average daily rainfall (mm), and the accumulated daily rainfall since
germination (mm).

With regard to feature selection for crop yield prediction, we note the works in
[8] and [9]. In [8], the authors propose a forward feature selection method for
wheat yield prediction, recast as a regression problem, using two predictive
methods, namely Regression Tree and Support Vector Regression. Inspired by this
work, we adopt their idea in our proposed method presented in Section 3. In [9],
the authors evaluate several common predictive modeling techniques applied to
crop yield prediction, using a method to define the best feature subset for each
model; the techniques studied include multiple linear regression, stepwise
linear regression, M5' regression trees, and artificial neural networks (ANN).

3 Data and Method

3.1 Data

Data collection process.


The sugarcane production data used in this work are provided by a sugar mill in
Thailand. The main data sources are collected at the farmer plots (e.g., cane
class/type, soil type, area, fertilizer) and at the mill (e.g., actual yield)
when the farmers deliver their sugarcane.

Two application programs are used in the data collection, both using MS-SQL
Server as the database backend. The first, referred to as "GIS-System", records
the data collected from the farmer plots; these data are associated with the
GPS-based spatial information of the plot locations. The second, referred to as
"Cane-Accounting", records the actual yield data of each plot at the time the
farmers deliver their sugarcane to the mill.

The rainfall volume data are collected from about 100 mill-owned rain stations
located around the farmer plots. The rainfall volume of each plot is then taken
from the rain station closest to the plot, where the closest station is
determined using the GPS spatial data in the database mentioned earlier.
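
As an illustration, the nearest-station assignment can be sketched in Python as
follows (a minimal sketch under our own naming, assuming plain latitude/longitude
pairs; the mill's GIS software is not described at this level of detail):

import math

def haversine_km(p, q):
    # Great-circle distance in km between two (lat, lon) points in degrees.
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(a))

def nearest_station(plot, stations):
    # stations: dict mapping station id -> (lat, lon); returns the closest id.
    return min(stations, key=lambda s: haversine_km(plot, stations[s]))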
From the databases of both application programs, we write a SQL script to
aggregate the data and export the dataset to a CSV file used for the predictive
modeling (explained in the next section). Note that the variable "YieldGrade" is
the target of our prediction; it is derived from the actual yield amount of each
plot. In total, the dataset contains 12,520 records, derived from the production
data of year 2558/2559 (7,450 records) and year 2559/2560 (5,070 records) in the
Thai Buddhist calendar, i.e., 2015/2016 and 2016/2017. Finally, the yield grade
of each plot is assigned according to its yield amount, using criteria based on
crop cutting and set by the mill operation management, into three grades: (i)
Low-yield-volume (Grade = 1), (ii) Medium-yield-volume (Grade = 2), and (iii)
High-yield-volume (Grade = 3). Specifically, the yield grade is "Low" (Grade = 1)
when the sugarcane yield is below 7 tons/rai, "Medium" (Grade = 2) when the yield
is in the range of 7 to 12 tons/rai, and "High" (Grade = 3) when the yield is
above 12 tons/rai. The variables (features) of the dataset are summarized in
Table 1.
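
The grading rule can be stated directly in code (a minimal sketch; the function
name is ours):

def yield_grade(yield_tons_per_rai):
    # Grade thresholds in tons/rai, as defined by the mill management.
    if yield_tons_per_rai < 7:
        return 1  # Low-yield-volume
    elif yield_tons_per_rai <= 12:
        return 2  # Medium-yield-volume
    else:
        return 3  # High-yield-volume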

Table 1. List of variables in the dataset.

Name                 Type        Detail
Class_Cane           Category    Cane class; 3 types: 1st ratoon, 2nd ratoon, and 3rd ratoon cane
Type_Cane            Category    Cane variety; 4 types: LK92-11, K84-200, K99-72, and Khonkean3
WaterType            Category    Irrigation source; 3 types: Rain, Groundwater, and Natural canal
ActionWater          Category    Irrigation action; 2 types: Water pour and Rain
Epidemic             Category    Epidemic control method; 2 types: Pre-emergent and Herbicide
FertilizerType       Category    Fertilizer type; 2 types: Chemical and Organic
Fertilizer           Category    Fertilizer formula; 4 types: 46-0-0, 15-15-15, 16-16-16, and 25-7-7
TypeSoil             Category    Soil type: Loam, Silty clay, and Ferus soil
GrooveWide           Category    Groove width of the plot; 4 types: 120, 130, 140, and 150 cm
YieldOldGrade        Category    Yield grade of the plot from the previous season (Grade 1, 2, or 3)
TargetGrade          Category    Target yield grade provided by a human expert (Grade 1, 2, or 3)
TargetOldGrade       Category    Target yield grade from the previous season (Grade 1, 2, or 3)
FarmerContractGrade  Category    Farmer grade assigned by the financial department according to the
                                 farmer's profile of sugarcane yields in previous years and of
                                 financial debts with the mill (when financial aid is provided);
                                 3 types: A (Good), B (Fair), and C (Poor)
Area_Remain          Continuous  Actual plot area used for sugarcane cultivation (rai)
Distance             Continuous  Distance from the plot to the sugar mill (km)
Rain_Vol             Continuous  Rain volume in the area of the plot (mm)
ContractsArea        Continuous  Amount of sugarcane the farmer commits to deliver to the mill (tons/rai)
YieldGrade           Category    Yield grade (Grade 1, 2, or 3); the prediction target
Data preprocessing.

First, we perform an exploratory data analysis (EDA). Some records have missing
values (NULL): 41, 45, and 43 records have no value in the fields "Type_Cane",
"FertilizerType", and "Fertilizer", respectively. There are also records that
correspond to outliers: a record is regarded as an outlier if it contains a
categorical value whose count over the whole dataset is very small compared with
that of the most frequent value. These missing values and outliers likely
originate from the data entry process. We fill in the missing values, and
replace the outlier values, with the mode (most frequent value) of the
corresponding field. After this preprocessing, we apply one-hot encoding (dummy
variables) to the categorical variables. Finally, we randomly split the dataset
into training data and test data with a 70:30 (train:test) ratio, giving 8,765
training records and 3,756 test records.
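
A minimal sketch of this preprocessing with pandas and Scikit-Learn might look
as follows (the CSV file name and the 1% rarity cutoff are our assumptions; the
paper does not state an exact outlier threshold):

import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv('sugarcane_plots.csv')  # hypothetical file name

# Fill missing values in the affected fields with the column mode.
for col in ['Type_Cane', 'FertilizerType', 'Fertilizer']:
    df[col] = df[col].fillna(df[col].mode()[0])

# Replace rare categorical values (outliers) with the most frequent value.
for col in df.select_dtypes(include='object').columns:
    counts = df[col].value_counts()
    rare = counts[counts < 0.01 * len(df)].index
    df.loc[df[col].isin(rare), col] = counts.idxmax()

# One-hot encode the categorical variables and split 70:30.
X = pd.get_dummies(df.drop(columns=['YieldGrade']))
y = df['YieldGrade']
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)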
In the training set, the numbers of records of Grade 1, Grade 2, and Grade 3 are
3,933, 3,759, and 1,073, respectively; in the test set, they are 1,712, 1,607,
and 437. The distributions of the yield grade in the training and test datasets
are shown in Fig. 1. Clearly, the distributions of the target variable in the
two datasets are similar.

Fig. 1. Distribution of YieldGrade in the training and test datasets. The
proportions of Grades 1, 2, and 3 are 0.449, 0.429, and 0.122 in the training
set, and 0.456, 0.428, and 0.116 in the test set.

3.2 Yield grade prediction using Random Forest with Forward Feature
Selection
In this section, we present our Random Forest based method for sugarcane yield
grade prediction. As an overview, the input to the system is a record of data
with a pre-selected set of variables (listed in Table 1), and the output is the
predicted yield grade of the corresponding record. At the model training step,
we propose to use forward feature selection in conjunction with hyper-parameter
tuning to improve the prediction accuracy.
As a review, Random Forest (RF) is a supervised machine learning algorithm that
can be used for both regression and classification tasks. An RF model is an
ensemble of decision trees. Each decision tree (DT) is trained on a random
subset of the training data, where the sampling is performed with replacement.
Furthermore, at each step of the DT construction, the best feature to split on
is chosen from a random subset of the features. To make a prediction for a test
instance, we first obtain the prediction of each DT; the predictions of all DTs
in the model are then aggregated by either hard or soft voting. Moreover,
because random subsets of the training data are sampled during RF model
training, about 37% of the training instances are not used in the construction
of each DT. These are called out-of-bag (OOB) instances. We can use the OOB
instances to evaluate the predictive accuracy of the RF model by averaging the
OOB evaluations over the DTs of the model.
In this work, we use the Random Forest model of Scikit-Learn [1]. The input
features are those listed in Table 2; YieldGrade is not included, as it is the
target of the prediction. First, we build an RF predictive model using the
default parameters of Scikit-Learn. The resulting accuracies of the yield grade
classification on the training set and the test set are 97.96% and 68.29%,
respectively, and the OOB accuracy score is 64.61%. The large gap between the
training and test accuracies indicates that the model overfits.
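
This baseline run can be outlined as follows (a sketch assuming the X_train,
X_test, y_train, y_test produced in Section 3.1; oob_score=True enables the OOB
estimate, and the commented figures are the ones reported above):

from sklearn.ensemble import RandomForestClassifier

rf = RandomForestClassifier(oob_score=True, random_state=0)
rf.fit(X_train, y_train)
print('train accuracy:', rf.score(X_train, y_train))  # 97.96% in our experiment
print('test accuracy:', rf.score(X_test, y_test))     # 68.29%
print('OOB accuracy:', rf.oob_score_)                 # 64.61%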
Algorithm 1: RF model training with forward feature selection and hyper-parameter tuning
Input: training data: X = features, Y = class labels; F = set of features
Output: trained model best_clf and best hyper-parameters best_params
1:  clf_1, params_1 = GridSearchCV(RandomForestClassifier(), X[F], Y)
2:  impVal = clf_1.feature_importance()
3:  F = sort_descending(F, impVal)
4:  S_0 = {}
5:  for i = 1 to |F|
6:      S_i = S_(i-1) ∪ {F[i]}
7:      clf = RandomForestClassifier(params_1)
8:      ValScore[i] = cross_validation_score(clf, X[S_i], Y)
9:  end for
10: k = argmax_i ValScore[i]
11: best_clf, best_params = GridSearchCV(RandomForestClassifier(), X[S_k], Y)

To improve the generalization of our model, we propose an RF model construction
procedure that uses forward feature selection with hyper-parameter tuning, shown
as pseudo-code in Algorithm 1 (with naming adapted from the Scikit-Learn
library). Our feature selection method is inspired by the idea presented in [8].
The details are explained step by step as follows:
First, as shown in Line 1, we train a random forest classifier with grid search
and cross-validation. In this step, all features (denoted by F) of the training
data are used; the trained model and the best parameters are returned in the
variables clf_1 and params_1, respectively. Then, in Line 2, we compute the
feature importance values (denoted by impVal) of all features of the above
trained model via the function feature_importance. Note that, to save model
training time, we actually implement these two steps by tuning the
hyper-parameters n_estimators, min_samples_leaf, max_features, and max_depth
separately. We then train a random forest classifier with the best values found
in this separate tuning, and compute the feature importance value of each
feature from this trained RF model.
Next, in Line 3, we sort the set of features F by importance value in descending
order. Then, in Lines 5-9, we iteratively add the features one by one, in order
of importance, to the feature list denoted by S_i; that is, the feature list of
the i-th iteration is constructed from the list of the previous iteration (i-1)
by adding the next feature in decreasing order of importance. This corresponds
to Line 6. In Lines 7-8, we train a random forest classifier on the training
data restricted to this feature list, denoted X[S_i] in Line 8. Specifically, in
this step of model training, we reuse the hyper-parameters found in Line 1
(params_1) when instantiating the model in Line 7. Cross-validation is then
applied to evaluate the accuracy of the model, and the mean validation score
obtained with the feature set S_i is recorded in ValScore[i].
After all iterations are done, we search for the feature set S_k that gives the
maximum mean validation score; this corresponds to Line 10. Finally, a random
forest model is trained on the training data using only the best feature set
S_k, with grid search and cross-validation used to fine-tune the
hyper-parameters; this corresponds to Line 11. The trained model obtained from
this step is the output of our proposed training method, and we use it to
predict the yield grade of a data instance.
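
A minimal Python sketch of Algorithm 1 with Scikit-Learn is given below. We
assume X is a pandas DataFrame of one-hot-encoded features, y the yield grades,
and param_grid a hyper-parameter grid such as the one in Section 4; for clarity,
the sketch follows the pseudo-code directly rather than the separate
per-parameter tuning shortcut described above:

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, cross_val_score

def train_rf_forward_selection(X, y, param_grid, cv=5):
    # Line 1: tune hyper-parameters on all features with grid search + CV.
    search = GridSearchCV(RandomForestClassifier(random_state=0), param_grid, cv=cv)
    search.fit(X, y)
    params_1 = search.best_params_

    # Lines 2-3: rank features by the importance values of the tuned model.
    importances = search.best_estimator_.feature_importances_
    ranked = X.columns[np.argsort(importances)[::-1]]

    # Lines 5-9: grow the feature subset one feature at a time, recording
    # the mean cross-validation score of each subset.
    scores = []
    for i in range(1, len(ranked) + 1):
        clf = RandomForestClassifier(random_state=0, **params_1)
        scores.append(cross_val_score(clf, X[list(ranked[:i])], y, cv=cv).mean())

    # Line 10: pick the subset size with the best mean validation score.
    best_subset = list(ranked[:int(np.argmax(scores)) + 1])

    # Line 11: re-tune the hyper-parameters on the selected subset.
    final = GridSearchCV(RandomForestClassifier(random_state=0), param_grid, cv=cv)
    final.fit(X[best_subset], y)
    return final.best_estimator_, final.best_params_, best_subset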

4 Results and Discussion


We evaluate our yield grade classification method by measuring the prediction
accuracy on the test dataset. The prediction for a plot is counted as correct
only if the predicted yield grade equals the actual yield grade in the dataset.
We compare the predictions of our RF based method with two non-machine-learning
baselines. The first baseline, referred to as Baseline-1, predicts the yield
grade of each plot using the actual yield grade of the last year; it is based on
the intuitive idea that, without any machine learning system, the actual yield
of the last year can serve as the prediction for the same plot in the incoming
year. On our dataset, the accuracy of Baseline-1 is 51.52%, as shown in Table 3.
The second baseline, referred to as Baseline-2, predicts using the target yield
grade that a human expert manually assigns to each plot based on experience. The
accuracy of Baseline-2 is 65.50%, also shown in Table 3.
For our proposed model, we report the intermediate results and the final result
corresponding to the major steps of Algorithm 1 as follows.

The implementation corresponding to Lines 1-2 of Algorithm 1 is achieved by
separately tuning the hyper-parameters n_estimators, min_samples_leaf,
max_features, and max_depth. The tuning range of n_estimators is 1 to 1000 with
a step size of 100 (i.e., 1, 100, 200, ..., 1000). The tuning range of
max_features is 1, 2, 3, ..., 17; note that we have 17 features in the dataset.
The tuning range of min_samples_leaf consists of 20 values equally spaced
between 2 and 40, and the tuning range of max_depth consists of 10 values
equally spaced between 5 and 50. After the separate tuning, we obtain the best
hyper-parameters n_estimators=300, min_samples_leaf=14, max_features=4, and
max_depth=10. When we re-train the RF model on all features of the training data
with these hyper-parameters, the accuracies on the training set and the test set
are 75.24% and 71.86%, respectively, and the OOB accuracy score is 71.65%.
Using this trained RF model, we calculate the feature importances via the
attribute feature_importances_ of the RandomForestClassifier class in
Scikit-Learn, and sort the features in descending order of importance. The
sorted feature list is used in the forward feature selection iterations of Lines
5-9. The feature importance values obtained on our dataset are listed in Table
2. Since "TargetGrade" is the most important feature, the predictions provided
by the human expert clearly affect our proposed method; we discuss this issue
below by comparing our method with the baseline that uses this feature directly
as the prediction.
For each iteration of the forward feature selection (Lines 6-8), the mean
validation score after adding each feature is shown in Table 2 (right column).
Corresponding to Line 10 of Algorithm 1, the maximum validation score over all
feature sets is 0.70591, obtained with 12 features. That is, the algorithm
selects the following 12 features: TargetGrade, FarmerContractGrade,
ContractsArea, Area_Remain, Distance, Rain_Vol, YieldOldGrade, Fertilizer,
WaterType, Class_Cane, TargetOldGrade, and GrooveWide.
Corresponding to Line 11 of Algorithm 1, we perform the hyper-parameter tuning
by applying grid search with 5-fold cross-validation on the training set. The
parameter ranges used in the grid search are listed as follows:

params = {'n_estimators': [100, 200, 300, 400, 500],
          'min_samples_leaf': [10, 12, 14, 16, 20],
          'max_depth': [5, 10],
          'max_features': [None, 4, 5, 6]}
We obtain a best 5-fold cross-validation score of 71.83%, with best parameters
max_depth=10, max_features=6, min_samples_leaf=14, and n_estimators=100. Using
the RF model trained with these parameters, the accuracies on the training set
and the test set are 75.83% and 71.88%, respectively. The test accuracy is the
final result of our model.
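
This final tuning step (Line 11 of Algorithm 1) can be sketched as follows,
assuming best_subset holds the 12 selected features and the train/test frames
come from Section 3.1:

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

params = {'n_estimators': [100, 200, 300, 400, 500],
          'min_samples_leaf': [10, 12, 14, 16, 20],
          'max_depth': [5, 10],
          'max_features': [None, 4, 5, 6]}

search = GridSearchCV(RandomForestClassifier(random_state=0), params, cv=5)
search.fit(X_train[best_subset], y_train)
print('best CV score:', search.best_score_)  # 71.83% in our experiment
print('best params:', search.best_params_)
print('test accuracy:',
      search.best_estimator_.score(X_test[best_subset], y_test))  # 71.88%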
We also compare our RF based method with other machine learning methods, namely
K-Nearest Neighbors (KNN), Support Vector Machine (SVM), Naïve Bayes, and
AdaBoost, each trained with the default parameters of Scikit-Learn. The
accuracies of these methods on the test dataset are shown in Table 3. From these
results, our method outperforms the first baseline (51.52%), which predicts the
yield grade of each plot from the actual yield of the last year; this indicates
that our proposed method improves the prediction accuracy by about 20 percentage
points over making predictions without any machine learning system. The accuracy
of our method is also better than those of KNN, SVM, Naïve Bayes, and AdaBoost.
A sketch of this comparison is given below.
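
The comparison can be outlined as follows, with each classifier left at its
Scikit-Learn defaults (variable names as in the preprocessing sketch):

from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import AdaBoostClassifier

for name, clf in [('KNN', KNeighborsClassifier()),
                  ('SVM', SVC()),
                  ('Naive Bayes', GaussianNB()),
                  ('AdaBoost', AdaBoostClassifier())]:
    clf.fit(X_train, y_train)
    print(name, 'test accuracy:', clf.score(X_test, y_test))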
With respect to the second baseline, our proposed method outperforms Baseline-2
by about 6 percentage points of accuracy. As explained earlier, the feature
"TargetGrade" is the most important feature in the RF based method; in
particular, its feature importance in Table 2 is 60.27%. This means that by
combining a machine learning technique (RF in our case) and the other features
with the target yield grade ("TargetGrade") provided by the human expert, we can
improve the prediction accuracy by about 6 percentage points. From the
standpoint of sugar mill operation, this could be a significant gain for a
decision-making aid tool.

Table 2. Feature importance values of the RF based model and the mean validation
score at each iteration of the forward feature selection.

Feature name          Feature importance value   Mean validation score
                                                 (after adding the feature)
TargetGrade           0.60273                    0.66374
FarmerContractGrade   0.22070                    0.69577
ContractsArea         0.05302                    0.70264
Area_Remain           0.03336                    0.70208
Distance              0.02211                    0.69649
Rain_Vol              0.01935                    0.69864
YieldOldGrade         0.01931                    0.70232
Fertilizer            0.00702                    0.70367
WaterType             0.00586                    0.70511
Class_Cane            0.00353                    0.70503
TargetOldGrade        0.00341                    0.70511
GrooveWide            0.00285                    0.70591
EpidemicType          0.00194                    0.70487
Type_Cane             0.00144                    0.70495
ActionWaterType       0.00139                    0.70487
FertilizerType        0.00134                    0.70511
TypeSoil              0.00065                    0.70407

Table 3. The accuracies of the baselines and our proposed method.

Method     Baseline-1   Baseline-2   KNN      SVM      Naïve Bayes   AdaBoost   Our method
Accuracy   51.52%       65.50%       48.43%   49.79%   58.87%        65.50%     71.88%
5 Conclusion

In this work, we propose a method for classifying the sugarcane yield grade at
the plot level from information about plot characteristics. Our method is based
on the Random Forest algorithm, and our implementation uses the random forest of
Scikit-Learn [1]. The data used in this work are acquired from a set of
sugarcane plots around a sugar mill in Thailand and consist of 12,521 records
from two production years. We split the data into a training set and a test set;
the training set is used to build the model that predicts the yield grades of
the test set. The classification accuracies of our random forest based method on
the training set and the test set are 75.83% and 71.88%, respectively. Our
proposed method outperforms both the baseline that uses the actual yield of the
last year as the prediction (51.52%) and the baseline in which the target yield
of each plot is manually predicted by a human expert (65.50%).

Some possible topics for future work are as follows. First, we could investigate
the improvement in prediction accuracy if daily temperature information for the
local area and more information about the soil characteristics of each plot were
available, as in some related works. Second, a comprehensive study of the
effects of hyper-parameter tuning, feature engineering, and feature selection on
this dataset could be a potential work. Finally, a model stacking technique
could be applied to improve the prediction accuracy.

References
1. Pedregosa, F., et al.: Scikit-learn: Machine Learning in Python. Journal of
   Machine Learning Research 12, 2825-2830 (2011)
2. Everingham, Y., et al.: Accurate prediction of sugarcane yield using a random
   forest algorithm. Agronomy for Sustainable Development 36(27), 1-9 (2016)
3. de Oliveira, M. P. G., Bocca, F. F., Rodrigues, L. H. A.: From spreadsheets
   to sugar content modeling: A data mining approach. Computers and Electronics
   in Agriculture 132, 14-20 (2017)
4. Bocca, F. F., Rodrigues, L. H. A.: The effect of tuning, feature engineering,
   and feature selection in data mining applied to rainfed sugarcane yield
   modelling. Computers and Electronics in Agriculture 128, 67-76 (2016)
5. Thuankaewsing, S., Khamjan, S., Piewthongngam, K., Pathumnakul, S.: Harvest
   scheduling algorithm to equalize supplier benefits: a case study from the
   Thai sugar cane industry. Computers and Electronics in Agriculture 110, 42-55
   (2015)
6. Robnik-Šikonja, M., Kononenko, I.: An adaptation of Relief for attribute
   estimation in regression. In: Proceedings of the Fourteenth International
   Conference on Machine Learning, pp. 296-304. Morgan Kaufmann Publishers Inc.,
   San Francisco (1997)
7. Breiman, L.: Random forests. Machine Learning 45, 5-32 (2001)
8. Ruß, G., Kruse, R.: Feature Selection for Wheat Yield Prediction. In: Bramer,
   M., Ellis, R., Petridis, M. (eds.) Research and Development in Intelligent
   Systems XXVI, pp. 465-478. Springer, London (2010)
9. Gonzalez-Sanchez, A., Frausto-Solis, J., Ojeda-Bustamante, W.: Attribute
   Selection Impact on Linear and Nonlinear Regression Models for Crop Yield
   Prediction. The Scientific World Journal (2014)
