unstable result with Decision Tree Classifier #12188

Open
@Ichaab

Description

I have developed a Java program that builds a decision tree without any pruning strategy; the decision rule used is the default majority rule. I then switched to Python for its simplicity. The problem is the randomness in DecisionTreeClassifier: even with splitter set to "best", max_features=None (so that all features are used), and random_state=1, I do not end up with the same result that the Java code produces. Exactly the same training and test data sets are used for Python and Java. How can I eliminate all randomness so that the result matches the Java code (see the sketch under Steps/Code to Reproduce below)?
Help please.

Steps/Code to Reproduce

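A minimal sketch of the setup described above. The data-loading paths and CSV format are placeholders, not part of the original report; only the classifier settings (splitter="best", max_features=None, random_state=1) come from the description.

```python
import numpy as np
from sklearn.metrics import confusion_matrix
from sklearn.tree import DecisionTreeClassifier

# Placeholder data loading -- the actual files and format are not given in the
# report; the same training and test sets are shared with the Java program.
X_train = np.loadtxt("train_features.csv", delimiter=",")
y_train = np.loadtxt("train_labels.csv", delimiter=",")
X_test = np.loadtxt("test_features.csv", delimiter=",")
y_test = np.loadtxt("test_labels.csv", delimiter=",")

# Unpruned tree: best splitter, all features considered at every split,
# seed fixed at 1 -- the settings mentioned in the description.
clf = DecisionTreeClassifier(splitter="best", max_features=None, random_state=1)
clf.fit(X_train, y_train)

# The confusion matrix is what differs from the Java result.
print(confusion_matrix(y_test, clf.predict(X_test)))
```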
Expected Results

The same decision tree as the one produced by the Java code.

Actual Results

A different confusion matrix on each run, even with random_state fixed to 1.

Versions

Windows-8.1-6.3.9600-SP0
Python 3.6.4 |Anaconda, Inc.| (default, Jan 16 2018, 10:22:32) [MSC v.1900 64 bit (AMD64)]
NumPy 1.14.0
SciPy 1.0.0
Scikit-Learn 0.20.dev0
