We consider the linear classification method consisting of separating two sets of points in d-space by a hyperplane. We wish to determine the hyperplane which minimises the sum of distances from all misclassified points to the hyperplane. To this end two local descent methods are developed, one grid-based and one optimisation-theory based, and are embedded in several ways into a VNS metaheuristic scheme. Computational results show these approaches to be complementary, leading to a single hybrid VNS strategy which combines both approaches to exploit the strong points of each. Extensive computational tests show that the resulting method performs well.
- by Frank Plastria and +1
- Data Mining, Mathematical Sciences, Local Search
We consider the linear classification method consisting of separating two sets of points in d-space by a hyperplane. We investigate the situation where the two sets are nonseparable, and we wish to find the hyperplane which minimises the sum of distances from all misclassified points to the hyperplane. To this end two local descent methods are developed, one grid-based and one optimisation-theory based, and are embedded in several ways into a VNS metaheuristic scheme. Computational results show these approaches to be complementary, leading to a single hybrid VNS strategy which combines both approaches to exploit the strong points of each. Extensive computational tests show that the resulting method performs well.
- by Frank Plastria and +1
- Variable Neighborhood Search
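The objective described in the abstract above — the sum of distances from misclassified points to a hyperplane — can be sketched as a short numpy function. This is only an illustration of the objective being minimised, not the paper's VNS descent methods; the function name and interface are hypothetical.

```python
import numpy as np

def misclassification_distance(w, b, X, y):
    """Sum of Euclidean distances from misclassified points to the
    hyperplane {x : w.x + b = 0}.  Labels y are +/-1; a point is
    misclassified when its signed margin y * (w.x + b) is negative."""
    margins = y * (X @ w + b)            # signed margins of all points
    misclassified = margins < 0
    # distance from a point x to the hyperplane is |w.x + b| / ||w||
    return np.abs(margins[misclassified]).sum() / np.linalg.norm(w)
```

A local descent method as described in the abstract would then search over (w, b) to reduce this value.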
We investigate the effects of dimensionality reduction using different techniques and different dimensions on six two-class data sets with numerical attributes as pre-processing for two classification algorithms. Besides reducing the dimensionality with the use of principal components and linear discriminants, we also introduce four new techniques. After this dimensionality reduction two algorithms are applied. The first algorithm takes advantage of the reduced dimensionality itself while the second one directly exploits the dimensional ranking. We observe that neither a single superior dimensionality reduction technique nor a straightforward way to select the optimal dimension can be identified. On the other hand we show that a good choice of technique and dimension can have a major impact on the classification power, generating classifiers that can rival industry standards. We conclude that dimensionality reduction should not only be used for visualisation or as pre-processing...
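The principal-components pre-processing step mentioned in the abstract above can be sketched in a few lines of numpy. This is a generic PCA projection, not the paper's four new techniques; the function name is hypothetical.

```python
import numpy as np

def pca_reduce(X, k):
    """Project the rows of X onto the top-k principal components.
    Returns the reduced data and the (k x d) component matrix."""
    Xc = X - X.mean(axis=0)                        # centre the data
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:k]                            # top-k directions
    return Xc @ components.T, components
```

A classifier would then be trained on the reduced data, with k chosen per data set, as the abstract notes no single dimension works best across the board.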
This paper reports on the construction of a personalized theme creation engine as a possible catalyst to the active use in secondary education in Europe of digital media published on-line by selected museums.
- by Peter Stuer and +1
- Digital Media, Museums and the Web