Feature Extraction: Foundations and Applications
Isabelle Guyon, Steve Gunn, Masoud Nikravesh, Lotfi A. Zadeh
Springer Science & Business Media, 20/07/2006 - 778 pages

Everyone loves a good competition. As I write this, two billion fans are eagerly anticipating the 2006 World Cup. Meanwhile, a fan base that is somewhat smaller (but presumably includes you, dear reader) is equally eager to read all about the results of the NIPS 2003 Feature Selection Challenge, contained herein. Fans of Radford Neal and Jianguo Zhang (or of Bayesian neural networks and Dirichlet diffusion trees) are gloating "I told you so" and looking for proof that their win was not a fluke. But the matter is by no means settled, and fans of SVMs are shouting "wait 'til next year!" You know this book is a bit more edgy than your standard academic treatise as soon as you see the dedication: "To our friends and foes."

Competition breeds improvement. Fifty years ago, the champion in 100m butterfly swimming was 22 percent slower than today's champion; the women's marathon champion from just 30 years ago was 26 percent slower. Who knows how much better our machine learning algorithms would be today if Turing in 1950 had proposed an effective competition rather than his elusive Test?

But what makes an effective competition? The field of Speech Recognition has had NIST-run competitions since 1988; error rates have been reduced by a factor of three or more, but the field has not yet had the impact expected of it. Information Retrieval has had its TREC competition since 1992; progress has been steady and refugees from the competition have played important roles in the hundred-billion-dollar search industry. Robotics has had the DARPA Grand Challenge for only two years, but in that time we have seen the results go from complete failure to resounding success (although it may have helped that the second year's course was somewhat easier than the first's).
Contents
An Introduction to Feature Extraction | 1 |
References | 22 |
References | 58 |
Assessment Methods | 65 |
Filter Methods | 89 |
References | 114 |
References | 135 |
References | 162 |
References | 182 |
Ensemble Learning | 187 |
References | 203 |
References | 231 |
References | 260 |
References | 295 |
Ensembles of Regularized Least Squares Classifiers | 297 |
References | 313 |
Combining SVMs with Various Feature Selection Strategies | 315 |
Variable Selection using Correlation and Single Variable Classifier Methods | 342 |
References | 357 |
Tree-Based Ensembles with Dynamic Soft Feature Selection | 359 |
References | 374 |
Sparse, Flexible and Efficient Modeling using L1 Regularization | 375 |
References | 393 |
Margin Based Feature Selection and Infogain with Standard Classifiers | 395 |
Nonlinear Feature Selection with the Potential Support Vector Machine | 419 |
Combining a Filter Method with SVMs | 439 |
References | 445 |
References | 461 |
Information Gain, Correlation and Support Vector Machines | 463 |
References | 470 |
References | 487 |
Combining Information-Based Supervised and Unsupervised Feature Selection | 489 |
An Input Variable Importance Definition | 509 |
References | 547 |
Constructing Orthogonal Latent Features | 551 |
References | 582 |
References | 604 |
Highly Predictive Features | 625 |
Elementary Statistics | 649 |
Confidence Intervals | 655 |
References | 662 |
ARCENE | 669 |
GISETTE | 677 |
DOROTHEA | 687 |
MATLAB Code of the Lambda Method | 697 |
High Dimensional Classification with Bayesian Neural Networks and Dirichlet Diffusion Trees | 707 |
Krzysztof Grąbczewski, Norbert Jankowski | 735 |
V. Lemaire, F. Clérot | 743 |
Other editions
Feature Extraction: Foundations and Applications. Isabelle Guyon, Steve Gunn, Masoud Nikravesh, Lotfi A. Zadeh. Limited preview - 2008 |
Feature Extraction: Foundations and Applications. Isabelle Guyon, Steve Gunn, Masoud Nikravesh, Lotfi A. Zadeh. No preview available - 2009 |
Feature Extraction: Foundations and Applications. Isabelle Guyon, Steve Gunn, Masoud Nikravesh, Lotfi A. Zadeh. No preview available - 2017 |
Common terms and phrases
accuracy algorithm applied approach Arcene average Bayes Bayesian binary Breiman Chapter class labels coefficient computed correlation criterion cross-validation dataset decision tree defined Dexter dimensionality dimensionality reduction distribution Dorothea ensemble entropy error rate estimate evaluation examples F-score feature selection feature selection methods feature set feature subset feature values filter fuzzy neural fuzzy sets Gaussian Gisette gradient Guyon hyperparameter IEEE input variables Isomap iteration kernel kernel PCA L1-norm linear loss function Machine Learning Madelon margin matrix minimization mutual information neural network neuron nonlinear number of features optimization output overfitting P-SVM p-value parameters performance prediction predictor principal components probability Random Forest regression regularization relevance indices relevant features RLSC sample score selected features single variable classifier split statistical subset of features support vector machines test set training data training set validation set Vapnik variable selection weights wrapper zero