How To Run An Amazing Forward Feature Selection Technique
Introduction
If you’ve used machine learning to solve problems, one question comes up again and again: how do you do feature selection? There are a few families of techniques (filter, wrapper, and embedded methods), and today we are covering a near-optimal wrapper variant: sequential feature selection.
Sequential Forward Selection (SFS) builds a feature subset one feature at a time: at each step it adds whichever feature most improves a chosen criterion. It is a powerful technique, but it becomes infeasible with an enormous pool of features, and it suffers from path dependency: once a feature is added it can never be removed, so an early greedy choice can lock the search into a suboptimal subset.
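To make the greedy mechanics concrete, here is a minimal sketch of plain SFS. The dataset, estimator, cross-validated accuracy criterion, and target subset size k=5 are illustrative assumptions, not part of the method itself.

```python
# Minimal sketch of Sequential Forward Selection (SFS).
# Assumptions for illustration: a scikit-learn classifier and
# mean cross-validated accuracy as the criterion J(S).
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def criterion(X, y, features):
    """J(S): mean cross-validated accuracy using only `features`."""
    model = LogisticRegression(max_iter=5000)
    return cross_val_score(model, X[:, features], y, cv=5).mean()

def sfs(X, y, k):
    """Plain SFS: greedily add the single best feature until |S| == k."""
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < k:
        # Score every candidate addition and commit to the best one.
        scores = {f: criterion(X, y, selected + [f]) for f in remaining}
        best = max(scores, key=scores.get)
        selected.append(best)
        remaining.remove(best)
    return selected

X, y = load_breast_cancer(return_X_y=True)
print("SFS picked:", sfs(X, y, k=5))
```

Note that nothing in this loop ever revisits a decision: that is exactly the path dependency the floating variant below is designed to fix.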
The algorithm we are discussing today is sequential feature selection with a floating backtrack step, commonly known as Sequential Floating Forward Selection (SFFS). It achieves near-optimal results while evaluating only about 1% of the subsets a naive exhaustive search would, and it lets you correct early greedy mistakes, a capability that basic forward selection lacks. This can translate into roughly a 20% improvement in criterion value over the base sequential search.
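Here is a minimal sketch of the floating variant, reusing the `criterion` helper and data from the SFS sketch above. The record-keeping scheme (tracking the best score seen at each subset size and requiring strict improvement to remove a feature) follows the standard SFFS formulation; details like the size-2 backtrack floor are assumptions for illustration.

```python
def sffs(X, y, k):
    """SFS plus a floating backward step: after each addition, drop
    features while a smaller subset strictly beats the best score
    previously recorded at that size."""
    selected, remaining = [], list(range(X.shape[1]))
    best_at_size = {}  # best criterion value seen for each subset size
    while len(selected) < k:
        # Forward step (same as SFS): add the best remaining feature.
        scores = {f: criterion(X, y, selected + [f]) for f in remaining}
        best = max(scores, key=scores.get)
        selected.append(best)
        remaining.remove(best)
        best_at_size[len(selected)] = max(
            scores[best], best_at_size.get(len(selected), float("-inf")))
        # Floating backward step: conditionally remove features as long
        # as doing so strictly improves on the record at the smaller size.
        while len(selected) > 2:
            drops = {f: criterion(X, y, [g for g in selected if g != f])
                     for f in selected}
            weakest = max(drops, key=drops.get)  # removal that helps most
            if drops[weakest] > best_at_size.get(len(selected) - 1,
                                                 float("-inf")):
                selected.remove(weakest)
                remaining.append(weakest)
                best_at_size[len(selected)] = drops[weakest]
            else:
                break  # no removal beats the record; resume forward steps
    return selected

print("SFFS picked:", sffs(X, y, k=5))
```

The strict inequality in the backward step is what keeps the search from oscillating: the feature just added can only be thrown back if removing some other feature genuinely improves on the best subset already found at that size.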

