In: Operations Management
Does feature selection provide the data analyst the advantage of reducing the feature set to those features statistically significantly associated with the dependent variable, a concept also known as a type of dimensionality reduction?
Feature selection, also known as variable selection, attribute selection, or variable subset selection, is the process of selecting a subset of relevant features (variables, predictors) for use in model construction. Feature selection methods can be used to identify and remove unneeded, irrelevant, and redundant attributes that do not contribute to the accuracy of a predictive model, or that may in fact decrease it. Feature selection is different from dimensionality reduction: both seek to reduce the number of attributes in the dataset, but dimensionality reduction methods do so by creating new combinations of attributes, whereas feature selection methods include and exclude attributes present in the data without changing them. Feature selection is useful in itself, but it mostly acts as a filter, muting out features that add nothing useful to the ones you already have.
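To make the distinction concrete, here is a minimal Python sketch. The original answer names no library or dataset, so the use of scikit-learn and the synthetic data below are assumptions for illustration: SelectKBest keeps a subset of the original columns unchanged, while PCA manufactures entirely new ones.

from sklearn.datasets import make_regression
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, f_regression

# Synthetic data: 100 samples, 10 candidate features, only 3 informative.
X, y = make_regression(n_samples=100, n_features=10, n_informative=3,
                       noise=10.0, random_state=0)

# Feature selection: keep the 3 original attributes most associated with y;
# the kept columns are the raw data, untouched.
selector = SelectKBest(score_func=f_regression, k=3)
X_selected = selector.fit_transform(X, y)
print("Original columns kept:", selector.get_support(indices=True))

# Dimensionality reduction: build 3 new attributes, each a linear
# combination of all 10 original ones; no original column survives as-is.
pca = PCA(n_components=3)
X_reduced = pca.fit_transform(X)
print("Each PCA component mixes all features:", pca.components_.shape)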
Because feature selection identifies and removes unneeded, irrelevant, and redundant attributes that do not contribute to the accuracy of a predictive model (or may in fact decrease it), it reduces the amount of data the analyst has to process. So yes, it is clear that feature selection provides the data analyst the advantage of reducing the feature set to those features statistically significantly associated with the dependent variable.
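For the "statistically significant features" part of the question specifically, one hedged sketch (again assuming scikit-learn; the threshold and helper names are illustrative, not from the original answer) is a univariate filter: f_regression returns a per-feature F-statistic and p-value, so the analyst can keep only the columns that clear a conventional significance level.

import numpy as np
from sklearn.datasets import make_regression
from sklearn.feature_selection import f_regression

X, y = make_regression(n_samples=100, n_features=10, n_informative=3,
                       noise=10.0, random_state=0)

# Univariate F-test of each feature against the dependent variable y.
f_scores, p_values = f_regression(X, y)

# Keep only the features significant at the (conventional) 5% level.
significant = np.where(p_values < 0.05)[0]
print("Statistically significant features:", significant)
print("Shape before/after:", X.shape, X[:, significant].shape)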