In the era of big data, feature selection has become an essential step in data preprocessing. Feature selection is a dimensionality reduction technique whose main purpose is to select, from the original data, the relevant features most beneficial to the learning algorithm, thereby reducing the dimensionality of the data, easing the learning task, and improving the efficiency of the model. Research on feature selection algorithms has achieved initial results, but it still faces major challenges, chief among them the curse of dimensionality in feature selection and classification problems. First, the basic architecture of feature selection algorithms is introduced, and its four processes, subset generation, subset evaluation, termination condition, and result verification, are described in sequence. Second, existing methods are classified according to evaluation strategy, search strategy, and supervision information; these traditional methods are compared, and their advantages and disadvantages are pointed out. Finally, feature selection is summarized and future research directions are discussed.
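The four-process architecture above can be illustrated with a minimal sketch. The greedy forward search and the correlation-based filter criterion below are illustrative assumptions chosen for brevity, not a method prescribed by the survey:

```python
import random

def evaluate(subset, X, y):
    """Subset evaluation: mean absolute Pearson correlation of each
    chosen feature with the target (a simple filter-style criterion,
    assumed here for illustration)."""
    if not subset:
        return 0.0
    n = len(y)
    mean_y = sum(y) / n
    score = 0.0
    for j in subset:
        col = [row[j] for row in X]
        mean_x = sum(col) / n
        cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(col, y))
        var_x = sum((a - mean_x) ** 2 for a in col)
        var_y = sum((b - mean_y) ** 2 for b in y)
        denom = (var_x * var_y) ** 0.5
        score += abs(cov / denom) if denom else 0.0
    return score / len(subset)

def forward_select(X, y, k):
    """Subset generation: greedily add the feature that most improves
    the evaluation score. Termination condition: stop at k features
    or when no candidate strictly improves the score."""
    selected, best = [], 0.0
    remaining = set(range(len(X[0])))
    while remaining and len(selected) < k:
        cand, cand_score = None, best
        for j in remaining:
            s = evaluate(selected + [j], X, y)
            if s > cand_score:
                cand, cand_score = j, s
        if cand is None:
            break  # termination: no improvement possible
        selected.append(cand)
        remaining.remove(cand)
        best = cand_score
    return selected

# Result verification on synthetic data: feature 0 is a copy of the
# target, features 1 and 2 are noise, so the search should keep
# feature 0 first.
random.seed(0)
y = [random.random() for _ in range(50)]
X = [[t, random.random(), random.random()] for t in y]
print(forward_select(X, y, 2))
```

Swapping `evaluate` for a classifier's validation accuracy would turn this filter-style sketch into a wrapper method, which is one of the evaluation-strategy categories the survey classifies.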