Multi-stream Fast Fourier Convolutional Neural Network for Automatic Target Recognition of Ground Military Vehicles
Keywords: convolutional neural network, fast Fourier transform, ground military vehicle, multi-stream, synthetic aperture radar, automatic target recognition
Synthetic aperture radar (SAR) is highly useful in both military and civilian applications because of its 24/7, all-weather, high-resolution capability and its ability to recognize camouflage and penetrate cover. Within SAR image interpretation, target recognition is an important research challenge for researchers worldwide. With the application of high-resolution SAR, the imaging area has expanded and new imaging modes have appeared one after another. Conventional human interpretation faces many difficulties: it is slow, labor-intensive, and prone to subjective misjudgment. Intelligent interpretation technology therefore needs to be developed urgently. Although deep convolutional neural networks (CNNs) have proven extremely effective in image recognition, a major drawback is that they require more parameters as their layers increase. The cost of the convolution operation across all convolutional layers is therefore high, and the inevitable rise in computation as the kernel size grows leads to slow learning. This study proposes a three-stream input of SAR images into a multi-stream fast Fourier convolutional neural network (MS-FFCNN). The technique elaborates the transformation of a rudimentary multi-stream convolutional neural network into a multi-stream fast Fourier convolutional neural network. By using the fast Fourier transform instead of standard convolution, it lowers the cost of image convolution in CNNs, which reduces the overall computational cost. The multiple streams of the FFCNN overcome the problem of insufficient sample size, shorten the long training time, and improve recognition accuracy. The proposed method yielded a high recognition accuracy of 99.92%.
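The cost saving claimed for FFT-based convolution rests on the convolution theorem: convolution in the spatial domain equals pointwise multiplication in the frequency domain, so the per-layer cost drops from roughly O(N²K²) to O(N² log N) for an N×N image and K×K kernel. The following is a minimal NumPy sketch of this idea, not the paper's implementation; the function names and the naive reference convolution are illustrative assumptions.

```python
import numpy as np

def fft_conv2d(image, kernel):
    # Full linear-convolution output size after zero padding.
    out_shape = (image.shape[0] + kernel.shape[0] - 1,
                 image.shape[1] + kernel.shape[1] - 1)
    # Transform both operands, multiply pointwise, transform back.
    F_img = np.fft.rfft2(image, out_shape)
    F_ker = np.fft.rfft2(kernel, out_shape)
    return np.fft.irfft2(F_img * F_ker, out_shape)

def direct_conv2d(image, kernel):
    # Naive spatial convolution, used only as a correctness reference.
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.zeros((H + kh - 1, W + kw - 1))
    for i in range(kh):
        for j in range(kw):
            out[i:i + H, j:j + W] += kernel[i, j] * image
    return out

rng = np.random.default_rng(0)
img = rng.standard_normal((32, 32))
ker = rng.standard_normal((5, 5))
print(np.allclose(fft_conv2d(img, ker), direct_conv2d(img, ker)))  # True
```

For the small 5×5 kernels common in CNNs the spatial route can still win in practice; the FFT route pays off as kernel and feature-map sizes grow, which is the regime the abstract targets.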
Copyright (c) 2022 Authors
This work is licensed under a Creative Commons Attribution 4.0 International License.