Intelligent Automation & Soft Computing DOI:10.32604/iasc.2021.017607

Article

Face Image Compression and Reconstruction Based on Improved PCA

1School of Computer and Software, Nanjing University of Information Science and Technology, Nanjing, 210044, China

2Jiangsu Key Laboratory of Data Science and Smart Software, Jinling Institute of Technology, Nanjing, 211169, China

3College of Information Engineering, Fuyang Normal University, Fuyang, 236041, China

4Department of Mathematics, Faculty of Science, New Valley University, El-Kharga, 72511, Egypt

*Corresponding Author: Yu Xue. Email: xueyu@nuist.edu.cn

Received: 04 February 2021; Accepted: 02 July 2021

Abstract: Face recognition technology has many uses in real-world applications and has generated extensive interest in recent years. However, the amount of data in a digital image is growing explosively, taking up considerable storage and transmission resources, and there is a lot of redundancy in an image data representation. Thus, image compression has become a hot topic. Principal component analysis (PCA) can effectively remove the correlation within an image and condense the image information into a characteristic image with several main components. At the same time, it can restore different data images according to their principal components and meet the needs of image compression and reconstruction at diverse levels. This paper introduces an improved PCA algorithm. The covariance matrix calculated from a batch of training samples is only an approximation of the true covariance matrix: relative to the dimension of the covariance matrix, the number of training samples is often too small, so it is difficult to estimate the covariance matrix accurately. The improved algorithm, called 2DPCA, solves this problem effectively. By comparing it with several other improved PCA algorithms, we show that 2DPCA has a better dimensionality reduction effect. Compared with the PCA algorithm, 2DPCA has a lower root-mean-square error under the same noise conditions.

Keywords: Image compression; PCA; feature extraction

At present, the amount of digital image data is soaring, occupying a great deal of storage space and consuming increasing transmission resources [1]. Due to the high correlation between adjacent pixels, there is a lot of redundancy in image data representation. The principal component analysis (PCA) method can remove the correlation within the image data [2] and effectively compress the image information into several main components. At the same time, it can restore different data images according to their number of principal components, thus meeting the needs of image compression and reconstruction at different levels. Moreover, PCA is often used for feature selection [3–5].

Among the active subspace methods, the face image is the researchers' top concern; it has received wide attention and has been deeply studied by the academic community. Feature extraction and dimension reduction are the key steps of face compression [6]. However, the PCA algorithm has many shortcomings. The common PCA compression method cannot achieve good results under external conditions such as changes of facial expression and strong light. Another important factor to consider is the dimension of the pictures [7]. Therefore, it is necessary to study an improved PCA algorithm that can enhance the compression efficiency and improve the accuracy of reconstruction [8]. Image compression and reconstruction can also be used in drones [9]. It should be noted that self-adaptive parameters are a promising direction for optimizing PCA [10–13].

The main work of this paper is to study and analyze the PCA algorithm for image compression and reconstruction, focusing on improved PCA algorithms, including 2DPCA, Mat PCA and Module PCA. The rest of the article is structured as follows: Section 2 introduces related work. Section 3 describes PCA and the improved PCA. Section 4 describes the experimental design and analyzes the experimental results. Finally, Section 5 provides the conclusions.

PCA is also called principal component analysis. It is a statistical method that converts the original multiple variables into several new composite variables [14]. These new variables are uncorrelated with each other and can effectively represent the information of the original variables. PCA can remove the correlation between image data and condense the image information into a characteristic image consisting of several main components, so it can realize image compression effectively. At the same time, it can recover different data images according to the number of principal components, which meets the needs of image compression and reconstruction at diverse levels. Deep learning can also be used to conduct such data analysis [15,16], and PCA can be used to preprocess multi-objective optimization algorithms [17]. The basic PCA image compression algorithm can achieve an ideal compression ratio, but it lacks a good standard for selecting the number of retained features, its signal-to-noise ratio is low, and non-linear or non-stationary image signals are not considered. The algorithm can also be optimized by evolutionary algorithms and deep learning [18,19].

Adjacent pixels in a face image carry redundant information. Predictive coding exploits this redundancy: a predicted value, computed from previously coded neighboring pixels, is subtracted from the actual pixel value, and only the small residual needs to be encoded.
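The idea of predictive coding can be illustrated with a minimal sketch: predict each pixel from its left neighbor and keep only the difference. This is an illustrative example with a hypothetical one-step predictor, not the coder used in the paper.

```python
import numpy as np

def predictive_residual(row):
    """Predict each pixel from its left neighbor and keep the residual.

    The first pixel is stored as-is; every later entry is the difference
    from its predecessor, which stays small when neighbors are correlated.
    """
    row = np.asarray(row, dtype=np.int32)
    residual = np.empty_like(row)
    residual[0] = row[0]
    residual[1:] = row[1:] - row[:-1]
    return residual

def reconstruct(residual):
    """Invert the prediction by a cumulative sum of the residuals."""
    return np.cumsum(residual)

pixels = np.array([120, 121, 123, 122, 125])
res = predictive_residual(pixels)        # [120, 1, 2, -1, 3]
assert np.array_equal(reconstruct(res), pixels)
```

The residuals cluster near zero, so an entropy coder can represent them with far fewer bits than the raw pixel values.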

Another approach to image compression is transform coding. In this process, the image is first mapped to a set of coefficients by some transform (linear or nonlinear), and these coefficients are then quantized to obtain the compressed image. At the decoding end, the coded coefficients are inversely quantized, and the actual image is produced by the inverse transform. A typical transform-based compression system is shown in Fig. 2.

Both predictive coding and transform coding have their own advantages. The former is relatively simple to implement, and the algorithm itself adapts to the original information of the image. The latter generally achieves a higher compression ratio, but at the cost of the complexity of the transform calculation, which also makes the implementation more complex. The evaluation of image compression is usually divided into two aspects: compression performance and compressed image quality. Compression performance is usually measured by the compression ratio, i.e., the ratio of the original data size to the compressed data size.
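For PCA-based compression the ratio can be estimated from what has to be stored: the coefficient vectors, the retained eigenvector basis, and the mean image. The sketch below uses hypothetical figures (ORL-sized 112 × 92 images, 400 training images, 50 retained components); it is an illustrative accounting, not the paper's exact bookkeeping.

```python
def pca_compression_ratio(d, m, k):
    """Compression ratio of storing m images of d pixels each as:
    m coefficient vectors of length k, the d x k eigenvector basis,
    and the d-dimensional mean image."""
    original = d * m
    compressed = k * m + d * k + d
    return original / compressed

# Hypothetical figures: 112 x 92 = 10304 pixels, 400 images, k = 50.
ratio = pca_compression_ratio(10304, 400, 50)
```

Note that the basis and mean are shared across all images, so the per-image cost falls as the number of compressed images grows.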

The quality of the compressed image can be evaluated either subjectively or objectively. Common objective quality evaluation methods include the root mean square error (RMSE), the signal-to-noise ratio (SNR) and the peak signal-to-noise ratio (PSNR).
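The two most common objective measures can be computed directly; a minimal sketch using the standard definitions (RMSE as the root of the mean squared pixel difference, PSNR relative to the peak value 255 for 8-bit images):

```python
import numpy as np

def rmse(original, reconstructed):
    """Root mean square error between two images of the same shape."""
    diff = np.asarray(original, float) - np.asarray(reconstructed, float)
    return np.sqrt(np.mean(diff ** 2))

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB for the given peak pixel value."""
    e = rmse(original, reconstructed)
    return float('inf') if e == 0 else 20.0 * np.log10(peak / e)
```

A lower RMSE (equivalently, a higher PSNR) indicates a more faithful reconstruction.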


K-L transformation is one of the main processes of the PCA method, and it is necessary for realizing facial image compression and reconstruction. The K-L transformation method is classical and easy to implement. Before facial image compression, the basic PCA method first selects some images as training images. Assume each training image has a size of m × n pixels; stretching it column by column yields a vector of dimension d = m × n, so the M training images give the sample vectors x_1, …, x_M.

PCA requires the population dispersion matrix of the training sample set, namely the covariance matrix:

C = (1/M) Σ_{i=1}^{M} (x_i − μ)(x_i − μ)^T, where μ = (1/M) Σ_{i=1}^{M} x_i is the mean training vector.

It is a matrix of dimension d × d, which becomes very large when the images are large.
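The covariance (scatter) matrix of the training set can be computed directly; a minimal NumPy sketch, assuming the training images have already been stretched into the rows of an (M, d) array:

```python
import numpy as np

def scatter_matrix(images):
    """Population covariance matrix of vectorized training images.

    `images` is an (M, d) array: M samples, each a stretched image of
    d pixels. Returns the d x d matrix
    C = (1/M) * sum_i (x_i - mu)(x_i - mu)^T.
    """
    X = np.asarray(images, dtype=float)
    mu = X.mean(axis=0)
    centered = X - mu
    return centered.T @ centered / X.shape[0]
```

For realistic face images (d in the tens of thousands) this d × d matrix is exactly the object that becomes too large to handle directly, which motivates the SVD trick described next.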

SVD decomposition is a common method for dealing with high-dimensional matrices: it can effectively map a high-dimensional matrix into a low-dimensional space, so the eigenvalues of the high-dimensional matrix can be solved easily. The relevant theory is as follows.

If X = [x_1 − μ, …, x_M − μ] denotes the d × M matrix whose columns are the centered training vectors, the covariance matrix can be written as C = (1/M) X X^T,

where the dimension d is usually much larger than the number of samples M, so computing the eigenvectors of the d × d matrix C directly is expensive.

Therefore, construct the matrix R = (1/M) X^T X, which is only of size M × M.

It is easy to find its eigenvalues λ_i and eigenvectors v_i. If R v_i = λ_i v_i with λ_i > 0, then C(X v_i) = λ_i (X v_i), so u_i = X v_i / ‖X v_i‖ is a unit eigenvector of C with the same eigenvalue.

Arranging the eigenvalues from large to small, λ_1 ≥ λ_2 ≥ … ≥ λ_M, the eigenvectors corresponding to the first k largest eigenvalues are retained as the feature images (eigenfaces).
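The small-matrix trick above can be sketched in NumPy. This is an illustrative implementation of the standard eigenface computation, not the authors' code:

```python
import numpy as np

def eigenfaces_via_small_matrix(X):
    """Eigen-decompose C = (1/M) X X^T (d x d) via the small M x M matrix
    R = (1/M) X^T X, where X holds the M centered image vectors as columns.

    If R v = lam v with lam > 0, then C (X v) = lam (X v), so X v (after
    normalization) is a unit eigenvector of C with the same eigenvalue.
    """
    d, M = X.shape
    R = X.T @ X / M                       # small M x M matrix
    lam, V = np.linalg.eigh(R)            # eigenvalues in ascending order
    order = np.argsort(lam)[::-1]         # re-sort from large to small
    lam, V = lam[order], V[:, order]
    keep = lam > 1e-10                    # drop numerically-zero eigenvalues
    lam, V = lam[keep], V[:, keep]
    U = X @ V                             # lift eigenvectors to d dimensions
    U /= np.linalg.norm(U, axis=0)        # normalize each column
    return lam, U
```

Since centered data has rank at most M − 1, the zero eigenvalues are filtered out before normalization to avoid dividing by (near-)zero vectors.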

For any face image x (stretched into a d-dimensional vector), its centered version is projected onto the first k eigenvectors, giving the coefficient vector y = U_k^T (x − μ), where U_k = [u_1, …, u_k].

The resulting coefficient vector can be thought of as a compression of the image. Since the coefficient vector dimension k is much smaller than the original dimension d, the storage required per image is greatly reduced, and the image can be approximately reconstructed as x̂ = μ + U_k y.
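The projection and reconstruction steps amount to two matrix products; a minimal sketch, assuming `U` holds the unit eigenvectors as columns sorted by decreasing eigenvalue and `mu` is the mean training vector:

```python
import numpy as np

def compress_face(x, mu, U, k):
    """Project a centered image vector onto the first k eigenvectors
    (columns of U); the k coefficients are the compressed representation."""
    return U[:, :k].T @ (x - mu)

def reconstruct_face(y, mu, U):
    """Approximate the original image from its coefficient vector."""
    k = y.shape[0]
    return mu + U[:, :k] @ y
```

With all d components retained the round trip is exact; dropping trailing components trades reconstruction error for a smaller coefficient vector.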

The basic principal component analysis method has some disadvantages. When the illumination or position of the face image changes, basic PCA cannot capture these changes effectively. Studies have shown that basic PCA can hardly capture even the simplest consistencies between images unless the information is included in the training images. In addition, basic PCA stretches the pixels of an image (usually by concatenating its columns head to tail) into a vector of high dimension. When the image is large, the dimension of the stretched vector becomes very large, not to mention the covariance matrix of the training images. Although SVD decomposition can be used to approximate the feature images, avoiding the explicit construction of the large covariance matrix, it is not accurate in many cases. Owing to these deficiencies of the PCA method, an improved method named 2DPCA is adopted in this paper.

Let A be an m × n image matrix and x an n-dimensional unit column vector;

we project A onto x through the linear transformation y = A x, which yields an m-dimensional projected feature vector.

Through this analysis, we know that the trace of the covariance matrix of the projected vectors can be used to describe the dispersion degree of the projected samples. That is, the criterion to be maximized is J(x) = tr(S_x),

where S_x = E[(y − E y)(y − E y)^T] is the covariance matrix of the projected feature vectors of the training samples and tr(·) denotes the trace.

The physical meaning of maximizing this criterion is to find a projection direction such that the scatter among all projected training samples is maximized. Substituting y = A x, the criterion can be rewritten as J(x) = x^T G x, where the covariance of the projected samples is expressed through G = (1/M) Σ_{j=1}^{M} (A_j − Ā)^T (A_j − Ā), with Ā the mean image of the M training samples.

The matrix G, called the image scatter matrix, is an n × n nonnegative definite matrix that can be computed directly from the training image matrices.

This J(x) is the generalized total scatter criterion. The normalized vector x that maximizes it, namely the unit eigenvector of G corresponding to the largest eigenvalue, is called the optimal projection axis.

In fact, a single projection direction is usually not enough. In the projection direction set, we choose the orthonormal eigenvectors x_1, …, x_d of G corresponding to the first d largest eigenvalues; projecting an image onto these axes yields its 2DPCA feature matrix.
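The 2DPCA computation described above can be sketched in NumPy. This is an illustrative implementation of the standard 2DPCA formulation, under the assumption that the training set is given as a stack of image matrices:

```python
import numpy as np

def twodpca_axes(images, d):
    """Top-d projection axes of 2DPCA: eigenvectors of the image scatter
    matrix G = (1/M) * sum_j (A_j - Abar)^T (A_j - Abar), an n x n matrix.

    `images` is an (M, m, n) stack of image matrices.
    """
    A = np.asarray(images, dtype=float)
    Abar = A.mean(axis=0)
    centered = A - Abar
    # G accumulates (A_j - Abar)^T (A_j - Abar) over the sample stack
    G = np.einsum('jmi,jmk->ik', centered, centered) / A.shape[0]
    lam, V = np.linalg.eigh(G)              # ascending eigenvalues
    return V[:, ::-1][:, :d]                # top-d eigenvectors

def twodpca_features(image, X):
    """Feature matrix Y = A X: each column is one projected feature vector."""
    return np.asarray(image, float) @ X
```

Note that G is only n × n, built directly from the image matrices without any stretching into long vectors, which is exactly why 2DPCA sidesteps the large covariance matrix of basic PCA.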

This section mainly summarizes the experimental results of the above algorithms, including the degree of image compression and the size of the root mean square error in image reconstruction. This paper mainly assesses the quality of several algorithms based on the size of the reconstruction error.

From Tab. 1, we can see that the PCA algorithm achieves an image compression ratio of about 3, with a very stable effect on images from the ORL database. As shown in Tab. 2, the compression ratio remains essentially unchanged when noise is added.

The tables above show that no matter what kind of noise is added, it has no significant influence on the PCA image compression result. However, adding noise has a large impact on image reconstruction. Tab. 3 shows the root-mean-square error of image reconstruction when noise is added.

In Tab. 4, the mean value of the Gaussian noise is 0 by default. From Tab. 4, we can see that the 2DPCA algorithm also achieves good results when processing noisy images. Compared with the PCA algorithm, 2DPCA has a lower root-mean-square error under the same noise conditions. From Tab. 5, we can see that, compared with Mat PCA, 2DPCA also has a lower root-mean-square error under the same noise.

This article presents an image compression and reconstruction algorithm based on principal component analysis and its improved variant. PCA effectively reduces the dimension of the data while minimizing the error between the extracted components and the original data, so it can be used for data compression and feature extraction. Especially with the development of multimedia image data and information technology, abundant image media contain a great deal of information; in order to store and transmit these image data effectively, more and more attention is being paid to image compression technology. Image compression and reconstruction based on PCA and its improved algorithm are proposed in this paper. The experimental results demonstrate that the implementation method is simple, can realize image compression effectively, and can restore different data images according to the number of principal components, satisfying the needs of different levels of image compression and reconstruction.

Funding Statement: This work was partially supported by the National Natural Science Foundation of China (61876089, 61876185, 61902281, 61375121), the Opening Project of Jiangsu Key Laboratory of Data Science and Smart Software (No. 2019DS301), the Science and Technology Program of Jiangsu Province Construction System (2020JH08), and the Priority Academic Program Development of Jiangsu Higher Education Institutions.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.

- D. Tsiotas and S. Polyzos, “The complexity in the study of spatial networks: An epistemological approach,” Networks & Spatial Economics, vol. 18, no. 1, pp. 1–32, 2018.
- B. Y. Cai and J. Xu, “Digital image compression based on BEMD and PCA,” Computer Engineering and Applications, vol. 22, no. 1, pp. 335–337, 2011.
- Y. Zhang, X. F. Song and D. W. Gong, “A return-cost-based binary firefly algorithm for feature selection,” Information Sciences, vol. 11, no. 2, pp. 418–419, 2017.
- Y. Zhang, D. W. Gong, Y. Hu and W. Q. Zhang, “Feature selection algorithm based on bare bones particle swarm optimization,” Neurocomputing, vol. 14, no. 8, pp. 150–157, 2015.
- Y. Zhang, D. W. Gong and J. Cheng, “Multi-objective particle swarm optimization approach for cost-based feature selection in classification,” IEEE-ACM Transactions on Computational Biology and Bioinformatics, vol. 14, no. 1, pp. 64–75, 2017.
- M. L. Zheng, P. Z. Zhang and W. W. Guo, “Super resolution reconstruction of face image based on learning,” Computer Engineering and Applications, vol. 20, no. 5, pp. 122–130, 2013.
- H. Jiang, “Image compression and reconstruction of principal component analysis,” Electronic Design Engineering, vol. 20, no. 5, pp. 14–18, 2012.
- D. Zheng, Z. Ran, Z. Liu, L. Li and L. Tian, “An efficient bar code image recognition algorithm for sorting system,” Computers Materials & Continua, vol. 64, no. 3, pp. 1885–1895, 2020.
- D. Wu, Y. Liu, Z. Xu and W. Shang, “Design and development of unmanned surface vehicle for meteorological monitoring,” Intelligent Automation & Soft Computing, vol. 26, no. 5, pp. 1123–1138, 2020.
- Y. Xue, T. Tang, W. Pang and A. X. Liu, “Self-adaptive parameter and strategy based particle swarm optimization for large-scale feature selection problems with multiple classifiers,” Applied Soft Computing, vol. 88, no. 4, pp. 1–12, 2020.
- Y. Xue, B. Xue and M. Zhang, “Self-adaptive particle swarm optimization for large-scale feature selection in classification,” ACM Transactions on Knowledge Discovery from Data, vol. 13, no. 5, pp. 1–27, 2019.
- Y. Xue, J. M. Jiang, B. P. Zhao and T. H. Ma, “A self-adaptive artificial bee colony algorithm based on global best for global optimization,” Soft Computing, vol. 22, no. 9, pp. 2935–2952, 2018.
- X. Yu, Y. Chu, F. Jiang, Y. Guo and D. W. Gong, “SVMs classification based two-side cross domain collaborative filtering by inferring intrinsic user and item features,” Knowledge-Based Systems, vol. 14, no. 1, pp. 80–91, 2018.
- T. Liu and F. B. Yang, “Application of principal component analysis in image compression,” Journal of Natural Science, vol. 24, no. 4, pp. 24–28, 2008.
- Y. Xue, Y. Tang, X. Xu, J. Liang and F. Neri, “Multi-objective feature selection with missing data in classification,” IEEE Transactions on Emerging Topics in Computational Intelligence. DOI 10.1109/TETCI.2021.3074147.
- Y. Xue, Y. Wang, J. Liang and A. Slowik, “A self-adaptive mutation neural architecture search algorithm based on blocks,” IEEE Computational Intelligence Magazine. DOI 10.1109/MCI.2021.3084435.
- Y. Xue, H. Zhu and J. Liang, “Adaptive crossover operator based multi-objective binary genetic algorithm for feature selection in classification,” Knowledge Based Systems. DOI 10.1016/j.knosys.2021.107218.
- A. Adebayo, S. Misra, L. Fernández-Sanz and A. Olusola, “Genetic algorithm and tabu search memory with course sandwiching (gats_cs) for university examination timetabling,” Intelligent Automation & Soft Computing, vol. 26, no. 3, pp. 385–396, 2020.
- B. T. Hu and J. W. Wang, “Deep learning for distinguishing computer generated images and natural images: A survey,” Journal of Information Hiding and Privacy Protection, vol. 2, no. 2, pp. 37–47, 2020.

This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.