MTFFNet: a multi-task feature fusion framework for Chinese painting classification.

Jiang, Wei; Wang, Xiaoyu; Ren, Jinchang; Li, Sen; Sun, Meijun; Wang, Zheng; Jin, Jesse S.

Authors

Wei Jiang

Xiaoyu Wang

Jinchang Ren

Sen Li

Meijun Sun

Zheng Wang

Jesse S. Jin



Abstract

Different artists have their own unique painting styles, which can hardly be recognized by ordinary people without professional knowledge. How to intelligently analyze such artistic styles via their underlying features remains a challenging research problem. In this paper, we propose a novel multi-task feature fusion architecture, MTFFNet, for the cognitive classification of traditional Chinese paintings. By taking full advantage of a pre-trained DenseNet backbone, MTFFNet benefits from the fusion of two different types of feature information: semantic features and brush stroke features. These features are learned, for the first time, in an end-to-end manner from the RGB images and an auxiliary gray-level co-occurrence matrix (GLCM) to enhance their discriminative power. Extensive experiments demonstrate that MTFFNet achieves significantly better classification performance than many state-of-the-art approaches.

The proposed model consists of two branches: one for top-level RGB feature learning and one for low-level brush stroke feature learning. The semantic branch takes the original image of a traditional Chinese painting as input and extracts its color and semantic information, while the brush stroke branch takes the GLCM feature map as input and extracts texture and edge information. A multi-kernel learning SVM (support vector machine) serves as the final classifier. Experimental evaluation shows that this method improves the accuracy of Chinese painting classification and enhances generalization ability. By adopting the end-to-end multi-task feature fusion strategy, MTFFNet extracts richer semantic and texture information from the image. Compared with state-of-the-art Chinese painting classification methods, the proposed approach achieves much higher accuracy on the proposed datasets without sacrificing speed or efficiency, providing an effective solution for the cognitive classification of Chinese ink paintings.
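For illustration only, the sketch below shows how such a two-branch fusion model might be wired up in PyTorch: a pre-trained DenseNet-121 backbone extracts semantic features from the RGB painting, a small convolutional branch processes the GLCM map, and the two feature vectors are concatenated before classification. This is not the authors' code; the GLCM parameters, class count, texture-branch design and the linear classification head (standing in for the paper's multi-kernel SVM) are all placeholder assumptions.

import numpy as np
import torch
import torch.nn as nn
from torchvision import models
from skimage.feature import graycomatrix


def glcm_feature_map(gray_u8, levels=16):
    """Build a GLCM-based texture map from an 8-bit grayscale image and
    return it as a single-channel float tensor for the texture branch."""
    quantised = (gray_u8 // (256 // levels)).astype(np.uint8)
    glcm = graycomatrix(quantised, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    fmap = glcm.mean(axis=(2, 3)).astype(np.float32)   # average over distances/angles
    return torch.from_numpy(fmap).unsqueeze(0)         # shape: (1, levels, levels)


class TwoBranchFusionNet(nn.Module):
    """Semantic branch: DenseNet-121 features on the RGB painting.
    Texture branch: a small CNN on the GLCM map (placeholder design).
    The fused vector feeds a linear head here; the paper instead feeds the
    fused features to a multi-kernel learning SVM."""

    def __init__(self, num_classes=10):
        super().__init__()
        backbone = models.densenet121(weights=models.DenseNet121_Weights.DEFAULT)
        self.rgb_branch = backbone.features              # -> (B, 1024, h, w)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.glcm_branch = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(1024 + 64, num_classes)

    def forward(self, rgb, glcm_map):
        semantic = self.pool(self.rgb_branch(rgb)).flatten(1)   # color/semantic cues
        texture = self.glcm_branch(glcm_map).flatten(1)         # brush-stroke/texture cues
        fused = torch.cat([semantic, texture], dim=1)           # feature fusion
        return self.head(fused)


# Example usage with random data (batch of 2 paintings):
model = TwoBranchFusionNet(num_classes=10).eval()
rgb = torch.rand(2, 3, 224, 224)
glcm = torch.stack([glcm_feature_map(np.random.randint(0, 256, (224, 224), dtype=np.uint8))
                    for _ in range(2)])
with torch.no_grad():
    logits = model(rgb, glcm)                                   # shape: (2, 10)

The linear head above is only a stand-in so the sketch runs end to end; in the paper the fused features are classified by a multi-kernel learning SVM.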

Citation

JIANG, W., WANG, X., REN, J., LI, S., SUN, M., WANG, Z. and JIN, J.S. 2021. MTFFNet: a multi-task feature fusion framework for Chinese painting classification. Cognitive computation [online], 13(5), pages 1287-1296. Available from: https://doi.org/10.1007/s12559-021-09896-9

Journal Article Type Article
Acceptance Date Feb 3, 2021
Online Publication Date Sep 10, 2021
Publication Date Sep 30, 2021
Deposit Date May 28, 2024
Publicly Available Date May 28, 2024
Journal Cognitive computation
Print ISSN 1866-9956
Electronic ISSN 1866-9964
Publisher Springer
Peer Reviewed Peer Reviewed
Volume 13
Issue 5
Pages 1287-1296
DOI https://doi.org/10.1007/s12559-021-09896-9
Keywords Multi-task feature fusion; Traditional Chinese paintings; Gray-level co-occurrence matrix
Public URL https://rgu-repository.worktribe.com/output/2058630
