
A computational visual saliency model for images.

Authors

Manjula Narayanaswamy



Contributors

Yafan Zhao (Supervisor)

Abstract

Human eyes receive an enormous amount of information from the visual world, and it is highly difficult for the human brain to process all of this information simultaneously. Hence the human visual system selectively processes the incoming information by attending only to the relevant regions of interest in a scene. Visual saliency characterises the parts of a scene that appear to stand out from their neighbouring regions and attract the human gaze. Modelling saliency-based visual attention has been an active research area in recent years. Saliency models are of vital importance in many computer vision tasks such as image and video compression, object segmentation, target tracking, remote sensing and robotics. Many of these applications deal with high-resolution images and real-time videos, and it is a challenge to process this excessive amount of information with limited computational resources. Employing saliency models in these applications limits the processing of irrelevant information and will further improve their efficiency and performance. Therefore, a saliency model with good prediction accuracy and low computation time is highly essential. This thesis presents a low-computation wavelet-based visual saliency model designed to predict the regions of human eye fixations in images. The proposed model uses two channels of information, luminance (Y) and chrominance (Cr), in the YCbCr colour space for saliency computation. These two channels are decomposed to their lowest resolution using the two-dimensional Discrete Wavelet Transform (DWT) to extract local contrast features at multiple scales. The extracted local contrast features are integrated across levels using a two-dimensional entropy-based feature combination scheme to derive a combined map. The combined map is normalized and enhanced using a natural logarithm transformation to derive the final saliency map.
The performance of the model has been evaluated qualitatively and quantitatively on two large benchmark image datasets. The experimental results show that the proposed model achieves better prediction accuracy with a significant reduction in computation time compared to the existing benchmark models, delivering nearly 25% computational savings relative to the fastest of those benchmarks.
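The pipeline described in the abstract can be sketched in broad strokes as follows. This is a minimal illustrative sketch, not the thesis's implementation: it substitutes a Haar wavelet and a simple histogram-based entropy for the thesis's exact wavelet choice and two-dimensional entropy scheme, and all function names and parameters here are assumptions for illustration.

```python
import numpy as np

def rgb_to_y_cr(rgb):
    """Standard BT.601 full-range conversion to the Y and Cr channels."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cr = 0.5 * r - 0.4187 * g - 0.0813 * b + 128.0
    return y, cr

def haar_dwt2(x):
    """One level of the 2-D Haar DWT: approximation plus three detail
    sub-bands (horizontal, vertical, diagonal). Assumes even dimensions."""
    a, b = x[0::2, 0::2], x[0::2, 1::2]
    c, d = x[1::2, 0::2], x[1::2, 1::2]
    ll = (a + b + c + d) / 4.0   # approximation
    lh = (a + b - c - d) / 4.0   # horizontal detail
    hl = (a - b + c - d) / 4.0   # vertical detail
    hh = (a - b - c + d) / 4.0   # diagonal detail
    return ll, (lh, hl, hh)

def entropy(m, bins=32):
    """Shannon entropy of a feature map's value histogram (a stand-in
    for the thesis's two-dimensional entropy measure)."""
    h, _ = np.histogram(m, bins=bins)
    p = h / h.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def saliency_map(y, cr, levels=3):
    """Decompose each channel over several DWT levels, weight the local
    contrast at each scale by its entropy, sum, and log-normalize."""
    combined = np.zeros_like(y, dtype=float)
    for ch in (y, cr):
        x = ch.astype(float)
        for k in range(1, levels + 1):
            x, (lh, hl, hh) = haar_dwt2(x)
            feat = np.sqrt(lh**2 + hl**2 + hh**2)     # local contrast
            up = np.kron(feat, np.ones((2**k, 2**k))) # back to full size
            combined += entropy(feat) * up
    sal = np.log1p(combined)                 # natural-log enhancement
    sal -= sal.min()
    return sal / sal.max() if sal.max() > 0 else sal
```

The sketch assumes image dimensions divisible by 2**levels; in practice the input would be padded or resized first, and the entropy-based combination in the thesis operates on the two-dimensional structure of the maps rather than a value histogram.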

Citation

NARAYANASWAMY, M. 2021. A computational visual saliency model for images. Robert Gordon University, MRes thesis. Hosted on OpenAIR [online]. Available from: https://doi.org/10.48526/rgu-wt-1447310

Thesis Type: Thesis
Deposit Date: Sep 8, 2021
Publicly Available Date: Sep 8, 2021
Keywords: Image processing; Video processing; Visual saliency; Fixation prediction; Rendering; Image entropy; Discrete wavelet transform
Public URL: https://rgu-repository.worktribe.com/output/1447310
Publisher URL: https://doi.org/10.48526/rgu-wt-1447310
Award Date: Jun 30, 2021
