A computational visual saliency model for images.

Authors: Narayanaswamy, Manjula
Contributors: Yafan Zhao (Supervisor)
Abstract
Human eyes receive an enormous amount of information from the visual world. It is highly difficult for the human brain to process this excessive information simultaneously. Hence the human visual system selectively processes the incoming information by attending only to the relevant regions of interest in a scene. Visual saliency characterises the parts of a scene that appear to stand out from their neighbouring regions and attract the human gaze. Modelling saliency-based visual attention has been an active research area in recent years. Saliency models are of vital importance in many computer vision tasks, such as image and video compression, object segmentation, target tracking, remote sensing and robotics. Many of these applications deal with high-resolution images and real-time video, and it is a challenge to process this excessive amount of information with limited computational resources. Employing saliency models in these applications limits the processing of irrelevant information and thereby improves their efficiency and performance. Therefore, a saliency model with good prediction accuracy and low computation time is highly desirable.

This thesis presents a low-computation wavelet-based visual saliency model designed to predict the regions of human eye fixations in images. The proposed model uses two channels of information, luminance (Y) and chrominance (Cr), in the YCbCr colour space for saliency computation. These two channels are decomposed to their lowest resolution using the two-dimensional Discrete Wavelet Transform (DWT) to extract local contrast features at multiple scales. The extracted local contrast features are integrated across levels using a two-dimensional entropy-based feature combination scheme to derive a combined map. The combined map is normalised and enhanced using a natural logarithm transformation to derive the final saliency map.
The performance of the model has been evaluated qualitatively and quantitatively on two large benchmark image datasets. The experimental results show that the proposed model achieves better prediction accuracy, with a significant reduction in computation time compared with existing benchmark models: it achieves nearly 25% computational savings over the benchmark model with the lowest computation time.
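The pipeline summarised in the abstract can be sketched in a few lines. The following is a minimal illustration, not the thesis' implementation: it assumes a Haar wavelet, a simple histogram-based Shannon entropy as the per-level weight, and nearest-neighbour upsampling, where the thesis' exact wavelet choice and two-dimensional entropy scheme may differ.

```python
import numpy as np

def haar_dwt2(a):
    """One level of a 2-D Haar DWT (a stand-in for the thesis' wavelet).
    Returns the approximation band and the three detail bands."""
    a = a[: a.shape[0] - a.shape[0] % 2, : a.shape[1] - a.shape[1] % 2]
    lo = (a[:, ::2] + a[:, 1::2]) / 2          # row-wise average
    hi = (a[:, ::2] - a[:, 1::2]) / 2          # row-wise difference
    ll = (lo[::2] + lo[1::2]) / 2              # approximation
    lh = (lo[::2] - lo[1::2]) / 2              # horizontal detail
    hl = (hi[::2] + hi[1::2]) / 2              # vertical detail
    hh = (hi[::2] - hi[1::2]) / 2              # diagonal detail
    return ll, (lh, hl, hh)

def _resize_nn(m, shape):
    """Nearest-neighbour upsampling back to the input resolution."""
    r = np.arange(shape[0]) * m.shape[0] // shape[0]
    c = np.arange(shape[1]) * m.shape[1] // shape[1]
    return m[np.ix_(r, c)]

def _entropy(m, bins=16):
    """Shannon entropy of the map's histogram (assumed weighting scheme)."""
    hist, _ = np.histogram(m, bins=bins)
    p = hist[hist > 0] / hist.sum()
    return -np.sum(p * np.log2(p))

def saliency_map(y, cr):
    """Multi-scale local contrast from Y and Cr, fused with entropy
    weights and enhanced with a natural-log transformation."""
    shape = y.shape
    combined = np.zeros(shape, dtype=float)
    for channel in (y, cr):
        approx = channel.astype(float)
        while min(approx.shape) >= 2:          # decompose to lowest resolution
            approx, (lh, hl, hh) = haar_dwt2(approx)
            contrast = np.sqrt(lh**2 + hl**2 + hh**2)  # local contrast at this scale
            contrast = _resize_nn(contrast, shape)
            combined += _entropy(contrast) * contrast  # entropy-weighted fusion
    s = np.log1p(combined)                     # natural-log enhancement
    s -= s.min()
    return s / (s.max() + 1e-12)               # normalise to [0, 1]
```

Given Y and Cr channel arrays of the same shape, `saliency_map(y, cr)` returns a map in [0, 1] at the input resolution, with higher values where multi-scale contrast is strong in either channel.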
Citation
NARAYANASWAMY, M. 2021. A computational visual saliency model for images. Robert Gordon University, MRes thesis. Hosted on OpenAIR [online]. Available from: https://doi.org/10.48526/rgu-wt-1447310
Thesis Type | Thesis
---|---
Deposit Date | Sep 8, 2021
Publicly Available Date | Sep 8, 2021
Keywords | Image processing; Video processing; Visual saliency; Fixation prediction; Rendering; Image entropy; Discrete wavelet transform
Public URL | https://rgu-repository.worktribe.com/output/1447310
Publisher URL | https://doi.org/10.48526/rgu-wt-1447310
Files
NARAYANASWAMY 2021 A computational visual saliency model (PDF, 4.4 MB)
Licence: https://creativecommons.org/licenses/by-nc/4.0/
Copyright: the author and Robert Gordon University
You might also like
- A low-complexity wavelet-based visual saliency model to predict fixations. (2020) Conference proceeding
- Algorithms and methods for video transcoding. (2019) Thesis
- A computational model of visual attention. (2017) Thesis
- A new video quality metric for compressed video. (2012) Thesis
- Complexity management of H.264/AVC video compression. (2006) Thesis