Abstract:
A new computational model of visual attention for intelligent robots is proposed. Motivated by biology, the model simulates both the bottom-up and top-down mechanisms of human visual selective attention. Multiple low-level features of the input image are extracted at multiple scales, and the amplitude spectra of these feature maps are analyzed in the frequency domain to construct the corresponding saliency map in the spatial domain. Based on the saliency map, the positions and sizes of potential focuses of attention are computed, and attention shifts among these focuses according to the given target. The model is tested on many natural images, and qualitative and quantitative analyses of the experimental results are presented. The results are consistent with human visual attention, indicating that the model is effective in both attention performance and computational speed.
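The following is a minimal sketch of a frequency-domain saliency computation of the kind the abstract outlines. The paper's exact feature set, scales, and spectral analysis are not specified here, so this sketch substitutes the well-known spectral-residual approach on a single grayscale channel as a stand-in; the file names and parameters are hypothetical.

```python
# Sketch: saliency from the log-amplitude spectrum of one feature channel.
# This is an illustrative stand-in (spectral residual), not the paper's algorithm.
import numpy as np
import cv2


def spectral_saliency(gray: np.ndarray, blur_ksize: int = 9) -> np.ndarray:
    """Build a spatial-domain saliency map from the amplitude spectrum of a grayscale image."""
    img = gray.astype(np.float32)
    f = np.fft.fft2(img)
    log_amplitude = np.log1p(np.abs(f))          # amplitude spectrum (log scale)
    phase = np.angle(f)                          # phase spectrum is kept unchanged
    residual = log_amplitude - cv2.blur(log_amplitude, (3, 3))  # deviation from local average
    # Inverse transform of the residual spectrum gives a spatial saliency map.
    saliency = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    saliency = cv2.GaussianBlur(saliency, (blur_ksize, blur_ksize), 0)
    return cv2.normalize(saliency, None, 0.0, 1.0, cv2.NORM_MINMAX)


if __name__ == "__main__":
    image = cv2.imread("example.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input image
    sal = spectral_saliency(image)
    cv2.imwrite("saliency.png", (sal * 255).astype(np.uint8))
```

In a full model of the kind described, the same spectral analysis would be applied to several feature maps (e.g., color, intensity, orientation) at multiple scales, and the resulting maps combined before locating focuses of attention.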