In addition to this code translation, we have been doing more research on symmetry and have found several other papers that deal with the subject in relation to visual attention. Two of the papers we are reading explicitly mention Itti and Koch and contrast their symmetry-based methods with the Itti et al. model.
I fixed some bugs in our normalization code: the final maps were being computed with integer arithmetic instead of floats, so our results look like normal saliency maps now. The DoG filter we implemented based on Walther's Saliency Toolbox still doesn't give us the kind of results we would like; often the images come out black after being fed through the filter, or have only one very intense region of marginal importance. So there are still some bugs to work out with that filter, but otherwise things are moving smoothly and our results look good.
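For reference, here is a rough sketch of the kind of center-surround DoG filtering and float-based normalization we are talking about. This is not our actual toolbox translation, just an illustrative Python approximation; the sigma values, the clipping, and the min-max rescaling are all assumptions.

```python
# Minimal sketch of a difference-of-Gaussians (DoG) filter plus float-based
# normalization. NOT our translation of Walther's Saliency Toolbox; the sigmas
# and the normalization scheme are placeholder choices for illustration only.
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_filter(image, sigma_center=2.0, sigma_surround=8.0):
    """Apply a center-surround DoG filter to a 2-D feature map."""
    img = image.astype(np.float64)           # work in floats, not integers
    center = gaussian_filter(img, sigma_center)
    surround = gaussian_filter(img, sigma_surround)
    response = center - surround             # center minus surround
    return np.clip(response, 0.0, None)      # keep excitatory responses only

def normalize_map(saliency, eps=1e-8):
    """Rescale a map to [0, 1] using floating-point arithmetic."""
    sal = saliency.astype(np.float64)
    lo, hi = sal.min(), sal.max()
    if hi - lo < eps:                         # avoid an all-black output when
        return np.zeros_like(sal)             # the map is (near-)constant
    return (sal - lo) / (hi - lo)

if __name__ == "__main__":
    feature_map = np.random.rand(128, 128)    # stand-in for a real feature map
    filtered = normalize_map(dog_filter(feature_map))
    print(filtered.min(), filtered.max())     # should span roughly [0, 1]
```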
Our plans for wrapping up the project include fully implementing this bilateral symmetry filter and fixing any normalization quirks in the code. We then aim to perform a small experiment asking participants to rate our saliency maps against other methods.
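To make that goal a little more concrete, here is a very rough sketch of what a bilateral symmetry response could look like: score each pixel by how well its local neighborhood matches its own mirror image. This is only a placeholder illustration of the idea, not the operator from the Kootstra paper or our planned implementation; the window size and the similarity score are assumptions.

```python
# Toy bilateral symmetry map: compare each pixel's window to its horizontal
# mirror. Purely illustrative; window radius and scoring are assumed, and a
# real implementation would be multi-scale and far more efficient.
import numpy as np

def bilateral_symmetry_map(image, radius=8):
    """Score each pixel by how well its neighborhood matches its mirror."""
    img = image.astype(np.float64)
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            patch = img[y - radius:y + radius + 1, x - radius:x + radius + 1]
            mirrored = patch[:, ::-1]                  # flip left-right
            diff = np.abs(patch - mirrored).mean()     # 0 => perfectly symmetric
            out[y, x] = 1.0 / (1.0 + diff)             # higher = more symmetric
    return out
```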
And now, pretty pictures (Original, LAB, RGB):
Papers:
Biologically Inspired Saliency Map Model for Bottom-up Visual Attention: http://www.springerlink.com/content/7wxq0npr7b09hlj3/fulltext.pdf
Paying Attention to Symmetry: http://www.ai.rug.nl/~gert/download/kootstra08bmvc.pdf