A Low Overhead Progressive Transmission for Visual Descriptor Based on Image Saliency
Yegang Du, Zhiyang Li, Milos Stojmenovic, Wenyu Qu and Heng Qi
A typical mobile visual search (MVS) application generally follows the client-server architecture. Query images or their descriptors are transmitted from the mobile client to the remote server over a wireless network in order to retrieve similar images from the database maintained on the server. Because the wireless network is bandwidth constrained, transmission latency is a bottleneck in current MVS systems. Several recent works have proposed progressive transmission strategies to reduce this latency. The two main concerns in progressive transmission are finding a proper transmission priority and compensating for the recognition-rate degradation caused by transmitting only part of the data. To address these two issues, a novel MVS framework is proposed in this paper, consisting of two main parts: a new progressive transmission model based on image saliency (MVSS) and a new matching metric designed for matching salient image parts against whole images. Extensive experiments on the public Stanford image set are conducted to evaluate the proposed MVSS system, and the results demonstrate that our framework not only reduces transmission latency but also achieves better retrieval accuracy compared with existing progressive transmission mechanisms.
Keywords: Saliency; bag of words; distance algorithm; mobile visual search; image retrieval
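As a rough illustration of the saliency-prioritized progressive transmission idea summarized above, the following Python sketch orders local descriptors by the saliency value at their keypoint locations and emits them in fixed-size batches, so the most salient descriptors would be transmitted first. The function names, chunk size, and data shapes are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def saliency_priority_order(keypoints, saliency_map):
    """Return descriptor indices sorted by the saliency value at each
    keypoint location (most salient first)."""
    scores = np.array([saliency_map[int(y), int(x)] for (x, y) in keypoints])
    return np.argsort(-scores)

def progressive_chunks(descriptors, order, chunk_size=64):
    """Yield descriptor batches in priority order; each batch corresponds
    to one transmission round in a progressive scheme."""
    for start in range(0, len(order), chunk_size):
        yield descriptors[order[start:start + chunk_size]]

# Hypothetical usage: 200 descriptors (128-D) with pixel keypoints and a
# saliency map normalized to [0, 1]; the client would keep sending batches
# until the server reports a confident match.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    descriptors = rng.random((200, 128), dtype=np.float32)
    keypoints = rng.integers(0, 256, size=(200, 2))
    saliency_map = rng.random((256, 256))
    order = saliency_priority_order(keypoints, saliency_map)
    for i, batch in enumerate(progressive_chunks(descriptors, order)):
        print(f"round {i}: sending {len(batch)} descriptors")
```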