Joint Rain Detection and Removal from a Single Image with Contextualized Deep Networks

IEEE Transactions on Pattern Analysis and Machine Intelligence, pp. 1377-1393, 2020.


Abstract:

Rain streaks, particularly in heavy rain, not only degrade visibility but also make many computer vision algorithms fail to function properly. In this paper, we address this visibility problem by focusing on single-image rain removal, even in the presence of dense rain streaks and rain-streak accumulation, which is visually similar to mis...

Introduction
  • Restoring images degraded by rain is beneficial to many computer vision applications for outdoor scenes.
  • Rain reduces visibility significantly, which can impair many computer vision systems.
  • Rain causes two types of visibility degradation.
  • Rain-streak accumulation scatters light out of and into the line of sight, severely reducing visibility.
  • Nearby rain streaks generate specular highlights and occlude the background scene.
  • In heavy rain, these streaks are diverse in shape, size, and direction, introducing severe visibility degradation
Highlights
  • Restoring images degraded by rain is beneficial to many computer vision applications for outdoor scenes
  • We compare our method with state-of-the-art methods on a few benchmark datasets: (1) Rain12 [34] (available at http://yu-li.github.io/), which includes 12 synthesized rain images with only one type of rain streak, and Rain100L, a synthesized dataset with only one type of rain streak; (2) Rain20L, a subset of Rain100L
  • The rain streaks are synthesized in two ways: (1) the photorealistic rendering technique proposed by [18]; (2) simulated sharp line streaks along a certain direction, with a small variation within an image
  • We have introduced a new deep learning based method to remove rain from a single image, even in the presence of rain streak accumulation and heavy rain
  • A new region-dependent rain image model is proposed for additional rain detection and is further extended to simulate rain accumulation and heavy rains
  • To restore images captured in environments with both rain accumulation and heavy rain, we introduced a recurrent rain detection and removal network that progressively removes rain streaks, embedded with the rain-accumulation removal network
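The models referenced above can be written out explicitly. The following is a sketch based on the formulation in the JORDER papers; the notation is reproduced from memory and may differ slightly from [53]:

```latex
% Conventional additive rain model: observed image O, background B, rain-streak layer S.
O = B + S
% Region-dependent model: a binary map R marks rain regions (R = 1 where rain is visible),
% which enables joint detection (predicting R) and removal (predicting S).
O = B + S R
% Heavy rain: an overlap of s streak layers with different shapes and directions.
O = B + \sum_{t=1}^{s} S_t R
% Rain accumulation (atmospheric veiling): global atmospheric light A and
% transmittance \alpha, analogous to a haze model.
O = \alpha \left( B + \sum_{t=1}^{s} S_t R \right) + (1 - \alpha) A
```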
Methods
  • The authors compare six versions of their approach, JORDER- (the baseline), JORDER (Section 4 of [53]), JORDER-R (Section 5.1 of [53]), JORDER-R-DEVEIL (Section 5.2 of [53]), JORDER-E, and JORDER-E-DEVEIL (JORDER-E with detail-preserving rain-accumulation removal), with state-of-the-art methods: image decomposition (ID) [30], CNN-based rain drop removal (CNN) [10], discriminative sparse coding (DSC) [37], layer priors (LP) [34], deep detail network (DetailNet) [14], directional global sparse model (UGSM) [8], joint convolutional analysis and synthesis sparse representation (JCAS) [23], density-aware multi-stream dense network (DID-MDN) [55], conditional generative adversarial network (ID-CGAN) [56], and a common CNN baseline for image processing, SRCNN [9].
  • SRCNN and DetailNet are trained from scratch.
  • For evaluations on synthesized data, the authors train the model with the corresponding training data from scratch, without any fine-tuning.
  • The authors evaluate results only in the luminance channel, because luminance has the most significant impact on how the human visual system perceives image quality.
  • The authors' results and code are publicly available
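The luminance-only protocol above can be sketched as follows. This is a generic illustration of Y-channel PSNR using ITU-R BT.601 luma coefficients, not the authors' released evaluation code; the function names are mine:

```python
import numpy as np

def rgb_to_y(img):
    """ITU-R BT.601 luma for RGB images in [0, 255] (studio-swing variant)."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    return 0.257 * r + 0.504 * g + 0.098 * b + 16.0

def psnr_y(reference, restored, peak=255.0):
    """PSNR computed only on the luminance (Y) channel."""
    err = rgb_to_y(reference.astype(np.float64)) - rgb_to_y(restored.astype(np.float64))
    mse = np.mean(err ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

SSIM would be computed on the same Y channel under this protocol.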
Results
  • (3) Rain100H, a synthesized dataset with five streak directions.
  • Note that while it is rare for a real rain image to contain rain streaks in many different directions, synthesizing this kind of image for training can boost the capacity of the network.
  • The images for synthesizing Rain100L, Rain20L and Rain100H are selected from BSD200 [38].
  • The authors release the training and testing sets, as well as the image rendering code, to the public
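The second synthesis strategy (sharp line streaks along one direction, added on top of a clean background) can be sketched as below. This is a minimal illustration of the additive model, not the authors' released rendering code; the kernel construction and all parameter values are illustrative assumptions:

```python
import numpy as np

def synthesize_rain(image, streak_len=15, angle_deg=70.0, density=0.02,
                    intensity=0.8, seed=0):
    """Add simulated line streaks along one direction to an HxWx3 uint8 image
    (additive model: rainy = background + streak layer)."""
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    # Sparse random seed pixels that will be smeared into streaks.
    noise = (rng.random((h, w)) < density).astype(np.float64)
    # Line-shaped (motion-blur style) kernel at the given angle.
    k = streak_len
    kernel = np.zeros((k, k))
    theta = np.deg2rad(angle_deg)
    for t in np.linspace(-k / 2, k / 2, 2 * k):
        x = int(round(k / 2 + t * np.cos(theta)))
        y = int(round(k / 2 + t * np.sin(theta)))
        if 0 <= x < k and 0 <= y < k:
            kernel[y, x] = 1.0
    kernel /= kernel.sum()
    # Naive direct convolution to keep the sketch dependency-free.
    streaks = np.zeros((h, w))
    pad = np.pad(noise, k // 2)
    for i in range(h):
        for j in range(w):
            streaks[i, j] = np.sum(pad[i:i + k, j:j + k] * kernel)
    if streaks.max() > 0:
        streaks = np.clip(streaks / streaks.max(), 0.0, 1.0)
    rainy = np.clip(image.astype(np.float64)
                    + intensity * 255.0 * streaks[..., None], 0, 255)
    return rainy.astype(np.uint8)
```

Varying `angle_deg` per streak layer would mimic the multi-direction Rain100H setting.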
Conclusion
  • The authors have introduced a new deep learning based method to remove rain from a single image, even in the presence of rain streak accumulation and heavy rain.
  • A new region-dependent rain image model is proposed for additional rain detection and is further extended to simulate rain accumulation and heavy rains.
  • Based on this model, the authors developed a fully convolutional network that jointly detects and removes rain.
  • Evaluations on real images demonstrated that the method outperforms state-of-the-art methods
Tables
  • Table1: PSNR results among different methods
  • Table2: SSIM results among different methods
  • Table3: The time complexity (in seconds) of JORDER compared with state-of-the-art methods. JR and JRD denote JORDER-R and JORDER-R-DEVEIL, respectively
  • Table4: The error rate of VGG-19 with / without rain removal as a preprocessing on ImageNet-1k validation dataset
  • Table5: The semantic segmentation and object detection performance of pretrained models with / without rain removal as a preprocessing on
  • Table6: PSNR and SSIM results of the four versions
  • Table7: Objective evaluation for the effect of contextualized dilated convolution
  • Table8: The performance of JORDER network with and w/o contextualized dilated convolutions (CDC). The parallel one is illustrated in Fig. 3, and the sequential one signifies that the convolution paths with different dilated factors are chained together
  • Table9: The performance of JORDER networks where dilated convolutions are replaced by pooling and up-sampling layers (JPS), and stride convolution and transposed convolution layers (JST)
  • Table10: The objective evaluation when the detected rain masks are inaccurate
  • Table11: The PSNR results in the case of over-detection and under-detection
  • Table12: The SSIM results in the case of over-detection and under-detection
  • Table13: Ablation analysis for the technical improvements compared to
  • Table14: Comparison of the rank product of different methods. The smaller, the better
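The "parallel" variant of the contextualized dilated convolution (Table 8) can be illustrated with a toy numpy sketch: paths with different dilation factors see different context sizes and are aggregated. The single-channel kernels and summation-based aggregation here are simplifying assumptions for illustration, not the network's exact design:

```python
import numpy as np

def dilated_conv2d(x, kernel, dilation):
    """'Same'-padded 2-D convolution with a dilated (atrous) kernel."""
    kh, kw = kernel.shape
    eff_h = (kh - 1) * dilation + 1  # effective receptive-field height
    eff_w = (kw - 1) * dilation + 1
    pad_h, pad_w = eff_h // 2, eff_w // 2
    xp = np.pad(x, ((pad_h, pad_h), (pad_w, pad_w)))
    out = np.zeros_like(x, dtype=np.float64)
    # Each kernel tap samples the input at a dilated offset.
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * xp[i * dilation:i * dilation + x.shape[0],
                                     j * dilation:j * dilation + x.shape[1]]
    return out

def contextualized_dilated_block(x, kernels, dilations=(1, 2, 3)):
    """Parallel paths with different dilation factors, aggregated by summation."""
    return sum(dilated_conv2d(x, k, d) for k, d in zip(kernels, dilations))
```

A "sequential" variant would instead chain the dilated convolutions one after another.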
Related work
  • Rain image recovery from video sequences [2], [4], [5], [7], [16]–[19], [58] has been widely explored. Garg et al. [16]–[19], [43] first construct an appearance model to describe rain streaks and exploit it to detect rain pixels in video. Zhang et al. [58] and Brewer et al. [5] focus on the chromaticity and shape of rain streaks, respectively. Other methods construct novel features to model and detect rain streaks, such as frequency-domain analysis [2], histograms of streak orientation [4], and generalized low-rank models [7]. These methods make full use of the rich information in videos and the temporal redundancy of adjacent frames to identify rain streaks. In contrast, our method jointly detects and removes rain from only a single image.
Reference
  • C. Ancuti, C. O. Ancuti, T. Haber, and P. Bekaert. Enhancing underwater images and videos by fusion. In Proc. IEEE Int’l Conf. Computer Vision and Pattern Recognition, pages 81–88, June 2012.
  • P. C. Barnum, S. Narasimhan, and T. Kanade. Analysis of rain and snow in frequency space. Int’l Journal of Computer Vision, 86(2-3):256–274, 2010.
  • D. Berman, T. Treibitz, and S. Avidan. Non-local image dehazing. In Proc. IEEE Int’l Conf. Computer Vision and Pattern Recognition, pages 1674–1682, June 2016.
  • J. Bossu, N. Hautiere, and J.-P. Tarel. Rain or snow detection in image sequences through use of a histogram of orientation of streaks. International journal of computer vision, 93(3):348–367, 2011.
  • N. Brewer and N. Liu. Using the shape characteristics of rain to identify and remove rain from video. In Joint IAPR International Workshops on SPR and SSPR, pages 451–458, 2008.
  • B. Cai, X. Xu, K. Jia, C. Qing, and D. Tao. Dehazenet: An end-to-end system for single image haze removal. IEEE Trans. on Image Processing, 25(11):5187–5198, Nov 2016.
  • Y.-L. Chen and C.-T. Hsu. A generalized low-rank appearance model for spatio-temporally correlated rain streaks. In Proceedings of the IEEE International Conference on Computer Vision, pages 1968–1975, 2013.
  • L.-J. Deng, T.-Z. Huang, X.-L. Zhao, and T.-X. Jiang. A directional global sparse model for single image rain removal. Applied Mathematical Modelling, 59:662 – 679, 2018.
  • C. Dong, C. Loy, K. He, and X. Tang. Image super-resolution using deep convolutional networks. TPAMI, 2015.
  • D. Eigen, D. Krishnan, and R. Fergus. Restoring an image taken through a window covered with dirt or rain. In Proc. IEEE Int’l Conf. Computer Vision, December 2013.
  • M. Everingham, S. M. A. Eslami, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The pascal visual object classes challenge: A retrospective. Int’l Journal of Computer Vision, 111(1):98–136, January 2015.
  • G. D. Finlayson, S. D. Hordley, and P. Morovic. Colour constancy using the chromagenic constraint. In Proc. IEEE Int’l Conf. Computer Vision and Pattern Recognition, volume 1, pages 1079–1086 vol. 1, June 2005.
  • X. Fu, J. Huang, X. Ding, Y. Liao, and J. Paisley. Clearing the skies: A deep network architecture for single-image rain removal. IEEE Trans. on Image Processing, 26(6):2944–2956, June 2017.
  • X. Fu, J. Huang, X. Ding, Y. Liao, and J. Paisley. Removing rain from single images via a deep detail network. In Proc. IEEE Int’l Conf. Computer Vision and Pattern Recognition, Honolulu, Hawaii, USA, July 2017.
  • B. V. Funt and G. D. Finlayson. Color constant color indexing. IEEE Trans. on Pattern Analysis and Machine Intelligence, 17(5):522–529, May 1995.
  • K. Garg and S. K. Nayar. Detection and removal of rain from videos. In Proc. IEEE Int’l Conf. Computer Vision and Pattern Recognition, volume 1, pages I–528, 2004.
  • K. Garg and S. K. Nayar. When does a camera see rain? In Proc. IEEE Int’l Conf. Computer Vision, volume 2, pages 1067–1074, 2005.
  • K. Garg and S. K. Nayar. Photorealistic rendering of rain streaks. In ACM Trans. Graphics, volume 25, pages 996–1002, 2006.
  • K. Garg and S. K. Nayar. Vision and rain. Int’l Journal of Computer Vision, 75(1):3–27, 2007.
  • L. A. Gatys, A. S. Ecker, and M. Bethge. A neural algorithm of artistic style. arXiv:1508.06576, 2015.
  • T. Gevers and A. W. Smeulders. Color-based object recognition. Pattern Recognition, 32(3):453 – 464, 1999.
  • A. Gijsenij, T. Gevers, and J. van de Weijer. Computational color constancy: Survey and experiments. IEEE Trans. on Image Processing, 20(9):2475–2489, Sept 2011.
  • S. Gu, D. Meng, W. Zuo, and L. Zhang. Joint convolutional analysis and synthesis sparse representation for single image layer separation. In Proc. IEEE Int’l Conf. Computer Vision, pages 1717–1725, Oct 2017.
  • D.-A. Huang, L.-W. Kang, Y.-C. F. Wang, and C.-W. Lin. Self-learning based image decomposition with applications to single image denoising. IEEE Transactions on multimedia, 16(1):83–93, 2014.
  • D.-A. Huang, L.-W. Kang, M.-C. Yang, C.-W. Lin, and Y.-C. F. Wang. Context-aware single image rain removal. In Proc. IEEE Int’l Conf. Multimedia and Expo, pages 164–169, 2012.
  • Q. Huynh-Thu and M. Ghanbari. Scope of validity of psnr in image/video quality assessment. Electronics letters, 44(13):800–801, 2008.
  • S. Iizuka, E. Simo-Serra, and H. Ishikawa. Globally and Locally Consistent Image Completion. ACM Transactions on Graphics (Proc. of SIGGRAPH 2017), 36(4):107, 2017.
  • Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional architecture for fast feature embedding. In Proc. ACM Int’l Conf. Multimedia, pages 675–678, New York, NY, USA, 2014. ACM.
  • F. Jiang, W. Tao, S. Liu, J. Ren, X. Guo, and D. Zhao. An end-to-end compression framework based on convolutional neural networks. IEEE Trans. on Circuits and Systems for Video Technology, PP(99):1–1, 2017.
  • L. W. Kang, C. W. Lin, and Y. H. Fu. Automatic single-image-based rain streaks removal via image decomposition. IEEE Trans. on Image Processing, 21(4):1742–1755, April 2012.
  • J. Kim, J. K. Lee, and K. M. Lee. Accurate image super-resolution using very deep convolutional networks. In Proc. IEEE Int’l Conf. Computer Vision and Pattern Recognition, pages 1646–1654, June 2016.
  • J.-H. Kim, C. Lee, J.-Y. Sim, and C.-S. Kim. Single-image deraining using an adaptive nonlocal means filter. In IEEE Trans. on Image Processing, pages 914–917, 2013.
  • J. H. Kim, C. Lee, J. Y. Sim, and C. S. Kim. Single-image deraining using an adaptive nonlocal means filter. In Proc. IEEE Int’l Conf. Image Processing, pages 914–917, Sept 2013.
  • Y. Li, R. T. Tan, X. Guo, J. Lu, and M. S. Brown. Rain streak removal using layer priors. In Proc. IEEE Int’l Conf. Computer Vision and Pattern Recognition, pages 2736–2744, 2016.
  • K. G. Lore, A. Akintayo, and S. Sarkar. Llnet: A deep autoencoder approach to natural low-light image enhancement. arXiv preprint arXiv:1511.03995, 2015.
  • X. Lu, Z. Lin, X. Shen, R. Mech, and J. Z. Wang. Deep multi-patch aggregation network for image style, aesthetics, and quality estimation. In Proc. IEEE Int’l Conf. Computer Vision, pages 990–998, 2015.
  • Y. Luo, Y. Xu, and H. Ji. Removing rain from a single image via discriminative sparse coding. In Proc. IEEE Int’l Conf. Computer Vision, pages 3397–3405, 2015.
  • D. Martin, C. Fowlkes, D. Tal, and J. Malik. A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In Proc. IEEE Int’l Conf. Computer Vision, volume 2, pages 416–423, July 2001.
  • S. G. Narasimhan and S. K. Nayar. Vision and the atmosphere. Int’l Journal of Computer Vision, 48(3):233–254, 2002.
  • W. Ren, S. Liu, H. Zhang, J. Pan, X. Cao, and M.-H. Yang. Single image dehazing via multi-scale convolutional neural networks. In Proc. IEEE European Conf. Computer Vision, pages 154–169, 2016.
  • W. Ren, L. Ma, J. Zhang, J. Pan, X. Cao, W. Liu, and M.-H. Yang. Gated fusion network for single image dehazing. In Proc. IEEE Int’l Conf. Computer Vision and Pattern Recognition, June 2018.
  • M. Rubinstein, D. Gutierrez, O. Sorkine, and A. Shamir. A comparative study of image retargeting. In ACM Trans. Graphics, pages 160:1–
  • V. Santhaseelan and V. K. Asari. Utilizing local phase information to remove rain from video. Int’l Journal of Computer Vision, 112(1):71–89, Mar 2015.
  • A. Saxena, M. Sun, and A. Y. Ng. Make3d: Learning 3d scene structure from a single still image. IEEE Trans. on Pattern Analysis and Machine Intelligence, 31(5):824–840, May 2009.
  • C. J. Schuler, M. Hirsch, S. Harmeling, and B. Scholkopf. Learning to deblur. arXiv:1406.7444, 2014.
  • L. Shen, Z. Yue, Q. Chen, F. Feng, and J. Ma. Deep joint rain and haze removal from single images. ArXiv e-prints, January 2018.
  • K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In International Conference on Learning Representations, 2015.
  • S.-H. Sun, S.-P. Fan, and Y.-C. F. Wang. Exploiting image structural similarity for single image rain removal. In Proc. IEEE Int’l Conf. Image Processing, pages 4482–4486, 2014.
  • Y. Tian and S. G. Narasimhan. Seeing through water: Image restoration using model-based tracking. In Proc. IEEE Int’l Conf. Computer Vision, pages 2303–2310, Sept 2009.
  • Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE Trans. on Image Processing, 13(4):600–612, 2004.
  • L. Xu, J. S. Ren, C. Liu, and J. Jia. Deep convolutional neural network for image deconvolution. In Proc. Annual Conf. Neural Information Processing Systems. 2014.
  • W. Yang, J. Feng, J. Yang, F. Zhao, J. Liu, Z. Guo, and S. Yan. Deep edge guided recurrent residual learning for image super-resolution. IEEE Trans. on Image Processing, 26(12):5895–5907, Dec 2017.
  • W. Yang, R. T. Tan, J. Feng, J. Liu, Z. Guo, and S. Yan. Deep joint rain detection and removal from a single image. In Proc. IEEE Int’l Conf. Computer Vision and Pattern Recognition, July 2017.
  • F. Yu and V. Koltun. Multi-scale context aggregation by dilated convolutions. In International Conference on Learning Representation, 2016.
  • H. Zhang and V. M. Patel. Density-aware single image de-raining using a multi-stream dense network. In Proc. IEEE Int’l Conf. Computer Vision and Pattern Recognition, June 2018.
  • H. Zhang, V. Sindagi, and V. M. Patel. Image De-raining Using a Conditional Generative Adversarial Network. ArXiv e-prints, January
  • K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang. Beyond a gaussian denoiser: Residual learning of deep cnn for image denoising. IEEE Trans. on Image Processing, 26(7):3142–3155, July 2017.
  • X. Zhang, H. Li, Y. Qi, W. K. Leow, and T. K. Ng. Rain removal in video by combining temporal and chromatic properties. In Proc. IEEE Int’l Conf. Multimedia and Expo, pages 461–464, 2006.
  • B. Zhou, H. Zhao, X. Puig, S. Fidler, A. Barriuso, and A. Torralba. Scene parsing through ade20k dataset. In Proc. IEEE Int’l Conf. Computer Vision and Pattern Recognition, 2017.
Authors
  • Wenhan Yang (S’17-M’18) received the B.S. degree and Ph.D. degree (Hons.) in computer science from Peking University, Beijing, China, in 2012 and 2018, respectively. He is currently a postdoctoral research fellow with the Department of Electrical and Computer Engineering, National University of Singapore. Dr. Yang was a Visiting Scholar with the National University of Singapore from 2015 to 2016. His current research interests include deep-learning-based image processing, bad-weather restoration, and related applications.
  • Zongming Guo (M’09) received the B.S. degree in mathematics, and the M.S. and Ph.D. degrees in computer science from Peking University, Beijing, China, in 1987, 1990, and 1994, respectively.
  • Dr. Guo is an Executive Member of the China Society of Motion Picture and Television Engineers. He was a recipient of the First Prize of the State Administration of Radio Film and Television Award in 2004, the First Prize of the Ministry of Education Science and Technology Progress Award in 2006, the Second Prize of the National Science and Technology Award in 2007, the Wang Xuan News Technology Award and the Chia Tai Teaching Award in 2008, the Government Allowance granted by the State Council in 2009, and the Distinguished Doctoral Dissertation Advisor Award of Peking University in 2012 and 2013.
  • Jiashi Feng is currently an Assistant Professor in the Department of Electrical and Computer Engineering at National University of Singapore. He received his PhD from National University of Singapore in 2014. Before joining NUS as a faculty, he was a postdoc research follow at UC Berkeley. Dr Feng’s research areas include computer vision and machine learning. In particular, he is interested in object recognition, detection, segmentation, robust learning and deep learning.
  • Jiaying Liu (S’08-M’10-SM’17) received the B.E. degree in computer science from Northwestern Polytechnic University, Xi’an, China, and the Ph.D. degree with the Best Graduate Honor in computer science from Peking University, Beijing, China, in 2005 and 2010, respectively.
  • She is currently an Associate Professor with the Institute of Computer Science and Technology, Peking University. She has authored over 90 technical articles in refereed journals and proceedings and holds 19 granted patents. Her current research interests include image/video processing, compression, and computer vision. Dr. Liu was a Visiting Scholar with the University of Southern California, Los Angeles, from 2007 to 2008. She was a Visiting Researcher at Microsoft Research Asia (MSRA) in 2015, supported by the “Star Track for Young Faculties” program. She has also served as a TC member of IEEE CAS MSA and APSIPA IVM, and as an APSIPA Distinguished Lecturer in 2016-2017. She is a CCF/IEEE Senior Member.
  • Shuicheng Yan is the Vice President and Chief Scientist of Qihoo 360 Technology Co. Ltd., as well as Head of 360 Artificial Intelligence Institute. He is also a tenured Associate Professor at National University of Singapore, and IEEE Fellow, IAPR Fellow and ACM Distinguished Scientist. His research areas include computer vision, machine learning and multimedia analysis, and he has authored/co-authored about 500 high quality technical papers, with Google Scholar citation over 25,000 times and H-index 70. He is TR Highly Cited Researcher of 2014, 2015 and 2016.