A surface defect detection framework for glass bottle bottom using visual attention model and wavelet transform


Disclaimer: The code links provided for this paper are external links. Science Nest bears no responsibility for the accuracy, legality, or content of these links. By downloading the code, you agree to comply with the terms of use set out by its authors.

Please contact us here if you encounter a broken link

Authors Xianen Zhou, Yaonan Wang, Qing Zhu, Jianxu Mao, Changyan Xiao, Xiao Lu, Hui Zhang
Journal/Conference Name IEEE Transactions on Industrial Informatics
Paper Abstract Glass bottles must be thoroughly inspected before they are used for packaging. However, vision-based inspection of bottle bottoms for defects remains a challenging quality-control task because of inaccurate localization, the difficulty of detecting defects in the texture region, and the intrinsically nonuniform brightness across the central panel. To overcome these problems, we propose a surface defect detection framework composed of three main parts. First, a new localization method named entropy rate superpixel circle detection (ERSCD), which combines least-squares circle detection and entropy rate superpixel (ERS) segmentation with an improved randomized circle detection, is proposed to accurately obtain the region of interest (ROI) of the bottle bottom. Then, according to its structural properties, the ROI is divided into two measurement regions: the central panel region and the annular texture region. For the former, a defect detection method named frequency-tuned anisotropic diffusion superpixel segmentation (FTADSP), which integrates frequency-tuned salient region detection (FT), anisotropic diffusion, and an improved superpixel segmentation, is proposed to precisely detect the regions and boundaries of defects. For the latter, a defect detection strategy called wavelet transform multiscale filtering (WTMF), based on a wavelet transform and a multiscale filtering algorithm, is proposed to reduce the influence of texture and to improve robustness to localization error. The proposed framework is tested on four data sets obtained with our purpose-built vision system. The experimental results demonstrate that our framework outperforms many traditional methods.
Date of publication 2019
Code Programming Language C++
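
The WTMF strategy described in the abstract rests on a standard idea: decompose the image signal with a wavelet transform, suppress the small detail coefficients where fine, repetitive texture lives, and keep the large ones produced by defect edges. A minimal illustrative sketch of that idea, in C++ to match the paper's code language, is shown below using a single-level 1-D Haar transform. This is not the authors' WTMF implementation; the function names and the threshold value are assumptions for demonstration only.

```cpp
// Illustrative sketch: single-level 1-D Haar wavelet decomposition with
// hard thresholding of detail coefficients. Fine texture tends to produce
// small details; defect edges produce large ones, so thresholding smooths
// texture while preserving defects. NOT the paper's WTMF implementation.
#include <cmath>
#include <cstddef>
#include <vector>

// Split an even-length signal into approximation and detail coefficients.
void haarForward(const std::vector<double>& x,
                 std::vector<double>& approx, std::vector<double>& detail) {
    const double s = std::sqrt(2.0);
    approx.clear();
    detail.clear();
    for (std::size_t i = 0; i + 1 < x.size(); i += 2) {
        approx.push_back((x[i] + x[i + 1]) / s);
        detail.push_back((x[i] - x[i + 1]) / s);
    }
}

// Reconstruct the signal from approximation and detail coefficients.
std::vector<double> haarInverse(const std::vector<double>& approx,
                                const std::vector<double>& detail) {
    const double s = std::sqrt(2.0);
    std::vector<double> x;
    for (std::size_t i = 0; i < approx.size(); ++i) {
        x.push_back((approx[i] + detail[i]) / s);
        x.push_back((approx[i] - detail[i]) / s);
    }
    return x;
}

// Zero out detail coefficients below a threshold (hard thresholding).
void suppressTexture(std::vector<double>& detail, double thresh) {
    for (double& d : detail)
        if (std::fabs(d) < thresh) d = 0.0;
}
```

On a toy signal such as `{10, 10, 10, 30, 10, 10, 11, 9}`, the small 11/9 ripple (texture-like) is flattened to 10/10 after thresholding, while the large jump to 30 (defect-like) survives reconstruction unchanged. A full multiscale version would apply the decomposition recursively to the approximation coefficients, with a per-scale threshold.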

Copyright Researcher 2022