[Research] Master's student Byungjin Ku publishes in SCI journal (MDPI Electronics, Q2)
- Department of Smart Factory Convergence
- 2022-07-28
The research of master's student Byungjin Ku (advisor: Prof. Jongpil Jeong), "Real-Time ISR-YOLOv4 Based Small Object Detection for Safe Shop Floor in Smart Factories," has been published in MDPI Electronics (Impact Factor: 2.690 (2021); 5-Year Impact Factor: 2.657 (2021)).
Paper: https://www.mdpi.com/2079-9292/11/15/2348
DOI: https://doi.org/10.3390/electronics11152348
Abstract - Wearing a hard hat can effectively improve the safety of workers on a construction site. However, workers often take off their helmets because they have a weak sense of safety or find the helmets uncomfortable, and this poses a serious danger: workers not wearing hard hats are more likely to be injured in accidents such as falls from height and falling objects. Detecting whether a helmet is worn is therefore an important step in construction-site safety management, and helmets must be detected quickly and accurately. However, manual monitoring is labor-intensive, and approaches that mount sensors on helmets are difficult to deploy widely. Thus, in this paper, we propose an AI method that detects helmet wearing with satisfactory accuracy and a high detection rate. Our method builds on YOLOv4 and adds an image super-resolution (ISR) module at the input stage, which increases the image resolution and removes noise from the image. Dense blocks then replace the residual blocks in the CSPDarknet53 backbone network to eliminate unnecessary computation and reduce the number of network parameters. The neck uses a combination of SPPnet and PANnet to better capture small targets in the image. We also add foreground and background balance loss functions to the YOLOv4 loss to address the imbalance between image background and foreground. Experiments performed on self-constructed datasets show that the proposed method is more effective than currently available small-target detection methods. Finally, our model achieves an average precision of 93.3%, a 7.8% increase over the original algorithm, and takes only 3.0 ms to detect a 416 × 416 image.
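The abstract does not give the exact form of the foreground/background balance loss; a common way to handle this imbalance in one-stage detectors is a focal-style weighting of the objectness cross-entropy, which down-weights easy (mostly background) examples. A minimal sketch of that idea, assuming such a weighting and with the function name and parameter values chosen for illustration only, not taken from the paper:

```python
import numpy as np

def balance_loss(p, y, alpha=0.25, gamma=2.0):
    """Focal-style foreground/background balance loss (illustrative sketch).

    p: predicted objectness probabilities in (0, 1)
    y: ground-truth labels, 1 = foreground (object), 0 = background
    alpha: class-balance weight for foreground vs. background terms
    gamma: focusing exponent; higher values suppress easy examples more
    """
    p = np.clip(p, 1e-7, 1 - 1e-7)           # numerical safety for log()
    pt = np.where(y == 1, p, 1 - p)          # probability of the true class
    w = np.where(y == 1, alpha, 1 - alpha)   # rebalance foreground/background
    # (1 - pt)^gamma shrinks the loss of well-classified (easy) examples,
    # so abundant easy background boxes no longer dominate the gradient.
    return float(np.mean(-w * (1 - pt) ** gamma * np.log(pt)))

# A confidently correct foreground prediction contributes far less loss
# than a hard, misclassified one:
easy = balance_loss(np.array([0.9]), np.array([1]))
hard = balance_loss(np.array([0.1]), np.array([1]))
```

With gamma = 0 and alpha = 0.5 this reduces to (half of) plain binary cross-entropy, which is one way to sanity-check such an implementation.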