Author: Omura, Hajime; Minamoto, Teruya
Title: Detection Method of Early Esophageal Cancer from Endoscopic Image Using Dyadic Wavelet Transform and Four-Layer Neural Network
Publication information: Advances in Intelligent Systems and Computing, Vol. 738, pp. 595-601
Abstract: © 2018, Springer International Publishing AG, part of Springer Nature. We propose a new method for detecting early esophageal cancer in endoscopic images using the dyadic wavelet transform (DYWT) and a four-layer neural network (NN). We prepare 6500 appropriate training images to build an NN classifier for early esophageal cancer. Each training image is converted into the HSV and CIE L*a*b* color spaces, and a fusion image is made from the S (saturation), a* (complementary color), and b* (complementary color) components. The contrast of the fusion image is enhanced to emphasize the difference in pixel values between normal and abnormal regions, and only the high pixel values of this image are used for training the neural network. Important image features are obtained by applying the inverse DYWT to the processed image. We describe the proposed method in detail and present experimental results demonstrating that its detection performance is superior to that of a deep learning technique trained on endoscopic images in which early esophageal cancer was marked by a doctor.
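
As a rough illustration of the fusion step described in the abstract, the sketch below (Python with OpenCV and NumPy) converts an image to the HSV and CIE L*a*b* color spaces, fuses the S, a*, and b* components, stretches the contrast, and keeps only the high pixel values. The per-pixel averaging rule, the min-max contrast stretch, and the 90th-percentile threshold are illustrative assumptions; the abstract does not specify these details.

import cv2
import numpy as np

def make_fusion_image(bgr, high_percentile=90):
    """Sketch of the S/a*/b* fusion and high-value selection.

    The fusion rule (per-pixel mean), the min-max stretch, and the
    percentile threshold are assumptions made for illustration.
    """
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    s = hsv[:, :, 1].astype(np.float32)   # S (saturation)
    a = lab[:, :, 1].astype(np.float32)   # a* (green-red axis)
    b = lab[:, :, 2].astype(np.float32)   # b* (blue-yellow axis)

    # Fuse the three components into a single-channel image.
    fusion = (s + a + b) / 3.0

    # Contrast enhancement: min-max stretch to the full 8-bit range.
    fusion = cv2.normalize(fusion, None, 0, 255, cv2.NORM_MINMAX)

    # Keep only the high pixel values (candidate abnormal regions);
    # everything below the threshold is zeroed out.
    thresh = np.percentile(fusion, high_percentile)
    return np.where(fusion >= thresh, fusion, 0).astype(np.uint8)

Working on a single fused channel rather than the raw RGB frame concentrates the color information that separates lesions from normal mucosa, which is consistent with the abstract's emphasis on pixel-value differences between normal and abnormal regions.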