Comparative performance assessment of deep learning based image steganography techniques

Abstract
Increasing data infringement during transmission and storage has become a concern for data owners. Even digital images transmitted over a network or stored on servers are prone to unauthorized access. Several image steganography techniques have been proposed in the literature for hiding a secret image by embedding it into a cover medium, but low embedding capacity and poor reconstruction quality are significant limitations of these techniques. To overcome them, deep learning-based image steganography techniques have been proposed. The convolutional neural network (CNN) based U-Net encoder has gained significant research attention; however, its performance relative to other CNN-based encoders such as V-Net and U-Net++ has not been evaluated for image steganography. In this paper, V-Net and U-Net++ encoders are implemented for image steganography, and a comparative performance assessment of the U-Net, V-Net, and U-Net++ architectures is carried out. These architectures are employed to hide the secret image inside the cover image. Further, a single, robust, standard decoder shared by all architectures is designed to extract the secret image from the cover image. The experimental results show that the U-Net architecture outperforms the other two, reporting higher embedding capacity and better quality of the stego and reconstructed secret images.
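The image-quality comparison described above is conventionally quantified with the peak signal-to-noise ratio (PSNR) between the cover and stego images (or the secret and reconstructed images). A minimal sketch of that metric, assuming NumPy and 8-bit images (this is a generic illustration, not the authors' code):

```python
import numpy as np

def psnr(reference: np.ndarray, distorted: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two same-shape images."""
    mse = np.mean((reference.astype(np.float64) - distorted.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images: infinite PSNR
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy example: compare a random "cover" image with a slightly perturbed "stego" image.
rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(64, 64, 3)).astype(np.uint8)
noise = rng.integers(-2, 3, size=cover.shape)
stego = np.clip(cover.astype(np.int16) + noise, 0, 255).astype(np.uint8)
print(f"PSNR: {psnr(cover, stego):.2f} dB")  # small perturbation -> high PSNR
```

A higher PSNR indicates that the stego image is closer to the cover image, i.e., the embedding is less perceptible.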
Author(s)
Himthani, Varsha; Dhaka, Vijaypal Singh; Kaur, Manjit; Rani, Geeta; Oza, Meet; Lee, Heung-No
Issued Date
2022-10
Type
Article
DOI
10.1038/s41598-022-17362-1
URI
https://scholar.gist.ac.kr/handle/local/10580
Publisher
NATURE PORTFOLIO
Citation
SCIENTIFIC REPORTS, v.12, no.1
ISSN
2045-2322
Appears in Collections:
Department of Electrical Engineering and Computer Science > 1. Journal Articles
Access and License
  • Access: Open
File List

Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.