TY - GEN
T1 - HyperSR
T2 - 2024 International Joint Conference on Neural Networks, IJCNN 2024
AU - Mishra, Divya
AU - Finkelstein, Ofek
AU - Hadar, Ofer
N1 - Publisher Copyright: © 2024 IEEE.
PY - 2024/1/1
Y1 - 2024/1/1
N2 - Image super-resolution models often require a large number of parameters to capture the complex mapping between low-resolution and high-resolution images. Hypernetworks are meta-learning neural networks that generate the weights of another neural network, known as the target network, based on the input data. They allow for efficient parameterization by generating the weights of the target network dynamically from the input low-resolution image, enabling the model to keep a smaller set of fixed parameters while still adapting to generate high-resolution images. With only a 0.12% increase in parameter complexity for SRCNN as the target network, the proposed framework, Hyper-SR, a hypernetwork-based framework for single-image super-resolution, outperforms the target network in perceptual image quality at higher scaling factors and converges in fewer epochs. We demonstrate results and ablation experiments using the existing SRCNN as the target network and report, at a scaling factor of 4, an average gain of +0.83 dB PSNR and +0.0208 SSIM on the Set5 dataset, and +0.62 dB PSNR and +0.0109 SSIM on the Set14 dataset. Moreover, our methodology may be applied to any existing super-resolution network as the target network to obtain marginally improved resolution without necessitating a large number of additional computational parameters.
KW - Computer vision
KW - Deep learning
KW - Hyper networks
KW - Image super-resolution
UR - http://www.scopus.com/inward/record.url?scp=85205006818&partnerID=8YFLogxK
U2 - 10.1109/IJCNN60899.2024.10651010
DO - 10.1109/IJCNN60899.2024.10651010
M3 - Conference contribution
T3 - Proceedings of the International Joint Conference on Neural Networks
BT - 2024 International Joint Conference on Neural Networks, IJCNN 2024 - Proceedings
Y2 - 30 June 2024 through 5 July 2024
ER -