Commit 67df6d9: "add test code"
1 parent 72fabef

310 files changed (+4229, -54 lines)
Figure/main framework.png (360 KB)
README.md (+79, -2)
@@ -1,2 +1,79 @@
# Second-order Attention Network for Single Image Super-resolution (CVPR-2019)

"[Second-order Attention Network for Single Image Super-resolution](http://openaccess.thecvf.com/content_CVPR_2019/html/Dai_Second-Order_Attention_Network_for_Single_Image_Super-Resolution_CVPR_2019_paper.html)" was published at CVPR 2019.
The code is built on [RCAN (PyTorch)](https://github.com/yulunzhang/RCAN) and tested on Ubuntu 16.04 with PyTorch 0.4.0.

## Main Contents
### 1. Introduction
- **Abstract:**
Recently, deep convolutional neural networks (CNNs) have been widely explored in single image super-resolution (SISR) and have obtained remarkable performance. However, most existing CNN-based SISR methods mainly focus on wider or deeper architecture design, neglecting to explore the feature correlations of intermediate layers, which hinders the representational power of CNNs. To address this issue, we propose a second-order attention network (SAN) for more powerful feature expression and feature correlation learning. Specifically, a novel trainable second-order channel attention (SOCA) module is developed to adaptively rescale the channel-wise features by using second-order feature statistics for more discriminative representations. Furthermore, we present a non-locally enhanced residual group (NLRG) structure, which not only incorporates non-local operations to capture long-distance spatial contextual information, but also contains repeated local-source residual attention groups (LSRAG) to learn increasingly abstract feature representations. Experimental results demonstrate the superiority of our SAN over state-of-the-art SISR methods in terms of both quantitative metrics and visual quality.
- Main framework (an illustrative sketch of the SOCA idea is given below the figure)
![Main framework](Figure/main framework.png)
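
To make the SOCA idea concrete, here is a minimal, hypothetical PyTorch sketch of channel attention driven by second-order (covariance) statistics. It simplifies the paper's module (in particular, it omits the covariance normalization step) and is not the released implementation; the class name `SimpleSOCA` and all hyperparameters are illustrative only.

```python
import torch
import torch.nn as nn

class SimpleSOCA(nn.Module):
    """Illustrative second-order channel attention: rescale channels using
    statistics of the channel covariance matrix (simplified vs. the paper)."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, h, w = x.shape
        feat = x.view(b, c, h * w)                    # flatten spatial positions
        feat = feat - feat.mean(dim=2, keepdim=True)  # zero-mean per channel
        cov = torch.bmm(feat, feat.transpose(1, 2)) / (h * w - 1)  # (b, c, c) covariance
        stat = cov.mean(dim=2).view(b, c, 1, 1)       # pool the covariance row-wise
        return x * self.fc(stat)                      # channel-wise rescaling

x = torch.randn(2, 64, 48, 48)
print(SimpleSOCA(64)(x).shape)  # torch.Size([2, 64, 48, 48])
```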
### 2. Train code
#### Prepare training datasets
1. Download the **DIV2K** dataset (900 HR images) from [DIV2K](https://data.vision.ee.ethz.ch/cvl/DIV2K/).
2. Set `--dir_data` to the directory that holds the HR and LR images (see the layout sketch below).
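
Since the repository is built on RCAN/EDSR, its data loader most likely expects an EDSR-style DIV2K folder layout under `--dir_data`; this is an assumption, so verify the exact paths against the dataset code in 'TrainCode/code'. The helper below is hypothetical (not part of the repo) and simply reports which expected folders exist:

```python
from pathlib import Path

def check_div2k_layout(dir_data: str, scales=(2, 3, 4, 8)) -> None:
    """Report whether EDSR/RCAN-style DIV2K folders exist under dir_data.
    The folder names expected by SAN's own loader may differ."""
    root = Path(dir_data) / "DIV2K"
    expected = [root / "DIV2K_train_HR"]
    expected += [root / "DIV2K_train_LR_bicubic" / f"X{s}" for s in scales]
    for folder in expected:
        print(("ok     " if folder.is_dir() else "MISSING"), folder)

check_div2k_layout("/data/sr")  # '/data/sr' is a placeholder for your --dir_data
```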
#### Train the model
- You can retrain the model:
  1. `cd` to 'TrainCode/code';
  2. Run the following scripts to train the models:

> # BI degradation, scale 2, 3, 4, 8
> # input = $48\times 48$, output = $96\times 96$
> python main.py --model san --save `save_name` --scale 2 --n_resgroups 20 --n_resblocks 10 --n_feats 64 --reset --chop --save_results --patch_size 96
> # input = $48\times 48$, output = $144\times 144$
> python main.py --model san --save `save_name` --scale 3 --n_resgroups 20 --n_resblocks 10 --n_feats 64 --reset --chop --save_results --patch_size 96
> # input = $48\times 48$, output = $192\times 192$
> python main.py --model san --save `save_name` --scale 4 --n_resgroups 20 --n_resblocks 10 --n_feats 64 --reset --chop --save_results --patch_size 96
> # input = $48\times 48$, output = $384\times 384$
> python main.py --model san --save `save_name` --scale 8 --n_resgroups 20 --n_resblocks 10 --n_feats 64 --reset --chop --save_results --patch_size 96

### 3. Test code
1. Download the pretrained models first.
2. `cd` to 'TestCode/code' and run the following scripts:

> # BI degradation, scale 2, 3, 4, 8
> # SAN_2x
> python main.py --model san --data_test MyImage --save `save_name` --scale 2 --n_resgroups 20 --n_resblocks 10 --n_feats 64 --reset --chop --save_results --test_only --testpath 'your path' --testset Set5 --pre_train ../model/SAN_BIX2.pt
> # SAN_3x
> python main.py --model san --data_test MyImage --save `save_name` --scale 3 --n_resgroups 20 --n_resblocks 10 --n_feats 64 --reset --chop --save_results --test_only --testpath 'your path' --testset Set5 --pre_train ../model/SAN_BIX3.pt
> # SAN_4x
> python main.py --model san --data_test MyImage --save `save_name` --scale 4 --n_resgroups 20 --n_resblocks 10 --n_feats 64 --reset --chop --save_results --test_only --testpath 'your path' --testset Set5 --pre_train ../model/SAN_BIX4.pt
> # SAN_8x
> python main.py --model san --data_test MyImage --save `save_name` --scale 8 --n_resgroups 20 --n_resblocks 10 --n_feats 64 --reset --chop --save_results --test_only --testpath 'your path' --testset Set5 --pre_train ../model/SAN_BIX8.pt
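
For scoring the saved outputs, the common SISR protocol is PSNR on the Y channel with a scale-sized border crop. The snippet below is a generic, hedged sketch of that protocol; it is not the repository's own evaluation code, whose color conversion and border handling may differ.

```python
import numpy as np

def rgb_to_y(img):
    """One common BT.601-style RGB -> Y conversion for images in [0, 255]."""
    return 16.0 + (65.738 * img[..., 0] + 129.057 * img[..., 1] + 25.064 * img[..., 2]) / 256.0

def psnr_y(sr, hr, scale):
    """PSNR on the luminance channel, cropping a `scale`-pixel border first."""
    sr = np.asarray(sr, dtype=np.float64)
    hr = np.asarray(hr, dtype=np.float64)
    y_sr, y_hr = rgb_to_y(sr), rgb_to_y(hr)
    if scale > 0:
        y_sr = y_sr[scale:-scale, scale:-scale]
        y_hr = y_hr[scale:-scale, scale:-scale]
    mse = np.mean((y_sr - y_hr) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)
```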
### 4. Results
- Some of the test results can be downloaded.

### 5. Citation
If the work or the code is helpful, please cite the following papers:
> @inproceedings{dai2019second,
>   title={Second-order Attention Network for Single Image Super-Resolution},
>   author={Dai, Tao and Cai, Jianrui and Zhang, Yongbing and Xia, Shu-Tao and Zhang, Lei},
>   booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
>   pages={11065--11074},
>   year={2019}
> }
>
> @inproceedings{zhang2018image,
>   title={Image super-resolution using very deep residual channel attention networks},
>   author={Zhang, Yulun and Li, Kunpeng and Li, Kai and Wang, Lichen and Zhong, Bineng and Fu, Yun},
>   booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
>   pages={286--301},
>   year={2018}
> }
>
> @inproceedings{li2017second,
>   title={Is second-order information helpful for large-scale visual recognition?},
>   author={Li, Peihua and Xie, Jiangtao and Wang, Qilong and Zuo, Wangmeng},
>   booktitle={Proceedings of the IEEE International Conference on Computer Vision},
>   pages={2070--2078},
>   year={2017}
> }
### 6. Acknowledgements
The code is built on [RCAN (PyTorch)](https://github.com/yulunzhang/RCAN) and [EDSR (PyTorch)](https://github.com/thstkdgus35/EDSR-PyTorch). We thank the authors for sharing their codes.

Readme.md (-2)
This file was deleted.

New file (+124 lines)
@@ -0,0 +1,124 @@
function Create_benchmark_TestData_HR_LR()
clear all; close all; clc
path_original = './OriginalTestData';
dataset = {'Sun-Hays80'};
ext = {'*.jpg', '*.png', '*.bmp'};

degradation = 'BI'; % BI, BD, DN
if strcmp(degradation, 'BI')
    scale_all = [2,3,4,8];
else
    scale_all = 3;
end

for idx_set = 1:length(dataset)
    fprintf('Processing %s:\n', dataset{idx_set});
    filepaths = [];
    for idx_ext = 1:length(ext)
        filepaths = cat(1, filepaths, dir(fullfile(path_original, dataset{idx_set}, ext{idx_ext})));
    end
    for idx_im = 1:length(filepaths)
        name_im = filepaths(idx_im).name;
        fprintf('%d. %s: ', idx_im, name_im);
        im_ori = imread(fullfile(path_original, dataset{idx_set}, name_im));
        if size(im_ori, 3) == 1
            im_ori = cat(3, im_ori, im_ori, im_ori);
        end
        for scale = scale_all
            fprintf('x%d ', scale);
            im_HR = modcrop(im_ori, scale);
            if strcmp(degradation, 'BI')
                im_LR = imresize(im_HR, 1/scale, 'bicubic');
            elseif strcmp(degradation, 'BD')
                im_LR = imresize_BD(im_HR, scale, 'Gaussian', 1.6); % sigma=1.6
            elseif strcmp(degradation, 'DN')
                randn('seed',0); % For test data, fix seed. But, DON'T fix seed, when preparing training data.
                im_LR = imresize_DN(im_HR, scale, 30); % noise level sigma=30
            end
            % folder
            folder_HR = fullfile('./HR', dataset{idx_set}, ['x', num2str(scale)]);
            folder_LR = fullfile(['./LR/LR', degradation], dataset{idx_set}, ['x', num2str(scale)]);
            if ~exist(folder_HR)
                mkdir(folder_HR)
            end
            if ~exist(folder_LR)
                mkdir(folder_LR)
            end
            % fn
            fn_HR = fullfile('./HR', dataset{idx_set}, ['x', num2str(scale)], [name_im(1:end-4), '_HR_x', num2str(scale), '.png']);
            fn_LR = fullfile(['./LR/LR', degradation], dataset{idx_set}, ['x', num2str(scale)], [name_im(1:end-4), '_LR', degradation, '_x', num2str(scale), '.png']);
            imwrite(im_HR, fn_HR, 'png');
            imwrite(im_LR, fn_LR, 'png');
        end
        fprintf('\n');
    end
    fprintf('\n');
end
end

function imgs = modcrop(imgs, modulo)
if size(imgs,3)==1
    sz = size(imgs);
    sz = sz - mod(sz, modulo);
    imgs = imgs(1:sz(1), 1:sz(2));
else
    tmpsz = size(imgs);
    sz = tmpsz(1:2);
    sz = sz - mod(sz, modulo);
    imgs = imgs(1:sz(1), 1:sz(2),:);
end
end

function [LR] = imresize_BD(im, scale, type, sigma)
if nargin == 3 && strcmp(type,'Gaussian')
    sigma = 1.6;
end

if strcmp(type,'Gaussian') && fix(scale) == scale
    if mod(scale,2)==1
        kernelsize = ceil(sigma*3)*2+1;
        if scale==3 && sigma == 1.6
            kernelsize = 7;
        end
        kernel = fspecial('gaussian',kernelsize,sigma);
        blur_HR = imfilter(im,kernel,'replicate');
        if isa(blur_HR, 'gpuArray')
            LR = blur_HR(scale-1:scale:end-1,scale-1:scale:end-1,:);
        else
            LR = imresize(blur_HR, 1/scale, 'nearest');
        end
        % LR = im2uint8(LR);
    elseif mod(scale,2)==0
        kernelsize = ceil(sigma*3)*2+2;
        kernel = fspecial('gaussian',kernelsize,sigma);
        blur_HR = imfilter(im, kernel,'replicate');
        LR = blur_HR(scale/2:scale:end-scale/2,scale/2:scale:end-scale/2,:);
        % LR = im2uint8(LR);
    end
else
    LR = imresize(im, 1/scale, type);
end
end

function ImLR = imresize_DN(ImHR, scale, sigma)
% ImLR and ImHR are uint8 data
% downsample by Bicubic
ImDown = imresize(ImHR, 1/scale, 'bicubic'); % 0-255
ImDown = single(ImDown); % 0-255
ImDownNoise = ImDown + single(sigma*randn(size(ImDown))); % 0-255
ImLR = uint8(ImDownNoise); % 0-255
end
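
For readers without MATLAB, the BI branch of the script above (modcrop followed by bicubic downsampling) can be approximated in Python. This is a hedged sketch, not part of the commit: Pillow's bicubic kernel is not bit-identical to MATLAB's imresize, and the filenames are placeholders.

```python
from PIL import Image

def prepare_bi_pair(path_hr: str, scale: int):
    """Return an (HR, LR) pair: crop HR to a multiple of `scale`, then bicubic-downsample."""
    hr = Image.open(path_hr).convert("RGB")
    w, h = hr.size
    w, h = w - w % scale, h - h % scale          # modcrop: make dimensions divisible by scale
    hr = hr.crop((0, 0, w, h))
    lr = hr.resize((w // scale, h // scale), Image.BICUBIC)
    return hr, lr

hr, lr = prepare_bi_pair("baby.png", 4)          # 'baby.png' is a placeholder filename
hr.save("baby_HR_x4.png"); lr.save("baby_LRBI_x4.png")
```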
