
Commit 66c5c1a

revise readme.md
1 parent 67df6d9 commit 66c5c1a

2 files changed: +28 -17 lines changed
File renamed without changes.

README.md

+28 -17
@@ -7,8 +7,7 @@ The code is built on [RCAN(pytorch)](https://github.com/yulunzhang/RCAN) and tes
### 1. Introduction

- **Abstract:**
Recently, deep convolutional neural networks (CNNs) have been widely explored in single image super-resolution (SISR) and obtained remarkable performance. However, most of the existing CNN-based SISR methods mainly focus on wider or deeper architecture design, neglecting to explore the feature correlations of intermediate layers, hence hindering the representational power of CNNs. To address this issue, in this paper, we propose a second-order attention network (SAN) for more powerful feature expression and feature correlation learning. Specifically, a novel trainable second-order channel attention (SOCA) module is developed to adaptively rescale the channel-wise features by using second-order feature statistics for more discriminative representations. Furthermore, we present a non-locally enhanced residual group (NLRG) structure, which not only incorporates non-local operations to capture long-distance spatial contextual information, but also contains repeated local-source residual attention groups (LSRAG) to learn increasingly abstract feature representations. Experimental results demonstrate the superiority of our SAN network over state-of-the-art SISR methods in terms of both quantitative metrics and visual quality.

- Main framework

![Alt text](Figure/main framework.png)
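
The SOCA module described in the abstract can be summarized as: pool second-order (covariance) statistics of the channel features rather than plain averages, and turn them into per-channel rescaling weights. The PyTorch sketch below only illustrates that idea for this README; it is not the repository's implementation, and the 1x1-conv bottleneck and reduction ratio are assumptions.

```python
# Illustrative sketch of second-order channel attention (SOCA-style); not the
# repository's implementation. Bottleneck layers and reduction ratio are assumptions.
import torch
import torch.nn as nn


class SecondOrderChannelAttention(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, h, w = x.shape
        feat = x.view(b, c, h * w)                       # flatten spatial positions
        feat = feat - feat.mean(dim=2, keepdim=True)     # center each channel
        cov = feat @ feat.transpose(1, 2) / (h * w - 1)  # channel covariance, shape (b, c, c)
        stat = cov.mean(dim=2).view(b, c, 1, 1)          # one second-order statistic per channel
        weights = self.fc(stat)                          # per-channel attention in (0, 1)
        return x * weights                               # rescale the input feature map


if __name__ == "__main__":
    x = torch.randn(2, 64, 48, 48)                   # e.g. features of a 48x48 LR patch
    print(SecondOrderChannelAttention(64)(x).shape)  # torch.Size([2, 64, 48, 48])
```

The paper's SOCA additionally normalizes the covariance, and it sits inside the non-locally enhanced residual groups; both parts are omitted here to keep the sketch short.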

### 2. Train code
#### Prepare training datasets
@@ -17,58 +16,70 @@ Recently, deep convolutional neural networks (CNNs) have been widely explored in

#### Train the model

- You can retrain the model:

- 1. cd to 'TrainCode/code';

- 2. Run the following scripts to train the models:
>
> # BI degradation, scale 2, 3, 4, 8
> # input = 48x48, output = 96x96
> python main.py --model san --save `save_name` --scale 2 --n_resgroups 20 --n_resblocks 10 --n_feats 64 --reset --chop --save_results --patch_size 96
>
> # input = 48x48, output = 144x144
> python main.py --model san --save `save_name` --scale 3 --n_resgroups 20 --n_resblocks 10 --n_feats 64 --reset --chop --save_results --patch_size 96
>
> # input = 48x48, output = 192x192
> python main.py --model san --save `save_name` --scale 4 --n_resgroups 20 --n_resblocks 10 --n_feats 64 --reset --chop --save_results --patch_size 96
>
> # input = 48x48, output = 384x384
> python main.py --model san --save `save_name` --scale 8 --n_resgroups 20 --n_resblocks 10 --n_feats 64 --reset --chop --save_results --patch_size 96
>
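
The input/output sizes in the comments above follow directly from the scale factor: the HR output patch is the LR input patch multiplied by the scale. A quick check of the arithmetic (plain Python, assuming the 48x48 LR patches used above):

```python
# HR output size = LR input size * scale, for the 48x48 LR patches assumed above
lr = 48
for scale in (2, 3, 4, 8):
    print(f"x{scale}: input {lr}x{lr} -> output {lr * scale}x{lr * scale}")
# x2: 96x96, x3: 144x144, x4: 192x192, x8: 384x384
```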
### 3. Test code
- 1. You can download the pretrained model first.

- 2. cd to 'TestCode/code', then run the following scripts:
>
> # BI degradation, scale 2, 3, 4, 8
> # SAN_2x
> python main.py --model san --data_test MyImage --save `save_name` --scale 2 --n_resgroups 20 --n_resblocks 10 --n_feats 64 --reset --chop --save_results --test_only --testpath 'your path' --testset Set5 --pre_train ../model/SAN_BIX2.pt
>
> # SAN_3x
> python main.py --model san --data_test MyImage --save `save_name` --scale 3 --n_resgroups 20 --n_resblocks 10 --n_feats 64 --reset --chop --save_results --test_only --testpath 'your path' --testset Set5 --pre_train ../model/SAN_BIX3.pt
>
> # SAN_4x
> python main.py --model san --data_test MyImage --save `save_name` --scale 4 --n_resgroups 20 --n_resblocks 10 --n_feats 64 --reset --chop --save_results --test_only --testpath 'your path' --testset Set5 --pre_train ../model/SAN_BIX4.pt
>
> # SAN_8x
> python main.py --model san --data_test MyImage --save `save_name` --scale 8 --n_resgroups 20 --n_resblocks 10 --n_feats 64 --reset --chop --save_results --test_only --testpath 'your path' --testset Set5 --pre_train ../model/SAN_BIX8.pt
>
### 4. Results

- Some of the test results can be downloaded.
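
To sanity-check downloaded or reproduced SR results against ground-truth HR images, PSNR on the luminance (Y) channel is the metric most SISR papers report. The snippet below is a generic sketch of that check, not the repository's evaluation code; the file names and the border crop equal to the scale factor are assumptions.

```python
# Generic Y-channel PSNR check for paired SR/HR images; not the repository's
# evaluation script. File names and the scale-sized border crop are assumptions.
import numpy as np
from PIL import Image


def rgb_to_y(img):
    """ITU-R BT.601 luminance, the convention most SISR papers use."""
    img = img.astype(np.float64)
    return 16.0 + (65.481 * img[..., 0] + 128.553 * img[..., 1] + 24.966 * img[..., 2]) / 255.0


def psnr_y(sr_path, hr_path, scale):
    sr = np.asarray(Image.open(sr_path).convert("RGB"))
    hr = np.asarray(Image.open(hr_path).convert("RGB"))
    hr = hr[: sr.shape[0], : sr.shape[1]]        # align sizes if HR is slightly larger
    sr_y, hr_y = rgb_to_y(sr), rgb_to_y(hr)
    sr_y = sr_y[scale:-scale, scale:-scale]      # crop a border of `scale` pixels
    hr_y = hr_y[scale:-scale, scale:-scale]
    mse = np.mean((sr_y - hr_y) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)


# Hypothetical example:
# print(psnr_y("results/baby_x2_SR.png", "Set5/baby.png", scale=2))
```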
### 5. Citation
If the work or the code is helpful, please cite the following papers:

> @inproceedings{dai2019second,
>   title={Second-order Attention Network for Single Image Super-Resolution},
>   author={Dai, Tao and Cai, Jianrui and Zhang, Yongbing and Xia, Shu-Tao and Zhang, Lei},
>   booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
>   pages={11065--11074},
>   year={2019}
> }
>
> @inproceedings{zhang2018image,
>   title={Image super-resolution using very deep residual channel attention networks},
>   author={Zhang, Yulun and Li, Kunpeng and Li, Kai and Wang, Lichen and Zhong, Bineng and Fu, Yun},
>   booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
>   pages={286--301},
>   year={2018}
> }
>
> @inproceedings{li2017second,
>   title={Is second-order information helpful for large-scale visual recognition?},
>   author={Li, Peihua and Xie, Jiangtao and Wang, Qilong and Zuo, Wangmeng},
>   booktitle={Proceedings of the IEEE International Conference on Computer Vision},
>   pages={2070--2078},
>   year={2017}
> }
