Is this the expected performance? #39

Open · KawaiiNotHawaii opened this issue Dec 27, 2024 · 0 comments

Bash script used to start inference:

```bash
# Input arguments
INPUT_PATH="demo02.jpg"
OUTPUT_BASE="outputs"

# Extract the filename without extension
FILENAME=$(basename "$INPUT_PATH" | sed 's/\.[^.]*$//')

# Construct the output folder path
OUTPUT_PATH="$OUTPUT_BASE/$FILENAME"

# Run the command
CUDA_VISIBLE_DEVICES=1 python inference_on_a_image.py \
  -c config_model/UniPose_SwinT.py \
  -p weights/unipose_swint.pth \
  -i "$INPUT_PATH" \
  -o "$OUTPUT_PATH" \
  -t "car"
```

Logs:

```
********* sub_sentence_present True
_IncompatibleKeys(missing_keys=['clip_model.positional_embedding', 'clip_model.text_projection', 'clip_model.logit_scale', 'clip_model.transformer.resblocks.0.attn.in_proj_weight', 'clip_model.transformer.resblocks.0.attn.in_proj_bias', 'clip_model.transformer.resblocks.0.attn.out_proj.weight', 'clip_model.transformer.resblocks.0.attn.out_proj.bias', 'clip_model.transformer.resblocks.0.ln_1.weight', 'clip_model.transformer.resblocks.0.ln_1.bias', 'clip_model.transformer.resblocks.0.mlp.c_fc.weight', 'clip_model.transformer.resblocks.0.mlp.c_fc.bias', 'clip_model.transformer.resblocks.0.mlp.c_proj.weight', 'clip_model.transformer.resblocks.0.mlp.c_proj.bias', 'clip_model.transformer.resblocks.0.ln_2.weight', 'clip_model.transformer.resblocks.0.ln_2.bias', 'clip_model.transformer.resblocks.1.attn.in_proj_weight', 'clip_model.transformer.resblocks.1.attn.in_proj_bias', 'clip_model.transformer.resblocks.1.attn.out_proj.weight', 'clip_model.transformer.resblocks.1.attn.out_proj.bias', 'clip_model.transformer.resblocks.1.ln_1.weight', 'clip_model.transformer.resblocks.1.ln_1.bias', 'clip_model.transformer.resblocks.1.mlp.c_fc.weight', 'clip_model.transformer.resblocks.1.mlp.c_fc.bias', 'clip_model.transformer.resblocks.1.mlp.c_proj.weight', 'clip_model.transformer.resblocks.1.mlp.c_proj.bias', 'clip_model.transformer.resblocks.1.ln_2.weight', 'clip_model.transformer.resblocks.1.ln_2.bias', 'clip_model.transformer.resblocks.2.attn.in_proj_weight', 'clip_model.transformer.resblocks.2.attn.in_proj_bias', 'clip_model.transformer.resblocks.2.attn.out_proj.weight', 'clip_model.transformer.resblocks.2.attn.out_proj.bias', 'clip_model.transformer.resblocks.2.ln_1.weight', 'clip_model.transformer.resblocks.2.ln_1.bias', 'clip_model.transformer.resblocks.2.mlp.c_fc.weight', 'clip_model.transformer.resblocks.2.mlp.c_fc.bias', 'clip_model.transformer.resblocks.2.mlp.c_proj.weight', 'clip_model.transformer.resblocks.2.mlp.c_proj.bias', 'clip_model.transformer.resblocks.2.ln_2.weight', 'clip_model.transformer.resblocks.2.ln_2.bias', 'clip_model.transformer.resblocks.3.attn.in_proj_weight', 'clip_model.transformer.resblocks.3.attn.in_proj_bias', 'clip_model.transformer.resblocks.3.attn.out_proj.weight', 'clip_model.transformer.resblocks.3.attn.out_proj.bias', 'clip_model.transformer.resblocks.3.ln_1.weight', 'clip_model.transformer.resblocks.3.ln_1.bias', 'clip_model.transformer.resblocks.3.mlp.c_fc.weight', 'clip_model.transformer.resblocks.3.mlp.c_fc.bias', 'clip_model.transformer.resblocks.3.mlp.c_proj.weight', 'clip_model.transformer.resblocks.3.mlp.c_proj.bias', 'clip_model.transformer.resblocks.3.ln_2.weight', 'clip_model.transformer.resblocks.3.ln_2.bias', 'clip_model.transformer.resblocks.4.attn.in_proj_weight', 'clip_model.transformer.resblocks.4.attn.in_proj_bias', 'clip_model.transformer.resblocks.4.attn.out_proj.weight', 'clip_model.transformer.resblocks.4.attn.out_proj.bias', 'clip_model.transformer.resblocks.4.ln_1.weight', 'clip_model.transformer.resblocks.4.ln_1.bias', 'clip_model.transformer.resblocks.4.mlp.c_fc.weight', 'clip_model.transformer.resblocks.4.mlp.c_fc.bias', 'clip_model.transformer.resblocks.4.mlp.c_proj.weight', 'clip_model.transformer.resblocks.4.mlp.c_proj.bias', 'clip_model.transformer.resblocks.4.ln_2.weight', 'clip_model.transformer.resblocks.4.ln_2.bias', 'clip_model.transformer.resblocks.5.attn.in_proj_weight', 'clip_model.transformer.resblocks.5.attn.in_proj_bias', 'clip_model.transformer.resblocks.5.attn.out_proj.weight', 
'clip_model.transformer.resblocks.5.attn.out_proj.bias', 'clip_model.transformer.resblocks.5.ln_1.weight', 'clip_model.transformer.resblocks.5.ln_1.bias', 'clip_model.transformer.resblocks.5.mlp.c_fc.weight', 'clip_model.transformer.resblocks.5.mlp.c_fc.bias', 'clip_model.transformer.resblocks.5.mlp.c_proj.weight', 'clip_model.transformer.resblocks.5.mlp.c_proj.bias', 'clip_model.transformer.resblocks.5.ln_2.weight', 'clip_model.transformer.resblocks.5.ln_2.bias', 'clip_model.transformer.resblocks.6.attn.in_proj_weight', 'clip_model.transformer.resblocks.6.attn.in_proj_bias', 'clip_model.transformer.resblocks.6.attn.out_proj.weight', 'clip_model.transformer.resblocks.6.attn.out_proj.bias', 'clip_model.transformer.resblocks.6.ln_1.weight', 'clip_model.transformer.resblocks.6.ln_1.bias', 'clip_model.transformer.resblocks.6.mlp.c_fc.weight', 'clip_model.transformer.resblocks.6.mlp.c_fc.bias', 'clip_model.transformer.resblocks.6.mlp.c_proj.weight', 'clip_model.transformer.resblocks.6.mlp.c_proj.bias', 'clip_model.transformer.resblocks.6.ln_2.weight', 'clip_model.transformer.resblocks.6.ln_2.bias', 'clip_model.transformer.resblocks.7.attn.in_proj_weight', 'clip_model.transformer.resblocks.7.attn.in_proj_bias', 'clip_model.transformer.resblocks.7.attn.out_proj.weight', 'clip_model.transformer.resblocks.7.attn.out_proj.bias', 'clip_model.transformer.resblocks.7.ln_1.weight', 'clip_model.transformer.resblocks.7.ln_1.bias', 'clip_model.transformer.resblocks.7.mlp.c_fc.weight', 'clip_model.transformer.resblocks.7.mlp.c_fc.bias', 'clip_model.transformer.resblocks.7.mlp.c_proj.weight', 'clip_model.transformer.resblocks.7.mlp.c_proj.bias', 'clip_model.transformer.resblocks.7.ln_2.weight', 'clip_model.transformer.resblocks.7.ln_2.bias', 'clip_model.transformer.resblocks.8.attn.in_proj_weight', 'clip_model.transformer.resblocks.8.attn.in_proj_bias', 'clip_model.transformer.resblocks.8.attn.out_proj.weight', 'clip_model.transformer.resblocks.8.attn.out_proj.bias', 'clip_model.transformer.resblocks.8.ln_1.weight', 'clip_model.transformer.resblocks.8.ln_1.bias', 'clip_model.transformer.resblocks.8.mlp.c_fc.weight', 'clip_model.transformer.resblocks.8.mlp.c_fc.bias', 'clip_model.transformer.resblocks.8.mlp.c_proj.weight', 'clip_model.transformer.resblocks.8.mlp.c_proj.bias', 'clip_model.transformer.resblocks.8.ln_2.weight', 'clip_model.transformer.resblocks.8.ln_2.bias', 'clip_model.transformer.resblocks.9.attn.in_proj_weight', 'clip_model.transformer.resblocks.9.attn.in_proj_bias', 'clip_model.transformer.resblocks.9.attn.out_proj.weight', 'clip_model.transformer.resblocks.9.attn.out_proj.bias', 'clip_model.transformer.resblocks.9.ln_1.weight', 'clip_model.transformer.resblocks.9.ln_1.bias', 'clip_model.transformer.resblocks.9.mlp.c_fc.weight', 'clip_model.transformer.resblocks.9.mlp.c_fc.bias', 'clip_model.transformer.resblocks.9.mlp.c_proj.weight', 'clip_model.transformer.resblocks.9.mlp.c_proj.bias', 'clip_model.transformer.resblocks.9.ln_2.weight', 'clip_model.transformer.resblocks.9.ln_2.bias', 'clip_model.transformer.resblocks.10.attn.in_proj_weight', 'clip_model.transformer.resblocks.10.attn.in_proj_bias', 'clip_model.transformer.resblocks.10.attn.out_proj.weight', 'clip_model.transformer.resblocks.10.attn.out_proj.bias', 'clip_model.transformer.resblocks.10.ln_1.weight', 'clip_model.transformer.resblocks.10.ln_1.bias', 'clip_model.transformer.resblocks.10.mlp.c_fc.weight', 'clip_model.transformer.resblocks.10.mlp.c_fc.bias', 'clip_model.transformer.resblocks.10.mlp.c_proj.weight', 
'clip_model.transformer.resblocks.10.mlp.c_proj.bias', 'clip_model.transformer.resblocks.10.ln_2.weight', 'clip_model.transformer.resblocks.10.ln_2.bias', 'clip_model.transformer.resblocks.11.attn.in_proj_weight', 'clip_model.transformer.resblocks.11.attn.in_proj_bias', 'clip_model.transformer.resblocks.11.attn.out_proj.weight', 'clip_model.transformer.resblocks.11.attn.out_proj.bias', 'clip_model.transformer.resblocks.11.ln_1.weight', 'clip_model.transformer.resblocks.11.ln_1.bias', 'clip_model.transformer.resblocks.11.mlp.c_fc.weight', 'clip_model.transformer.resblocks.11.mlp.c_fc.bias', 'clip_model.transformer.resblocks.11.mlp.c_proj.weight', 'clip_model.transformer.resblocks.11.mlp.c_proj.bias', 'clip_model.transformer.resblocks.11.ln_2.weight', 'clip_model.transformer.resblocks.11.ln_2.bias', 'clip_model.token_embedding.weight', 'clip_model.ln_final.weight', 'clip_model.ln_final.bias'], unexpected_keys=[])
/nlp_group/xxx/anaconda3/envs/unipose/lib/python3.10/site-packages/torch/utils/checkpoint.py:429: UserWarning: torch.utils.checkpoint: please pass in use_reentrant=True or use_reentrant=False explicitly. The default value of use_reentrant will be updated to be False in the future. To maintain current behavior, pass use_reentrant=True. It is recommended that you use use_reentrant=False. Refer to docs for more details on the differences between the two variants.
warnings.warn(
/nlp/xxx/anaconda3/envs/unipose/lib/python3.10/site-packages/torch/utils/checkpoint.py:61: UserWarning: None of the inputs have requires_grad=True. Gradients will be None
warnings.warn(
Inference succeeds.
savename: /nlp/xxx/X-Pose/demo05/pred.jpg
```
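
The long `_IncompatibleKeys` line above is the value `load_state_dict` returns when called with `strict=False`: every `clip_model.*` parameter exists in the model but is absent from `unipose_swint.pth`. A minimal sketch of that mechanism on a toy module (the module and key names are illustrative, not X-Pose code):

```python
import torch
import torch.nn as nn

# Toy stand-in; the same mechanics produce the log line above
model = nn.Linear(4, 2)  # state_dict keys: "weight", "bias"

# Partial checkpoint: has "weight", lacks "bias", carries an extra key
ckpt = {"weight": torch.zeros(2, 4), "extra.key": torch.zeros(1)}

# With strict=False, load_state_dict returns an _IncompatibleKeys named
# tuple instead of raising; its repr is exactly what the log prints
result = model.load_state_dict(ckpt, strict=False)
print(result.missing_keys)     # ['bias']
print(result.unexpected_keys)  # ['extra.key']
```

If nothing else in the code populates those `clip_model.*` weights (e.g. a separate CLIP checkpoint), the text encoder would run at its random initialization, which is worth ruling out when predictions look weaker than expected.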
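
The two `UserWarning`s come from `torch.utils.checkpoint` and are typically harmless at inference, but the first can be silenced by passing `use_reentrant` explicitly, as the message suggests. A minimal sketch (the function and tensor are placeholders, not X-Pose code):

```python
import torch
from torch.utils.checkpoint import checkpoint

def block(x):
    # Placeholder for whatever sub-module is being checkpointed
    return torch.relu(x) * 2

x = torch.randn(3, 4, requires_grad=True)

# Passing use_reentrant explicitly avoids the deprecation warning;
# False is the recommended (and future default) variant
y = checkpoint(block, x, use_reentrant=False)
```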

Predictions (image attachments in the original issue): pred, pred (1), pred (2)
