Check failed in jt.argsort/jt.argmax/jt.argmin/jt.flatten #611

Open
x0w3n opened this issue Dec 3, 2024 · 0 comments
x0w3n commented Dec 3, 2024

Describe the bug

When the dim argument of jt.argsort is negative and out of range (and similarly for jt.argmax, jt.argmin, and jt.flatten's start_dim/end_dim), the call triggers an internal check failure whose message asks the user to report the issue, instead of a clear argument-validation error.
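
For illustration, a minimal sketch of the contrast (assuming Jittor 1.3.9.10 as in the logs below): the documented default dim=-1 is accepted, while an out-of-range negative dim surfaces the internal check instead of a readable error.

import jittor as jt

x = jt.array([[1, 2, 3], [4, 5, 6]])

# The documented default dim=-1 is accepted.
print(x.argsort(-1))

# An out-of-range negative dim (-4 on a 2-D Var) surfaces the
# internal "Check failed" instead of a clear validation error.
try:
    x.argsort(-4)
except RuntimeError as e:
    print(type(e).__name__, str(e).splitlines()[0])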

Full Log

logs of jt.argsort

Traceback (most recent call last):
  File "test.py", line 7, in <module>
    sorted_indices = input.argsort(dim)
RuntimeError: Wrong inputs arguments, Please refer to examples(help(jt.argsort)).

Types of your inputs are:
 self   = Var,
 args   = (int, ),

The function declarations are:
 vector_to_tuple<VarHolder*> argsort(VarHolder* x,  int dim=-1,  bool descending=false,  NanoString dtype=ns_int32)

Failed reason:[f 1203 08:51:26.192444 16 argsort_op.cc:126] Check failed dim(-4) >= 0(0) Something wrong ... Could you please report this issue?

logs of jt.flatten

[i 1203 09:51:00.611464 72 compiler.py:956] Jittor(1.3.9.10) src: /home/miniconda3/envs/jittor/lib/python3.9/site-packages/jittor
[i 1203 09:51:00.619209 72 compiler.py:957] g++ at /usr/bin/g++(11.2.0)
[i 1203 09:51:00.619697 72 compiler.py:958] cache_path: /home/.cache/jittor/jt1.3.9/g++11.2.0/py3.9.12/Linux-5.15.0-1x30/INTELRXEONRGOLx51/ef26/default
[i 1203 09:51:00.684229 72 install_cuda.py:93] cuda_driver_version: [12, 2]
[i 1203 09:51:00.696125 72 __init__.py:412] Found /home/.cache/jittor/jtcuda/cuda12.2_cudnn8_linux/bin/nvcc(12.2.140) at /home/.cache/jittor/jtcuda/cuda12.2_cudnn8_linux/bin/nvcc.
[i 1203 09:51:00.889402 72 __init__.py:412] Found gdb(12.0.90) at /usr/bin/gdb.
[i 1203 09:51:00.892129 72 __init__.py:412] Found addr2line(2.38) at /usr/bin/addr2line.
[i 1203 09:51:01.183813 72 compiler.py:1013] cuda key:cu12.2.140_sm_89
[i 1203 09:51:02.460258 72 __init__.py:227] Total mem: 251.50GB, using 16 procs for compiling.
[i 1203 09:51:02.671112 72 jit_compiler.cc:28] Load cc_path: /usr/bin/g++
[i 1203 09:51:02.932042 72 init.cc:63] Found cuda archs: [89,]
Traceback (most recent call last):
  File "test.py", line 9, in <module>
    flatten_output = jt.flatten(input, start_dim=-100, end_dim=-100)
  File "/home/miniconda3/envs/jittor/lib/python3.9/site-packages/jittor/__init__.py", line 714, in flatten
    for i in range(start_dim, end_dim+1, 1): dims *= in_shape[i]
RuntimeError: Wrong inputs arguments, Please refer to examples(help(jt.NanoVector.__map_getitem__)).

Types of your inputs are:
 self   = NanoVector,
 arg0   = int,

The function declarations are:
 inline int64 at(int i)
 inline NanoVector slice(Slice slice)

Failed reason:[f 1203 09:51:03.640783 72 nano_vector.h:116] Check failed: i>=0 && i<size()  Something wrong... Could you please report this issue?
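
The traceback above points at the loop inside Jittor's flatten (for i in range(start_dim, end_dim+1, 1): dims *= in_shape[i]), which reaches NanoVector indexing with an index that is still far out of range. A simplified plain-Python sketch of that failure mode follows; the list stands in for NanoVector, and this is only an assumption about the mechanism:

# Simplified sketch, not Jittor's actual code: indexing a 2-element
# shape with i = -100 is out of range, which is what NanoVector's
# internal check (i >= 0 && i < size()) rejects.
in_shape = [1, 0]                # shape of jt.zeros((1, 0))
start_dim, end_dim = -100, -100
dims = 1
try:
    for i in range(start_dim, end_dim + 1):
        dims *= in_shape[i]      # i = -100: out of range even with negative indexing
except IndexError as e:
    print("out of range:", e)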

Minimal Reproduce

repro of jt.argsort

import jittor as jt

input = jt.array([[1, 2, 3], [4, 5, 6]])
dim = -4
sorted_indices = input.argsort(dim)

repro of jt.argmax

import jittor as jt

x = jt.ones(3, 3)
result, value = jt.argmax(x, dim=-2)

print(result)
print(value)

repro of jt.argmin

import jittor as jt

x = jt.ones(3, 3)
result, value = jt.argmin(x, dim=-2)

print(result)
print(value)

repro of jt.flatten

import jittor as jt

input = jt.zeros((1, 0))
flatten_output = jt.flatten(input, start_dim=-100, end_dim=-100)
print(flatten_output)

Expected behavior

The dim (and start_dim/end_dim) parameters should be validated up front, so that an out-of-range value produces a clear, user-facing error message instead of an internal check failure that asks the user to file a report.
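
A minimal sketch of the kind of up-front validation being requested; normalize_dim is a hypothetical helper, not an existing Jittor function:

def normalize_dim(dim, ndim):
    # Hypothetical helper: map a negative dim to its positive
    # equivalent and reject anything out of range with a clear,
    # user-facing error rather than an internal check failure.
    if not -ndim <= dim < ndim:
        raise ValueError(
            f"dim {dim} is out of range for a tensor with {ndim} dimensions "
            f"(expected {-ndim} <= dim < {ndim})")
    return dim + ndim if dim < 0 else dim

print(normalize_dim(-1, 2))   # 1
print(normalize_dim(-2, 2))   # 0
try:
    normalize_dim(-4, 2)      # out of range: should fail loudly and clearly
except ValueError as e:
    print(e)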
