Added dataset download files to write the dataset from the HF api to … #37
base: main
Conversation
Added dataset download files to write the dataset from the HF api to disk. Also added bash script to create tar files from the IR files on disk.
Initial comments.
…de argparse functionality, elimination of global variables, and script execution layout.
Are you able to add a README.md, or more info into the docstring in write_data_files.py, with an example of how to use this? Also, please take a look at the CI failures.
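As an illustration of the kind of usage example being requested, a hypothetical module docstring sketch follows; the --storage flag name is an assumption and not taken from the PR:

"""Writes the LLVM IR dataset from the Hugging Face API to local disk.

Example usage (hypothetical flag names):
  python write_data_files.py --storage /path/to/output

Files are written per language under the given storage directory; create_tar.sh
can then bundle the on-disk IR files into tar archives.
"""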
Couple stylistic things.
def get_args():
  """Function to return the provided storage argument for the script.

  Returns: argparse.Namespace
Nit: Can we have a type annotation for this?
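A minimal sketch of what the annotated signature could look like, assuming the script only takes a storage path argument (the --storage flag name here is hypothetical):

import argparse


def get_args() -> argparse.Namespace:
  """Returns the provided storage argument for the script.

  Returns:
    argparse.Namespace: parsed arguments containing the storage path.
  """
  parser = argparse.ArgumentParser(description='Write HF dataset files to disk.')
  # --storage is assumed for illustration; the real flag lives in write_data_files.py.
  parser.add_argument('--storage', type=str, required=True,
                      help='Directory to write the downloaded IR files into.')
  return parser.parse_args()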
tarinfo = tarfile.TarInfo(name=f'bc_files/file{x[0]+1+start_index}.bc')
file_obj = BytesIO(x[1])
tarinfo.size = file_obj.getbuffer().nbytes
tarinfo.mtime = time()
Why do we need to set the time here?
The entries would otherwise be created with a bogus modification time (e.g. the year 1969), and tar complains about the irregular timestamp when the archive is read.
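For context, a standalone sketch of the behaviour being described (not the PR's code): a hand-built TarInfo defaults to mtime = 0, i.e. the Unix epoch, so the timestamp has to be set explicitly when adding in-memory data.

import tarfile
from io import BytesIO
from time import time

data = b'fake bitcode contents'
with tarfile.open('example.tar', 'w') as tar:
  tarinfo = tarfile.TarInfo(name='bc_files/file1.bc')
  file_obj = BytesIO(data)
  tarinfo.size = file_obj.getbuffer().nbytes
  # Without this line tarinfo.mtime stays 0 (the Unix epoch), which some tar
  # implementations flag as an implausible modification time on extraction.
  tarinfo.mtime = time()
  tar.addfile(tarinfo, fileobj=file_obj)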
tarinfo.mtime = time()
tar.addfile(tarinfo, fileobj=file_obj)

with parallel.parallel_backend('spark'):
Can you add a comment on the performance benefits of using the parallel backend?
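For readers unfamiliar with the pattern, a hedged sketch of how a joblib 'spark' backend is typically wired up, assuming the joblibspark package and an available Spark session (whether the PR uses joblibspark is not confirmed here). The benefit is that the same Parallel/delayed code fans work out to Spark executors instead of local workers:

from joblib import Parallel, delayed, parallel_backend
from joblibspark import register_spark  # assumed dependency, requires pyspark


def process_shard(shard_id):
  # Placeholder for the per-shard download/extraction work.
  return shard_id * 2


register_spark()  # makes the 'spark' backend name available to joblib
with parallel_backend('spark', n_jobs=8):
  # Each delayed call is scheduled as a Spark task rather than a local process.
  results = Parallel()(delayed(process_shard)(i) for i in range(32))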
end_index = file_indices[i]["end_index"]
dir_name = f'{storage}/{file_indices[i]["language"]}'
makedirs(dir_name, exist_ok=True)
thread = threading.Thread(
It would probably be more natural to use a ThreadPoolExecutor here, submit jobs, and get back futures?
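A minimal sketch of the suggested ThreadPoolExecutor pattern, with a hypothetical write_language_files worker and a made-up file_indices list standing in for the PR's thread target and inputs:

from concurrent.futures import ThreadPoolExecutor, as_completed

file_indices = [
    {'language': 'c', 'start_index': 0, 'end_index': 100},
    {'language': 'rust', 'start_index': 100, 'end_index': 200},
]


def write_language_files(language, start_index, end_index, dir_name):
  # Hypothetical worker; the real target function lives in write_data_files.py.
  return f'{dir_name}: wrote files {start_index}..{end_index} for {language}'


with ThreadPoolExecutor(max_workers=8) as executor:
  futures = [
      executor.submit(write_language_files, idx['language'], idx['start_index'],
                      idx['end_index'], f"/tmp/storage/{idx['language']}")
      for idx in file_indices
  ]
  for future in as_completed(futures):
    # Unlike bare threads, futures surface worker exceptions and return values here.
    print(future.result())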