Pinned

  1. CurlyDL Public

    A robust, feature-rich download manager for Python with support for parallel downloads, speed limiting, and progress tracking.

    Python

  2. VoidShader Public

    A Godot experiment exploring shader techniques to replicate and visualize the void through procedural graphics and audio synthesis.

    GDScript

  3. tinytik Public

    Go

  4. Import oneplus notes app backup to google keep

    import json
    import gkeepapi

    # Install clone app, go to settings -> additional features -> backup and reset and make a local backup from settings
    # Then you can find the backup file in android/data/com.oneplus.backupRestore, then copy rich text from there, it's 
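    A minimal sketch of how the rest of the import might continue, assuming the backup yields a JSON array of notes that each carry "title" and "content" fields (the real OnePlus field names and the backup file name used here are assumptions) and that gkeepapi credentials are available. Depending on the gkeepapi version, authentication is done with login() or authenticate(); authenticate() is shown.

    import json
    import gkeepapi

    keep = gkeepapi.Keep()
    # Hypothetical credentials; newer gkeepapi releases authenticate with a Google master token.
    keep.authenticate("you@example.com", "your-master-token")

    # Hypothetical backup file name; inspect the copied backup to find the actual JSON file.
    with open("noteList.json", encoding="utf-8") as f:
        notes = json.load(f)

    for note in notes:
        # Field names are assumptions; check the backup JSON for the real keys.
        keep.createNote(note.get("title", ""), note.get("content", ""))

    keep.sync()  # push the newly created notes to Google Keep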
  5. This code provides a comprehensive solution for processing and translating large text files using OpenAI's GPT-4 model. It begins by importing the necessary libraries, including NLTK for sentence tokenization and the tiktoken library for token encoding. The script defines two key functions, split_text_by_sentences and split_text_by_tokens, which split text into manageable segments based on sentence boundaries and token counts, respectively. The large_text_completion function then processes each text segment, sending it to the OpenAI API with a specified query (such as a translation request) and concatenating the responses. The script reads a large text file (largeInput.txt), splits it into segments that fit within GPT-4's token limits, and sends each segment to the model with a translation query.
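    A minimal sketch reconstructing the script from that description. The function names (split_text_by_sentences, split_text_by_tokens, large_text_completion) and the input file largeInput.txt come from the description; the token budget, prompt wording, and use of the openai>=1.0 client are assumptions.

    import nltk
    import tiktoken
    from openai import OpenAI  # assumes the >=1.0 client; older scripts used openai.ChatCompletion

    nltk.download("punkt", quiet=True)
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    encoding = tiktoken.encoding_for_model("gpt-4")


    def split_text_by_sentences(text):
        """Split text into sentences using NLTK's tokenizer."""
        return nltk.sent_tokenize(text)


    def split_text_by_tokens(text, max_tokens=3000):
        """Group sentences into segments that stay under max_tokens GPT-4 tokens."""
        segments, current, current_tokens = [], [], 0
        for sentence in split_text_by_sentences(text):
            n = len(encoding.encode(sentence))
            if current and current_tokens + n > max_tokens:
                segments.append(" ".join(current))
                current, current_tokens = [], 0
            current.append(sentence)
            current_tokens += n
        if current:
            segments.append(" ".join(current))
        return segments


    def large_text_completion(text, query, max_tokens=3000):
        """Send each segment to GPT-4 with the query and concatenate the responses."""
        parts = []
        for segment in split_text_by_tokens(text, max_tokens):
            response = client.chat.completions.create(
                model="gpt-4",
                messages=[
                    {"role": "system", "content": query},
                    {"role": "user", "content": segment},
                ],
            )
            parts.append(response.choices[0].message.content)
        return "\n".join(parts)


    if __name__ == "__main__":
        with open("largeInput.txt", encoding="utf-8") as f:
            large_text = f.read()
        print(large_text_completion(large_text, "Translate the following text into English."))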