This repository was archived by the owner on Mar 20, 2025. It is now read-only.

Understanding GPU Memory Usage in Deep Water #64

@dward4

Hi,

I'm trying to understand how Deep Water utilizes GPU memory. I ran the 'Deep Water Deep Features Similarity Cars Inception' notebook and noticed that nvtop showed

```
Device 0 [Tesla K80] PCIe GEN 3@16x RX: 0.000 kB/s TX: 0.000 kB/s
GPU 562MHz MEM 2505MHz TEMP 74°C FAN 0% POW 66 / 149 W
GPU-Util[||||||| 53%] MEM-Util[|||||4.4G/12.0G] Encoder[ 0%] Decoder[ 0%]
```

GPU-Util fluctuated, but MEM-Util held constant at 4.4G until the notebook was closed.
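
In case it helps, the same numbers could be logged over time with a quick nvidia-smi poll like this (rough sketch; device index 0 and the one-minute window are arbitrary):

```python
# Rough sketch: poll nvidia-smi once per second and print GPU memory and
# utilization while the notebook runs. Device index 0 and the one-minute
# window are arbitrary choices for illustration.
import subprocess
import time

QUERY = [
    "nvidia-smi", "--id=0",
    "--query-gpu=memory.used,memory.total,utilization.gpu",
    "--format=csv,noheader,nounits",
]

for _ in range(60):
    used, total, util = subprocess.check_output(QUERY).decode().strip().split(", ")
    print("mem {}/{} MiB, util {}%".format(used, total, util))
    time.sleep(1)
```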

Does Deep Water store the model on the GPU and run it from there? I'm used to seeing full GPU memory usage with plain TensorFlow scripts.
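
For comparison, this is roughly the plain-TensorFlow behaviour I'm used to: by default a session grabs nearly all GPU memory at startup unless it is bounded. A minimal sketch, assuming the TensorFlow 1.x API (the 0.4 fraction below is just an example):

```python
# Sketch of plain-TensorFlow GPU memory behaviour: by default a tf.Session
# maps almost all GPU memory at startup, but the allocation can be bounded.
# (TensorFlow 1.x API; the fraction value below is arbitrary.)
import tensorflow as tf

config = tf.ConfigProto()
# Grow the allocation on demand instead of claiming the whole card up front.
config.gpu_options.allow_growth = True
# Or cap the allocation at a fixed fraction of total GPU memory:
# config.gpu_options.per_process_gpu_memory_fraction = 0.4

with tf.Session(config=config) as sess:
    a = tf.constant([1.0, 2.0, 3.0])  # trivial op to force GPU allocation
    print(sess.run(tf.reduce_sum(a)))
```

With settings like these, the reported memory stays close to what the graph actually needs, so I'm wondering whether Deep Water does something similar under the hood.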

I apologize in advance for any incorrect terminology; I'll amend this as needed.
