Bump 3rdparty/NeMo from e2b0f0e to 06e6703 (#486)
Bumps [3rdparty/NeMo](https://github.com/NVIDIA/NeMo) from `e2b0f0e` to
`06e6703`.
<details>
<summary>Commits</summary>
<ul>
<li><a
href="https://github.com/NVIDIA/NeMo/commit/06e6703aaafe2a7246930b3c1f627a251181bc8f"><code>06e6703</code></a>
Introducing TensorRT lazy export and caching option with trt_compile()
(<a
href="https://redirect.github.com/NVIDIA/NeMo/issues/11266">#11266</a>)</li>
<li><a
href="https://github.com/NVIDIA/NeMo/commit/e238327f17ba6e25ac9bbe8c2e2ec897cdb1493c"><code>e238327</code></a>
Fix strategies saving unsharded optimizer states (<a
href="https://redirect.github.com/NVIDIA/NeMo/issues/11392">#11392</a>)</li>
<li><a
href="https://github.com/NVIDIA/NeMo/commit/1023e15c010c7cd3653297625fb6868df394750e"><code>1023e15</code></a>
data modules for llava_next (<a
href="https://redirect.github.com/NVIDIA/NeMo/issues/11400">#11400</a>)</li>
<li><a
href="https://github.com/NVIDIA/NeMo/commit/c0b49d60480787d0a90dbfcea7a778b722cb42cc"><code>c0b49d6</code></a>
Fix vllm test issue when run_accuracy is enabled (<a
href="https://redirect.github.com/NVIDIA/NeMo/issues/11413">#11413</a>)</li>
<li><a
href="https://github.com/NVIDIA/NeMo/commit/79b2e8c0567cd2be78ad5701c8567b451f6cc4f6"><code>79b2e8c</code></a>
Adding LLava-Next model class (<a
href="https://redirect.github.com/NVIDIA/NeMo/issues/11399">#11399</a>)</li>
<li><a
href="https://github.com/NVIDIA/NeMo/commit/5d97b70cde8ff2d6232addcbe6d38eaa127aa284"><code>5d97b70</code></a>
[NeMo-UX] Support <code>load_strictness</code> (<a
href="https://redirect.github.com/NVIDIA/NeMo/issues/10612">#10612</a>)</li>
<li><a
href="https://github.com/NVIDIA/NeMo/commit/7198fa4a0eb1e1ec4e67453967863629b4509973"><code>7198fa4</code></a>
Rewire tokenizer exception handling in model resume (<a
href="https://redirect.github.com/NVIDIA/NeMo/issues/11375">#11375</a>)</li>
<li><a
href="https://github.com/NVIDIA/NeMo/commit/706eb091e234cd5c6e085eaf4f5a782254b614a5"><code>706eb09</code></a>
Remove logic to skip checkpoint save if checkpoint exists (<a
href="https://redirect.github.com/NVIDIA/NeMo/issues/11362">#11362</a>)</li>
<li><a
href="https://github.com/NVIDIA/NeMo/commit/613d0f22c1d04d91564202cea5213ee6deae4c3c"><code>613d0f2</code></a>
ci: Add HF cache (<a
href="https://redirect.github.com/NVIDIA/NeMo/issues/11398">#11398</a>)</li>
<li><a
href="https://github.com/NVIDIA/NeMo/commit/2ce32432738a528c1719721225a8bb68ec92b703"><code>2ce3243</code></a>
capitalize HF as HF instead of Hf (<a
href="https://redirect.github.com/NVIDIA/NeMo/issues/11384">#11384</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/NVIDIA/NeMo/compare/e2b0f0ead13be29476c047dfb49ad49f85a849bb...06e6703aaafe2a7246930b3c1f627a251181bc8f">compare
view</a></li>
</ul>
</details>
<br />


Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

---------

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Peter St. John <[email protected]>
dependabot[bot] and pstjohn authored Dec 3, 2024
1 parent 7f9fd66 commit 868be33
Showing 30 changed files with 74 additions and 59 deletions.
2 changes: 1 addition & 1 deletion 3rdparty/NeMo
Submodule NeMo updated 737 files
11 changes: 9 additions & 2 deletions ci/scripts/run_pytest.sh
@@ -25,5 +25,12 @@ if ! set_bionemo_home; then
exit 1
fi

echo "Running pytest tests"
pytest -v --nbval-lax docs/ scripts/ sub-packages/
python -m coverage erase

for dir in docs/ ./sub-packages/bionemo-*/; do
echo "Running pytest in $dir"
python -m coverage run --parallel-mode --source sub-packages/ -m pytest -v --nbval-lax $dir
done

python -m coverage combine
python -m coverage report
2 changes: 1 addition & 1 deletion docs/docs/user-guide/examples/bionemo-esm2/pretrain.md
@@ -61,7 +61,7 @@ strategy = nl.MegatronStrategy(
The BioNeMo2 trainer is very similar to the PyTorch Lightning trainer. We can configure the training settings and logging.

```python
from pytorch_lightning.callbacks import LearningRateMonitor, RichModelSummary
from lightning.pytorch.callbacks import LearningRateMonitor, RichModelSummary
from bionemo.llm.lightning import PerplexityLoggingCallback

num_steps = 20
10 changes: 5 additions & 5 deletions scripts/gpt-pretrain.py
@@ -17,19 +17,19 @@
from pathlib import Path
from typing import List, Optional, Sequence, TypedDict

import lightning.pytorch as pl
import numpy as np
import pytorch_lightning as pl
import torch

# In lightning.pytorch 2.0 these are commented as being "any iterable or collection of iterables"
# for now we'll use them in case the lightning type becomes something more specific in a future release.
from lightning.pytorch.utilities.types import EVAL_DATALOADERS, TRAIN_DATALOADERS
from nemo import lightning as nl
from nemo.collections import llm
from nemo.collections.common.tokenizers.tokenizer_spec import TokenizerSpec
from nemo.collections.nlp.modules.common.tokenizer_utils import get_nmt_tokenizer
from nemo.lightning.megatron_parallel import DataT
from nemo.lightning.pytorch.plugins import MegatronDataSampler

# In pytorch_lightning 2.0 these are commented as being "any iterable or collection of iterables"
# for now we'll use them in case the lightning type becomes something more specific in a future release.
from pytorch_lightning.utilities.types import EVAL_DATALOADERS, TRAIN_DATALOADERS
from torch.utils import data
from torch.utils.data import DataLoader, Dataset

12 changes: 6 additions & 6 deletions sub-packages/bionemo-core/tests/bionemo/core/data/test_load.py
@@ -110,7 +110,7 @@ def test_load_with_file(mocked_s3_download, tmp_path):
)

mocked_s3_download.side_effect = lambda _1, output_file, _2: Path(output_file).write_text("test")
file_path = load("foo/bar", resources=get_all_resources(tmp_path), cache_dir=tmp_path)
file_path = load("foo/bar", resources=get_all_resources(tmp_path), cache_dir=tmp_path, source="pbss")
assert file_path.is_file()
assert file_path.read_text() == "test"

@@ -132,7 +132,7 @@ def write_compressed_text(_1, output_file: str, _2):

mocked_s3_download.side_effect = write_compressed_text

file_path = load("foo/baz", resources=get_all_resources(tmp_path), cache_dir=tmp_path)
file_path = load("foo/baz", resources=get_all_resources(tmp_path), cache_dir=tmp_path, source="pbss")
assert file_path.is_file()
assert file_path.read_text() == "test"

@@ -155,7 +155,7 @@ def write_compressed_text(_1, output_file: str, _2):

mocked_s3_download.side_effect = write_compressed_text

file_path = load("foo/baz", resources=get_all_resources(tmp_path), cache_dir=tmp_path)
file_path = load("foo/baz", resources=get_all_resources(tmp_path), cache_dir=tmp_path, source="pbss")

# Assert the file remained compressed.
assert file_path.is_file()
@@ -190,7 +190,7 @@ def write_compressed_dir(_1, output_file: str, _2):

mocked_s3_download.side_effect = write_compressed_dir

file_path = load("foo/dir", resources=get_all_resources(tmp_path), cache_dir=tmp_path)
file_path = load("foo/dir", resources=get_all_resources(tmp_path), cache_dir=tmp_path, source="pbss")
assert file_path.is_dir()
assert (file_path / "test_file").read_text() == "test"

@@ -223,7 +223,7 @@ def write_tarfile_dir(_1, output_file: str, _2):

mocked_s3_download.side_effect = write_tarfile_dir

file_path = load("foo/dir", resources=get_all_resources(tmp_path), cache_dir=tmp_path)
file_path = load("foo/dir", resources=get_all_resources(tmp_path), cache_dir=tmp_path, source="pbss")

# Assert the file stays as a tarfile.
assert file_path.is_file()
@@ -259,7 +259,7 @@ def write_compressed_dir(_1, output_file: str, _2):

mocked_s3_download.side_effect = write_compressed_dir

file_path = load("foo/dir.gz", resources=get_all_resources(tmp_path), cache_dir=tmp_path)
file_path = load("foo/dir.gz", resources=get_all_resources(tmp_path), cache_dir=tmp_path, source="pbss")
assert file_path.is_dir()
assert (file_path / "test_file").read_text() == "test"

@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "code",
"execution_count": null,
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
@@ -21,31 +21,35 @@
"name": "stderr",
"output_type": "stream",
"text": [
"Downloading data from 'nvidia/clara/esm2_pretrain_nemo2_testdata:1.0' to file '/tmp/tmp_v_0_64q/dc23f4aaad387ecc12e53d56b8176430-esm2_pretrain_nemo2_testdata:1.0'.\n",
"Untarring contents of '/tmp/tmp_v_0_64q/dc23f4aaad387ecc12e53d56b8176430-esm2_pretrain_nemo2_testdata:1.0' to '/tmp/tmp_v_0_64q/dc23f4aaad387ecc12e53d56b8176430-esm2_pretrain_nemo2_testdata:1.0.untar'\n"
"Downloading data from 'nvidia/clara/scdl_sample_test:1.0' to file '/tmp/tmpqif5bfww/7a4237537bf535dfa00301ce8cc7073e0a23d5bc8aa902ad65db9f51b57a6df9-scdl_sample_test.tar.gz'.\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"Untarring contents of '/tmp/tmpqif5bfww/7a4237537bf535dfa00301ce8cc7073e0a23d5bc8aa902ad65db9f51b57a6df9-scdl_sample_test.tar.gz' to '/tmp/tmpqif5bfww/7a4237537bf535dfa00301ce8cc7073e0a23d5bc8aa902ad65db9f51b57a6df9-scdl_sample_test.tar.gz.untar'\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"{\n",
" \"download_end\": \"2024-11-07 19:13:48\",\n",
" \"download_start\": \"2024-11-07 19:13:46\",\n",
" \"download_time\": \"2s\",\n",
" \"download_end\": \"2024-12-03 18:39:20\",\n",
" \"download_start\": \"2024-12-03 18:39:03\",\n",
" \"download_time\": \"17s\",\n",
" \"files_downloaded\": 1,\n",
" \"local_path\": \"/tmp/tmp_v_0_64q/tmpgadmjb1k/esm2_pretrain_nemo2_testdata_v1.0\",\n",
" \"size_downloaded\": \"69.91 MB\",\n",
" \"local_path\": \"/tmp/tmpqif5bfww/tmprn0ysh0w/scdl_sample_test_v1.0\",\n",
" \"size_downloaded\": \"964.91 KB\",\n",
" \"status\": \"COMPLETED\"\n",
"}\n"
]
}
],
"source": [
"# xfail -- we need to file a bug or figure out how to call the ngcsdk from a jupyter notebook\n",
"# NBVAL_RAISES_EXCEPTION\n",
"with tempfile.TemporaryDirectory() as cache_dir:\n",
" load(\"esm2/testdata_esm2_pretrain:2.0\", source=\"ngc\", cache_dir=Path(cache_dir))"
" load(\"scdl/sample\", source=\"ngc\", cache_dir=Path(cache_dir))"
]
},
{
@@ -57,15 +61,15 @@
"name": "stderr",
"output_type": "stream",
"text": [
"Downloading data from 's3://general-purpose/esm2/pretrain/2024_03_sanity.tar.gz' to file '/tmp/tmpjnvk7m8k/f796e1ca28311606ff7dd62a067508bf-2024_03_sanity.tar.gz'.\n",
"s3://general-purpose/esm2/pretrain/2024_03_sanity.tar.gz: 100%|██████████| 73.3M/73.3M [00:01<00:00, 38.7MB/s]\n",
"Untarring contents of '/tmp/tmpjnvk7m8k/f796e1ca28311606ff7dd62a067508bf-2024_03_sanity.tar.gz' to '/tmp/tmpjnvk7m8k/f796e1ca28311606ff7dd62a067508bf-2024_03_sanity.tar.gz.untar'\n"
"Downloading data from 's3://bionemo-ci/test-data/scdl_sample_test.tar.gz' to file '/tmp/tmpl6cgwhyn/7a4237537bf535dfa00301ce8cc7073e0a23d5bc8aa902ad65db9f51b57a6df9-scdl_sample_test.tar.gz'.\n",
"s3://bionemo-ci/test-data/scdl_sample_test.tar.gz: 100%|██████████| 988k/988k [00:00<00:00, 2.70MB/s]\n",
"Untarring contents of '/tmp/tmpl6cgwhyn/7a4237537bf535dfa00301ce8cc7073e0a23d5bc8aa902ad65db9f51b57a6df9-scdl_sample_test.tar.gz' to '/tmp/tmpl6cgwhyn/7a4237537bf535dfa00301ce8cc7073e0a23d5bc8aa902ad65db9f51b57a6df9-scdl_sample_test.tar.gz.untar'\n"
]
}
],
"source": [
"with tempfile.TemporaryDirectory() as cache_dir:\n",
" load(\"esm2/testdata_esm2_pretrain:2.0\", source=\"pbss\", cache_dir=Path(cache_dir))"
" load(\"scdl/sample\", source=\"pbss\", cache_dir=Path(cache_dir))"
]
}
],
@@ -18,10 +18,10 @@
import os
from typing import Literal

from lightning.pytorch.utilities.types import EVAL_DATALOADERS, TRAIN_DATALOADERS
from nemo.lightning.data import WrappedDataLoader
from nemo.lightning.pytorch.plugins import MegatronDataSampler
from nemo.utils import logging
from pytorch_lightning.utilities.types import EVAL_DATALOADERS, TRAIN_DATALOADERS

from bionemo.esm2.data import dataset, tokenizer
from bionemo.llm.data import collate
@@ -22,10 +22,10 @@
import pandas as pd
import torch
import torch.utils.data
from lightning.pytorch.utilities.types import EVAL_DATALOADERS, TRAIN_DATALOADERS
from nemo.lightning.data import WrappedDataLoader
from nemo.lightning.pytorch.plugins import MegatronDataSampler
from nemo.utils import logging
from pytorch_lightning.utilities.types import EVAL_DATALOADERS, TRAIN_DATALOADERS
from torch import Tensor
from torch.utils.data import Dataset

@@ -18,7 +18,9 @@
from pathlib import Path
from typing import Sequence, Tuple

import pytorch_lightning as pl
import lightning.pytorch as pl
from lightning.pytorch.callbacks import Callback, RichModelSummary
from lightning.pytorch.loggers import TensorBoardLogger
from megatron.core.optimizer.optimizer_config import OptimizerConfig
from nemo import lightning as nl
from nemo.collections import llm as nllm
@@ -28,8 +30,6 @@
from nemo.lightning.pytorch.callbacks.model_transform import ModelTransform
from nemo.lightning.pytorch.callbacks.peft import PEFT
from nemo.lightning.pytorch.optim.megatron import MegatronOptimizerModule
from pytorch_lightning.callbacks import Callback, RichModelSummary
from pytorch_lightning.loggers import TensorBoardLogger

from bionemo.core.data.load import load
from bionemo.esm2.api import ESM2GenericConfig
@@ -17,13 +17,13 @@
from pathlib import Path
from typing import List, Optional, Sequence, get_args

from lightning.pytorch.callbacks import LearningRateMonitor, RichModelSummary
from megatron.core.optimizer import OptimizerConfig
from nemo import lightning as nl
from nemo.collections import llm
from nemo.lightning import resume
from nemo.lightning.pytorch import callbacks as nl_callbacks
from nemo.lightning.pytorch.optim import MegatronOptimizerModule
from pytorch_lightning.callbacks import LearningRateMonitor, RichModelSummary

from bionemo.core.utils.dtypes import PrecisionTypes, get_autocast_dtype
from bionemo.esm2.api import ESM2Config
@@ -170,7 +170,7 @@ def main(
)

# for wandb integration
# Please refer to https://pytorch-lightning.readthedocs.io/en/0.7.6/api/pytorch_lightning.loggers.html"
# Please refer to https://pytorch-lightning.readthedocs.io/en/0.7.6/api/lightning.pytorch.loggers.html"
wandb_config: Optional[WandbConfig] = (
None
if wandb_project is None
@@ -17,7 +17,7 @@
from pathlib import Path
from typing import Literal

import pytorch_lightning as pl
import lightning.pytorch as pl
from megatron.core.optimizer import OptimizerConfig
from nemo import lightning as nl
from nemo.lightning.pytorch.optim import MegatronOptimizerModule
4 changes: 2 additions & 2 deletions sub-packages/bionemo-example_model/README.md
@@ -75,7 +75,7 @@ Similarly, `ExampleFineTuneConfig` extends `ExampleGenericConfig` for finetuning

# Training Module

It is helpful to have a training module that inherits from `pytorch_lightning.LightningModule`, which organizes the model architecture, training, validation, and testing logic while abstracting away boilerplate code, enabling easier and more scalable training. This wrapper can be used for all model and loss combinations specified in the config.
It is helpful to have a training module that inherits from `lightning.pytorch.LightningModule`, which organizes the model architecture, training, validation, and testing logic while abstracting away boilerplate code, enabling easier and more scalable training. This wrapper can be used for all model and loss combinations specified in the config.
In `bionemo.example_model.lightning.lightning_basic`, we define `BionemoLightningModule`.

In this example, `training_step`, `validation_step`, and `predict_step` define the training, validation, and prediction loops, which are independent of the forward method. In nemo:
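
As a rough illustration of that pattern (a sketch for orientation only, not part of the changed README), a plain `lightning.pytorch` module might keep the step methods separate from `forward` like this; the layer sizes, loss, and optimizer below are illustrative assumptions rather than the actual `BionemoLightningModule`:

```python
# Illustrative only -- a toy stand-in, not the real BionemoLightningModule.
import lightning.pytorch as pl
import torch
from torch import nn


class ToyLightningModule(pl.LightningModule):
    """Keeps training/validation/prediction logic separate from the forward pass."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # assumed MNIST-sized input

    def forward(self, x):
        # forward only defines the computation; the step methods decide how it is used
        return self.net(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = nn.functional.cross_entropy(self(x), y)
        self.log("train_loss", loss)
        return loss

    def validation_step(self, batch, batch_idx):
        x, y = batch
        self.log("val_loss", nn.functional.cross_entropy(self(x), y))

    def predict_step(self, batch, batch_idx):
        x, _ = batch
        return self(x).argmax(dim=-1)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)
```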
@@ -99,7 +99,7 @@ We specify a training strategy of type `nemo.lightning.MegatronStrategy`. This s

We specify a trainer of type `nemo.lightning.Trainer`, which is an extension of the PyTorch Lightning trainer. This is where the devices, validation intervals, maximal steps, maximal number of epochs, and logging frequency are specified.

We specify a nemo-logger. We can set TensorBoard and WandB logging, along with extra loggers. Here, we specify a `CSVLogger` from pytorch_lightning.loggers.
We specify a nemo-logger. We can set TensorBoard and WandB logging, along with extra loggers. Here, we specify a `CSVLogger` from lightning.pytorch.loggers.

We can now proceed to training. The first pre-training script is `bionemo/example_model/training_scripts/pretrain_mnist.py`
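
To make the strategy/trainer/logger wiring described above concrete, here is a rough sketch (not taken from the example model; the parallel sizes, step counts, and logger keyword arguments are assumptions):

```python
# Illustrative wiring only; keyword arguments are assumptions, not values from the example model.
from lightning.pytorch.loggers import CSVLogger
from nemo import lightning as nl

strategy = nl.MegatronStrategy(
    tensor_model_parallel_size=1,    # single-device toy setting
    pipeline_model_parallel_size=1,
)

trainer = nl.Trainer(
    accelerator="gpu",
    devices=1,
    max_steps=100,           # maximal number of steps
    val_check_interval=10,   # how often to run validation
    log_every_n_steps=5,     # how frequently to log
    strategy=strategy,
)

# NeMoLogger can combine TensorBoard/W&B logging with extra loggers such as CSVLogger;
# the exact keyword names below are assumed.
nemo_logger = nl.NeMoLogger(
    log_dir="./results",
    name="example_model",
    extra_loggers=[CSVLogger(save_dir="./results", name="example_model")],
)
```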

@@ -19,7 +19,6 @@
from dataclasses import dataclass, field
from typing import Any, Dict, Generic, List, Optional, Sequence, Tuple, Type, TypedDict, TypeVar

import pytorch_lightning as pl
import torch
from megatron.core import ModelParallelConfig
from megatron.core.optimizer.optimizer_config import OptimizerConfig
@@ -35,6 +34,7 @@
from torchvision import transforms
from torchvision.datasets import MNIST

import lightning.pytorch as pl
from bionemo.core import BIONEMO_CACHE_DIR
from bionemo.core.data.multi_epoch_dataset import IdentityMultiEpochDatasetWrapper, MultiEpochDatasetResampler
from bionemo.llm.api import MegatronLossType
@@ -17,11 +17,11 @@
import argparse
from pathlib import Path

from lightning.pytorch.loggers import CSVLogger, TensorBoardLogger
from nemo import lightning as nl
from nemo.collections import llm
from nemo.lightning import NeMoLogger, resume
from nemo.lightning.pytorch import callbacks as nl_callbacks
from pytorch_lightning.loggers import CSVLogger, TensorBoardLogger

from bionemo.example_model.lightning.lightning_basic import (
BionemoLightningModule,
@@ -16,10 +16,10 @@

from pathlib import Path

from lightning.pytorch.loggers import CSVLogger, TensorBoardLogger
from nemo import lightning as nl
from nemo.collections import llm
from nemo.lightning import NeMoLogger, resume
from pytorch_lightning.loggers import CSVLogger, TensorBoardLogger

from bionemo.example_model.lightning.lightning_basic import (
BionemoLightningModule,
@@ -20,11 +20,11 @@
import pytest
import torch
from _pytest.compat import LEGACY_PATH
from lightning.pytorch.loggers import TensorBoardLogger
from nemo import lightning as nl
from nemo.collections import llm
from nemo.lightning import NeMoLogger, io, resume
from nemo.lightning.pytorch import callbacks as nl_callbacks
from pytorch_lightning.loggers import TensorBoardLogger

from bionemo.core import BIONEMO_CACHE_DIR
from bionemo.core.utils.dtypes import PrecisionTypes, get_autocast_dtype
@@ -26,6 +26,7 @@
from typing import Dict, List, Optional, Sequence, Type, get_args

import torch
from lightning.pytorch.callbacks import LearningRateMonitor, RichModelSummary
from megatron.core.distributed import DistributedDataParallelConfig
from megatron.core.optimizer import OptimizerConfig
from nemo import lightning as nl
@@ -36,7 +37,6 @@
from nemo.lightning.pytorch.optim.lr_scheduler import CosineAnnealingScheduler
from nemo.utils import logging
from nemo.utils.exp_manager import TimingCallback
from pytorch_lightning.callbacks import LearningRateMonitor, RichModelSummary

from bionemo.core.utils.dtypes import PrecisionTypes, get_autocast_dtype
from bionemo.geneformer.api import FineTuneSeqLenBioBertConfig, GeneformerConfig
@@ -195,7 +195,7 @@ def main(
)

# for wandb integration
# Please refer to https://pytorch-lightning.readthedocs.io/en/0.7.6/api/pytorch_lightning.loggers.html"
# Please refer to https://pytorch-lightning.readthedocs.io/en/0.7.6/api/lightning.pytorch.loggers.html"
wandb_options: Optional[WandbConfig] = (
None
if wandb_project is None
@@ -23,6 +23,7 @@
import pytest
import torch
import torch.utils.data
from lightning.pytorch.loggers import TensorBoardLogger
from megatron.core.optimizer.optimizer_config import OptimizerConfig
from megatron.core.transformer.module import Float16Module
from nemo import lightning as nl
@@ -34,7 +35,6 @@
from nemo.lightning.pytorch.callbacks.peft import PEFT
from nemo.lightning.pytorch.optim.lr_scheduler import WarmupPolicyScheduler
from nemo.lightning.pytorch.optim.megatron import MegatronOptimizerModule
from pytorch_lightning.loggers import TensorBoardLogger
from torch.nn import functional as F
from tqdm import tqdm

@@ -28,7 +28,7 @@
import pathlib
from typing import Literal

import pytorch_lightning as pl
import lightning.pytorch as pl
import torch
from megatron.core.optimizer.optimizer_config import OptimizerConfig
from nemo import lightning as nl
@@ -16,7 +16,7 @@

from typing import Any, Dict

import pytorch_lightning as pl
import lightning.pytorch as pl
from nemo.utils import logging


2 changes: 1 addition & 1 deletion sub-packages/bionemo-llm/src/bionemo/llm/lightning.py
@@ -15,7 +15,7 @@

from typing import Any, Callable, Dict, Generic, Iterable, Iterator, List, Optional, Sequence, Tuple, TypeVar, Union

import pytorch_lightning as pl
import lightning.pytorch as pl
import torch.distributed
from megatron.core import parallel_state
from megatron.core.optimizer.optimizer_config import OptimizerConfig