Pipeline-parallel support for Knowledge Distillation (NeMo 2) #11766

Draft
wants to merge 7 commits into main from aanoosheh/pp-distillation-nemo2

Conversation

AAnoosheh (Collaborator)

What does this PR do?

Enable pipeline parallelism in conjunction with student-teacher knowledge distillation in NeMo 2.

Collection: [LLM]

Changelog

  • Create a new script to enable knowledge distillation (KD) in NeMo 2
  • Modify the MegatronParallel forward pass to run the teacher in addition to the student (a conceptual sketch of the distillation step follows this list)

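The following is a conceptual sketch of the student-teacher distillation step in plain PyTorch, assuming a generic temperature-scaled KL-divergence KD loss; it is not the PR's MegatronParallel implementation, which interleaves the teacher and student forward passes inside the pipeline-parallel schedule.

import torch
import torch.nn.functional as F

def kd_step(student: torch.nn.Module, teacher: torch.nn.Module,
            batch: torch.Tensor, temperature: float = 1.0) -> torch.Tensor:
    # The teacher runs without gradients; only the student is trained.
    with torch.no_grad():
        teacher_logits = teacher(batch)
    student_logits = student(batch)
    # Soft-target KD loss: KL divergence between temperature-scaled distributions.
    return F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature**2
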
Usage

  • TBA (a rough illustrative sketch follows below)
# Add a code snippet demonstrating how to use this 
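
As a rough sketch of the setup this PR makes possible (assumptions: this is generic NeMo 2 nl.MegatronStrategy / nl.Trainer configuration, not the interface of the new scripts/llm/gpt_distillation.py, which is not shown here):

from nemo import lightning as nl

# Pipeline parallelism, which this PR makes usable for KD, is configured on the strategy.
strategy = nl.MegatronStrategy(
    tensor_model_parallel_size=2,
    pipeline_model_parallel_size=4,
)
trainer = nl.Trainer(
    devices=8,
    num_nodes=1,
    accelerator="gpu",
    strategy=strategy,
    plugins=nl.MegatronMixedPrecision(precision="bf16-mixed"),
)
# The new distillation script would build a trainer along these lines, restore the
# teacher checkpoint, and run distillation training on the student.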

GitHub Actions CI

The Jenkins CI system has been replaced by GitHub Actions self-hosted runners.

The GitHub Actions CI will run automatically when the "Run CICD" label is added to the PR.
To re-run CI, remove and add the label again.
To run CI on an untrusted fork, a NeMo user with write access must first click "Approve and run".

Before your PR is "Ready for review"

Pre-checks:

  • Make sure you read and followed Contributor guidelines
  • Did you write any new necessary tests?
  • Did you add or update any necessary documentation?
  • Does the PR affect components that are optional to install? (e.g. Numba, Pynini, Apex, etc.)
    • Reviewer: Does the PR have correct import guards for all optional libraries?

PR Type:

  • New Feature
  • Bugfix
  • Documentation

If you haven't finished some of the above items, you can still open a "Draft" PR.

Who can review?

Anyone in the community is free to review the PR once the checks have passed.
The Contributor guidelines list specific people who can review PRs to various areas.

Additional Information

  • Related to # (issue)

nemo/lightning/megatron_parallel.py (fixed)
def forward(self, batch: Dict[str, Tensor], forward_out: Tensor) -> Tuple[Tensor, Dict[str, Tensor]]:
    if isinstance(forward_out, tuple):
        # neva returns (logits, loss_mask)
        forward_out, batch["loss_mask"] = forward_out

Check notice (Code scanning / CodeQL): Unused local variable. Variable forward_out is not used.
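
For context, a common way this kind of note is resolved (not necessarily how this PR fixed it) is to discard the unpacked value explicitly when only the loss mask is needed; the helper below is hypothetical and not part of the PR.

from typing import Dict

from torch import Tensor

def unpack_loss_mask(forward_out, batch: Dict[str, Tensor]) -> Dict[str, Tensor]:
    # Bind the logits to "_" so no unused local variable is left behind.
    if isinstance(forward_out, tuple):
        # neva returns (logits, loss_mask); only the mask is kept here
        _, batch["loss_mask"] = forward_out
    return batch
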
scripts/llm/gpt_distillation.py (fixed)
AAnoosheh force-pushed the aanoosheh/pp-distillation-nemo2 branch from 0ce6e7d to b748ae8 on January 14, 2025 at 18:55
if self._kd_teacher_in_pp:
    with self.unwrapped_model.only_teacher_forward():
        with self.unwrapped_model.swap_teacher_config(self.module):
            teacher_step()

Check failure (Code scanning / CodeQL): Potentially uninitialized local variable. Local variable 'teacher_step' may be used before it is initialized.
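
A typical way this class of warning is resolved (not necessarily the fix applied in this PR) is to bind the name on every control-flow path and guard the call; the function below is a hypothetical, generic illustration rather than the PR's code.

from typing import Callable, Optional

def run_teacher_step(kd_teacher_in_pp: bool,
                     build_teacher_step: Optional[Callable[[], Callable[[], None]]]) -> None:
    # Initialize before any branch so the name is always bound,
    # then only call the step if it was actually built.
    teacher_step: Optional[Callable[[], None]] = None
    if kd_teacher_in_pp and build_teacher_step is not None:
        teacher_step = build_teacher_step()
    if teacher_step is not None:
        teacher_step()
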
scripts/llm/gpt_distillation.py (fixed)
from nemo import lightning as nl
from nemo.collections import llm
from nemo.collections.llm.gpt.model.base import get_batch_on_this_context_parallel_rank
from nemo.collections.llm.inference.base import _setup_trainer_and_restore_model

Check notice (Code scanning / CodeQL): Unused import. Import of '_setup_trainer_and_restore_model' is not used.