Add Agent Evaluation skill for accuracy benchmarking#1132
Conversation
Add a Claude Code skill for evaluating LLM accuracy using NeMo Evaluator Launcher (NEL). Based on the upstream nel-assistant skill (commit f1fa073) with ModelOpt-specific additions:

- Auto-detect ModelOpt quantization format from `hf_quant_config.json` and set the correct vLLM/SGLang `--quantization` flag
- Quantization-aware benchmark defaults (recommend MMLU, GSM8K, ARC-Challenge for quantized models)
- Workspace management for multi-user environments (Step 0)
- Disable MD036/MD029 markdownlint rules for upstream NEL formatting

The skill guides users through NEL config generation, model card research, and evaluation execution (local and SLURM).

Signed-off-by: Kai Xu <kaix@nvidia.com>
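A minimal sketch of the auto-detection described above, assuming the documented `hf_quant_config.json` layout plus the `config.json` fallback mentioned elsewhere in this thread; the helper name is hypothetical and not code from the skill:

```python
import json
from pathlib import Path

def read_quant_algo(checkpoint_path: str):
    """Return the ModelOpt quantization algorithm for a checkpoint, or None.

    Checks hf_quant_config.json first, then falls back to config.json's
    quantization_config (quant_method == "modelopt"). None means unquantized.
    """
    root = Path(checkpoint_path)
    hf_cfg = root / "hf_quant_config.json"
    if hf_cfg.exists():
        return json.loads(hf_cfg.read_text())["quantization"]["quant_algo"]
    cfg = root / "config.json"
    if cfg.exists():
        qc = json.loads(cfg.read_text()).get("quantization_config", {})
        if qc.get("quant_method") == "modelopt":
            return qc.get("quant_algo", "modelopt")
    return None  # neither file found: unquantized, no --quantization flag needed
```

The returned algorithm name (e.g. `FP8`, `NVFP4`) can then drive the flag selection discussed in the review below.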
📝 Walkthrough

Adds a new NeMo evaluation skill and many evaluation manifests providing interactive, stepwise workflows for running accuracy evaluations via NeMo Evaluator Launcher (NEL), plus reference docs for model-card extraction and multi-node patterns; also updates Markdown lint rules.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant User as User
    participant Skill as Evaluation Skill
    participant NEL as NEL (CLI)
    participant Deploy as Runtime (vLLM / SGLang)
    participant Store as Model Storage / Workspace
    User->>Skill: invoke evaluation (provide model / choices)
    Skill->>NEL: run "nel skills build-config" (collect answers)
    NEL-->>Skill: generated config (with auto-detected quantization & overrides)
    Skill->>Store: read checkpoint & hf_quant_config.json (auto-detect)
    Skill->>Deploy: prepare deployment (pre_cmd, extra_args)
    Skill->>NEL: run "nel run" (dry-run -> test -> full)
    NEL->>Deploy: start/target evaluation jobs
    Deploy->>Store: load checkpoint
    Deploy-->>NEL: task results/logs
    NEL-->>Skill: status/info/logs
    Skill-->>User: present results / task list / monitoring pointers
```
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~12 minutes

🚥 Pre-merge checks: ✅ Passed checks (4 passed)
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.claude/skills/evaluation/SKILL.md:
- Around line 269-275: Update the SKILL.md snippet that documents
NEMO_EVALUATOR_TRUST_PRE_CMD to include a clear security warning: note that
setting NEMO_EVALUATOR_TRUST_PRE_CMD=1 enables execution of pre_cmd and post_cmd
which run arbitrary shell commands with the evaluator's privileges, instruct
users to review pre_cmd content, only trust configs from known sources, and be
cautious when using nemo_skills.* self-deployment tasks; reference the
environment variable name (NEMO_EVALUATOR_TRUST_PRE_CMD) and the config keys
(pre_cmd, post_cmd, nemo_skills.*) so readers can find and audit them.
- Around line 100-116: Update the documentation in SKILL.md to remove the
incorrect per-algorithm vLLM flag mapping and instead state that if
hf_quant_config.json exists (read quantization.quant_algo), vLLM uses a single
unified flag --quantization modelopt which auto-detects NVFP4, W4A8_AWQ, FP8,
etc.; replace the table and related lines with a concise statement: "If
hf_quant_config.json exists, vLLM auto-detects the quantization format and you
should pass --quantization modelopt (no format-specific flags required)."
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 697ac2e8-f1a5-41d4-9870-adb1ff730091
📒 Files selected for processing (3)
- .claude/skills/evaluation/SKILL.md
- .claude/skills/evaluation/evals/nemotron3-nano-bf16-reasoning.json
- .markdownlint-cli2.yaml
.claude/skills/evaluation/SKILL.md
Outdated
> **Auto-detect ModelOpt quantization format** (checkpoint paths only):
>
> Check for `hf_quant_config.json` in the checkpoint directory:
>
> ```bash
> cat <checkpoint_path>/hf_quant_config.json 2>/dev/null
> ```
>
> If found, read `quantization.quant_algo` and set the correct vLLM/SGLang quantization flag in `deployment.extra_args`:
>
> | `quant_algo` | Flag to add |
> |-------------|-------------|
> | `FP8` | `--quantization modelopt` |
> | `W4A8_AWQ` | `--quantization modelopt` |
> | `NVFP4`, `NVFP4_AWQ` | `--quantization modelopt_fp4` |
>
> If no `hf_quant_config.json`, the checkpoint is unquantized — no flag needed.
🧩 Analysis chain
🏁 Script executed:

```bash
#!/bin/bash
# Check for existing ModelOpt quantization format references in the codebase
echo "=== Searching for hf_quant_config.json references ==="
rg -n 'hf_quant_config' -C3
echo -e "\n=== Searching for quantization.quant_algo references ==="
rg -n 'quant_algo' -C3
echo -e "\n=== Searching for FP8, W4A8_AWQ, NVFP4 algorithm references ==="
rg -n 'FP8|W4A8_AWQ|NVFP4' -C2
```

Repository: NVIDIA/Model-Optimizer
Length of output: 50379
🌐 Web query:
What are the supported quantization formats in vLLM for ModelOpt quantized models? Specifically, does vLLM support --quantization modelopt and --quantization modelopt_fp4 flags?
💡 Result:
vLLM supports ModelOpt quantized models via the --quantization modelopt flag. This includes formats like FP8 (ModelOpt FP8), NVFP4 (a 4-bit floating point format from NVIDIA Model Optimizer), MXFP8, and mixed precision (e.g., FP8 dense + NVFP4 MoE). There is no specific --quantization modelopt_fp4 flag; FP4 support (NVFP4) is handled under the unified modelopt quantization method, auto-detected from the model's quantization config (e.g., hf_quant_config.json or quantization_config in config.json). For CLI usage, specify --quantization modelopt when loading ModelOpt-quantized checkpoints from Hugging Face, as shown in examples like nvidia/Llama-3.1-8B-Instruct-FP8.
Citations:
- 1: https://docs.vllm.ai/en/latest/features/quantization/
- 2: https://docs.vllm.ai/en/latest/api/vllm/model_executor/layers/quantization/modelopt.html
- 3: https://docs.vllm.ai/en/stable/api/vllm/model_executor/layers/quantization/modelopt/
- 4: https://docs.vllm.ai/en/v0.9.0/features/quantization/modelopt.html
- 5: https://docs.vllm.ai/projects/llm-compressor/en/0.8.0/examples/quantization_w4a4_fp4/
- 6: https://docs.vllm.ai/en/stable/features/quantization/modelopt.html
Incorrect vLLM flag mapping for ModelOpt quantization auto-detection.
The JSON path quantization.quant_algo is correct and the algorithm names (FP8, W4A8_AWQ, NVFP4, NVFP4_AWQ) are confirmed in the ModelOpt codebase. However, the proposed flag mapping is incorrect:
- `--quantization modelopt_fp4` does not exist in vLLM. vLLM provides a single unified flag: `--quantization modelopt`, which auto-detects the quantization format from the model's quantization config (either `hf_quant_config.json` or `config.json`'s `quantization_config` field).
- NVFP4 is auto-detected, not mapped to a separate flag. vLLM automatically recognizes NVFP4, W4A8_AWQ, FP8, and other formats when `quant_algo` is present in the quantization config.
Remove the flag mapping table and replace with: "If hf_quant_config.json exists, vLLM auto-detects the quantization format and applies --quantization modelopt automatically. No additional format-specific flags are needed."
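Under this correction, the flag logic collapses to a single check. A minimal sketch, assuming vLLM's unified auto-detection described in the finding; the helper name is hypothetical:

```python
def modelopt_extra_args(quant_algo):
    """Map a detected quant_algo to deployment.extra_args entries.

    Per the corrected guidance, every ModelOpt format (FP8, W4A8_AWQ,
    NVFP4, NVFP4_AWQ, ...) uses the same unified vLLM flag; the specific
    format is auto-detected by vLLM from the checkpoint's quant config.
    """
    if quant_algo is None:
        return []  # unquantized checkpoint: no flag needed
    return ["--quantization", "modelopt"]
```

Note how the per-algorithm table disappears entirely; only the presence or absence of a quantization config matters.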
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.claude/skills/evaluation/SKILL.md around lines 100 - 116, Update the
documentation in SKILL.md to remove the incorrect per-algorithm vLLM flag
mapping and instead state that if hf_quant_config.json exists (read
quantization.quant_algo), vLLM uses a single unified flag --quantization
modelopt which auto-detects NVFP4, W4A8_AWQ, FP8, etc.; replace the table and
related lines with a concise statement: "If hf_quant_config.json exists, vLLM
auto-detects the quantization format and you should pass --quantization modelopt
(no format-specific flags required)."
> ```bash
> # If using pre_cmd or post_cmd:
> export NEMO_EVALUATOR_TRUST_PRE_CMD=1
>
> # If using nemo_skills.* tasks with self-deployment:
> export DUMMY_API_KEY=dummy
> ```
Consider security implications of NEMO_EVALUATOR_TRUST_PRE_CMD.
Line 271 sets NEMO_EVALUATOR_TRUST_PRE_CMD=1 to enable pre_cmd execution. Since pre_cmd can run arbitrary shell commands (including downloads via curl as shown in line 149), this environment variable effectively disables a security safeguard.
While necessary for the workflow, consider adding a security note warning users to:
- Review `pre_cmd` content in configs before running
- Only trust configs from known sources
- Understand that `pre_cmd` runs with the same privileges as NEL
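Before exporting the trust variable, the review advice above can be operationalized with a small audit pass; `audit_trusted_commands` is a hypothetical helper for illustration, not part of NEL:

```python
from pathlib import Path

TRUST_SENSITIVE_KEYS = ("pre_cmd", "post_cmd")

def audit_trusted_commands(config_path: str) -> list[str]:
    """Return config lines mentioning pre_cmd/post_cmd so a user can review
    them before exporting NEMO_EVALUATOR_TRUST_PRE_CMD=1."""
    hits = []
    for lineno, line in enumerate(Path(config_path).read_text().splitlines(), start=1):
        if any(key in line for key in TRUST_SENSITIVE_KEYS):
            hits.append(f"{config_path}:{lineno}: {line.strip()}")
    return hits
```

Running this against a generated config and eyeballing the output before setting the environment variable is one way to follow the "review first" guidance.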
🛡️ Proposed security notice
**Important**: Export required environment variables based on your config. If any tokens or keys are missing (e.g. `HF_TOKEN`, `NGC_API_KEY`, `api_key_name` from the config), ask the user to put them in a `.env` file in the project root so you can run `set -a && source .env && set +a` (or equivalent) before executing `nel run` commands.
```diff
-# If using pre_cmd or post_cmd:
+# If using pre_cmd or post_cmd (review commands first - they execute with your privileges):
 export NEMO_EVALUATOR_TRUST_PRE_CMD=1
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```bash
# If using pre_cmd or post_cmd:
export NEMO_EVALUATOR_TRUST_PRE_CMD=1

# If using nemo_skills.* tasks with self-deployment:
export DUMMY_API_KEY=dummy
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.claude/skills/evaluation/SKILL.md around lines 269 - 275, Update the
SKILL.md snippet that documents NEMO_EVALUATOR_TRUST_PRE_CMD to include a clear
security warning: note that setting NEMO_EVALUATOR_TRUST_PRE_CMD=1 enables
execution of pre_cmd and post_cmd which run arbitrary shell commands with the
evaluator's privileges, instruct users to review pre_cmd content, only trust
configs from known sources, and be cautious when using nemo_skills.*
self-deployment tasks; reference the environment variable name
(NEMO_EVALUATOR_TRUST_PRE_CMD) and the config keys (pre_cmd, post_cmd,
nemo_skills.*) so readers can find and audit them.
Codecov Report

✅ All modified and coverable lines are covered by tests.

Additional details and impacted files:

```
@@            Coverage Diff             @@
##             main    #1132      +/-   ##
==========================================
+ Coverage   70.15%   70.18%   +0.02%
==========================================
  Files         230      230
  Lines       26045    26045
==========================================
+ Hits        18273    18279       +6
+ Misses       7772     7766       -6
```

☔ View full report in Codecov by Sentry.
Signed-off-by: Kai Xu <kaix@nvidia.com>
♻️ Duplicate comments (2)
.claude/skills/evaluation/SKILL.md (2)
**194-199: ⚠️ Potential issue | 🟠 Major**

Add explicit security warning for `NEMO_EVALUATOR_TRUST_PRE_CMD`.

Line 195 enables trusted command execution, but the snippet lacks a caution that `pre_cmd`/`post_cmd` run arbitrary shell commands with evaluator privileges. Please add an explicit warning in this section.

Proposed doc fix

```diff
-# If using pre_cmd or post_cmd:
+# If using pre_cmd or post_cmd:
+# WARNING: NEMO_EVALUATOR_TRUST_PRE_CMD=1 allows `pre_cmd`/`post_cmd` execution
+# from config with your current privileges. Review `pre_cmd`, `post_cmd`, and
+# `nemo_skills.*` task settings, and only run trusted configs.
 export NEMO_EVALUATOR_TRUST_PRE_CMD=1
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In @.claude/skills/evaluation/SKILL.md around lines 194-199, add a clear security warning near the NEMO_EVALUATOR_TRUST_PRE_CMD export: explain that setting NEMO_EVALUATOR_TRUST_PRE_CMD=1 allows the evaluator to run arbitrary shell commands via pre_cmd and post_cmd with evaluator privileges, warn against enabling it in untrusted environments or on production hosts, and recommend alternatives (avoid enabling, run in an isolated sandbox/container, or validate commands) while referencing the environment variable name NEMO_EVALUATOR_TRUST_PRE_CMD and the pre_cmd/post_cmd hooks so readers know exactly which settings are risky.

**108-115: ⚠️ Potential issue | 🔴 Critical**

Incorrect ModelOpt flag mapping for NVFP4 in vLLM docs path.

Line 114 maps `NVFP4`/`NVFP4_AWQ` to `--quantization modelopt_fp4`, which is likely invalid; this should use the unified ModelOpt quantization flag flow instead. This can misconfigure deployments at runtime.

Proposed doc fix

```diff
-| `NVFP4`, `NVFP4_AWQ` | `--quantization modelopt_fp4` |
-| Other values | Try `--quantization modelopt`; consult vLLM/SGLang docs if unsure |
+| `NVFP4`, `NVFP4_AWQ` | `--quantization modelopt` |
+| Other values | Use `--quantization modelopt`; verify backend support in docs if unsure |
```

What are the valid vLLM CLI values for `--quantization` when loading NVIDIA ModelOpt-quantized checkpoints, and is `modelopt_fp4` a supported value?

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In @.claude/skills/evaluation/SKILL.md around lines 108-115, the doc incorrectly maps NVFP4/NVFP4_AWQ to the unsupported flag `--quantization modelopt_fp4`; update the mapping logic so that when reading quantization.quant_algo you add the unified ModelOpt flag (e.g., `--quantization modelopt`) into deployment.extra_args for NVFP4/NVFP4_AWQ instead of `modelopt_fp4`, and add a note clarifying that vLLM/SGLang use `modelopt` for NVIDIA ModelOpt formats and to consult vLLM docs for any newer flag names; ensure the table rows referencing `NVFP4`, `NVFP4_AWQ`, `modelopt_fp4` are replaced with `--quantization modelopt` and adjust any code/comments that conditionally look for `modelopt_fp4`.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In @.claude/skills/evaluation/SKILL.md:
- Around line 194-199: Add a clear security warning near the
NEMO_EVALUATOR_TRUST_PRE_CMD export: explain that setting
NEMO_EVALUATOR_TRUST_PRE_CMD=1 allows evaluator to run arbitrary shell commands
via pre_cmd and post_cmd with evaluator privileges, warn against enabling it in
untrusted environments or on production hosts, and recommend alternatives (avoid
enabling, run in isolated sandbox/container, or validate commands) while
referencing the environment variable name NEMO_EVALUATOR_TRUST_PRE_CMD and the
pre_cmd/post_cmd hooks so readers know exactly which settings are risky.
- Around line 108-115: The doc incorrectly maps NVFP4/NVFP4_AWQ to the
unsupported flag `--quantization modelopt_fp4`; update the mapping logic so that
when reading quantization.quant_algo you add the unified ModelOpt flag (e.g.,
`--quantization modelopt`) into deployment.extra_args for NVFP4/NVFP4_AWQ
instead of `modelopt_fp4`, and add a note clarifying that vLLM/SGLang use
`modelopt` for NVIDIA ModelOpt formats and to consult vLLM docs for any newer
flag names; ensure the table rows referencing `NVFP4`, `NVFP4_AWQ`,
`modelopt_fp4` are replaced with `--quantization modelopt` and adjust any
code/comments that conditionally look for `modelopt_fp4`.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 626f8ed2-3b5d-4a8f-b618-c68007312945
📒 Files selected for processing (15)
- .claude/skills/evaluation/SKILL.md
- .claude/skills/evaluation/evals/base-model-local-execution.json
- .claude/skills/evaluation/evals/external-deployment-eval.json
- .claude/skills/evaluation/evals/interceptor-configuration.json
- .claude/skills/evaluation/evals/multi-node-evaluation.json
- .claude/skills/evaluation/evals/nel-not-installed.json
- .claude/skills/evaluation/evals/nemotron3-nano-bf16-reasoning.json
- .claude/skills/evaluation/evals/nvfp4-auto-detect-quantization.json
- .claude/skills/evaluation/evals/quantized-checkpoint-local-vllm.json
- .claude/skills/evaluation/evals/reasoning-model-sglang.json
- .claude/skills/evaluation/evals/safety-multilingual-benchmarks.json
- .claude/skills/evaluation/evals/wandb-export-code-benchmarks.json
- .claude/skills/evaluation/evals/workspace-reuse-from-ptq.json
- .claude/skills/evaluation/references/model-card-research.md
- .claude/skills/evaluation/references/multi-node.md
✅ Files skipped from review due to trivial changes (13)
- .claude/skills/evaluation/evals/nel-not-installed.json
- .claude/skills/evaluation/evals/interceptor-configuration.json
- .claude/skills/evaluation/evals/external-deployment-eval.json
- .claude/skills/evaluation/evals/reasoning-model-sglang.json
- .claude/skills/evaluation/evals/wandb-export-code-benchmarks.json
- .claude/skills/evaluation/evals/nvfp4-auto-detect-quantization.json
- .claude/skills/evaluation/evals/workspace-reuse-from-ptq.json
- .claude/skills/evaluation/evals/base-model-local-execution.json
- .claude/skills/evaluation/evals/safety-multilingual-benchmarks.json
- .claude/skills/evaluation/evals/multi-node-evaluation.json
- .claude/skills/evaluation/evals/quantized-checkpoint-local-vllm.json
- .claude/skills/evaluation/references/model-card-research.md
- .claude/skills/evaluation/evals/nemotron3-nano-bf16-reasoning.json
Signed-off-by: Kai Xu <kaix@nvidia.com>
Edwardf0t1
left a comment
Left a few comments - I think overall it's in a good shape and aligned well with the design we discussed 👍
```diff
@@ -0,0 +1,310 @@
+---
+name: evaluation
+description: Evaluate accuracy of quantized or unquantized LLMs using NeMo Evaluator Launcher (NEL). Use when user says "evaluate model", "benchmark accuracy", "run MMLU", "evaluate quantized model", "accuracy drop", "run nel", or needs to measure how quantization affects model quality. Handles model deployment, config generation, and evaluation execution.
```
Similar to my comment in the deployment skills PR, we can add some negative triggers as well.
> After the dry-run, check the output from `nel` for any problems with the config. If there are no problems, propose to first execute the test run with limited samples and then execute the full evaluation. If there are problems, resolve them before executing the full evaluation.
>
> **Monitoring Progress**
Again, echoing my comment here: how about we move the monitoring section to a standalone skill (run-and-monitor)? e.g., replace with: "After submission, use the run-and-monitor skill for progress tracking, log inspection, and failure diagnosis. See run-and-monitor/references/nel-execution.md."
> If no `hf_quant_config.json`, also check `config.json` for a `quantization_config` section with `quant_method: "modelopt"`. If neither is found, the checkpoint is unquantized — no flag needed.
>
> **Quantization-aware benchmark defaults:**
I think we can consider extracting the quantization benchmarks, including the benchmark sensitivity ranking and recommended sets, into a reference file, e.g., references/quantization-benchmarks.md, so it can be reused by the compare-results skill later.
The reason to have a compare-results skill is that evaluation is about configuring and running NEL, while compare-results is about interpreting and acting on results from multiple runs.
This is a nice reference - do you think it's better to move model-card-research.md to common/? Since deployment also needs model card research, if both skills reference the same patterns, it should be shared.
> When you have all the answers, run the script to build the base config:
>
> ```bash
> nel skills build-config --execution <local|slurm> --deployment <none|vllm|sglang|nim|trtllm> --model_type <base|chat|reasoning> --benchmarks <standard|code|math_reasoning|safety|multilingual> [--export <none|mlflow|wandb>] [--output <OUTPUT>]
> ```
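For illustration, the documented command above can be assembled programmatically; `build_config_cmd` is a hypothetical helper, and the option values used below are simply the choices listed in the documented usage string:

```python
import shlex

def build_config_cmd(execution, deployment, model_type, benchmarks,
                     export=None, output=None):
    """Assemble an `nel skills build-config` invocation from the documented choices."""
    cmd = ["nel", "skills", "build-config",
           "--execution", execution,        # local | slurm
           "--deployment", deployment,      # none | vllm | sglang | nim | trtllm
           "--model_type", model_type,      # base | chat | reasoning
           "--benchmarks", benchmarks]      # standard | code | math_reasoning | safety | multilingual
    if export:
        cmd += ["--export", export]          # none | mlflow | wandb
    if output:
        cmd += ["--output", output]
    return shlex.join(cmd)
```

Whether these category values still match NEL's current CLI is exactly the kind of thing worth verifying against the launcher itself.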
I think we need to verify benchmark categories to match NEL's build-config CLI - If NEL's categories changed, we need to update accordingly.
> | `quant_algo` | Flag to add |
> |-------------|-------------|
> | `FP8` | `--quantization modelopt` |
> | `W4A8_AWQ` | `--quantization modelopt` |
> | `NVFP4`, `NVFP4_AWQ` | `--quantization modelopt_fp4` |
I think `--quantization modelopt_fp4` is not needed for vLLM - we can align with the deployment skill's support-matrix.md.
Similar to https://github.com/NVIDIA/Model-Optimizer/pull/1133/changes#r3013918883 we can align on a convention for the tests.
Also, later we can add more tests, such as SLURM execution, external endpoint (no self-deployment), quantized model where benchmark recommendation triggers, and a case where existing workspace from PTQ is reused.
What does this PR do?
Type of change: New feature
Add a Claude Code skill for evaluating LLM accuracy using NeMo Evaluator Launcher (NEL). Based on the upstream nel-assistant skill with ModelOpt-specific additions:

- Auto-detect ModelOpt quantization format from `hf_quant_config.json` (with `config.json` fallback) and set the correct vLLM/SGLang `--quantization` flag
- Reference docs under `references/` for on-demand loading

Skill structure
The skill guides users through: NEL installation check → config generation via `nel skills build-config` → model card research → parameter tuning → task selection → multi-node setup → interceptors → execution with dry-run/test/full modes.

Depends on: #1107 (common files: `remote_exec.sh`, `workspace-management.md`, `environment-setup.md`)

Testing
Invoke in Claude Code:
Before your PR is "Ready for review"
CONTRIBUTING.md: ✅ (NEL skill attributed in frontmatter)

🤖 Generated with Claude Code
Summary by CodeRabbit
Documentation
New Features
Chores