feat(ci): add llm issue triage workflow #41417
base: release
Conversation
Walkthrough

Introduces an LLM-based GitHub issue triage system via a configuration file, a GitHub Actions workflow, an orchestration layer, an analysis module, a GitHub labeler, and three pluggable LLM provider implementations (OpenAI, Gemini, Anthropic) for automated issue classification and labeling.
Sequence Diagrams

```mermaid
sequenceDiagram
participant GitHub as GitHub Actions
participant Workflow as Workflow (issue-triage.yml)
participant Index as index.js
participant Config as config.js
participant Analyzer as IssueAnalyzer
participant Provider as LLM Provider
participant Labeler as GitHubLabeler
participant GitHubAPI as GitHub API
GitHub->>Workflow: Trigger (dispatch/issue event)
Workflow->>Workflow: Check authorization
Workflow->>Index: Execute with env vars
Index->>Config: Load config & API keys
Index->>Analyzer: Create instance
Index->>Labeler: Create instance
alt Workflow Dispatch - Single Issue
Index->>Index: Fetch issue
else Workflow Dispatch - Bulk
Index->>GitHubAPI: Query issues by labels
else Issue Event
Index->>Index: Use event issue
end
Index->>Analyzer: analyzeIssue(issue, context)
Analyzer->>Analyzer: Build prompt & codebase context
Analyzer->>Provider: analyze(prompt, context)
Provider->>Provider: Call LLM API
Provider->>Analyzer: Return TriageResult
Analyzer->>Index: Return parsed result
Index->>Labeler: applyTriageResult(issue, result)
Labeler->>GitHubAPI: Post comment
Labeler->>GitHubAPI: Add labels
Labeler->>Index: Confirm applied
Index->>Workflow: Summary & logging
Workflow->>GitHub: Complete
```

```mermaid
sequenceDiagram
participant Factory as createProvider()
participant ApiKeys as apiKeys Object
Factory->>ApiKeys: Lookup key for provider name
alt Key exists
Factory->>Factory: Instantiate provider
Factory->>Factory: Return provider instance
else Key missing
Factory->>Factory: Throw error (missing API key)
else Unknown provider
Factory->>Factory: Throw error (unsupported)
end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Pre-merge checks: ❌ Failed checks (1 inconclusive) · ✅ Passed checks (2 passed)
Actionable comments posted: 5
🧹 Nitpick comments (7)
.github/workflows/scripts/llm-triage/providers/gemini.js (1)
53-85: System prompt duplicated across providers. This prompt is nearly identical in `OpenAIProvider`, `GeminiProvider`, and `AnthropicProvider`. Consider extracting it to a shared utility in `base.js` or a separate `prompts.js` module to centralize updates. Example approach:

```js
// In base.js or prompts.js
export function buildTriageSystemPrompt(context) {
  return `You are an expert software engineer...
${context.codebaseStructure ? `\nCodebase structure:\n${context.codebaseStructure}` : ''}
...`;
}
```

.github/workflows/scripts/llm-triage/providers/anthropic.js (2)
50-82: System prompt duplication across providers. `buildSystemPrompt` content is identical to `OpenAIProvider.buildSystemPrompt`. Consider extracting it to a shared utility or the base class to avoid drift and ease maintenance.
7-12: Model version hardcoded. The model `claude-sonnet-4-20250514` is hardcoded. Consider making it configurable via the config object for easier updates.
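A minimal sketch of that change, assuming the provider receives a config object with an optional `model` field (not necessarily the PR's actual config schema):

```js
// Sketch only: `config.model` and the base-class shape are assumptions.
import Anthropic from '@anthropic-ai/sdk';
import { BaseLLMProvider } from './base.js';

export class AnthropicProvider extends BaseLLMProvider {
  constructor(apiKey, config = {}) {
    super(config);
    this.client = new Anthropic({ apiKey });
    // Fall back to the current default when no model is configured
    this.model = config.model || 'claude-sonnet-4-20250514';
  }
}
```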
.github/workflows/issue-triage.yml (1)
78-82: Redundant authorization check. This step will never execute because the job-level `if` condition at line 75 already ensures `is_org_member == 'true'`. The step condition checks `!= 'true'`, which is already filtered out. Consider removing this step since the job-level condition already handles this case.
.github/workflows/scripts/llm-triage/analyzer.js (1)
166-174: Keyword matching may produce false positives. Using `text.includes(pattern)` matches substrings, so "listing" would match the "list" pattern. Consider word-boundary matching for precision, though this may be acceptable for the use case.

```diff
- [...widgetPatterns, ...featurePatterns].forEach(pattern => {
-   if (text.includes(pattern)) {
-     keywords.push(pattern);
-   }
- });
+ const wordBoundaryRegex = (pattern) => new RegExp(`\\b${pattern}\\b`, 'i');
+ [...widgetPatterns, ...featurePatterns].forEach(pattern => {
+   if (wordBoundaryRegex(pattern).test(text)) {
+     keywords.push(pattern);
+   }
+ });
```
46-68: Two API calls for high complexity issues. For high complexity, the tracking label is added at line 62, then the complexity label is added separately at line 66. This could be consolidated into a single API call.

```diff
 if (!dryRun) {
   // Add labels
   const labelsToAdd = [trackingLabel];
-  // Only add complexity label if not "high" (high = needs engineering review, not community)
-  if (complexityLabel && result.complexity !== 'high') {
+  // Add complexity label
+  if (complexityLabel) {
     labelsToAdd.push(complexityLabel);
   }

   // Add any suggested labels from the LLM
   if (result.suggestedLabels && result.suggestedLabels.length > 0) {
     // Filter to only known/valid labels
     const validSuggested = await this.filterValidLabels(result.suggestedLabels);
     labelsToAdd.push(...validSuggested);
   }

   await this.addLabels(issueNumber, labelsToAdd);
-
-  // If high complexity, add the engineering review label
-  if (result.complexity === 'high' && this.config.complexityLabels.high) {
-    await this.addLabels(issueNumber, [this.config.complexityLabels.high]);
-  }
 }
```
304-324: Consider caching repository labels.
`filterValidLabels` fetches all repo labels on each call. For bulk processing, this could result in many redundant API calls. Consider caching the label list.

```diff
+ // Cache for repo labels
+ _repoLabelsCache = null;
+
  async filterValidLabels(labels) {
    try {
-     const repoLabels = await this.octokit.paginate(
-       this.octokit.issues.listLabelsForRepo,
-       {
-         owner: this.owner,
-         repo: this.repo,
-         per_page: 100
-       }
-     );
+     if (!this._repoLabelsCache) {
+       this._repoLabelsCache = await this.octokit.paginate(
+         this.octokit.issues.listLabelsForRepo,
+         {
+           owner: this.owner,
+           repo: this.repo,
+           per_page: 100
+         }
+       );
+     }
+     const repoLabels = this._repoLabelsCache;
      const validLabelNames = new Set(repoLabels.map(l => l.name.toLowerCase()));
```
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (12)
- .github/issue-triage-config.yml (1 hunks)
- .github/workflows/issue-triage.yml (1 hunks)
- .github/workflows/scripts/llm-triage/analyzer.js (1 hunks)
- .github/workflows/scripts/llm-triage/config.js (1 hunks)
- .github/workflows/scripts/llm-triage/index.js (1 hunks)
- .github/workflows/scripts/llm-triage/labeler.js (1 hunks)
- .github/workflows/scripts/llm-triage/package.json (1 hunks)
- .github/workflows/scripts/llm-triage/providers/anthropic.js (1 hunks)
- .github/workflows/scripts/llm-triage/providers/base.js (1 hunks)
- .github/workflows/scripts/llm-triage/providers/gemini.js (1 hunks)
- .github/workflows/scripts/llm-triage/providers/index.js (1 hunks)
- .github/workflows/scripts/llm-triage/providers/openai.js (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (6)
.github/workflows/scripts/llm-triage/providers/openai.js (1)
.github/workflows/scripts/llm-triage/providers/base.js (1)
BaseLLMProvider (5-80)
.github/workflows/scripts/llm-triage/providers/index.js (3)
.github/workflows/scripts/llm-triage/providers/openai.js (1)
OpenAIProvider (7-85)
.github/workflows/scripts/llm-triage/providers/gemini.js (1)
GeminiProvider (7-86)
.github/workflows/scripts/llm-triage/providers/anthropic.js (1)
AnthropicProvider (7-83)
.github/workflows/scripts/llm-triage/providers/anthropic.js (1)
.github/workflows/scripts/llm-triage/providers/base.js (1)
BaseLLMProvider (5-80)
.github/workflows/scripts/llm-triage/providers/gemini.js (1)
.github/workflows/scripts/llm-triage/providers/base.js (1)
BaseLLMProvider (5-80)
.github/workflows/scripts/llm-triage/providers/base.js (1)
.github/workflows/scripts/llm-triage/index.js (3)
context (25-25), result (164-164), result (214-220)
.github/workflows/scripts/llm-triage/analyzer.js (3)
.github/workflows/scripts/llm-triage/providers/index.js (1)
createProvider (11-38)
.github/workflows/scripts/llm-triage/index.js (8)
apiKeys (24-24), config (23-23), issue (143-143), issue (210-210), codebaseContext (161-161), context (25-25), result (164-164), result (214-220)
.github/workflows/scripts/llm-triage/config.js (2)
config (58-58), result (131-131)
🔇 Additional comments (18)
.github/workflows/scripts/llm-triage/config.js (5)
1-33: LGTM on imports and default configuration. Clean structure. Default values are sensible and well-documented.
39-75: Configuration loading looks solid. Good defensive approach: file existence check, try-catch on parse, and clear logging. The normalization step at lines 70-71 handles YAML's array-to-object quirk well.
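For readers skimming the review, this defensive pattern boils down to roughly the following; the function name, paths, and merge strategy here are illustrative rather than the PR's exact code:

```js
// Illustrative sketch of defensive config loading; not the PR's exact implementation.
import fs from 'node:fs';
import { parse as parseYaml } from 'yaml';

export function loadConfig(configPath, defaults) {
  if (!fs.existsSync(configPath)) {
    console.log(`No config found at ${configPath}, using defaults`);
    return { ...defaults };
  }
  try {
    const parsed = parseYaml(fs.readFileSync(configPath, 'utf8')) || {};
    // Merge user overrides over defaults (the PR uses a deepMerge helper, discussed below)
    return { ...defaults, ...parsed };
  } catch (error) {
    console.warn(`Failed to parse ${configPath}: ${error.message}; falling back to defaults`);
    return { ...defaults };
  }
}
```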
81-87: API keys exposed via simple object return. Implementation is straightforward. Just ensure these keys are never logged elsewhere in the codebase.
93-122: Execution context assembly is well-structured. Good use of defaults and safe parsing. Note: `parseInt` with invalid input returns `NaN`, not `null`, but the fallback to `null` via the ternary handles this.
150-181: Helper utilities are clean and defensive. `parseJsonSafe` and `normalizeArray` handle edge cases well.
.github/issue-triage-config.yml (1)
1-39: Well-documented configuration file. Clear comments, sensible defaults. Rate limiting and timeout values are reasonable for production use.
.github/workflows/scripts/llm-triage/providers/index.js (2)
11-38: Clean factory pattern implementation. Good normalization, clear error messages, and proper API key validation before instantiation. The switch-case structure is easy to extend.
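As a reference for the pattern being praised, a minimal sketch of such a factory (module paths, constructor signatures, and error wording are assumptions):

```js
// Minimal factory sketch; provider constructor signatures and paths are assumptions.
import { OpenAIProvider } from './openai.js';
import { GeminiProvider } from './gemini.js';
import { AnthropicProvider } from './anthropic.js';

export function createProvider(name, apiKeys, config = {}) {
  const provider = (name || '').trim().toLowerCase();
  const apiKey = apiKeys[provider];
  if (!apiKey) {
    throw new Error(`Missing API key for provider "${provider}"`);
  }
  switch (provider) {
    case 'openai':
      return new OpenAIProvider(apiKey, config);
    case 'gemini':
      return new GeminiProvider(apiKey, config);
    case 'anthropic':
      return new AnthropicProvider(apiKey, config);
    default:
      throw new Error(`Unsupported LLM provider: ${provider}`);
  }
}
```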
40-41: Re-exports are appropriate. Allows consumers to import individual providers if needed while keeping the factory as the primary interface.
.github/workflows/scripts/llm-triage/package.json (1)
18-20: Node 20+ requirement is appropriate. Aligns with GitHub Actions runners and ensures modern ES module support.
.github/workflows/scripts/llm-triage/providers/gemini.js (2)
20-46: Analyze method is well-structured. Good error handling with logging before rethrow. The JSON response mode (`responseMimeType: 'application/json'`) is appropriate.
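For context, requesting JSON mode from the `@google/generative-ai` SDK looks roughly like this; the prompt handling and surrounding wiring are assumptions, not the PR's exact code:

```js
// Rough usage sketch of Gemini's JSON response mode; model wiring is illustrative.
import { GoogleGenerativeAI } from '@google/generative-ai';

export async function analyzeWithGemini(apiKey, prompt) {
  const genAI = new GoogleGenerativeAI(apiKey);
  const model = genAI.getGenerativeModel({
    model: 'gemini-1.5-pro',
    generationConfig: { responseMimeType: 'application/json' } // ask for JSON output
  });
  const result = await model.generateContent(prompt);
  return JSON.parse(result.response.text());
}
```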
7-12: Based on my verification: the constructor uses a valid and supported model name, consistent with other providers in the codebase.
The `gemini-1.5-pro` model name is valid according to Google's API documentation. While `gemini-1.5-pro-002` is the explicitly versioned GA release, using the alias `gemini-1.5-pro` is acceptable and follows the same pattern as the OpenAI provider, which uses `gpt-4o` without explicit versioning. The Anthropic provider uses explicit date-based versioning (`claude-sonnet-4-20250514`), but the codebase accepts both approaches. The constructor properly extends `BaseLLMProvider`, calls `super()`, initializes the Google Generative AI client correctly, and selects an appropriate model.
.github/workflows/scripts/llm-triage/providers/base.js (1)
65-79: Solid defensive validation. Good normalization of complexity values, confidence clamping, and file list truncation. The fallback defaults are sensible.
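A rough sketch of the kind of normalization described (the allowed values, clamp bounds, and truncation limit here are assumptions, not the PR's exact numbers):

```js
// Sketch of defensive result validation; thresholds and defaults are assumptions.
export function validateTriageResult(parsed = {}) {
  const allowed = ['low', 'medium', 'high'];
  const complexity = allowed.includes(parsed.complexity) ? parsed.complexity : 'high';
  // Clamp confidence into [0, 1]
  const confidence = Math.min(1, Math.max(0, Number(parsed.confidence) || 0));
  // Truncate the file list so downstream output stays readable
  const files = Array.isArray(parsed.files) ? parsed.files.slice(0, 20) : [];
  return {
    complexity,
    confidence,
    files,
    reasoning: typeof parsed.reasoning === 'string' ? parsed.reasoning : 'No reasoning provided.'
  };
}
```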
.github/workflows/scripts/llm-triage/providers/openai.js (1)
20-45: Good use of OpenAI JSON mode. Using `response_format: { type: 'json_object' }` ensures structured JSON output. Implementation looks solid.
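For reference, a JSON-mode call with the `openai` v4 SDK looks roughly like this; the model name and prompts are placeholders:

```js
// Illustrative OpenAI JSON-mode call; model and prompt contents are placeholders.
import OpenAI from 'openai';

export async function analyzeWithOpenAI(apiKey, systemPrompt, userPrompt) {
  const client = new OpenAI({ apiKey });
  const completion = await client.chat.completions.create({
    model: 'gpt-4o',
    response_format: { type: 'json_object' }, // forces a JSON object in the reply
    messages: [
      { role: 'system', content: systemPrompt },
      { role: 'user', content: userPrompt }
    ]
  });
  return JSON.parse(completion.choices[0].message.content);
}
```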
.github/workflows/issue-triage.yml (1)
96-120: Environment setup looks complete. All necessary API keys and context variables are passed. Good separation of concerns with env vars.
.github/workflows/scripts/llm-triage/index.js (1)
209-238: Bulk processing with error resilience. Good approach: catches errors per issue and continues processing, and rate limiting is properly applied between issues.
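The resilient loop being described boils down to something like the following; function names and the delay value are assumptions:

```js
// Sketch of a per-issue try/catch loop with simple rate limiting; names are assumptions.
export async function processIssues(issues, processSingleIssue, delayMs = 2000) {
  const results = { processed: 0, skipped: 0, errors: 0 };
  for (const issue of issues) {
    try {
      const result = await processSingleIssue(issue);
      if (result.skipped) {
        results.skipped++;
      } else {
        results.processed++;
      }
    } catch (error) {
      // One failing issue should not abort the whole batch
      console.error(`Issue #${issue.number} failed: ${error.message}`);
      results.errors++;
    }
    // Pause between issues to respect API rate limits
    await new Promise(resolve => setTimeout(resolve, delayMs));
  }
  return results;
}
```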
.github/workflows/scripts/llm-triage/analyzer.js (2)
181-224: Static file mappings may drift. These hardcoded paths could become stale as the codebase evolves. Consider generating this mapping or validating paths periodically. For now, this is practical for a v1 implementation.
24-40: Clean analysis flow with good logging. The method clearly logs the analysis process and results. Good for debugging and monitoring.
.github/workflows/scripts/llm-triage/labeler.js (1)
100-147: Well-structured triage comment format. Good use of emojis for visual distinction, collapsible details section, and clear structure. The dry-run notice is helpful.
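A hypothetical shape of such a comment body, for readers who have not opened `labeler.js`; the marker, headings, and wording below are illustrative, not the PR's actual template:

```js
// Hypothetical comment template; the actual format is defined in labeler.js.
const TRIAGE_COMMENT_MARKER = '<!-- llm-triage -->';

export function buildTriageComment(result, dryRun = false) {
  const lines = [
    TRIAGE_COMMENT_MARKER,
    '## 🤖 Automated triage',
    '',
    `**Complexity:** ${result.complexity} (confidence ${Math.round(result.confidence * 100)}%)`,
    '',
    '<details><summary>Reasoning</summary>',
    '',
    result.reasoning,
    '',
    '</details>'
  ];
  if (dryRun) {
    lines.push('', '_Dry run: no labels were applied._');
  }
  return lines.join('\n');
}
```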
```js
function deepMerge(target, source) {
  const result = { ...target };

  for (const key in source) {
    if (source[key] instanceof Object && key in target && target[key] instanceof Object) {
      result[key] = deepMerge(target[key], source[key]);
    } else if (source[key] !== undefined) {
      result[key] = source[key];
    }
  }

  return result;
}
```
deepMerge conflates arrays with objects.
instanceof Object returns true for arrays, so arrays in target and source will be recursively merged as objects rather than replaced. This may cause unexpected behavior if array overrides are intended.
Consider adding an explicit array check:
function deepMerge(target, source) {
const result = { ...target };
for (const key in source) {
- if (source[key] instanceof Object && key in target && target[key] instanceof Object) {
+ if (
+ source[key] instanceof Object &&
+ !Array.isArray(source[key]) &&
+ key in target &&
+ target[key] instanceof Object &&
+ !Array.isArray(target[key])
+ ) {
result[key] = deepMerge(target[key], source[key]);
} else if (source[key] !== undefined) {
result[key] = source[key];
}
}
return result;
}

🤖 Prompt for AI Agents
In .github/workflows/scripts/llm-triage/config.js around lines 130 to 142,
deepMerge uses "instanceof Object" which returns true for arrays so arrays get
merged like objects; change the logic to explicitly detect arrays
(Array.isArray) and treat them as replaceable values (or shallow-copy the source
array) instead of recursing into them, and restrict recursion to plain objects
(e.g., typeof === 'object' && !Array.isArray(source[key]) && source[key] !==
null && !Array.isArray(target[key]) && target[key] !== null) so arrays are
overwritten by source when present.
```js
if (result.skipped) {
  results.skipped++;
} else {
  results.processed++;
  results.byComplexity[result.complexity]++;
}
```
Potential undefined key in complexity counter.
If result.complexity is undefined (edge case where processSingleIssue returns an unexpected shape), results.byComplexity[result.complexity]++ would create an undefined key. Consider adding a guard.
if (result.skipped) {
results.skipped++;
} else {
results.processed++;
- results.byComplexity[result.complexity]++;
+ if (result.complexity && results.byComplexity.hasOwnProperty(result.complexity)) {
+ results.byComplexity[result.complexity]++;
+ }
}📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
 if (result.skipped) {
   results.skipped++;
 } else {
   results.processed++;
-  results.byComplexity[result.complexity]++;
+  if (result.complexity && results.byComplexity.hasOwnProperty(result.complexity)) {
+    results.byComplexity[result.complexity]++;
+  }
 }
```
🤖 Prompt for AI Agents
In .github/workflows/scripts/llm-triage/index.js around lines 222 to 227, the
code increments results.byComplexity[result.complexity] without guarding against
result.complexity being undefined; update the branch to check if
result.complexity is a valid key (e.g. typeof result.complexity === 'string' &&
result.complexity in results.byComplexity) before incrementing, and if not,
normalize to a fallback key like 'unknown' (ensure
results.byComplexity['unknown'] is initialized) and increment that instead.
```js
const comments = await this.octokit.issues.listComments({
  owner: this.owner,
  repo: this.repo,
  issue_number: issueNumber,
  per_page: 100
});

return comments.data.some(comment =>
  comment.body && comment.body.includes(TRIAGE_COMMENT_MARKER)
);
```
Comment pagination limited to 100.
For issues with more than 100 comments, the triage marker check may miss older comments. Consider using pagination or checking from newest comments first.
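A hedged sketch of the paginated variant, reusing the same `octokit.paginate` helper the labeler already uses for repository labels (the function name is an assumption):

```js
// Sketch: paginate all comments so the marker check also covers issues with >100 comments.
async function hasTriageComment(octokit, owner, repo, issueNumber, marker) {
  const comments = await octokit.paginate(octokit.issues.listComments, {
    owner,
    repo,
    issue_number: issueNumber,
    per_page: 100
  });
  return comments.some(comment => comment.body && comment.body.includes(marker));
}
```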
| "dependencies": { | ||
| "@anthropic-ai/sdk": "^0.24.0", | ||
| "@google/generative-ai": "^0.12.0", | ||
| "@octokit/rest": "^20.0.2", | ||
| "openai": "^4.47.0", | ||
| "yaml": "^2.4.2" | ||
| }, |
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Check for package-lock.json in the llm-triage directory
echo "=== Checking for package-lock.json ==="
if [ -f ".github/workflows/scripts/llm-triage/package-lock.json" ]; then
echo "package-lock.json found"
else
echo "package-lock.json NOT found"
fi
# List the llm-triage directory to see what files are present
echo ""
echo "=== Contents of llm-triage directory ==="
ls -la .github/workflows/scripts/llm-triage/ 2>/dev/null || echo "Directory not found or accessible"

Repository: appsmithorg/appsmith
Length of output: 643
🌐 Web query:
@anthropic-ai/sdk 0.24.0 security vulnerabilities CVE
💡 Result:
Short answer: I found no public CVE(s) that specifically affect @anthropic-ai/sdk v0.24.0. Security scanners show no known vulnerabilities for that npm package/version. [1][2]
Related note: a separate critical CVE (CVE‑2025‑49596) exists for the MCP Inspector tool (not the @Anthropic‑ai/sdk package); it affects MCP Inspector versions < 0.14.1 and was patched in 0.14.1 — upgrade if you use MCP Inspector. [3][4]
Sources
- Package security scan / versions for @Anthropic‑ai/sdk (no known vulns). [1]
- @Anthropic‑ai/sdk changelog / release notes (v0.24.0). [2]
- NVD / vendor advisory for CVE‑2025‑49596 (MCP Inspector). [3]
- Oligo Security technical writeup on CVE‑2025‑49596. [4]
If you want, I can (a) search for any GitHub security advisories or GitHub Advisory Database entries tied to that exact package/version, or (b) run a deeper dependency/vulnerability scan on a copy of your project.
🌐 Web query:
@google/generative-ai 0.12.0 security vulnerabilities CVE
💡 Result:
Short answer: No known CVEs specifically affecting the @google/generative-ai npm package v0.12.0. The package's repo/security page shows no published advisories and the npm listing for v0.12.0 has no CVE noted. [1][2]
Related note: there is a CVE (CVE-2024-12236) for the Vertex AI / Gemini API (a server-side service) about a VPC‑Service‑Controls bypass via the fileUri parameter — this is a cloud service vulnerability, not a published vulnerability in the @google/generative-ai JavaScript package itself. [3][4]
Sources:
- @google/generative-ai npm listing (v0.12.0). [1]
- google/generative-ai-docs security page (no published advisories). [2]
- Google Cloud security bulletin for the Vertex AI Generative‑AI CVE (GCP‑2024‑063 / CVE‑2024‑12236). [3]
- CVE summary (Tenable/CVE entry) for CVE‑2024‑12236. [4]
🌐 Web query:
@octokit/rest 20.0.2 security vulnerabilities CVE
💡 Result:
Short answer: @octokit/rest v20.0.2 can be affected indirectly via known ReDoS CVEs in Octokit subpackages — CVE-2025-25285, CVE-2025-25288 and CVE-2025-25290 — if the version of its dependencies (notably @octokit/endpoint, @octokit/plugin-paginate-rest and @octokit/request) included in your install are the vulnerable releases. Sources below show the CVEs and the versions that fix them.
Details and recommended fixes:
- CVE-2025-25285 — ReDoS in @octokit/endpoint; fixed in @octokit/endpoint >= 10.1.3. [1][2].
- CVE-2025-25288 — ReDoS in @octokit/plugin-paginate-rest; fixed in @octokit/plugin-paginate-rest >= 11.4.1. [3][4].
- CVE-2025-25290 — ReDoS in @octokit/request; fixed in @octokit/request >= 9.2.1. [5][6].
Action items:
- Check your project’s installed dependency tree (npm ls @octokit/endpoint @octokit/plugin-paginate-rest @octokit/request) to see the exact versions used by @octokit/[email protected]. If any are below the fixed versions above, upgrade them (or upgrade @octokit/rest to a newer release that depends on patched subpackages).
- Scan with your SCA tool (Dependabot/Snyk/OSV) to confirm no vulnerable transitive versions remain.
Sources:
[1] NVD / CVE-2025-25285 (description).
[2] CVE writeups / OSV for CVE-2025-25285.
[3] NVD / CVE-2025-25288 (description).
[4] OSV / GHSA-h5c3-5r3r-rr8q (plugin-paginate-rest).
[5] NVD / CVE-2025-25290 (description).
[6] OSV / package advisory entries for @octokit/request.
If you want, I can (1) check your repo’s installed versions from a lockfile/package.json you paste, or (2) fetch @octokit/[email protected]’s exact dependency versions and tell you whether they include vulnerable subpackage versions.
🌐 Web query:
openai 4.47.0 security vulnerabilities CVE
💡 Result:
I couldn't find any CVE entries for the OpenAI client library version 4.47.0. Security scanners show no known direct vulnerabilities for the openai package (no CVEs listed). [1][2]
Sources:
- ReversingLabs / package vulnerability scan for "openai" (no known vulnerabilities). [1]
- Snyk package page for "openai" (no known security issues). [2]
If you want, I can watch NVD/Snyk and notify you if a CVE appears, or run a full dependency scan of your environment.
🌐 Web query:
yaml 2.4.2 security vulnerabilities CVE
💡 Result:
Which "yaml 2.4.2" do you mean? (examples: PyYAML 2.4.2, js-yaml 2.4.2, a Ruby gem, or a specific product version that includes YAML 2.4.2). I can then search CVEs for that exact package/version.
🌐 Web query:
yaml npm package 2.4.2 security vulnerabilities CVE
💡 Result:
Short answer: yaml (eemeli/yaml) version 2.4.2 is not affected by the known CVE listed for that package. The notable CVE for the npm "yaml" package is CVE-2023-2251, which affects versions >=2.0.0-5 and <2.2.2 and was fixed in 2.2.2 — so 2.4.2 is already past that fix. [1][2][3]
Notes:
- There is a separate, unrelated CVE (CVE-2025-64718) for the js-yaml package (nodeca/js-yaml); that affects js-yaml versions <4.1.1 and does not apply to the eemeli "yaml" package. [4][5]
References:
- GitLab advisory / CVE-2023-2251 (yaml) — fixed in 2.2.2. [1]
- NVD / CVE-2023-2251 summaries (reported) and fixes. [2]
- Snyk package page for yaml (shows no known issues for recent versions). [3]
- GitHub Advisory for js-yaml prototype pollution (CVE-2025-64718). [4]
- Debian / NVD tracking for CVE-2025-64718 (js-yaml). [5]
If you want, I can (a) list the exact npm / GitHub advisory links, (b) scan your package-lock for transitive usage of js-yaml or yaml and flag vulnerable versions.
🏁 Script executed:
#!/bin/bash
# Check npm registry info for @octokit/[email protected] to see its dependency specifications
# Since we don't have node_modules, we'll check the package.json to see what's actually specified
echo "=== Package.json content ==="
cat .github/workflows/scripts/llm-triage/package.json

Repository: appsmithorg/appsmith
Length of output: 540
@octokit/rest@^20.0.2 has known vulnerable transitive dependencies; missing package-lock.json creates reproducibility risks.
Verification found three CVEs (CVE-2025-25285, CVE-2025-25288, CVE-2025-25290) in @octokit/rest's subpackages (@octokit/endpoint, @octokit/plugin-paginate-rest, @octokit/request) due to ReDoS vulnerabilities. The other dependencies are clean:
- @anthropic-ai/sdk@^0.24.0: ✓ no known CVEs
- @google/generative-ai@^0.12.0: ✓ no known CVEs
- openai@^4.47.0: ✓ no known CVEs
- yaml@^2.4.2: ✓ no known CVEs (CVE-2023-2251 fixed in 2.2.2)
However, package-lock.json is missing, which prevents reproducible CI builds. Combine this with caret ranges and you risk installing patched or unpatched versions of @octokit/* subpackages inconsistently across runs. Add package-lock.json and upgrade @octokit/rest to a version with patched subpackages, or pin exact subpackage versions.
🤖 Prompt for AI Agents
.github/workflows/scripts/llm-triage/package.json lines 11-17: the project lists
"@octokit/rest": "^20.0.2" which pulls vulnerable transitive packages and there
is no package-lock.json for reproducible installs; update the dependency and add
a lockfile: upgrade @octokit/rest to a version where the vulnerable subpackages
are patched (or explicitly pin the known-good subpackage versions), replace the
caret range with an exact version if pinning, run npm install to generate and
commit package-lock.json, and run npm audit (or equivalent) to verify no
remaining vulnerable transitive packages before pushing.
````js
parseResponse(response) {
  try {
    // Try to extract JSON from the response
    const jsonMatch = response.match(/```json\s*([\s\S]*?)\s*```/);
    if (jsonMatch) {
      return JSON.parse(jsonMatch[1]);
    }

    // Try to parse the entire response as JSON
    const parsed = JSON.parse(response);
    return this.validateTriageResult(parsed);
  } catch (error) {
    console.error('Failed to parse LLM response:', error.message);
    // Return a fallback result
    return {
      complexity: 'high',
      confidence: 0.3,
      files: [],
      reasoning: 'Failed to parse LLM response. Manual review recommended.',
      error: error.message
    };
  }
}
````
Missing validation for JSON block extraction path.
When extracting JSON from a markdown code block (line 41), validateTriageResult is not called, but it is called when parsing the full response as JSON (line 46). This inconsistency could lead to unvalidated/unnormalized results.
const jsonMatch = response.match(/```json\s*([\s\S]*?)\s*```/);
if (jsonMatch) {
- return JSON.parse(jsonMatch[1]);
+ return this.validateTriageResult(JSON.parse(jsonMatch[1]));
}
// Try to parse the entire response as JSON
const parsed = JSON.parse(response);
return this.validateTriageResult(parsed);

🤖 Prompt for AI Agents
In .github/workflows/scripts/llm-triage/providers/base.js around lines 36 to 58,
the code returns raw JSON when extracting from a ```json``` block (line ~41)
without passing it through validateTriageResult; change that return to call
this.validateTriageResult(JSON.parse(jsonMatch[1])) so the extracted JSON is
validated/normalized the same way as the full-response path, ensuring any parse
errors still bubble to the existing try/catch and the result shape is
consistent.
Summary
Testing
Warning: Tests have not run on the HEAD 9b6328b yet (Wed, 26 Nov 2025 08:04:29 UTC).
Context: https://www.notion.so/appsmith/V1-Community-Auto-Maintenance-AI-Assisted-Contribution-Workflow-2b7fe271b0e2801f8915dd2a334ceaf5?source=copy_link