Merged
Changes from all commits
43 commits
7bcf16c
feat: add Evening Analysis Content Validator for intelligence assessm…
pethers Feb 21, 2026
7363a93
Add unit tests for editorial pillars, HTML utilities, MCP client, par…
pethers Feb 21, 2026
7cb8ce5
feat: update Vitest configuration to include TypeScript test files an…
pethers Feb 21, 2026
f8e078b
Add comprehensive tests for evening analysis workflow and workflow st…
pethers Feb 21, 2026
e84f762
Add unit tests for load-cia-stats, news-realtime-monitor, and validat…
pethers Feb 21, 2026
af1b385
Add unit tests for article generation modules and sitemap validation
pethers Feb 21, 2026
649d41c
typw
pethers Feb 21, 2026
08c834d
Merge branch 'main' into stricttyping
pethers Feb 21, 2026
ee628f4
chore: update global variable declarations in setup.ts, adjust sitema…
pethers Feb 21, 2026
59d8709
refactor: migrate from JSDoc to TypeDoc for documentation generation
pethers Feb 21, 2026
10e25f8
feat: add dynamic statistics loader for political intelligence data
pethers Feb 21, 2026
3bbc079
Refactor code structure for improved readability and maintainability
pethers Feb 21, 2026
e965a7b
Refactor code structure for improved readability and maintainability
pethers Feb 21, 2026
3900c3b
chore: update eslint and happy-dom dependencies to latest versions
pethers Feb 21, 2026
03316e8
feat: add main entry point for Riksdagsmonitor and TypeScript configu…
pethers Feb 21, 2026
9058c6b
chore: update gh-aw actions to version 0.48.1 in workflow lock files
pethers Feb 21, 2026
af559f0
feat: enhance news article generation workflow with metadata extracti…
pethers Feb 21, 2026
a5fa30b
Refactor TypeDoc configuration and enhance documentation
pethers Feb 21, 2026
bfb34a5
Enhance documentation across CIA platform modules with detailed intel…
pethers Feb 21, 2026
f81e64c
feat: update vitest config to include TypeScript files in browser dir…
pethers Feb 21, 2026
68b7f22
chore: update gh-aw actions to version 0.48.2 in workflow lock files
pethers Feb 21, 2026
b260368
Initial plan
Copilot Feb 21, 2026
238e706
fix: replace node with npx tsx for TypeScript script execution
Copilot Feb 21, 2026
124a52e
Merge pull request #412 from Hack23/copilot/sub-pr-407
pethers Feb 21, 2026
82e64e4
Initial plan
Copilot Feb 21, 2026
c276fed
fix: create scripts/types/, fix Chart.js tag, add tsx devDep, fix tsc…
Copilot Feb 21, 2026
6655ebb
Merge pull request #413 from Hack23/copilot/sub-pr-407
pethers Feb 21, 2026
88d526f
Initial plan
Copilot Feb 21, 2026
4438676
feat: create TypeScript type definition files for scripts
Copilot Feb 21, 2026
c55211c
feat: create scripts/types/ with all TypeScript type definitions; fix…
Copilot Feb 21, 2026
a193c63
Merge pull request #414 from Hack23/copilot/sub-pr-407-again
pethers Feb 21, 2026
08b7df5
Initial plan
Copilot Feb 21, 2026
4891b68
fix: add TypeDoc generation step before unit tests to fix sitemap API…
Copilot Feb 21, 2026
49b8a5d
Merge pull request #416 from Hack23/copilot/sub-pr-407-again
pethers Feb 21, 2026
959b4b4
Initial plan
Copilot Feb 21, 2026
4845790
fix: apply review-3835994755 feedback - build verification, spelling …
Copilot Feb 21, 2026
d1c88a6
Merge pull request #417 from Hack23/copilot/sub-pr-407-again
pethers Feb 21, 2026
ae4e04e
Initial plan
Copilot Feb 21, 2026
5093a57
fix: correct find precedence in jsdoc-validation.yml and deploy from …
Copilot Feb 21, 2026
e11ecba
Merge pull request #418 from Hack23/copilot/sub-pr-407-again
pethers Feb 21, 2026
02ec091
Initial plan
Copilot Feb 21, 2026
cc9010f
rename TOTAL_JS→TOTAL_SOURCE_FILES and FILES_WITH_JSDOC→FILES_WITH_DO…
Copilot Feb 21, 2026
8d8e761
Merge pull request #419 from Hack23/copilot/sub-pr-407-again
pethers Feb 21, 2026
10 changes: 10 additions & 0 deletions .github/aw/actions-lock.json
@@ -69,6 +69,16 @@
      "repo": "github/gh-aw/actions/setup",
      "version": "v0.47.5",
      "sha": "9450254bc994da0d6a346ce438a4b3764f01c456"
    },
    "github/gh-aw/actions/setup@v0.48.1": {
      "repo": "github/gh-aw/actions/setup",
      "version": "v0.48.1",
      "sha": "26b6572ae210580303087bc3142fe58d140bf65c"
    },
    "github/gh-aw/actions/setup@v0.48.2": {
      "repo": "github/gh-aw/actions/setup",
      "version": "v0.48.2",
      "sha": "50ab5443c80cab030b39f7c9e43445b4a254c8a9"
    }
  }
}
204 changes: 168 additions & 36 deletions .github/skills/ai-governance/SKILL.md
@@ -1,44 +1,176 @@
---
name: ai-governance
description: AI/LLM governance policies, ethical AI use, prompt security, and responsible AI development practices
description: AI governance, EU AI Act compliance, OWASP LLM security, responsible AI practices for GitHub Copilot agents
license: Apache-2.0
---

# AI Governance Skill

## Purpose
Defines governance policies for responsible AI/LLM use in development, ensuring ethical practices and security compliance.

## Core Principles
1. **Transparency** — Document AI use in development
2. **Accountability** — Human review of AI-generated code
3. **Security** — Prevent prompt injection and data leakage
4. **Quality** — AI output must meet same standards as human work
5. **Privacy** — No PII in AI prompts or training data

## AI in Development
- Review all AI-generated code before merging
- Validate AI suggestions against security policies
- Document AI-assisted decisions in commit messages
- Ensure AI-generated tests are meaningful

## Prompt Security (OWASP LLM Top 10)
- LLM01: Prompt Injection prevention
- LLM02: Insecure Output Handling
- LLM06: Sensitive Information Disclosure
- Never include secrets in prompts
- Validate AI outputs before use

## GitHub Copilot Agent Governance
- Custom agents must follow ISMS policies
- Agent configurations reviewed by security team
- Tool access follows least privilege principle
- MCP server configurations use secrets (not hard-coded tokens)

## Compliance Mapping
- ISO 27001 A.5.1 — Policies for information security
- NIST AI RMF — AI Risk Management Framework
- EU AI Act — Risk-based AI regulation

## Related Policies
- [Secure Development Policy](https://github.com/Hack23/ISMS-PUBLIC/blob/main/Secure_Development_Policy.md)

This skill provides governance guidelines for AI usage in the Riksdagsmonitor platform, including GitHub Copilot agent security, EU AI Act compliance, and responsible AI practices. It ensures AI-assisted development follows Hack23 ISMS policies and regulatory requirements.

## When to Use This Skill

Apply this skill when:
- ✅ Configuring or updating GitHub Copilot agent workflows
- ✅ Integrating AI/ML models for political data analysis
- ✅ Reviewing AI-generated code before merge
- ✅ Assessing AI risk classification under EU AI Act
- ✅ Implementing prompt engineering for data analysis
- ✅ Auditing AI agent outputs for bias or accuracy

Do NOT use for:
- ❌ Standard code reviews without AI involvement
- ❌ Manual data analysis without AI components
- ❌ Infrastructure changes unrelated to AI services

## EU AI Act Classification

### Risk Assessment for CIA Platform

```
CIA Platform AI Usage Assessment
├─→ Political Data Analysis (NLP, trend detection)
│   ├─ Risk Level: LIMITED RISK (Article 52)
│   ├─ Requirement: Transparency obligations
│   └─ Action: Disclose AI-generated analysis to users
├─→ GitHub Copilot Code Generation
│   ├─ Risk Level: MINIMAL RISK
│   ├─ Requirement: Voluntary codes of conduct
│   └─ Action: Code review before merge, security scanning
├─→ Political Risk Scoring
│   ├─ Risk Level: HIGH RISK (Annex III, Category 8)
│   ├─ Requirement: Conformity assessment, human oversight
│   └─ Action: Human review of all risk scores, audit trail
└─→ Voter Behavior Prediction
    ├─ Risk Level: HIGH RISK
    ├─ Requirement: Transparency, fairness, accountability
    └─ Action: Bias testing, explainability, regular audits
```
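The classification above can be sketched as a lookup table that workflows consult before publishing AI output (use-case keys are illustrative, not identifiers from the codebase):

```typescript
// Sketch: EU AI Act risk tiers as a lookup; high-risk use cases
// require human oversight before any output is published.
type RiskLevel = "minimal" | "limited" | "high";

const riskByUseCase: Record<string, RiskLevel> = {
  "political-data-analysis": "limited",    // transparency obligations
  "copilot-code-generation": "minimal",    // voluntary codes of conduct
  "political-risk-scoring": "high",        // conformity assessment, oversight
  "voter-behavior-prediction": "high",     // bias testing, audits
};

function requiresHumanOversight(useCase: string): boolean {
  return riskByUseCase[useCase] === "high";
}

console.log(requiresHumanOversight("political-risk-scoring"));  // true
console.log(requiresHumanOversight("copilot-code-generation")); // false
```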

### Compliance Checklist

- ✅ Document AI system purpose and intended use
- ✅ Classify AI risk level per EU AI Act categories
- ✅ Implement human oversight for high-risk AI outputs
- ✅ Maintain audit trail of AI-generated decisions
- ✅ Conduct bias and fairness assessments
- ✅ Provide transparency notices for AI-generated content
- ✅ Implement data governance for training datasets

## OWASP LLM Top 10 for CIA Platform

### LLM01: Prompt Injection

**Risk:** Malicious input manipulating Copilot agent behavior.

**Mitigation:**
```yaml
# .github/copilot-instructions.md safeguards
- Validate all agent outputs before committing
- Never allow agents to modify security configurations
- Restrict agent file access to source code only
- Review agent-generated code with CodeQL scanning
```
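These safeguards can be complemented by a pre-filter on untrusted input; a minimal sketch (the helper and patterns are illustrative, not exhaustive — a real defense layers this with output validation and least-privilege tool access):

```typescript
// Sketch: reject obvious prompt-injection markers before untrusted
// text reaches an agent. Patterns are illustrative, not exhaustive.
const INJECTION_PATTERNS: RegExp[] = [
  /ignore (all|previous|prior) instructions/i,
  /reveal (your )?system prompt/i,
  /<\|[^|]*\|>/, // special-token delimiters
];

function isSuspiciousPrompt(input: string): boolean {
  return INJECTION_PATTERNS.some((p) => p.test(input));
}

console.log(isSuspiciousPrompt("Please ignore previous instructions")); // true
console.log(isSuspiciousPrompt("Summarize committee votes for 2025"));  // false
```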

### LLM02: Insecure Output Handling

**Risk:** AI-generated code containing vulnerabilities.

**Mitigation:**
- Run CodeQL on all AI-generated code changes
- Apply OWASP secure code review checklist
- Validate AI outputs against coding standards
- Never trust AI-generated SQL or security logic without review

### LLM06: Sensitive Information Disclosure

**Risk:** AI agents leaking secrets or sensitive political data.

**Mitigation:**
```java
// Never pass sensitive data to AI prompts

// ✅ SECURE: generic analysis request, no PII
String securePrompt = "Analyze voting patterns for committee " + committeeId;

// ❌ INSECURE: includes PII in the prompt
String insecurePrompt = "Analyze voting for " + politicianName + " SSN: " + ssn;
```
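One way to enforce this rule is a redaction pass before prompt construction; a minimal sketch (the pattern is illustrative and not a complete personnummer validator):

```typescript
// Sketch: strip Swedish personal identity numbers (personnummer,
// e.g. 850101-1234) from text before it is sent to an AI model.
const PERSONNUMMER = /\b\d{6,8}[-+]\d{4}\b/g;

function redactPii(text: string): string {
  return text.replace(PERSONNUMMER, "[REDACTED]");
}

console.log(redactPii("Analyze voting for Jane Doe 850101-1234"));
// → "Analyze voting for Jane Doe [REDACTED]"
```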

### LLM09: Overreliance

**Risk:** Blindly trusting AI-generated political analysis.

**Mitigation:**
- All AI analysis must include confidence scores
- Human analyst review required for published insights
- Cross-validate AI outputs with official data sources
- Label AI-generated content clearly in the UI
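The first two mitigations can be wired together as a simple triage gate; a sketch with assumed types and threshold (not the platform's actual API):

```typescript
// Sketch: publish only AI insights whose confidence clears a threshold;
// everything else goes to a human analyst queue. Types and threshold
// are assumptions for illustration.
interface AiInsight {
  text: string;
  confidence: number; // 0..1, reported by the analysis pipeline
}

const PUBLISH_THRESHOLD = 0.8;

function triage(insight: AiInsight): "publish" | "human-review" {
  return insight.confidence >= PUBLISH_THRESHOLD ? "publish" : "human-review";
}

console.log(triage({ text: "Rising committee activity", confidence: 0.92 })); // "publish"
console.log(triage({ text: "Possible coalition shift", confidence: 0.55 }));  // "human-review"
```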

## GitHub Copilot Agent Security

### Agent Configuration Best Practices

```yaml
# Secure agent workflow permissions
permissions:
  contents: read        # read-only by default
  pull-requests: write  # only for PR creation
  issues: write         # only for issue management
  actions: read         # read workflow status

# Never grant:
# - admin permissions
# - security_events write
# - secrets access
```

### Agent Output Validation

```
Agent Output Validation Pipeline
├─ Step 1: Syntax validation (compile check)
├─ Step 2: Security scan (CodeQL, OWASP)
├─ Step 3: Test execution (unit + integration)
├─ Step 4: Code review (human or Copilot review)
└─ Step 5: Merge approval (maintainer sign-off)
```
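The five steps above can be sketched as sequential gates, where the first failure stops the pipeline (gate implementations here are placeholders; real gates would shell out to the compiler, CodeQL, and the test runner):

```typescript
// Sketch: each validation step is a named gate; the pipeline stops at
// the first failing gate and reports which step failed.
type Gate = { name: string; run: () => boolean };

function validateAgentOutput(gates: Gate[]): boolean {
  for (const gate of gates) {
    if (!gate.run()) {
      console.error(`gate failed: ${gate.name}`);
      return false;
    }
  }
  return true;
}

const pipeline: Gate[] = [
  { name: "syntax", run: () => true },   // step 1: compile check
  { name: "security", run: () => true }, // step 2: CodeQL / OWASP scan
  { name: "tests", run: () => true },    // step 3: unit + integration
];
console.log(validateAgentOutput(pipeline)); // true
```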

## Responsible AI Practices

### Bias Prevention in Political Analysis

- Test analysis algorithms across all 8 Swedish parties equally
- Validate data representation for minority viewpoints
- Audit sentiment analysis for political neutrality
- Document model limitations and known biases
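A parity check of this kind can be automated; a sketch with hypothetical scores (party codes are the standard Riksdag abbreviations, the tolerance is an assumption):

```typescript
// Sketch: flag sentiment scoring as potentially biased when the spread
// between the most and least favorably scored parties exceeds a
// tolerance. Scores below are hypothetical.
function sentimentSpread(scoresByParty: Record<string, number>): number {
  const values = Object.values(scoresByParty);
  return Math.max(...values) - Math.min(...values);
}

const scores = { S: 0.51, M: 0.49, SD: 0.48, C: 0.5, V: 0.52, KD: 0.49, L: 0.5, MP: 0.51 };
const TOLERANCE = 0.1;
console.log(sentimentSpread(scores) <= TOLERANCE); // true — within tolerance
```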

### Transparency Requirements

- Label all AI-generated content in the Riksdagsmonitor platform UI
- Provide methodology documentation for AI analysis
- Enable users to access raw data behind AI insights
- Maintain changelog of AI model updates

## ISMS Alignment

| Control | Requirement | Implementation |
|---------|------------|----------------|
| ISO 27001 A.5.1 | Information security policies | AI governance policy |
| ISO 27001 A.8.1 | Asset management | AI model inventory |
| NIST CSF GV.OC | Organizational context | AI risk assessment |
| CIS Control 16 | Application security | AI code review gates |
| GDPR Art. 22 | Automated decision-making | Human oversight for scoring |

## References

- [EU AI Act](https://artificialintelligenceact.eu/)
- [OWASP LLM Top 10](https://owasp.org/www-project-top-10-for-large-language-model-applications/)
- [Hack23 ISMS Secure Development Policy](https://github.com/Hack23/ISMS-PUBLIC/blob/main/Secure_Development_Policy.md)
- [GitHub Copilot Trust Center](https://resources.github.com/copilot-trust-center/)