# Contributing to AI Project Analyzer

Thank you for your interest in contributing to AI Project Analyzer! This guide will help you get started.
## Table of Contents

- Code of Conduct
- Getting Started
- Development Workflow
- Package Structure
- Coding Standards
- Testing
- Documentation
- Pull Request Process
- Release Process
## Code of Conduct

This project adheres to a code of conduct. By participating, you are expected to uphold this code. Please report unacceptable behavior to the project maintainers.
## Getting Started

### Prerequisites

- Python 3.11+ with uv
- Node.js 18+ with npm
- Docker (for containerized development)
- Git
### Setup

```bash
# Clone the repository
git clone <repository-url>
cd ai-project-analyzer

# Set up development environment
./scripts/dev-setup.sh

# Start development services
./scripts/docker-dev.sh up

# Install packages
./scripts/install.sh

# Configure environment
cp config/env.development .env
# Edit .env and add your API keys

# Start services manually
./start.sh --dev
```

## Development Workflow

Create a branch for your change:

```bash
git checkout -b feature/your-feature-name
# or
git checkout -b fix/issue-description
# or
git checkout -b docs/documentation-update
```

Follow our coding standards and ensure your changes:
- Are well-tested
- Include appropriate documentation
- Follow the existing code style
- Don't break existing functionality
Run the test suite before committing:

```bash
# Run all tests
./scripts/test.sh --coverage

# Run specific package tests
./scripts/test.sh --package core

# Run tests in Docker
./scripts/test.sh --docker

# Run linting
./scripts/test.sh --lint

# Run security tests
./scripts/test.sh --security
```

We use conventional commits:
```bash
# Format: type(scope): description
git commit -m "feat(core): add new analysis engine"
git commit -m "fix(api): resolve authentication issue"
git commit -m "docs(readme): update installation guide"
git commit -m "test(cli): add integration tests"
```

Types:

- `feat`: New feature
- `fix`: Bug fix
- `docs`: Documentation changes
- `style`: Code style changes
- `refactor`: Code refactoring
- `test`: Test changes
- `chore`: Build/tooling changes
- `perf`: Performance improvements
- `ci`: CI/CD changes
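As a rough illustration, the `type(scope): description` format can be checked with a small regex. This is a hypothetical helper, not part of the repository's tooling; the type list simply mirrors the table above.

```python
import re

# Commit types from the table above (hypothetical helper, not repo tooling).
COMMIT_TYPES = ("feat", "fix", "docs", "style", "refactor", "test", "chore", "perf", "ci")

# type, optional (scope), then ": " and a non-empty description.
PATTERN = re.compile(rf"^({'|'.join(COMMIT_TYPES)})(\([a-z0-9-]+\))?: .+")

def is_conventional(message: str) -> bool:
    """Return True if the first line of a commit message matches the format."""
    first_line = message.splitlines()[0] if message else ""
    return bool(PATTERN.match(first_line))

print(is_conventional("feat(core): add new analysis engine"))  # True
print(is_conventional("update stuff"))                         # False
```

A check like this is often wired into a `commit-msg` hook so malformed messages are rejected before review.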
```bash
git push origin your-branch-name
```

Then create a pull request using our PR template.
## Package Structure

```
packages/
├── core/        # Core business logic
│   ├── src/         # Source code
│   ├── tests/       # Tests
│   └── README.md    # Package documentation
├── cli/         # Command-line interface
├── api/         # REST API server
└── frontend/    # React frontend
```
### core

Contains the main analysis logic, repository fetching, and LLM integration.

Key files:

- `analyzer.py`: Main analysis engine
- `fetcher.py`: Repository content fetching
- `reporter.py`: Report generation
- `config.py`: Configuration management
### cli

Command-line interface for batch analysis and automation.

### api

FastAPI server with user management, background processing, and database integration.

### frontend

React application with TypeScript, providing a modern web interface.
## Coding Standards

### Python

- Code Style: We use Ruff for linting and formatting
- Type Hints: Required for all functions
- Docstrings: Google-style docstrings for all public functions
- Line Length: 100 characters maximum
- Imports: Grouped and sorted (handled by Ruff)
```python
def analyze_repository(url: str, model: str = "gemini-2.5-flash") -> AnalysisResult:
    """Analyze a GitHub repository using the specified LLM model.

    Args:
        url: GitHub repository URL
        model: LLM model to use for analysis

    Returns:
        Analysis result containing scores and insights

    Raises:
        InvalidURLError: If the repository URL is invalid
        AnalysisError: If analysis fails
    """
    # Implementation here
```

### TypeScript/React

- Code Style: ESLint + Prettier configuration
- Components: Functional components with hooks
- File Naming: PascalCase for components, camelCase for utilities
- Exports: Named exports preferred over default exports
```typescript
interface RepositoryAnalysisProps {
  repositoryUrl: string;
  onAnalysisComplete: (result: AnalysisResult) => void;
}

export const RepositoryAnalysis: React.FC<RepositoryAnalysisProps> = ({
  repositoryUrl,
  onAnalysisComplete,
}) => {
  // Component implementation
};
```

### General Guidelines

- Error Handling: Always handle errors gracefully
- Logging: Use appropriate log levels
- Security: Never commit secrets or API keys
- Performance: Consider performance implications of changes
- Accessibility: Follow accessibility best practices in frontend
## Testing

```
packages/[package]/tests/
├── unit/            # Unit tests
├── integration/     # Integration tests
├── performance/     # Performance tests
└── fixtures/        # Test data and fixtures
```
```python
import pytest

from src.analyzer import analyze_repository


def test_analyze_repository_success():
    """Test successful repository analysis."""
    result = analyze_repository("https://github.com/owner/repo")
    assert result.status == "success"
    assert result.score > 0


def test_analyze_repository_invalid_url():
    """Test analysis with invalid URL."""
    with pytest.raises(InvalidURLError):
        analyze_repository("invalid-url")
```

```typescript
import { render, screen } from "@testing-library/react";
import { RepositoryAnalysis } from "./RepositoryAnalysis";

test("renders repository analysis component", () => {
  render(
    <RepositoryAnalysis
      repositoryUrl="https://github.com/owner/repo"
      onAnalysisComplete={jest.fn()}
    />
  );
  expect(screen.getByText(/analyze repository/i)).toBeInTheDocument();
});
```

```bash
# All tests with coverage
./scripts/test.sh --coverage

# Specific package
./scripts/test.sh --package core

# Integration tests
./scripts/test.sh --integration

# Docker-based testing
./scripts/test.sh --docker
```

## Documentation

- Python: Google-style docstrings
- TypeScript: JSDoc comments
- README files: Each package should have a comprehensive README
- API Endpoints: Documented in OpenAPI/Swagger format
- Auto-generated: Available at `/docs` when the API server is running
- Getting Started: Clear setup instructions
- Examples: Real-world usage examples
- Troubleshooting: Common issues and solutions
## Pull Request Process

Before submitting:

- Rebase: Ensure your branch is up-to-date with main
- Test: All tests pass locally
- Lint: Code follows style guidelines
- Documentation: Updates included if needed
Your pull request should include:

- Title: Clear, descriptive title
- Description: Detailed description of changes
- Tests: New tests for new functionality
- Documentation: Updated documentation
- Breaking Changes: Clearly marked and documented
Review process:

- Automated Checks: CI/CD pipeline must pass
- Code Review: At least one maintainer review
- Testing: Manual testing if required
- Approval: Maintainer approval required for merge
After merge:

- Cleanup: Delete feature branch
- Monitor: Watch for any issues in production
- Follow-up: Address any post-merge feedback
## Release Process

We use Semantic Versioning:
- MAJOR: Breaking changes
- MINOR: New features (backward compatible)
- PATCH: Bug fixes (backward compatible)
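The bump rules above can be expressed as a tiny helper. `bump` is a hypothetical illustration of the versioning scheme, not a script in this repository.

```python
def bump(version: str, change: str) -> str:
    """Return the next semantic version for a given change type.

    change: "major" (breaking), "minor" (new feature), or "patch" (bug fix).
    """
    major, minor, patch = (int(part) for part in version.split("."))
    if change == "major":
        return f"{major + 1}.0.0"        # breaking change resets minor and patch
    if change == "minor":
        return f"{major}.{minor + 1}.0"  # new backward-compatible feature resets patch
    if change == "patch":
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown change type: {change}")

print(bump("1.4.2", "minor"))  # → 1.5.0
```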
Release steps:

- Update Version: Update version in `pyproject.toml` files
- Update Changelog: Document changes in `CHANGELOG.md`
- Create Release: Tag and create GitHub release
- Deploy: Automated deployment via CI/CD
- Announce: Notify users of new release
## Getting Help

- Documentation: Check package README files
- Examples: Look at existing code and tests
- Issues: Search existing GitHub issues
- Discussions: Use GitHub Discussions for questions
Contact channels:

- GitHub Issues: For bug reports and feature requests
- GitHub Discussions: For questions and general discussion
- Email: [maintainer-email] for security issues
## Recognition

Contributors will be recognized in:
- README.md: Contributors section
- Release Notes: Highlighting major contributions
- GitHub: Contributor recognition features
Thank you for contributing to AI Project Analyzer! 🚀