This document describes the testing infrastructure and how to run tests for go-semver-audit.
Run all tests with verbose output:

```shell
go test -v ./...
```

Run tests for a specific package:

```shell
go test -v ./internal/analyzer/
go test -v ./internal/report/
go test -v ./cmd/go-semver-audit/
```

Run tests with the race detector enabled:

```shell
go test -race ./...
```

Generate a coverage profile and view the results:

```shell
# Generate coverage profile
go test -coverprofile=coverage.out ./...

# View coverage summary
go tool cover -func=coverage.out

# Generate HTML coverage report
go tool cover -html=coverage.out -o coverage.html
```

Or use the Makefile:

```shell
make test-coverage
```

This will:

- Run all tests with coverage enabled
- Generate `coverage.out` (machine-readable)
- Generate `coverage.html` (human-readable, open in a browser)
Current coverage (as of last update):

- `cmd/go-semver-audit`: 25.4%
- `internal/analyzer`: 26.4%
- `internal/report`: 99.0%
We don't enforce strict coverage thresholds, but aim to:
- Test all public APIs
- Cover critical business logic
- Test error conditions and edge cases
- Maintain or improve coverage with new changes
The project uses GitHub Actions for CI. The workflow (`.github/workflows/ci.yml`) runs on:

- Every push to the `main` or `develop` branches
- Every pull request targeting `main` or `develop`
The CI pipeline includes:

- Matrix: Tests across multiple OS (Ubuntu, Windows, macOS) and Go versions (1.21, 1.22)
- Steps:
  - Checkout code
  - Set up Go
  - Download dependencies
  - Verify dependencies
  - Run tests with race detector and coverage
  - Generate coverage reports
  - Upload coverage to Codecov (Ubuntu/Go 1.22 only)
- Runs `go vet` for static analysis
- Checks code formatting with `gofmt`
- Runs `staticcheck` for additional linting
- Verifies the binary can be built
- Tests the built binary with the `-version` flag
To simulate the CI environment locally:

```shell
# Format check
gofmt -s -l .

# Vet check
go vet ./...

# Staticcheck (install first if needed)
go install honnef.co/go/tools/cmd/staticcheck@latest
staticcheck ./...

# Build
go build -v ./cmd/go-semver-audit

# Test with race detector and coverage
go test -race -coverprofile=coverage.out ./...
```

Or run all checks at once:

```shell
make check
```

We use table-driven tests throughout the codebase:
```go
func TestParseUpgrade(t *testing.T) {
	tests := []struct {
		name    string
		spec    string
		want    *Upgrade
		wantErr bool
	}{
		{
			name:    "valid upgrade",
			spec:    "github.com/pkg/errors@v0.9.1",
			want:    &Upgrade{Module: "github.com/pkg/errors", NewVersion: "v0.9.1"},
			wantErr: false,
		},
		// ... more test cases
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			got, err := ParseUpgrade(tt.spec)
			if (err != nil) != tt.wantErr {
				t.Errorf("ParseUpgrade() error = %v, wantErr %v", err, tt.wantErr)
				return
			}
			// ... assertions
		})
	}
}
```

Test files follow Go conventions:
- Named `*_test.go`
- Located in the same package as the code they test
- Use the `testing` package
The `testdata/` directory contains fixtures for integration tests:

- `oldlib/` - Example library at version 1.0
- `newlib/` - Example library at version 2.0
- `userproject/` - Example project using the library
- Test Public APIs: All exported functions, types, and methods should have tests
- Test Error Cases: Don't just test the happy path
- Use Descriptive Names: Test names should clearly describe what they test
- Keep Tests Focused: Each test should verify one specific behavior
- Use Table-Driven Tests: When testing multiple similar cases
- Avoid External Dependencies: Mock or stub external services
- Make Tests Deterministic: Tests should always produce the same result
For example, a focused test for the JSON formatter:

```go
func TestFormatJSON(t *testing.T) {
	result := &analyzer.Result{
		Module:     "github.com/example/lib",
		OldVersion: "v1.0.0",
		NewVersion: "v2.0.0",
		Changes:    &analyzer.Diff{},
	}

	output, err := FormatJSON(result)
	if err != nil {
		t.Fatalf("FormatJSON() error = %v", err)
	}

	// Verify output is valid JSON
	var report JSONReport
	if err := json.Unmarshal([]byte(output), &report); err != nil {
		t.Errorf("FormatJSON() produced invalid JSON: %v", err)
	}

	// Verify key fields
	if report.Module != result.Module {
		t.Errorf("JSONReport.Module = %q, want %q", report.Module, result.Module)
	}
}
```

Run a specific test by name:

```shell
go test -run TestParseUpgrade ./internal/analyzer/
```

Run it with verbose output:

```shell
go test -v -run TestParseUpgrade ./internal/analyzer/
```

Generate coverage for just that test:

```shell
go test -coverprofile=coverage.out -run TestParseUpgrade ./internal/analyzer/
go tool cover -func=coverage.out
```

Most Go IDEs support setting breakpoints in tests. Alternatively, use `dlv`:
```shell
go install github.com/go-delve/delve/cmd/dlv@latest
dlv test ./internal/analyzer/ -- -test.run TestParseUpgrade
```

Currently, the project doesn't have benchmark tests, but they can be added following Go's benchmark conventions:
```go
func BenchmarkParseUpgrade(b *testing.B) {
	spec := "github.com/pkg/errors@v0.9.1"
	for i := 0; i < b.N; i++ {
		_, _ = ParseUpgrade(spec)
	}
}
```

Run benchmarks:

```shell
go test -bench=. ./...
```

For integration testing with real Go modules:
- Use the `testdata/` directory for fixture projects
- Consider using temporary directories for test isolation
- Mock external calls (e.g., downloading modules) when possible
`go vet` is a static analysis tool that examines Go source code:

```shell
go vet ./...
```

`staticcheck` provides advanced static analysis:

```shell
staticcheck ./...
```

Code formatting:

```shell
# Check formatting
gofmt -s -l .

# Apply formatting
gofmt -s -w .
```

Or use the Makefile:

```shell
make fmt
make lint
```

Some tests may need adjustments for Windows file paths. Use `filepath.Join()` and `filepath.ToSlash()` for cross-platform compatibility.
If no coverage profile is generated, ensure you're using the correct flags:

```shell
go test -coverprofile=coverage.out ./...
```

Race conditions should be fixed, not ignored. The race detector helps find concurrent access issues:

```shell
go test -race ./...
```

When contributing:
- Add tests for new features
- Update tests for changed behavior
- Ensure all tests pass locally before pushing
- Verify coverage hasn't decreased significantly
- Follow the existing test patterns and style
See CONTRIBUTING.md for more details.