Structured Logging

Learn how to implement structured logging with Rivaas using Go’s standard log/slog

The Rivaas Logging package provides production-ready structured logging with minimal dependencies. It uses Go's built-in log/slog for high performance and native integration with the Go ecosystem.

Features

  • Multiple Output Formats: JSON, text, and human-friendly console output
  • Context-Aware Logging: Automatic trace correlation with OpenTelemetry
  • Sensitive Data Redaction: Automatic sanitization of passwords, tokens, and secrets
  • Log Sampling: Reduce log volume in high-traffic scenarios
  • Convenience Methods: HTTP request logging, error logging with context, duration tracking
  • Dynamic Log Levels: Change log levels at runtime without restart
  • Functional Options API: Clean, composable configuration
  • Router Integration: Seamless integration following metrics/tracing patterns
  • Minimal External Dependencies: Only the Go standard library, plus OpenTelemetry for optional trace correlation

Quick Start

package main

import (
    "rivaas.dev/logging"
)

func main() {
    // Create a logger with console output
    log := logging.MustNew(
        logging.WithConsoleHandler(),
        logging.WithDebugLevel(),
    )

    log.Info("service started", "port", 8080, "env", "production")
    log.Debug("debugging information", "key", "value")
    log.Error("operation failed", "error", "connection timeout")
}

The same logger can emit JSON instead:

package main

import (
    "rivaas.dev/logging"
)

func main() {
    // Create a logger with JSON output
    log := logging.MustNew(
        logging.WithJSONHandler(),
        logging.WithServiceName("my-api"),
        logging.WithServiceVersion("v1.0.0"),
        logging.WithEnvironment("production"),
    )

    log.Info("user action", "user_id", "123", "action", "login")
    // Output: {"time":"2024-01-15T10:30:45.123Z","level":"INFO","msg":"user action","service":"my-api","version":"v1.0.0","env":"production","user_id":"123","action":"login"}
}

Or plain text key=value output:

package main

import (
    "rivaas.dev/logging"
)

func main() {
    // Create a logger with text output
    log := logging.MustNew(
        logging.WithTextHandler(),
        logging.WithServiceName("my-api"),
    )

    log.Info("service started", "port", 8080)
    // Output: time=2024-01-15T10:30:45.123Z level=INFO msg="service started" service=my-api port=8080
}

How It Works

  • Handler types determine output format (JSON, Text, Console)
  • Structured fields are key-value pairs, not string concatenation
  • Log levels control verbosity (Debug, Info, Warn, Error)
  • Service metadata automatically added to every log entry
  • Sensitive data automatically redacted (passwords, tokens, keys)

Learning Path

Follow these guides to master logging with Rivaas:

  1. Installation - Get started with the logging package
  2. Basic Usage - Learn handler types and output formats
  3. Configuration - Configure loggers with all available options
  4. Context Logging - Add trace correlation with OpenTelemetry
  5. Convenience Methods - Use helper methods for common patterns
  6. Log Sampling - Reduce log volume in high-traffic scenarios
  7. Dynamic Log Levels - Change log levels at runtime
  8. Router Integration - Integrate with Rivaas router
  9. Testing - Test utilities and patterns
  10. Best Practices - Performance tips and patterns
  11. Migration - Switch from other logging libraries
  12. Examples - See real-world usage patterns

1 - Installation

How to install and set up the Rivaas logging package

This guide covers how to install the logging package and understand its dependencies.

Installation

Install the logging package using go get:

go get rivaas.dev/logging

Requirements: Go 1.25 or higher

Dependencies

The logging package has minimal external dependencies to maintain simplicity and avoid bloat.

Dependency                        Purpose                              Required
Go stdlib (log/slog)              Core logging                         Yes
go.opentelemetry.io/otel/trace    Trace correlation in ContextLogger   Optional*
github.com/stretchr/testify       Test utilities                       Test only
* The OpenTelemetry trace dependency is only used by NewContextLogger() for automatic trace/span ID extraction. If you don’t use context-aware logging with tracing, this dependency has no runtime impact.

Verifying Installation

Create a simple test to verify the installation:

package main

import (
    "rivaas.dev/logging"
)

func main() {
    log := logging.MustNew(
        logging.WithConsoleHandler(),
    )
    
    log.Info("installation successful", "version", "v1.0.0")
}

Run the program:

go run main.go

You should see output like:

10:30:45.123 INFO  installation successful version=v1.0.0

Import Statement

Import the logging package in your Go files:

import "rivaas.dev/logging"

For context-aware logging with OpenTelemetry:

import (
    "rivaas.dev/logging"
    "go.opentelemetry.io/otel/trace"
)

Module Integration

Add to your go.mod:

module example.com/myapp

go 1.25

require (
    rivaas.dev/logging v1.0.0
)

Run go mod tidy to download dependencies:

go mod tidy

Next Steps

For complete API details, see the API Reference.

2 - Basic Usage

Learn the fundamentals of structured logging with handler types and output formats

This guide covers the essential operations for working with the logging package. Learn to choose handler types, set log levels, and produce structured log output.

Handler Types

The logging package supports three output formats, each optimized for different use cases.

JSON Handler (Production)

JSON format is ideal for production environments and log aggregation systems:

log := logging.MustNew(
    logging.WithJSONHandler(),
)

log.Info("user action", "user_id", "123", "action", "login")

Output:

{"time":"2024-01-15T10:30:45.123Z","level":"INFO","msg":"user action","user_id":"123","action":"login"}

Use cases:

  • Production environments.
  • Log aggregation systems like ELK, Splunk, Datadog.
  • Machine-parseable logs.
  • Cloud logging services.

Text Handler

Text format outputs key=value pairs, readable but still parseable:

log := logging.MustNew(
    logging.WithTextHandler(),
)

log.Info("request processed", "method", "GET", "path", "/api/users")

Output:

time=2024-01-15T10:30:45.123Z level=INFO msg="request processed" method=GET path=/api/users

Use cases:

  • Systems that prefer key=value format
  • Legacy log parsers
  • Environments where JSON is too verbose

Console Handler (Development)

Console format provides human-readable colored output for development:

log := logging.MustNew(
    logging.WithConsoleHandler(),
)

log.Info("server starting", "port", 8080)

Output (with colors):

10:30:45.123 INFO  server starting port=8080

Use cases:

  • Local development.
  • Debugging.
  • Terminal output.
  • Interactive troubleshooting.

Note: Console handler uses ANSI colors automatically. Colors are optimized for dark terminal themes.

Log Levels

Control log verbosity with log levels. Each level has a specific purpose.

Available Levels

// From most to least verbose:
logging.LevelDebug   // Detailed debugging information
logging.LevelInfo    // General informational messages
logging.LevelWarn    // Warning messages (not errors)
logging.LevelError   // Error messages

Setting Log Level

Configure the minimum log level during initialization:

log := logging.MustNew(
    logging.WithJSONHandler(),
    logging.WithLevel(logging.LevelInfo),  // Only Info, Warn, Error
)

log.Debug("this won't appear")  // Filtered out
log.Info("this will appear")    // Logged
log.Error("this will appear")   // Logged

Debug Level Shortcut

Enable debug logging with a convenience function:

log := logging.MustNew(
    logging.WithJSONHandler(),
    logging.WithDebugLevel(),  // Same as WithLevel(logging.LevelDebug)
)

Level Usage Guidelines

DEBUG - Detailed information for debugging

log.Debug("cache hit", "key", cacheKey, "ttl", ttl)

INFO - General informational messages

log.Info("server started", "port", 8080)

WARN - Warning but not an error

log.Warn("high memory usage", "used_mb", 8192, "total_mb", 16384)

ERROR - Errors that need attention

log.Error("database connection failed", "error", err, "retry_count", retries)

Structured Logging

The logging package uses structured logging with key-value pairs, not string concatenation.

Basic Structured Fields

log.Info("user logged in",
    "user_id", userID,
    "ip_address", ipAddress,
    "session_id", sessionID,
)

Output (JSON):

{
  "time": "2024-01-15T10:30:45.123Z",
  "level": "INFO",
  "msg": "user logged in",
  "user_id": "123",
  "ip_address": "192.168.1.1",
  "session_id": "abc-xyz"
}

Why Structured Logging?

BAD - String concatenation:

log.Info("User " + userID + " logged in from " + ipAddress)

GOOD - Structured fields:

log.Info("user logged in",
    "user_id", userID,
    "ip_address", ipAddress,
)

Benefits:

  • Machine-parseable
  • Searchable by field
  • Type-safe (numbers stay numbers)
  • Easier to aggregate and analyze

Type Support

The logger handles various types automatically:

log.Info("operation details",
    "name", "process_data",           // string
    "count", 1024,                     // int
    "enabled", true,                   // bool
    "duration", 250*time.Millisecond,  // duration
    "rate", 99.5,                      // float64
    "timestamp", time.Now(),           // time
    "error", err,                      // error
)

Complete Example

Putting it all together:

package main

import (
    "rivaas.dev/logging"
)

func main() {
    // Create logger for development
    log := logging.MustNew(
        logging.WithConsoleHandler(),
        logging.WithDebugLevel(),
    )

    // Log at different levels
    log.Debug("application starting", "version", "v1.0.0")
    log.Info("server listening", "port", 8080, "env", "development")
    log.Warn("high latency detected", "latency_ms", 250, "threshold_ms", 200)
    log.Error("database connection failed", "error", "connection timeout")
}

Common Patterns

Logging with Context

Add related fields that persist across multiple log calls:

// Create a logger with persistent fields
requestLog := log.With(
    "request_id", "req-123",
    "user_id", "user-456",
)

requestLog.Info("validation started")
requestLog.Info("validation completed")
// Both logs include request_id and user_id

Logging Errors

Always include the error:

if err := db.Connect(); err != nil {
    log.Error("database connection failed",
        "error", err,
        "host", dbHost,
        "port", dbPort,
        "retry_count", retries,
    )
}

Avoid Logging in Tight Loops

// BAD - logs thousands of times
for _, item := range items {
    log.Debug("processing", "item", item)
    process(item)
}

// GOOD - log once with summary
log.Info("processing batch", "count", len(items))
for _, item := range items {
    process(item)
}
log.Info("batch completed", "processed", len(items))

Next Steps

For complete API details, see the API Reference.

3 - Configuration

Configure loggers with all available options for production readiness

This guide covers all configuration options available in the logging package, from handler selection to service metadata.

Handler Configuration

Choose the appropriate handler type for your environment.

Handler Types

// JSON structured logging. Default and best for production.
logging.WithJSONHandler()

// Text key=value logging.
logging.WithTextHandler()

// Human-readable colored console. Best for development.
logging.WithConsoleHandler()

See Basic Usage for detailed handler comparison.

Log Level Configuration

Control which log messages are output.

Setting Minimum Level

// Set specific level
logging.WithLevel(logging.LevelDebug)
logging.WithLevel(logging.LevelInfo)
logging.WithLevel(logging.LevelWarn)
logging.WithLevel(logging.LevelError)

// Convenience function for debug
logging.WithDebugLevel()

Example:

log := logging.MustNew(
    logging.WithJSONHandler(),
    logging.WithLevel(logging.LevelInfo),  // Info, Warn, Error only
)

See Dynamic Log Levels to change levels at runtime.

Output Destination

By default, logs write to os.Stdout. Customize the output destination:

File Output

logFile, err := os.OpenFile("app.log", 
    os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0666)
if err != nil {
    log.Fatal(err)
}
defer logFile.Close()

logger := logging.MustNew(
    logging.WithJSONHandler(),
    logging.WithOutput(logFile),
)

Custom Writer

Any io.Writer can be used:

var buf bytes.Buffer
logger := logging.MustNew(
    logging.WithJSONHandler(),
    logging.WithOutput(&buf),
)

Multiple Writers

Use io.MultiWriter to write to multiple destinations:

logFile, _ := os.OpenFile("app.log", os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0666)

logger := logging.MustNew(
    logging.WithJSONHandler(),
    logging.WithOutput(io.MultiWriter(os.Stdout, logFile)),
)

Service Information

Add service metadata automatically to every log entry.

Service Metadata

logger := logging.MustNew(
    logging.WithJSONHandler(),
    logging.WithServiceName("my-api"),
    logging.WithServiceVersion("v1.0.0"),
    logging.WithEnvironment("production"),
)

logger.Info("server started", "port", 8080)

Output:

{
  "level": "INFO",
  "msg": "server started",
  "service": "my-api",
  "version": "v1.0.0",
  "env": "production",
  "port": 8080
}

Why Service Metadata?

  • Filtering: Query logs by service in aggregation tools
  • Correlation: Track logs across distributed services
  • Versioning: Identify which version produced logs
  • Environment: Distinguish between dev/staging/prod logs

Reading Service Information

Access configured service info programmatically:

serviceName := logger.ServiceName()
version := logger.ServiceVersion()
env := logger.Environment()

Source Code Location

Add file and line information to log entries for debugging.

Enable Source Location

logger := logging.MustNew(
    logging.WithJSONHandler(),
    logging.WithSource(true),
)

logger.Info("debug message")

Output:

{
  "level": "INFO",
  "msg": "debug message",
  "source": {
    "file": "main.go",
    "line": 42
  }
}

Performance note: Source location adds overhead. Enable only for debugging.

Custom Attribute Replacer

Transform or filter log attributes with a custom function.

Redacting Custom Fields

logger := logging.MustNew(
    logging.WithJSONHandler(),
    logging.WithReplaceAttr(func(groups []string, a slog.Attr) slog.Attr {
        // Redact credit card numbers
        if a.Key == "credit_card" {
            return slog.String(a.Key, "***REDACTED***")
        }
        // Transform time format
        if a.Key == "time" {
            if t, ok := a.Value.Any().(time.Time); ok {
                return slog.String(a.Key, t.Format(time.RFC3339))
            }
        }
        return a
    }),
)

Built-in Redaction

The following fields are automatically redacted:

  • password
  • token
  • secret
  • api_key
  • authorization

log.Info("authentication", 
    "username", "john",
    "password", "secret123",  // Automatically redacted
)
// Output: {...,"username":"john","password":"***REDACTED***"}

Dropping Attributes

Return an empty slog.Attr to drop an attribute:

logging.WithReplaceAttr(func(groups []string, a slog.Attr) slog.Attr {
    if a.Key == "internal_field" {
        return slog.Attr{}  // Drop this field
    }
    return a
})

Global Logger Registration

By default, loggers are not registered globally, allowing multiple independent logger instances.

Register as Global Default

logger := logging.MustNew(
    logging.WithJSONHandler(),
    logging.WithServiceName("my-api"),
    logging.WithGlobalLogger(),  // Register as slog default
)
defer logger.Shutdown(context.Background())

// Now third-party libraries using slog will use your logger
slog.Info("using global logger", "key", "value")

When to Use Global Registration

Use global registration when:

  • Third-party libraries use slog directly
  • You prefer slog.Info() over logger.Info()
  • Migrating from direct slog usage

Don’t use global registration when:

  • Running tests with isolated loggers
  • Creating libraries (avoid affecting global state)
  • Using multiple logging configurations

Default Behavior

Without WithGlobalLogger(), each logger is independent:

logger1 := logging.MustNew(logging.WithJSONHandler())
logger2 := logging.MustNew(logging.WithConsoleHandler())

logger1.Info("from logger1")  // JSON output
logger2.Info("from logger2")  // Console output
slog.Info("from default slog") // Standard slog output (independent)

Custom Logger

Provide your own slog.Logger for advanced scenarios.

Using Custom slog.Logger

customLogger := slog.New(slog.NewJSONHandler(os.Stdout, &slog.HandlerOptions{
    Level: slog.LevelDebug,
    AddSource: true,
}))

logger := logging.MustNew(
    logging.WithCustomLogger(customLogger),
)

Limitations:

  • Dynamic level changes (SetLevel) not supported with custom loggers
  • Service metadata must be added to custom logger directly

Debug Mode

Enable comprehensive debugging with a single option.

Enable Debug Mode

logger := logging.MustNew(
    logging.WithJSONHandler(),
    logging.WithDebugMode(true),
)

Automatically enables:

  • Debug log level (WithDebugLevel())
  • Source code location (WithSource(true))

Use cases:

  • Troubleshooting production issues
  • Development environments
  • Detailed debugging sessions

Complete Configuration Example

Putting all options together:

package main

import (
    "context"
    "os"

    "rivaas.dev/logging"
)

func main() {
    // Production configuration
    prodLogger := logging.MustNew(
        logging.WithJSONHandler(),
        logging.WithLevel(logging.LevelInfo),
        logging.WithServiceName("payment-api"),
        logging.WithServiceVersion("v2.1.0"),
        logging.WithEnvironment("production"),
        logging.WithOutput(os.Stdout),
    )
    defer prodLogger.Shutdown(context.Background())

    // Development configuration
    devLogger := logging.MustNew(
        logging.WithConsoleHandler(),
        logging.WithDebugLevel(),
        logging.WithSource(true),
        logging.WithServiceName("payment-api"),
        logging.WithEnvironment("development"),
    )
    defer devLogger.Shutdown(context.Background())

    // Choose based on environment
    var logger *logging.Logger
    if os.Getenv("ENV") == "production" {
        logger = prodLogger
    } else {
        logger = devLogger
    }

    logger.Info("application started")
}

Configuration Best Practices

Production Settings

logger := logging.MustNew(
    logging.WithJSONHandler(),           // Machine-parseable
    logging.WithLevel(logging.LevelInfo), // No debug spam
    logging.WithServiceName("my-api"),    // Service identification
    logging.WithServiceVersion(version),  // Version tracking
    logging.WithEnvironment("production"), // Environment filtering
)

Development Settings

logger := logging.MustNew(
    logging.WithConsoleHandler(),  // Human-readable
    logging.WithDebugLevel(),      // See everything
    logging.WithSource(true),      // File:line info
)

Testing Settings

buf := &bytes.Buffer{}
logger := logging.MustNew(
    logging.WithJSONHandler(),
    logging.WithOutput(buf),
    logging.WithLevel(logging.LevelDebug),
)
// Inspect buf for assertions

Next Steps

For complete option details, see the Options Reference.

4 - Context-Aware Logging

Add trace correlation and contextual information to logs with ContextLogger

This guide covers context-aware logging with automatic trace correlation for distributed tracing integration.

Overview

Context-aware logging automatically extracts trace and span IDs from OpenTelemetry contexts, enabling correlation between logs and distributed traces.

Why context-aware logging:

  • Correlate logs with distributed traces.
  • Track requests across service boundaries.
  • Debug multi-service workflows.
  • Include trace IDs automatically without manual passing.

ContextLogger Basics

ContextLogger wraps a standard Logger and automatically extracts trace information from context.

Creating a ContextLogger

import (
    "context"

    "rivaas.dev/logging"
)

// Create base logger
log := logging.MustNew(logging.WithJSONHandler())

// In a request handler with traced context
func handler(ctx context.Context) {
    // Create context logger
    cl := logging.NewContextLogger(ctx, log)
    
    cl.Info("processing request", "user_id", "123")
    // Output includes: "trace_id":"abc123...", "span_id":"def456..."
}

With OpenTelemetry Tracing

Full integration with OpenTelemetry:

package main

import (
    "context"
    "rivaas.dev/logging"
    "rivaas.dev/tracing"
)

func main() {
    // Initialize tracing
    tracer := tracing.MustNew(
        tracing.WithOTLP("localhost:4317"),
        tracing.WithServiceName("my-api"),
    )
    defer tracer.Shutdown(context.Background())

    // Initialize logging
    log := logging.MustNew(
        logging.WithJSONHandler(),
        logging.WithServiceName("my-api"),
    )

    // Start a trace
    ctx, span := tracer.Start(context.Background(), "operation")
    defer span.End()

    // Create context logger
    cl := logging.NewContextLogger(ctx, log)
    
    cl.Info("operation started")
    // Automatically includes trace_id and span_id
}

Output:

{
  "time": "2024-01-15T10:30:45.123Z",
  "level": "INFO",
  "msg": "operation started",
  "service": "my-api",
  "trace_id": "4bf92f3577b34da6a3ce929d0e0e4736",
  "span_id": "00f067aa0ba902b7"
}

Automatic Trace Correlation

When a context contains an active OpenTelemetry span, ContextLogger automatically extracts:

  • trace_id - Unique identifier for the entire trace
  • span_id - Unique identifier for this operation

Field Names

The logger uses OpenTelemetry semantic conventions:

{
  "trace_id": "4bf92f3577b34da6a3ce929d0e0e4736",
  "span_id": "00f067aa0ba902b7"
}

These field names match standard observability tools (Jaeger, Tempo, Honeycomb).

Using ContextLogger Methods

ContextLogger provides the same logging methods as Logger.

Logging at Different Levels

cl := logging.NewContextLogger(ctx, log)

cl.Debug("debugging info", "detail", "value")
cl.Info("informational message", "status", "ok")
cl.Warn("warning condition", "threshold", 100)
cl.Error("error occurred", "error", err)

All methods automatically include trace and span IDs if available.

Adding Additional Context

Use With() to add persistent fields:

// Add fields that persist across log calls
requestLogger := cl.With(
    "request_id", "req-123",
    "user_id", "user-456",
)

requestLogger.Info("validation started")
requestLogger.Info("validation completed")
// Both logs include request_id, user_id, trace_id, span_id

Accessing Trace Information

Retrieve trace IDs programmatically:

cl := logging.NewContextLogger(ctx, log)

traceID := cl.TraceID()   // "4bf92f3577b34da6a3ce929d0e0e4736"
spanID := cl.SpanID()     // "00f067aa0ba902b7"

if traceID != "" {
    // Context has active trace
    log.Info("traced operation", "trace_id", traceID)
}

Use cases:

  • Include trace ID in API responses
  • Add to custom headers
  • Pass to external systems

Without Active Trace

If context has no active span, ContextLogger behaves like a normal logger:

ctx := context.Background()  // No span
cl := logging.NewContextLogger(ctx, log)

cl.Info("message")
// Output: No trace_id or span_id fields

This makes ContextLogger safe to use everywhere, whether tracing is enabled or not.

Structured Context

Combine context logging with grouped attributes for clean organization.

// Get underlying slog.Logger for grouping
logger := cl.Logger()

requestLogger := logger.WithGroup("request")
requestLogger.Info("received", 
    "method", "POST",
    "path", "/api/users",
)

Output:

{
  "msg": "received",
  "trace_id": "4bf92f3577b34da6a3ce929d0e0e4736",
  "span_id": "00f067aa0ba902b7",
  "request": {
    "method": "POST",
    "path": "/api/users"
  }
}

Request Handler Pattern

Common pattern for HTTP request handlers:

func (s *Server) handleRequest(w http.ResponseWriter, r *http.Request) {
    // Extract or create traced context
    ctx := r.Context()
    
    // Create context logger
    cl := logging.NewContextLogger(ctx, s.logger)
    
    // Add request-specific fields
    requestLog := cl.With(
        "request_id", generateRequestID(),
        "method", r.Method,
        "path", r.URL.Path,
    )
    
    requestLog.Info("request started")
    
    // Process request...
    
    requestLog.Info("request completed", "status", 200)
}

Performance Considerations

Trace Extraction Overhead

Trace ID extraction happens once during NewContextLogger() creation:

// Trace extraction happens here (one-time cost)
cl := logging.NewContextLogger(ctx, log)

// No additional overhead
cl.Info("message 1")
cl.Info("message 2")
cl.Info("message 3")

Best practice: Create ContextLogger once per request/operation, reuse for all logging.

Pooling for High Load

For extreme high-load scenarios, consider pooling ContextLogger instances:

var contextLoggerPool = sync.Pool{
    New: func() any {
        return &logging.ContextLogger{}
    },
}

func getContextLogger(ctx context.Context, log *logging.Logger) *logging.ContextLogger {
    cl := contextLoggerPool.Get().(*logging.ContextLogger)
    // Reinitialize with new context
    *cl = *logging.NewContextLogger(ctx, log)
    return cl
}

func putContextLogger(cl *logging.ContextLogger) {
    contextLoggerPool.Put(cl)
}

Note: Only needed for >10k requests/second with extremely tight latency requirements.

Integration with Router

The Rivaas router automatically provides traced contexts:

import (
    "rivaas.dev/router"
    "rivaas.dev/logging"
)

r := router.MustNew()
logger := logging.MustNew(logging.WithJSONHandler())
r.SetLogger(logger)

r.GET("/api/users", func(c *router.Context) {
    // Context is already traced if tracing is enabled
    cl := logging.NewContextLogger(c.Request.Context(), logger)
    
    cl.Info("fetching users")
    
    // Or use the router's logger directly (already context-aware)
    c.Logger().Info("using router logger")
    
    c.JSON(200, users)
})

See Router Integration for more details.

Complete Example

Putting it all together:

package main

import (
    "context"
    "net/http"
    "rivaas.dev/logging"
    "rivaas.dev/tracing"
)

func main() {
    // Initialize tracing
    tracer := tracing.MustNew(
        tracing.WithOTLP("localhost:4317"),
        tracing.WithServiceName("payment-api"),
    )
    defer tracer.Shutdown(context.Background())

    // Initialize logging
    logger := logging.MustNew(
        logging.WithJSONHandler(),
        logging.WithServiceName("payment-api"),
        logging.WithServiceVersion("v1.0.0"),
    )
    defer logger.Shutdown(context.Background())

    // HTTP handler
    http.HandleFunc("/process", func(w http.ResponseWriter, r *http.Request) {
        // Start trace
        ctx, span := tracer.Start(r.Context(), "process_payment")
        defer span.End()

        // Create context logger
        cl := logging.NewContextLogger(ctx, logger)
        
        // Add request context
        requestLog := cl.With(
            "request_id", r.Header.Get("X-Request-ID"),
            "user_id", r.Header.Get("X-User-ID"),
        )

        requestLog.Info("payment processing started")

        // Process payment...

        requestLog.Info("payment processing completed", "status", "success")

        w.WriteHeader(http.StatusOK)
    })

    http.ListenAndServe(":8080", nil)
}

Next Steps

For API details, see the API Reference.

5 - Convenience Methods

Use helper methods for common logging patterns like HTTP requests and errors

This guide covers convenience methods that simplify common logging patterns with pre-structured fields.

Overview

The logging package provides helper methods for frequently-used logging scenarios:

  • LogRequest - HTTP request logging with standard fields
  • LogError - Error logging with context
  • LogDuration - Operation timing with automatic duration calculation
  • ErrorWithStack - Critical error logging with stack traces

LogRequest - HTTP Request Logging

Automatically log HTTP requests with standard fields.

Basic Usage

func handleRequest(w http.ResponseWriter, r *http.Request) {
    start := time.Now()
    
    // Process request...
    status := 200
    bytesWritten := 1024
    
    logger.LogRequest(r, 
        "status", status,
        "duration_ms", time.Since(start).Milliseconds(),
        "bytes", bytesWritten,
    )
}

Output:

{
  "level": "INFO",
  "msg": "http request",
  "method": "GET",
  "path": "/api/users",
  "remote": "192.168.1.1:54321",
  "user_agent": "Mozilla/5.0...",
  "status": 200,
  "duration_ms": 45,
  "bytes": 1024
}

Standard Fields Included

LogRequest automatically includes:

Field        Description                        Example
method       HTTP method                        GET, POST, PUT
path         Request path (without query)       /api/users
remote       Client remote address              192.168.1.1:54321
user_agent   Client User-Agent header           Mozilla/5.0...
query        Query string (only if non-empty)   page=1&limit=10

Additional Fields

Pass additional fields as key-value pairs:

logger.LogRequest(r,
    "status", statusCode,
    "duration_ms", elapsed,
    "bytes_written", bytesWritten,
    "user_id", userID,
    "cached", wasCached,
)

With Router Middleware

LogRequest is particularly useful in custom middleware:

func loggingMiddleware(logger *logging.Logger) func(http.Handler) http.Handler {
    return func(next http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            start := time.Now()
            
            // Wrap response writer to capture status/size
            wrapped := &responseWriter{ResponseWriter: w}
            
            next.ServeHTTP(wrapped, r)
            
            logger.LogRequest(r,
                "status", wrapped.status,
                "duration_ms", time.Since(start).Milliseconds(),
                "bytes", wrapped.written,
            )
        })
    }
}

Note: The Rivaas router includes built-in access log middleware. See Router Integration.

LogError - Error Logging with Context

Convenient error logging with automatic error field.

Basic Usage

if err := db.Insert(user); err != nil {
    logger.LogError(err, "database operation failed",
        "operation", "INSERT",
        "table", "users",
        "user_id", user.ID,
    )
    return err
}

Output:

{
  "level": "ERROR",
  "msg": "database operation failed",
  "error": "connection timeout: unable to reach database",
  "operation": "INSERT",
  "table": "users",
  "user_id": "123"
}

Why Use LogError?

Instead of:

log.Error("database operation failed", "error", err.Error(), "table", "users")

Use:

logger.LogError(err, "database operation failed", "table", "users")

Benefits:

  • Shorter, cleaner code
  • Consistent error field naming
  • Automatic error message extraction
  • Clear intent (logging an error condition)

With Retry Logic

func connectWithRetry(maxRetries int) error {
    for i := 0; i < maxRetries; i++ {
        if err := connect(); err != nil {
            logger.LogError(err, "connection failed",
                "attempt", i+1,
                "max_retries", maxRetries,
            )
            time.Sleep(backoff(i))
            continue
        }
        return nil
    }
    return errors.New("max retries exceeded")
}

LogDuration - Operation Timing

Track operation duration automatically.

Basic Usage

start := time.Now()

result, err := processData(data)

logger.LogDuration("data processing completed", start,
    "rows_processed", result.Count,
    "errors", result.Errors,
)

Output:

{
  "level": "INFO",
  "msg": "data processing completed",
  "duration_ms": 1543,
  "duration": "1.543s",
  "rows_processed": 1000,
  "errors": 0
}

Included Fields

LogDuration automatically adds:

Field         Description                                         Example
duration_ms   Duration in milliseconds (for filtering/alerting)   1543
duration      Human-readable duration string                      "1.543s"

Why Two Duration Fields?

  • duration_ms - Numeric value for:

    • Filtering: duration_ms > 1000
    • Alerting: Alert on slow operations
    • Aggregation: Average, percentiles, etc.
  • duration - Human-readable for:

    • Quick visual inspection
    • Log reading and debugging
    • Formats like "250ms", "1.5s", "2m30s"

Multiple Checkpoints

Track multiple stages:

start := time.Now()

// Stage 1
dataFetched := time.Now()
logger.LogDuration("data fetched", start, "rows", rowCount)

// Stage 2
processData(data)
logger.LogDuration("data processed", dataFetched, "rows", rowCount)

// Overall
logger.LogDuration("operation completed", start, "total_rows", rowCount)

With Error Handling

start := time.Now()
result, err := expensiveOperation()

if err != nil {
    logger.LogError(err, "operation failed")
    logger.LogDuration("operation failed", start, "partial_results", result.Count)
    return err
}

logger.LogDuration("operation succeeded", start, "results", result.Count)

ErrorWithStack - Error with Stack Traces

Log critical errors with stack traces for debugging.

Basic Usage

if err := criticalOperation(); err != nil {
    logger.ErrorWithStack("critical failure", err, true,
        "user_id", userID,
        "transaction_id", txID,
    )
    // Handle critical error...
}

Output:

{
  "level": "ERROR",
  "msg": "critical failure",
  "error": "database corruption detected",
  "user_id": "123",
  "transaction_id": "tx-456",
  "stack": "main.processPayment\n\t/app/main.go:42\nmain.handleRequest\n\t/app/main.go:28\n..."
}

When to Use Stack Traces

✓ Use for:

  • Critical errors requiring debugging
  • Unexpected conditions (panics, invariant violations)
  • Production incidents that need investigation
  • Errors in rarely-executed code paths

✗ Don’t use for:

  • Expected errors (validation failures, not found)
  • High-frequency errors (performance impact)
  • Errors where context is sufficient
  • Non-critical warnings

Stack Capture Cost

Stack traces have overhead:

// Low overhead - no stack trace
logger.LogError(err, "validation failed", "field", field)

// Higher overhead - captures stack trace
logger.ErrorWithStack("unexpected error", err, true, "field", field)

Performance impact:

  • Stack capture: ~100µs per call
  • Stack formatting: ~50µs per call
  • Additional log size: ~2-5KB
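
The overhead comes from capturing and formatting the goroutine's call stack. A minimal sketch of what such a capture looks like in Go, using runtime.Stack (the package's actual internals may differ):

```go
package main

import (
	"fmt"
	"runtime"
)

// captureStack returns the current goroutine's stack trace.
// The buffer allocation and the runtime walk are the main costs.
func captureStack() string {
	buf := make([]byte, 4096)
	n := runtime.Stack(buf, false) // false: current goroutine only
	return string(buf[:n])
}

func main() {
	fmt.Println(captureStack())
}
```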

Recommendation: Call ErrorWithStack with includeStack set to true sparingly, only for critical errors.

Conditional Stack Traces

Include stack traces only when needed:

func handleError(err error, critical bool) {
    logger.ErrorWithStack("operation failed", err, critical,
        "severity", map[bool]string{true: "critical", false: "normal"}[critical],
    )
}

// Normal error - no stack
handleError(validationErr, false)

// Critical error - with stack
handleError(dbCorruptionErr, true)

With Panic Recovery

func recoverPanic() {
    if r := recover(); r != nil {
        err := fmt.Errorf("panic: %v", r)
        logger.ErrorWithStack("panic recovered", err, true,
            "panic_value", r,
        )
    }
}

func riskyOperation() {
    defer recoverPanic()
    
    // Operations that might panic...
}

Combining Convenience Methods

Use multiple convenience methods together:

func handleRequest(w http.ResponseWriter, r *http.Request) {
    start := time.Now()
    
    // Process request
    result, err := processRequest(r)
    
    if err != nil {
        // Log error with context
        logger.LogError(err, "request processing failed",
            "path", r.URL.Path,
        )
        
        // Log request details
        logger.LogRequest(r, "status", 500)
        
        http.Error(w, "Internal Server Error", 500)
        return
    }
    
    // Log successful request
    logger.LogRequest(r, 
        "status", 200,
        "items", len(result.Items),
    )
    
    // Log timing
    logger.LogDuration("request completed", start,
        "items_processed", len(result.Items),
    )
    
    json.NewEncoder(w).Encode(result)
}

Performance Considerations

Pooled Attribute Slices

Convenience methods use pooled slices internally for efficiency:

// No allocations beyond the log entry itself
logger.LogRequest(r, "status", 200, "bytes", 1024)
logger.LogError(err, "failed", "retry", 3)
logger.LogDuration("done", start, "count", 100)

Implementation detail: Methods use sync.Pool for attribute slices, reducing GC pressure.
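
The pooling pattern looks roughly like this (an illustrative sketch of the technique, not the package's actual code):

```go
package main

import (
	"fmt"
	"sync"
)

// attrPool reuses attribute slices across log calls, avoiding a
// fresh allocation per entry. Pooling a pointer to the slice avoids
// an extra allocation when putting it back.
var attrPool = sync.Pool{
	New: func() any {
		s := make([]any, 0, 16)
		return &s
	},
}

// logWithPool borrows a slice, fills it, emits, then resets and
// returns it to the pool for reuse.
func logWithPool(emit func(msg string, args ...any), msg string, args ...any) {
	sp := attrPool.Get().(*[]any)
	*sp = append(*sp, args...)
	emit(msg, *sp...)
	*sp = (*sp)[:0] // reset length, keep capacity
	attrPool.Put(sp)
}

func main() {
	logWithPool(func(msg string, args ...any) {
		fmt.Println(msg, args)
	}, "request handled", "status", 200)
}
```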

Zero Allocations

Standard logging with convenience methods:

// Benchmark: 0 allocs/op for standard use
logger.LogRequest(r, "status", 200)
logger.LogError(err, "failed")
logger.LogDuration("done", start)

Exception: ErrorWithStack allocates for stack trace capture (intentional trade-off).

Next Steps

For API details, see the API Reference.

6 - Log Sampling

Reduce log volume in high-traffic scenarios with intelligent sampling

This guide covers log sampling to reduce log volume in high-throughput production environments while maintaining visibility.

Overview

Log sampling reduces the number of log entries written while preserving statistical sampling for debugging and analysis.

Why log sampling:

  • Reduce log storage costs in high-traffic scenarios.
  • Prevent log flooding during traffic spikes.
  • Maintain representative sample for debugging.
  • Always log critical errors. Sampling bypasses ERROR level.

When to use:

  • Services handling more than 1000 logs per second.
  • Cost-constrained log storage.
  • High-volume debug or info logging.
  • Noisy services with repetitive logs.

Basic Configuration

Configure sampling with SamplingConfig:

logger := logging.MustNew(
    logging.WithJSONHandler(),
    logging.WithSampling(logging.SamplingConfig{
        Initial:    100,           // Log first 100 entries unconditionally
        Thereafter: 100,           // After that, log 1 in every 100
        Tick:       time.Minute,   // Reset counter every minute
    }),
)

How Sampling Works

The sampling algorithm has three phases:

1. Initial Phase

Log the first Initial entries unconditionally:

SamplingConfig{
    Initial: 100,  // First 100 logs always written
    // ...
}

Purpose: Ensure you always see the beginning of operations, even if they’re short-lived.

2. Sampling Phase

After Initial entries, log 1 in every Thereafter entries:

SamplingConfig{
    Initial:    100,
    Thereafter: 100,  // Log 1%, drop 99%
    // ...
}

Examples:

  • Thereafter: 100 → 1% sampling (log 1 in 100)
  • Thereafter: 10 → 10% sampling (log 1 in 10)
  • Thereafter: 1000 → 0.1% sampling (log 1 in 1000)

3. Reset Phase

Reset counter every Tick interval:

SamplingConfig{
    Initial:    100,
    Thereafter: 100,
    Tick:       time.Minute,  // Reset every minute
}

Purpose: Ensure recent activity is always visible. Without resets, you might miss important recent events.

Sampling Behavior

Visual Timeline

Time:     0s    30s   60s   90s   120s  150s
          |-----|-----|-----|-----|-----|
Logs:     [Initial] [Sample] [Reset→Initial] [Sample]
          ▓▓▓▓▓     ░░░     ▓▓▓▓▓           ░░░
          100%      1%      100%            1%
  • ▓▓▓▓▓ - Initial phase (100% logging)
  • ░░░ - Sampling phase (1% logging)
  • Reset - Counter resets at Tick interval
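
The three phases reduce to a simple counter check. The sketch below illustrates the counting logic only; the real package also resets the counter every Tick and bypasses sampling for ERROR-level entries:

```go
package main

import "fmt"

// shouldLog sketches the counter-based sampling decision
// (an assumed, illustrative algorithm).
func shouldLog(counter, initial, thereafter int) bool {
	if counter <= initial {
		return true // initial phase: always log
	}
	if thereafter <= 0 {
		return true // no steady-state sampling configured
	}
	return (counter-initial)%thereafter == 0 // 1 in every Thereafter
}

func main() {
	kept := 0
	for i := 1; i <= 1000; i++ {
		if shouldLog(i, 100, 100) {
			kept++
		}
	}
	fmt.Println(kept) // 100 initial + 9 sampled = 109
}
```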

Error Bypass

Errors (level >= ERROR) always bypass sampling:

logger := logging.MustNew(
    logging.WithSampling(logging.SamplingConfig{
        Initial:    100,
        Thereafter: 100,  // 1% sampling
        Tick:       time.Minute,
    }),
)

// These may be sampled
logger.Debug("processing item", "id", id)  // May be dropped
logger.Info("request handled", "path", path)  // May be dropped

// These are NEVER sampled
logger.Error("database error", "error", err)  // Always logged
logger.Error("payment failed", "tx_id", txID)  // Always logged

Rationale: Critical errors should never be lost, regardless of sampling configuration.

Configuration Examples

High-Traffic API

// Log all errors, but only 1% of info/debug
logger := logging.MustNew(
    logging.WithJSONHandler(),
    logging.WithLevel(logging.LevelInfo),
    logging.WithSampling(logging.SamplingConfig{
        Initial:    1000,          // First 1000 requests fully logged
        Thereafter: 100,           // Then 1% sampling
        Tick:       5 * time.Minute, // Reset every 5 minutes
    }),
)

Result:

  • Startup: All logs for first 1000 requests
  • Steady state: 1% of logs (99% reduction)
  • Every 5 minutes: Full logging resumes briefly
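
To estimate the impact before deploying, you can do the arithmetic directly. The sketch below uses hypothetical traffic numbers and assumes the counter resets at every Tick:

```go
package main

import "fmt"

// keptLogs estimates entries written during one Tick window
// (illustrative arithmetic, not the package's implementation).
func keptLogs(totalPerTick, initial, thereafter int) int {
	if totalPerTick <= initial || thereafter <= 0 {
		return totalPerTick
	}
	return initial + (totalPerTick-initial)/thereafter
}

func main() {
	// Hypothetical: 2000 logs/s for a 5-minute window
	total := 2000 * 300
	kept := keptLogs(total, 1000, 100)
	fmt.Printf("kept %d of %d (%.2f%%)\n",
		kept, total, 100*float64(kept)/float64(total))
}
```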

Debug Logging in Production

// Enable debug logs with heavy sampling
logger := logging.MustNew(
    logging.WithJSONHandler(),
    logging.WithLevel(logging.LevelDebug),
    logging.WithSampling(logging.SamplingConfig{
        Initial:    50,            // See first 50 debug logs
        Thereafter: 1000,          // Then 0.1% sampling
        Tick:       10 * time.Minute, // Reset every 10 minutes
    }),
)

Use case: Temporarily enable debug logging in production without overwhelming logs.

Cost Optimization

// Aggressive sampling for cost reduction
logger := logging.MustNew(
    logging.WithJSONHandler(),
    logging.WithLevel(logging.LevelInfo),
    logging.WithSampling(logging.SamplingConfig{
        Initial:    500,
        Thereafter: 1000,  // 0.1% sampling (99.9% reduction)
        Tick:       time.Hour,
    }),
)

Result: Dramatic log volume reduction while maintaining statistical samples.

Special Configurations

No Sampling After Initial

Set Thereafter: 0 to log everything after initial:

SamplingConfig{
    Initial:    100,  // First 100 always logged
    Thereafter: 0,    // Then log everything
    Tick:       time.Minute,
}

Use case: Effectively disable sampling while keeping the configuration structure in place for later tuning.

No Reset

Set Tick: 0 to never reset the counter:

SamplingConfig{
    Initial:    1000,
    Thereafter: 100,
    Tick:       0,  // Never reset
}

Result: Sample continuously without periodic full logging.

Monitoring Sampling

Check Sampling State

Use DebugInfo() to inspect sampling state:

info := logger.DebugInfo()
samplingInfo := info["sampling"].(map[string]any)

fmt.Printf("Initial: %d\n", samplingInfo["initial"])
fmt.Printf("Thereafter: %d\n", samplingInfo["thereafter"])
fmt.Printf("Counter: %d\n", samplingInfo["counter"])

Output:

Initial: 100
Thereafter: 100
Counter: 1543

Log Sampling Metrics

Periodically log sampling state:

ticker := time.NewTicker(time.Minute)
go func() {
    for range ticker.C {
        info := logger.DebugInfo()
        if sampling, ok := info["sampling"].(map[string]any); ok {
            logger.Info("sampling state",
                "counter", sampling["counter"],
                "config", fmt.Sprintf("%d/%d/%v", 
                    sampling["initial"], 
                    sampling["thereafter"],
                    sampling["tick"]),
            )
        }
    }
}()

Performance Impact

Overhead

Sampling adds minimal overhead:

// Without sampling: ~500ns per log
// With sampling: ~520ns per log
// Overhead: ~20ns (4%)

Breakdown:

  • Atomic counter increment: ~10ns
  • Sampling decision: ~10ns
  • No additional allocations

When to Skip Sampling

Skip sampling if:

  • Logging <100 entries/second
  • Log storage cost is not a concern
  • Need every log entry (compliance, debugging)
  • Using external sampling (e.g., log aggregation system does sampling)

Best Practices

Start Conservative

Begin with light sampling, increase if needed:

// Phase 1: Start conservative
SamplingConfig{
    Initial:    1000,
    Thereafter: 10,  // 10% sampling
    Tick:       time.Minute,
}

// Phase 2: If still too much, increase sampling
SamplingConfig{
    Initial:    1000,
    Thereafter: 100,  // 1% sampling
    Tick:       time.Minute,
}

Reset Frequently

Reset counters to maintain visibility:

// Good - see recent activity
Tick: time.Minute

// Better - more responsive
Tick: 30 * time.Second

// Too infrequent - long stretches with only sampled logs
Tick: time.Hour

Per-Logger Sampling

Use different sampling for different loggers:

// Access logs - heavy sampling
accessLogger := logging.MustNew(
    logging.WithJSONHandler(),
    logging.WithSampling(logging.SamplingConfig{
        Initial:    100,
        Thereafter: 1000,  // 0.1%
        Tick:       time.Minute,
    }),
)

// Application logs - light sampling
appLogger := logging.MustNew(
    logging.WithJSONHandler(),
    logging.WithSampling(logging.SamplingConfig{
        Initial:    500,
        Thereafter: 10,  // 10%
        Tick:       time.Minute,
    }),
)

Monitor Log Volume

Track log volume to tune sampling:

var logCount atomic.Int64

// Increment logCount wherever you emit logs (for example, in a
// thin wrapper around the logger or a custom slog.Handler).
logger := logging.MustNew(
    logging.WithJSONHandler(),
    logging.WithSampling(config),
)

// Periodically check
ticker := time.NewTicker(time.Minute)
go func() {
    for range ticker.C {
        count := logCount.Swap(0)
        fmt.Printf("Logs/minute: %d\n", count)
        
        // Adjust sampling if needed
        if count > 10000 {
            // Consider more aggressive sampling
        }
    }
}()

Troubleshooting

Missing Expected Logs

Problem: Important logs being sampled out.

Solution: Use ERROR level for critical logs:

// May be sampled
logger.Info("payment processed", "tx_id", txID)

// Never sampled
logger.Error("payment failed", "tx_id", txID)

Too Much Log Volume

Problem: Sampling not reducing volume enough.

Solutions:

  1. Increase Thereafter value:
SamplingConfig{
    Thereafter: 1000,  // More aggressive: 0.1% instead of 1%
}
  2. Reduce Initial value:
SamplingConfig{
    Initial: 50,  // Fewer initial logs
}
  3. Increase Tick interval:
SamplingConfig{
    Tick: 5 * time.Minute,  // Reset less frequently
}

Lost Debug Context

Problem: Sampling makes debugging difficult.

Solution: Temporarily disable sampling:

// Create logger without sampling for debugging session
debugLogger := logging.MustNew(
    logging.WithJSONHandler(),
    logging.WithLevel(logging.LevelDebug),
    // No WithSampling() call
)

Next Steps

For API details, see the Options Reference.

7 - Dynamic Log Levels

Change log levels at runtime without restarting your application

This guide covers dynamic log level changes. You can adjust logging verbosity at runtime for troubleshooting and performance tuning.

Overview

Dynamic log levels enable changing the minimum log level without restarting your application.

Why dynamic log levels:

  • Enable debug logging temporarily for troubleshooting.
  • Reduce log volume during traffic spikes.
  • Runtime configuration via HTTP endpoint or signal handler.
  • Quick response to production issues without deployment.

Limitations:

  • Not supported with custom loggers.
  • Brief window where old and new levels may race during transition.

Basic Usage

Change log level with SetLevel:

logger := logging.MustNew(logging.WithJSONHandler())

// Initial level is Info (default)
logger.Info("this appears")
logger.Debug("this doesn't appear")

// Enable debug logging
if err := logger.SetLevel(logging.LevelDebug); err != nil {
    log.Printf("failed to change level: %v", err)
}

// Now debug logs appear
logger.Debug("this now appears")

Available Log Levels

Four log levels from least to most restrictive:

logging.LevelDebug   // Most verbose: Debug, Info, Warn, Error
logging.LevelInfo    // Info, Warn, Error
logging.LevelWarn    // Warn, Error
logging.LevelError   // Error only

Setting Levels

// Enable debug logging
logger.SetLevel(logging.LevelDebug)

// Reduce to warnings only
logger.SetLevel(logging.LevelWarn)

// Errors only
logger.SetLevel(logging.LevelError)

// Back to info
logger.SetLevel(logging.LevelInfo)

Checking Current Level

Get the current log level:

currentLevel := logger.Level()

switch currentLevel {
case logging.LevelDebug:
    fmt.Println("Debug mode enabled")
case logging.LevelInfo:
    fmt.Println("Info mode")
case logging.LevelWarn:
    fmt.Println("Warnings only")
case logging.LevelError:
    fmt.Println("Errors only")
}

HTTP Endpoint for Level Changes

Expose an HTTP endpoint to change log levels:

package main

import (
    "fmt"
    "net/http"
    "rivaas.dev/logging"
)

func main() {
    logger := logging.MustNew(
        logging.WithJSONHandler(),
        logging.WithLevel(logging.LevelInfo),
    )

    // Admin endpoint to change log level
    http.HandleFunc("/admin/loglevel", func(w http.ResponseWriter, r *http.Request) {
        if r.Method != http.MethodPost {
            http.Error(w, "Method not allowed", http.StatusMethodNotAllowed)
            return
        }
        
        levelStr := r.URL.Query().Get("level")
        var level logging.Level
        
        switch levelStr {
        case "debug":
            level = logging.LevelDebug
        case "info":
            level = logging.LevelInfo
        case "warn":
            level = logging.LevelWarn
        case "error":
            level = logging.LevelError
        default:
            http.Error(w, "Invalid level. Use: debug, info, warn, error", 
                http.StatusBadRequest)
            return
        }
        
        if err := logger.SetLevel(level); err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }
        
        w.WriteHeader(http.StatusOK)
        fmt.Fprintf(w, "Log level changed to %s\n", levelStr)
    })

    http.ListenAndServe(":8080", nil)
}

Usage:

# Enable debug logging
curl -X POST "http://localhost:8080/admin/loglevel?level=debug"

# Reduce to errors only
curl -X POST "http://localhost:8080/admin/loglevel?level=error"

# Back to info
curl -X POST "http://localhost:8080/admin/loglevel?level=info"

Signal Handler for Level Changes

Use Unix signals to change log levels:

package main

import (
    "os"
    "os/signal"
    "syscall"
    "rivaas.dev/logging"
)

func main() {
    logger := logging.MustNew(
        logging.WithJSONHandler(),
        logging.WithLevel(logging.LevelInfo),
    )

    // Setup signal handlers
    sigChan := make(chan os.Signal, 1)
    signal.Notify(sigChan, syscall.SIGUSR1, syscall.SIGUSR2)

    go func() {
        for sig := range sigChan {
            switch sig {
            case syscall.SIGUSR1:
                // SIGUSR1: Enable debug logging
                logger.SetLevel(logging.LevelDebug)
                logger.Info("debug logging enabled via SIGUSR1")
                
            case syscall.SIGUSR2:
                // SIGUSR2: Back to info logging
                logger.SetLevel(logging.LevelInfo)
                logger.Info("info logging restored via SIGUSR2")
            }
        }
    }()

    // Application logic...
    select {}
}

Usage:

# Get process ID
PID=$(pgrep myapp)

# Enable debug logging
kill -USR1 $PID

# Restore info logging
kill -USR2 $PID

Temporary Debug Sessions

Enable debug logging temporarily:

func enableDebugTemporarily(logger *logging.Logger, duration time.Duration) {
    oldLevel := logger.Level()
    
    logger.SetLevel(logging.LevelDebug)
    logger.Info("debug logging enabled temporarily", "duration", duration)
    
    time.AfterFunc(duration, func() {
        logger.SetLevel(oldLevel)
        logger.Info("debug logging disabled, level restored", "level", oldLevel)
    })
}

// Usage
enableDebugTemporarily(logger, 5*time.Minute)

With Configuration Management

Integrate with configuration reloading:

type Config struct {
    LogLevel string `config:"log_level"`
}

func (c *Config) Validate() error {
    validLevels := map[string]bool{
        "debug": true, "info": true, "warn": true, "error": true,
    }
    if !validLevels[c.LogLevel] {
        return fmt.Errorf("invalid log level: %s", c.LogLevel)
    }
    return nil
}

func applyConfig(logger *logging.Logger, cfg *Config) error {
    var level logging.Level
    switch cfg.LogLevel {
    case "debug":
        level = logging.LevelDebug
    case "info":
        level = logging.LevelInfo
    case "warn":
        level = logging.LevelWarn
    case "error":
        level = logging.LevelError
    default:
        return fmt.Errorf("invalid log level: %s", cfg.LogLevel)
    }
    
    return logger.SetLevel(level)
}

Error Handling

Custom Logger Limitation

Dynamic level changes don’t work with custom loggers:

customLogger := slog.New(slog.NewJSONHandler(os.Stdout, nil))
logger := logging.MustNew(
    logging.WithCustomLogger(customLogger),
)

// This fails
err := logger.SetLevel(logging.LevelDebug)
if errors.Is(err, logging.ErrCannotChangeLevel) {
    fmt.Println("Cannot change level on custom logger")
}

Workaround: Control level in your custom logger directly:

var levelVar slog.LevelVar
levelVar.Set(slog.LevelInfo)

customLogger := slog.New(slog.NewJSONHandler(os.Stdout, &slog.HandlerOptions{
    Level: &levelVar,
}))

// Change level directly
levelVar.Set(slog.LevelDebug)

Validation

Validate level before setting:

func setLevelSafe(logger *logging.Logger, levelStr string) error {
    levelMap := map[string]logging.Level{
        "debug": logging.LevelDebug,
        "info":  logging.LevelInfo,
        "warn":  logging.LevelWarn,
        "error": logging.LevelError,
    }
    
    level, ok := levelMap[levelStr]
    if !ok {
        return fmt.Errorf("invalid level: %s", levelStr)
    }
    
    return logger.SetLevel(level)
}

Use Cases

Troubleshooting Production

Enable debug logging temporarily to diagnose an issue:

# Enable debug logs
curl -X POST "http://localhost:8080/admin/loglevel?level=debug"

# Reproduce issue and capture logs

# Restore normal level
curl -X POST "http://localhost:8080/admin/loglevel?level=info"

Traffic Spike Response

Reduce logging during high traffic:

func monitorTraffic(logger *logging.Logger) {
    ticker := time.NewTicker(time.Minute)
    for range ticker.C {
        rps := getCurrentRPS()
        
        if rps > 10000 {
            // High traffic - reduce logging
            logger.SetLevel(logging.LevelWarn)
            logger.Warn("high traffic detected, reducing log level", "rps", rps)
        } else if rps < 5000 {
            // Normal traffic - restore info logging
            logger.SetLevel(logging.LevelInfo)
        }
    }
}

Gradual Rollout

Gradually enable debug logging across a fleet:

func gradualDebugRollout(logger *logging.Logger, percentage int) {
    // Only enable debug on N% of instances
    if rand.Intn(100) < percentage {
        logger.SetLevel(logging.LevelDebug)
        logger.Info("debug logging enabled in rollout", "percentage", percentage)
    }
}

Environment-Based Levels

Set initial level based on environment, allow runtime changes:

func initLogger() *logging.Logger {
    initialLevel := logging.LevelInfo
    
    switch os.Getenv("ENV") {
    case "development":
        initialLevel = logging.LevelDebug
    case "staging":
        initialLevel = logging.LevelInfo
    case "production":
        initialLevel = logging.LevelWarn
    }
    
    return logging.MustNew(
        logging.WithJSONHandler(),
        logging.WithLevel(initialLevel),
    )
}

Best Practices

Secure Admin Endpoints

Protect level-changing endpoints:

func logLevelHandler(logger *logging.Logger) http.HandlerFunc {
    return func(w http.ResponseWriter, r *http.Request) {
        // Authenticate admin
        token := r.Header.Get("X-Admin-Token")
        if !isValidAdminToken(token) {
            http.Error(w, "Unauthorized", http.StatusUnauthorized)
            return
        }
        
        // Rate limit
        if !rateLimiter.Allow() {
            http.Error(w, "Too many requests", http.StatusTooManyRequests)
            return
        }
        
        // Change level logic...
    }
}

Log Level Changes

Always log when level changes:

oldLevel := logger.Level()
logger.SetLevel(newLevel)
logger.Info("log level changed",
    "old_level", oldLevel.String(),
    "new_level", newLevel.String(),
    "reason", reason,
)

Automatic Restoration

Reset to safe level after debugging:

func debugWithTimeout(logger *logging.Logger, duration time.Duration) func() {
    oldLevel := logger.Level()
    logger.SetLevel(logging.LevelDebug)
    
    timer := time.AfterFunc(duration, func() {
        logger.SetLevel(oldLevel)
        logger.Info("debug session ended, level restored")
    })
    
    // Return cancellation function
    return func() {
        timer.Stop()
        logger.SetLevel(oldLevel)
    }
}

// Usage
cancel := debugWithTimeout(logger, 10*time.Minute)
defer cancel()

Monitor Level Changes

Track level changes over time:

type LevelChangeTracker struct {
    changes []LevelChange
    mu      sync.Mutex
}

type LevelChange struct {
    Timestamp time.Time
    OldLevel  logging.Level
    NewLevel  logging.Level
    Reason    string
}

func (t *LevelChangeTracker) Track(oldLevel, newLevel logging.Level, reason string) {
    t.mu.Lock()
    defer t.mu.Unlock()
    
    t.changes = append(t.changes, LevelChange{
        Timestamp: time.Now(),
        OldLevel:  oldLevel,
        NewLevel:  newLevel,
        Reason:    reason,
    })
}

Performance Considerations

Level Check Cost

Level checks are very fast:

// ~5ns per call
if logger.Logger().Enabled(ctx, logging.LevelDebug) {
    // Expensive debug operation
}

Transitional Race

Brief window where log level is transitioning:

// T0: Level is Info
logger.SetLevel(logging.LevelDebug)  // T1: Transitioning...
// T2: Level is Debug

Impact: Some logs during T1 may use old or new level inconsistently.

Mitigation: Accept minor inconsistency during transition (typically <1ms).

Next Steps

For API details, see the API Reference.

8 - Router Integration

Integrate logging with Rivaas router and the app package for full observability

This guide covers integrating the logging package with the Rivaas router and the app package for comprehensive observability.

Overview

The logging package integrates seamlessly with the Rivaas ecosystem:

  • Router - Set logger via SetLogger() method
  • App package - Automatic wiring with metrics and tracing
  • Context propagation - Automatic context-aware logging
  • Middleware - Access log and custom middleware support

Basic Router Integration

Set a logger on the router to enable request logging.

Simple Integration

import (
    "rivaas.dev/router"
    "rivaas.dev/logging"
)

func main() {
    // Create logger
    logger := logging.MustNew(
        logging.WithConsoleHandler(),
        logging.WithDebugLevel(),
    )
    
    // Create router and set logger
    r := router.MustNew()
    r.SetLogger(logger)
    
    r.GET("/", func(c *router.Context) {
        c.Logger().Info("handling request")
        c.JSON(200, map[string]string{"status": "ok"})
    })
    
    r.Run(":8080")
}

Accessing Logger in Handlers

The router context provides a logger instance:

r.GET("/api/users/:id", func(c *router.Context) {
    userID := c.Param("id")
    
    // Get logger from context
    log := c.Logger()
    log.Info("fetching user", "user_id", userID)
    
    user, err := fetchUser(userID)
    if err != nil {
        log.Error("failed to fetch user", "error", err, "user_id", userID)
        c.JSON(500, map[string]string{"error": "internal server error"})
        return
    }
    
    c.JSON(200, user)
})

App Package Integration

The app package provides batteries-included observability wiring.

Full Observability Setup

import (
    "rivaas.dev/app"
    "rivaas.dev/logging"
    "rivaas.dev/tracing"
)

func main() {
    a, err := app.New(
        app.WithServiceName("my-api"),
        app.WithObservability(
            app.WithLogging(
                logging.WithJSONHandler(),
                logging.WithLevel(logging.LevelInfo),
            ),
            app.WithMetrics(), // Prometheus is default
            app.WithTracing(
                tracing.WithOTLP("localhost:4317"),
            ),
        ),
    )
    if err != nil {
        log.Fatal(err)
    }
    defer a.Shutdown(context.Background())
    
    // Get router with logging, metrics, and tracing configured
    router := a.Router()
    
    router.GET("/api/users", func(c *router.Context) {
        // Logger automatically includes trace_id and span_id
        c.Logger().Info("fetching users")
        c.JSON(200, fetchUsers())
    })
    
    a.Run(":8080")
}

Benefits:

  • Automatic service metadata (name, version, environment)
  • Trace correlation (logs include trace_id and span_id)
  • Metrics integration (log metrics alongside custom metrics)
  • Graceful shutdown handling

Component Access

Access observability components from the app:

a, _ := app.New(
    app.WithServiceName("my-api"),
    app.WithObservability(
        app.WithLogging(logging.WithJSONHandler()),
        app.WithMetrics(),
        app.WithTracing(tracing.WithOTLP("localhost:4317")),
    ),
)

// Access components
logger := a.Logger()
router := a.Router()
tracer := a.Tracer()
metrics := a.Metrics()

// Use logger directly
logger.Info("application started", "port", 8080)

Context-Aware Logging

Router contexts automatically support trace correlation.

Automatic Trace Correlation

r.GET("/api/process", func(c *router.Context) {
    // Logger from context is automatically trace-aware
    log := c.Logger()
    
    log.Info("processing started")
    // Output includes trace_id and span_id if tracing enabled
    
    result := processData()
    
    log.Info("processing completed", "items", result.Count)
})

Output (with tracing enabled):

{
  "level": "INFO",
  "msg": "processing started",
  "trace_id": "4bf92f3577b34da6a3ce929d0e0e4736",
  "span_id": "00f067aa0ba902b7",
  "service": "my-api"
}

Manual Context Logger

Create a context logger explicitly:

r.GET("/api/data", func(c *router.Context) {
    // Get base logger
    baseLogger := a.Logger()
    
    // Create context logger with trace info
    cl := logging.NewContextLogger(c.Request.Context(), baseLogger)
    
    cl.Info("processing request")
})

Access Log Middleware

The router includes built-in access log middleware.

Enable Access Logging

import "rivaas.dev/router/middleware/accesslog"

r := router.MustNew()
logger := logging.MustNew(logging.WithJSONHandler())
r.SetLogger(logger)

// Enable access logging
r.Use(accesslog.New())

r.GET("/", func(c *router.Context) {
    c.JSON(200, map[string]string{"status": "ok"})
})

Output:

{
  "level": "INFO",
  "msg": "http request",
  "method": "GET",
  "path": "/",
  "status": 200,
  "duration_ms": 5,
  "bytes": 18,
  "remote": "192.168.1.1:54321",
  "user_agent": "Mozilla/5.0..."
}

Customize Access Logs

Exclude specific paths from access logs:

r.Use(accesslog.New(
    accesslog.WithExcludePaths("/health", "/metrics", "/ready"),
))

Add custom fields:

r.Use(accesslog.New(
    accesslog.WithFields(func(c *router.Context) map[string]any {
        return map[string]any{
            "api_version": c.GetHeader("X-API-Version"),
            "client_id": c.GetHeader("X-Client-ID"),
        }
    }),
))

Environment Variables

Configure logging via environment variables.

Standard OpenTelemetry Variables

# Service identification
export OTEL_SERVICE_NAME=my-api
export OTEL_SERVICE_VERSION=v1.0.0
export RIVAAS_ENVIRONMENT=production

The app package automatically reads these:

a, _ := app.New(
    // Service name from OTEL_SERVICE_NAME
    app.WithObservability(
        app.WithLogging(logging.WithJSONHandler()),
    ),
)

logger := a.Logger()
logger.Info("service started")
// Automatically includes service="my-api", version="v1.0.0", env="production"

Custom Environment Configuration

func createLogger() *logging.Logger {
    var opts []logging.Option
    
    // Handler based on environment
    switch os.Getenv("ENV") {
    case "development":
        opts = append(opts, logging.WithConsoleHandler())
    default:
        opts = append(opts, logging.WithJSONHandler())
    }
    
    // Level from environment
    logLevel := os.Getenv("LOG_LEVEL")
    switch logLevel {
    case "debug":
        opts = append(opts, logging.WithDebugLevel())
    case "warn":
        opts = append(opts, logging.WithLevel(logging.LevelWarn))
    case "error":
        opts = append(opts, logging.WithLevel(logging.LevelError))
    default:
        opts = append(opts, logging.WithLevel(logging.LevelInfo))
    }
    
    // Service metadata
    opts = append(opts,
        logging.WithServiceName(os.Getenv("SERVICE_NAME")),
        logging.WithServiceVersion(os.Getenv("SERVICE_VERSION")),
        logging.WithEnvironment(os.Getenv("ENV")),
    )
    
    return logging.MustNew(opts...)
}

Custom Middleware

Create custom logging middleware for specialized needs.

Request ID Middleware

func requestIDMiddleware(logger *logging.Logger) router.HandlerFunc {
    return func(c *router.Context) {
        requestID := c.GetHeader("X-Request-ID")
        if requestID == "" {
            requestID = generateRequestID()
        }
        
        // Add request ID to request context
        ctx := c.Request.Context()
        ctx = context.WithValue(ctx, "request_id", requestID)
        
        // Create logger with request ID
        reqLogger := logger.With("request_id", requestID)
        ctx = context.WithValue(ctx, "logger", reqLogger)
        c.Request = c.Request.WithContext(ctx)
        
        c.Next()
    }
}

// Usage
r.Use(requestIDMiddleware(logger))

User Context Middleware

func userContextMiddleware() router.HandlerFunc {
    return func(c *router.Context) {
        userID := extractUserID(c)
        
        if userID != "" {
            // Add user ID to logger
            log := c.Logger().With("user_id", userID)
            ctx := context.WithValue(c.Request.Context(), "logger", log)
            c.Request = c.Request.WithContext(ctx)
        }
        
        c.Next()
    }
}

Error Logging Middleware

func errorLoggingMiddleware() router.HandlerFunc {
    return func(c *router.Context) {
        c.Next()
        
        // Log errors after handler completes
        if c.HasErrors() {
            log := c.Logger()
            for _, err := range c.Errors() {
                log.Error("request error",
                    "error", err.Error(),
                    "type", err.Type,
                    "path", c.Request.URL.Path,
                )
            }
        }
    }
}

Complete Integration Example

Putting it all together:

package main

import (
    "context"
    "os"
    "rivaas.dev/app"
    "rivaas.dev/logging"
    "rivaas.dev/tracing"
    "rivaas.dev/router/middleware/accesslog"
)

func main() {
    // Initialize app with full observability
    a, err := app.New(
        app.WithServiceName("payment-api"),
        app.WithServiceVersion("v2.1.0"),
        app.WithObservability(
            app.WithLogging(
                logging.WithJSONHandler(),
                logging.WithLevel(logging.LevelInfo),
                logging.WithEnvironment(os.Getenv("ENV")),
            ),
            app.WithMetrics(),
            app.WithTracing(
                tracing.WithOTLP("localhost:4317"),
            ),
        ),
    )
    if err != nil {
        panic(err)
    }
    defer a.Shutdown(context.Background())
    
    router := a.Router()
    logger := a.Logger()
    
    // Add middleware
    router.Use(accesslog.New(
        accesslog.WithExcludePaths("/health", "/ready"),
    ))
    
    // Health endpoint (no logging)
    router.GET("/health", func(c *router.Context) {
        c.JSON(200, map[string]any{"status": "healthy"})
    })
    
    // API endpoints (with logging and tracing)
    api := router.Group("/api/v1")
    {
        api.POST("/payments", func(c *router.Context) {
            log := c.Logger()
            log.Info("payment request received")
            
            var payment Payment
            if err := c.BindJSON(&payment); err != nil {
                log.Error("invalid payment request", "error", err)
                c.JSON(400, map[string]any{"error": "invalid request"})
                return
            }
            
            result, err := processPayment(c.Request.Context(), payment)
            if err != nil {
                log.Error("payment processing failed", 
                    "error", err,
                    "payment_id", payment.ID,
                )
                c.JSON(500, map[string]any{"error": "processing failed"})
                return
            }
            
            log.Info("payment processed successfully",
                "payment_id", payment.ID,
                "amount", payment.Amount,
                "status", result.Status,
            )
            
            c.JSON(200, result)
        })
    }
    
    // Start server
    logger.Info("starting server", "port", 8080)
    if err := a.Run(":8080"); err != nil {
        logger.Error("server error", "error", err)
    }
}

Best Practices

Per-Request Loggers

Create request-scoped loggers with context:

r.GET("/api/data", func(c *router.Context) {
    log := c.Logger().With(
        "request_id", c.GetHeader("X-Request-ID"),
        "user_id", extractUserID(c),
    )
    
    log.Info("request started")
    // All subsequent logs include request_id and user_id
    log.Info("processing")
    log.Info("request completed")
})

Structured Context

Add structured context early in request lifecycle:

func contextMiddleware() router.HandlerFunc {
    return func(c *router.Context) {
        log := c.Logger().With(
            "path", c.Request.URL.Path,
            "method", c.Request.Method,
            "request_id", c.Request.Header.Get("X-Request-ID"),
        )
        ctx := context.WithValue(c.Request.Context(), "logger", log)
        c.Request = c.Request.WithContext(ctx)
        c.Next()
    }
}

Avoid Logging in Hot Paths

Use access log middleware instead of manual logging:

// BAD - manual logging in every handler
r.GET("/api/users", func(c *router.Context) {
    log := c.Logger()
    log.Info("request", "path", c.Request.URL.Path) // Duplicate
    // ... handle request
    log.Info("response", "status", 200) // Use access log instead
})

// GOOD - use access log middleware
r.Use(accesslog.New())
r.GET("/api/users", func(c *router.Context) {
    // Handle request - logging handled by middleware
})

Next Steps

For API details, see the API Reference.

9 - Testing

Test utilities and patterns for logging in unit and integration tests

This guide covers testing with the logging package. It includes test utilities, assertions, and best practices.

Overview

The logging package provides comprehensive testing utilities:

  • TestHelper - High-level test utilities with assertions.
  • NewTestLogger - Simple logger with in-memory buffer.
  • MockWriter - Record and inspect write operations.
  • CountingWriter - Track log volume without storing content.
  • SlowWriter - Simulate slow I/O for timeout testing.
  • HandlerSpy - Spy on slog.Handler operations.

Quick Start

Simple Test Logger

Create a logger with in-memory output:

func TestMyFunction(t *testing.T) {
    logger, buf := logging.NewTestLogger()
    
    myFunction(logger)
    
    // Parse and inspect logs
    entries, err := logging.ParseJSONLogEntries(buf)
    require.NoError(t, err)
    
    require.Len(t, entries, 1)
    assert.Equal(t, "INFO", entries[0].Level)
    assert.Equal(t, "operation completed", entries[0].Message)
}

TestHelper

High-level testing utility with convenience methods.

Basic Usage

func TestUserService(t *testing.T) {
    th := logging.NewTestHelper(t)
    
    svc := NewUserService(th.Logger)
    svc.CreateUser("alice")
    
    // Check logs were written
    th.AssertLog(t, "INFO", "user created", map[string]any{
        "username": "alice",
    })
}

TestHelper Methods

ContainsLog - Check if message exists:

if !th.ContainsLog("user created") {
    t.Error("expected user created log")
}

ContainsAttr - Check if attribute exists:

if !th.ContainsAttr("user_id", "123") {
    t.Error("expected user_id attribute")
}

CountLevel - Count logs by level:

errorCount := th.CountLevel("ERROR")
assert.Equal(t, 2, errorCount)

LastLog - Get most recent log:

last, err := th.LastLog()
require.NoError(t, err)
assert.Equal(t, "INFO", last.Level)

Logs - Get all logs:

logs, err := th.Logs()
require.NoError(t, err)
for _, log := range logs {
    fmt.Printf("%s: %s\n", log.Level, log.Message)
}

Reset - Clear buffer:

th.Reset()  // Start fresh for next test phase

Custom Configuration

Pass options to customize the test logger:

th := logging.NewTestHelper(t,
    logging.WithLevel(logging.LevelWarn),  // Only warnings and errors
    logging.WithServiceName("test-service"),
)

Parsing Log Entries

Parse JSON logs for inspection.

ParseJSONLogEntries

func TestLogging(t *testing.T) {
    logger, buf := logging.NewTestLogger()
    
    logger.Info("test message", "key", "value")
    logger.Error("test error", "error", "something failed")
    
    entries, err := logging.ParseJSONLogEntries(buf)
    require.NoError(t, err)
    require.Len(t, entries, 2)
    
    // First entry
    assert.Equal(t, "INFO", entries[0].Level)
    assert.Equal(t, "test message", entries[0].Message)
    assert.Equal(t, "value", entries[0].Attrs["key"])
    
    // Second entry
    assert.Equal(t, "ERROR", entries[1].Level)
    assert.Equal(t, "something failed", entries[1].Attrs["error"])
}

LogEntry Structure

type LogEntry struct {
    Time    time.Time       // Log timestamp
    Level   string          // "DEBUG", "INFO", "WARN", "ERROR"
    Message string          // Log message
    Attrs   map[string]any  // All other fields
}

Mock Writers

Test utilities for inspecting write behavior.

MockWriter

Records all writes for inspection:

func TestWriteBehavior(t *testing.T) {
    mw := &logging.MockWriter{}
    logger := logging.MustNew(
        logging.WithJSONHandler(),
        logging.WithOutput(mw),
    )
    
    logger.Info("test 1")
    logger.Info("test 2")
    logger.Info("test 3")
    
    // Verify write count
    assert.Equal(t, 3, mw.WriteCount())
    
    // Inspect last write
    lastWrite := mw.LastWrite()
    assert.Contains(t, string(lastWrite), "test 3")
    
    // Check total bytes
    assert.Greater(t, mw.BytesWritten(), 0)
    
    // Reset for next test
    mw.Reset()
}

CountingWriter

Count bytes without storing content:

func TestLogVolume(t *testing.T) {
    cw := &logging.CountingWriter{}
    logger := logging.MustNew(
        logging.WithJSONHandler(),
        logging.WithOutput(cw),
    )
    
    for i := 0; i < 1000; i++ {
        logger.Info("test message", "index", i)
    }
    
    // Verify volume
    bytesLogged := cw.Count()
    t.Logf("Total bytes logged: %d", bytesLogged)
    
    // Useful for volume tests without memory overhead
}

SlowWriter

Simulate slow I/O:

func TestSlowLogging(t *testing.T) {
    buf := &bytes.Buffer{}
    sw := logging.NewSlowWriter(100*time.Millisecond, buf)
    
    logger := logging.MustNew(
        logging.WithJSONHandler(),
        logging.WithOutput(sw),
    )
    
    start := time.Now()
    logger.Info("test")
    duration := time.Since(start)
    
    // Verify delay
    assert.GreaterOrEqual(t, duration, 100*time.Millisecond)
}

Testing Patterns

Testing Error Logging

func TestErrorHandling(t *testing.T) {
    th := logging.NewTestHelper(t)
    
    svc := NewService(th.Logger)
    err := svc.DoSomethingThatFails()
    
    require.Error(t, err)
    
    // Verify error was logged
    th.AssertLog(t, "ERROR", "operation failed", map[string]any{
        "error": "expected failure",
    })
}

Testing Log Levels

func TestLogLevels(t *testing.T) {
    th := logging.NewTestHelper(t,
        logging.WithLevel(logging.LevelWarn),
    )
    
    th.Logger.Debug("debug message")  // Won't appear
    th.Logger.Info("info message")    // Won't appear
    th.Logger.Warn("warn message")    // Will appear
    th.Logger.Error("error message")  // Will appear
    
    logs, _ := th.Logs()
    assert.Len(t, logs, 2)
    assert.Equal(t, "WARN", logs[0].Level)
    assert.Equal(t, "ERROR", logs[1].Level)
}

Testing Structured Fields

func TestStructuredLogging(t *testing.T) {
    th := logging.NewTestHelper(t)
    
    th.Logger.Info("user action",
        "user_id", "123",
        "action", "login",
        "timestamp", time.Now().Unix(),
    )
    
    // Verify specific attributes
    assert.True(t, th.ContainsAttr("user_id", "123"))
    assert.True(t, th.ContainsAttr("action", "login"))
    
    // Or use AssertLog for multiple attributes
    th.AssertLog(t, "INFO", "user action", map[string]any{
        "user_id": "123",
        "action":  "login",
    })
}

Testing Sampling

func TestSampling(t *testing.T) {
    th := logging.NewTestHelper(t,
        logging.WithSampling(logging.SamplingConfig{
            Initial:    10,
            Thereafter: 100,
            Tick:       time.Minute,
        }),
    )
    
    // Log many entries
    for i := 0; i < 1000; i++ {
        th.Logger.Info("test", "index", i)
    }
    
    logs, _ := th.Logs()
    
    // Should have ~20 logs (10 initial + ~10 sampled)
    assert.Less(t, len(logs), 50)
    assert.Greater(t, len(logs), 10)
}

Testing Context Logging

func TestContextLogger(t *testing.T) {
    th := logging.NewTestHelper(t)
    
    // Create context with trace info (mocked)
    ctx := context.Background()
    // Add trace to context...
    
    cl := logging.NewContextLogger(ctx, th.Logger)
    cl.Info("traced message")
    
    // Verify trace IDs in logs
    logs, _ := th.Logs()
    require.Len(t, logs, 1)
    
    // Check for trace_id if tracing was active
    if traceID := cl.TraceID(); traceID != "" {
        assert.Equal(t, traceID, logs[0].Attrs["trace_id"])
    }
}

Table-Driven Tests

Use table-driven tests for comprehensive coverage:

func TestLogLevelTable(t *testing.T) {
    tests := []struct {
        name          string
        level         logging.Level
        logFunc       func(*logging.Logger)
        expectLogged  bool
    }{
        {
            name:  "debug at info level",
            level: logging.LevelInfo,
            logFunc: func(l *logging.Logger) {
                l.Debug("debug message")
            },
            expectLogged: false,
        },
        {
            name:  "info at info level",
            level: logging.LevelInfo,
            logFunc: func(l *logging.Logger) {
                l.Info("info message")
            },
            expectLogged: true,
        },
        {
            name:  "error at warn level",
            level: logging.LevelWarn,
            logFunc: func(l *logging.Logger) {
                l.Error("error message")
            },
            expectLogged: true,
        },
    }
    
    for _, tt := range tests {
        t.Run(tt.name, func(t *testing.T) {
            th := logging.NewTestHelper(t,
                logging.WithLevel(tt.level),
            )
            
            tt.logFunc(th.Logger)
            
            logs, _ := th.Logs()
            if tt.expectLogged {
                assert.Len(t, logs, 1)
            } else {
                assert.Len(t, logs, 0)
            }
        })
    }
}

Integration Testing

Test logger integration with other components.

With HTTP Handlers

func TestHTTPHandler(t *testing.T) {
    th := logging.NewTestHelper(t)
    
    handler := NewHandler(th.Logger)
    
    req := httptest.NewRequest("GET", "/api/users", nil)
    rec := httptest.NewRecorder()
    
    handler.ServeHTTP(rec, req)
    
    // Verify logging
    th.AssertLog(t, "INFO", "request processed", map[string]any{
        "method": "GET",
        "path":   "/api/users",
    })
}

With Router

func TestRouterLogging(t *testing.T) {
    th := logging.NewTestHelper(t)
    
    r := router.MustNew()
    r.SetLogger(th.Logger)
    
    r.GET("/test", func(c *router.Context) {
        c.Logger().Info("handler called")
        c.JSON(200, map[string]any{"status": "ok"})
    })
    
    // Make request
    req := httptest.NewRequest("GET", "/test", nil)
    rec := httptest.NewRecorder()
    r.ServeHTTP(rec, req)
    
    // Verify handler logged
    assert.True(t, th.ContainsLog("handler called"))
}

Best Practices

Reset Between Tests

func TestMultiplePhases(t *testing.T) {
    th := logging.NewTestHelper(t)
    
    // Phase 1
    th.Logger.Info("phase 1")
    assert.True(t, th.ContainsLog("phase 1"))
    
    // Reset for phase 2
    th.Reset()
    
    // Phase 2
    th.Logger.Info("phase 2")
    logs, _ := th.Logs()
    assert.Len(t, logs, 1)  // Only phase 2 log
}

Use Subtests

func TestLoggingWithSubtests(t *testing.T) {
    th := logging.NewTestHelper(t)
    
    t.Run("info logging", func(t *testing.T) {
        th.Logger.Info("info message")
        assert.True(t, th.ContainsLog("info message"))
        th.Reset()
    })
    
    t.Run("error logging", func(t *testing.T) {
        th.Logger.Error("error message")
        assert.Equal(t, 1, th.CountLevel("ERROR"))
        th.Reset()
    })
}

Test Isolation

Each test should have its own logger:

func TestA(t *testing.T) {
    th := logging.NewTestHelper(t)  // Independent logger
    // Test A logic...
}

func TestB(t *testing.T) {
    th := logging.NewTestHelper(t)  // Independent logger
    // Test B logic...
}

Running Tests

# Run all tests
go test ./...

# Run with verbose output
go test -v ./...

# Run specific test
go test -run TestMyFunction

# With coverage
go test -cover ./...

# With race detector
go test -race ./...

Next Steps

For complete API details, see the API Reference.

10 - Best Practices

Production-ready logging patterns, performance tips, and recommended practices

This guide covers best practices for using the logging package in production environments.

Structured Logging

Always use structured fields instead of string concatenation.

Use Structured Fields

BAD - String concatenation:

log.Info("User " + userID + " logged in from " + ipAddress)

GOOD - Structured fields:

log.Info("user logged in",
    "user_id", userID,
    "ip_address", ipAddress,
    "session_id", sessionID,
)

Benefits:

  • Machine-parseable
  • Searchable by specific fields
  • Type-safe (numbers stay numbers)
  • Easier to aggregate and visualize
  • Better for log aggregation tools

Consistent Field Naming

Use consistent field names across your application:

// Good - consistent naming
log.Info("request started", "user_id", userID)
log.Info("database query", "user_id", userID)
log.Info("response sent", "user_id", userID)

// Bad - inconsistent naming
log.Info("request started", "user_id", userID)
log.Info("database query", "userId", userID)      // Different name
log.Info("response sent", "user", userID)         // Different name

Recommended conventions:

  • Use snake_case: user_id, request_id, duration_ms
  • Be specific: http_status not status, db_host not host
  • Use consistent units: duration_ms, size_bytes, count

Log Appropriate Levels

Choose the right log level for each message.

Level Guidelines

DEBUG - Detailed information for debugging

log.Debug("cache lookup",
    "key", cacheKey,
    "ttl", ttl,
    "hit", hit,
)

Use DEBUG for:

  • Internal state inspection
  • Flow control details
  • Cache hits/misses
  • Detailed algorithm steps

INFO - General informational messages

log.Info("server started",
    "port", 8080,
    "version", version,
)

Use INFO for:

  • Application lifecycle events (start, stop)
  • Significant business events
  • Successful operations
  • Configuration values

WARN - Warning but not an error

log.Warn("high memory usage",
    "used_mb", 8192,
    "total_mb", 16384,
    "percentage", 50,
)

Use WARN for:

  • Degraded performance
  • Using fallback behavior
  • Deprecated feature usage
  • Resource constraints

ERROR - Errors that need attention

log.Error("database connection failed",
    "error", err,
    "host", dbHost,
    "retry_count", retries,
)

Use ERROR for:

  • Operation failures
  • Exception conditions
  • Data integrity issues
  • External service failures

Include Context

Always include relevant context with log messages.

Minimal Context

// Bad - no context
log.Error("failed to save", "error", err)

Better - Includes Context

// Good - includes relevant context
log.Error("failed to save user data",
    "error", err,
    "user_id", user.ID,
    "operation", "update_profile",
    "retry_count", retries,
    "elapsed_ms", elapsed.Milliseconds(),
)

Context checklist:

  • What operation failed?
  • Which entity was involved?
  • What were the inputs?
  • How many times did we retry?
  • How long did it take?

Performance Considerations

Follow these guidelines for high-performance logging.

Avoid Logging in Tight Loops

BAD - logs thousands of times:

for _, item := range items {
    log.Debug("processing item", "item", item)
    process(item)
}

GOOD - log once with summary:

log.Info("processing batch started", "count", len(items))

for _, item := range items {
    process(item)
}

log.Info("processing batch completed",
    "count", len(items),
    "duration_ms", elapsed.Milliseconds(),
)

Use Appropriate Log Levels in Production

// Production configuration
logger := logging.MustNew(
    logging.WithJSONHandler(),
    logging.WithLevel(logging.LevelInfo),  // Skip debug logs
)

Impact:

  • DEBUG logs have overhead even if not written
  • Level checks are fast but not free
  • Set INFO or WARN in production

Defer Expensive Operations

Only compute expensive values if the log will be written:

// Bad - always computes
log.Debug("state", "expensive", expensiveComputation())

// Good - only compute if debug enabled
if log.Logger().Enabled(context.Background(), logging.LevelDebug) {
    log.Debug("state", "expensive", expensiveComputation())
}

Use Log Sampling

For high-volume services:

logger := logging.MustNew(
    logging.WithJSONHandler(),
    logging.WithSampling(logging.SamplingConfig{
        Initial:    1000,
        Thereafter: 100,  // 1% sampling
        Tick:       time.Minute,
    }),
)

See Log Sampling for details.

Don’t Log Sensitive Data

Protect user privacy and security.

Automatically Redacted Fields

These fields are automatically redacted:

  • password
  • token
  • secret
  • api_key
  • authorization

log.Info("authentication attempt",
    "username", "alice",
    "password", "secret123",  // Automatically redacted
)
// Output: {...,"password":"***REDACTED***"}

Custom Sensitive Fields

Add custom redaction:

logger := logging.MustNew(
    logging.WithJSONHandler(),
    logging.WithReplaceAttr(func(groups []string, a slog.Attr) slog.Attr {
        // Redact credit cards
        if a.Key == "credit_card" {
            return slog.String(a.Key, "***REDACTED***")
        }
        // Redact email addresses
        if a.Key == "email" {
            return slog.String(a.Key, maskEmail(a.Value.String()))
        }
        return a
    }),
)

What Not to Log

Never log:

  • Passwords or password hashes
  • Credit card numbers
  • Social Security numbers
  • API keys and tokens
  • Private keys
  • Session tokens
  • Personal health information (PHI)
  • Personally identifiable information (PII) without consent

Production Configuration

Recommended production setup.

Production Logger

func NewProductionLogger() *logging.Logger {
    return logging.MustNew(
        logging.WithJSONHandler(),              // Machine-parseable
        logging.WithLevel(logging.LevelInfo),   // No debug spam
        logging.WithServiceName(os.Getenv("SERVICE_NAME")),
        logging.WithServiceVersion(os.Getenv("VERSION")),
        logging.WithEnvironment("production"),
        logging.WithOutput(os.Stdout),          // Stdout for container logs
    )
}

Development Logger

func NewDevelopmentLogger() *logging.Logger {
    return logging.MustNew(
        logging.WithConsoleHandler(),  // Human-readable
        logging.WithDebugLevel(),      // See everything
        logging.WithSource(true),      // File:line info
    )
}

Environment-Based Configuration

func NewLogger() *logging.Logger {
    if os.Getenv("ENV") == "production" {
        return NewProductionLogger()
    }
    return NewDevelopmentLogger()
}

Error Handling Patterns

Best practices for logging errors.

Always Include Error Details

if err := db.Connect(); err != nil {
    log.Error("database connection failed",
        "error", err,
        "host", dbHost,
        "port", dbPort,
        "database", dbName,
        "retry_count", retries,
    )
    return err
}

Use LogError for Consistency

if err := operation(); err != nil {
    logger.LogError(err, "operation failed",
        "operation", "process_payment",
        "user_id", userID,
    )
    return err
}

Stack Traces for Critical Errors Only

// Normal error - no stack trace
if err := validation(); err != nil {
    logger.LogError(err, "validation failed", "field", field)
    return err
}

// Critical error - with stack trace
if err := criticalOperation(); err != nil {
    logger.ErrorWithStack("critical failure", err, true,
        "operation", "process_payment",
        "amount", amount,
    )
    return err
}

Request Logging

Best practices for HTTP request logging.

Use Access Log Middleware

import "rivaas.dev/router/middleware/accesslog"

r := router.MustNew()
r.SetLogger(logger)
r.Use(accesslog.New(
    accesslog.WithExcludePaths("/health", "/metrics"),
))

Don’t manually log every request:

// Bad - redundant with access log
r.GET("/api/users", func(c *router.Context) {
    c.Logger().Info("request received")  // Don't do this
    // ... handle request
    c.Logger().Info("request completed") // Don't do this
})

Per-Request Context

Add request-specific fields:

r.Use(func(c *router.Context) {
    reqLogger := c.Logger().With(
        "request_id", c.GetHeader("X-Request-ID"),
        "user_id", extractUserID(c),
    )
    ctx := context.WithValue(c.Request.Context(), "logger", reqLogger)
    c.Request = c.Request.WithContext(ctx)
    c.Next()
})

Graceful Shutdown

Always shut down loggers gracefully.

With Context Timeout

func main() {
    logger := logging.MustNew(logging.WithJSONHandler())
    defer func() {
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()
        
        if err := logger.Shutdown(ctx); err != nil {
            fmt.Fprintf(os.Stderr, "logger shutdown error: %v\n", err)
        }
    }()
    
    // Application logic...
}

With Signal Handling

func main() {
    logger := logging.MustNew(logging.WithJSONHandler())
    
    // Setup signal handling
    sigChan := make(chan os.Signal, 1)
    signal.Notify(sigChan, os.Interrupt, syscall.SIGTERM)
    
    go func() {
        <-sigChan
        logger.Info("shutting down...")
        
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()
        logger.Shutdown(ctx)
        
        os.Exit(0)
    }()
    
    // Application logic...
}

Testing Considerations

Make logging testable.

Inject Loggers

// Good - logger injected
type Service struct {
    logger *logging.Logger
}

func NewService(logger *logging.Logger) *Service {
    return &Service{logger: logger}
}

// In tests
func TestService(t *testing.T) {
    th := logging.NewTestHelper(t)
    svc := NewService(th.Logger)
    // Test and verify logs
}

Don’t use global loggers:

// Bad - global logger
var log = logging.MustNew(logging.WithJSONHandler())

type Service struct{}

func (s *Service) DoSomething() {
    log.Info("doing something")  // Can't test
}

Common Anti-Patterns

Avoid these common mistakes.

String Formatting in Log Calls

// Bad - string formatting
log.Info(fmt.Sprintf("User %s did %s", user, action))

// Good - structured fields
log.Info("user action", "user", user, "action", action)

Logging in Library Code

// Bad - library logging directly
func LibraryFunction() {
    log.Info("library function called")
}

// Good - library returns errors
func LibraryFunction() error {
    if err := something(); err != nil {
        return fmt.Errorf("library operation failed: %w", err)
    }
    return nil
}

// Caller logs
if err := LibraryFunction(); err != nil {
    log.Error("library call failed", "error", err)
}

Ignoring Shutdown Errors

// Bad - ignoring shutdown
defer logger.Shutdown(context.Background())

// Good - handling shutdown errors
defer func() {
    ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    defer cancel()
    if err := logger.Shutdown(ctx); err != nil {
        fmt.Fprintf(os.Stderr, "shutdown error: %v\n", err)
    }
}()

Monitoring and Alerting

Set up log-based monitoring.

Log Metrics

Track log volumes by level:

var logMetrics = struct {
    debugCount, infoCount, warnCount, errorCount atomic.Int64
}{}

// Periodically export metrics
go func() {
    ticker := time.NewTicker(time.Minute)
    for range ticker.C {
        metricsLogger.Info("log metrics",
            "debug_count", logMetrics.debugCount.Swap(0),
            "info_count", logMetrics.infoCount.Swap(0),
            "warn_count", logMetrics.warnCount.Swap(0),
            "error_count", logMetrics.errorCount.Swap(0),
        )
    }
}()

Alert on Error Rates

Configure alerts in your logging system:

  • Alert if ERROR count > 100/minute
  • Alert if ERROR rate increases >50% baseline
  • Alert on specific error patterns

Next Steps

For complete API details, see the API Reference.

11 - Migration Guides

Switch from other popular Go logging libraries to Rivaas logging

This guide helps you migrate from other popular Go logging libraries to Rivaas logging.

Overview

Switching to Rivaas logging is straightforward. The package offers better performance and stdlib integration while maintaining familiar patterns.

Common migrations: logrus, zap, zerolog, and the standard library log package.

From logrus

logrus is a popular structured logger, but Rivaas logging offers better performance and native Go integration.

Basic Setup

BEFORE (logrus):

import "github.com/sirupsen/logrus"

log := logrus.New()
log.SetFormatter(&logrus.JSONFormatter{})
log.SetLevel(logrus.InfoLevel)
log.SetOutput(os.Stdout)

log.WithFields(logrus.Fields{
    "user_id": "123",
    "action": "login",
}).Info("User logged in")

AFTER (rivaas/logging):

import "rivaas.dev/logging"

log := logging.MustNew(
    logging.WithJSONHandler(),
    logging.WithLevel(logging.LevelInfo),
    logging.WithOutput(os.Stdout),
)

log.Info("User logged in",
    "user_id", "123",
    "action", "login",
)

Key Differences

Feature       | logrus                      | rivaas/logging
Format config | &logrus.JSONFormatter{}     | logging.WithJSONHandler()
Fields        | WithFields(logrus.Fields{}) | Inline key-value pairs
Level         | logrus.InfoLevel            | logging.LevelInfo
Performance   | ~2000 ns/op                 | ~500 ns/op
Dependencies  | Many external               | Go stdlib only

Migration Steps

  1. Replace import:
// Old
import "github.com/sirupsen/logrus"

// New
import "rivaas.dev/logging"
  2. Update initialization:
// Old
log := logrus.New()
log.SetFormatter(&logrus.JSONFormatter{})
log.SetLevel(logrus.InfoLevel)

// New
log := logging.MustNew(
    logging.WithJSONHandler(),
    logging.WithLevel(logging.LevelInfo),
)
  3. Convert WithFields calls:
// Old
log.WithFields(logrus.Fields{
    "user_id": "123",
    "action": "login",
}).Info("message")

// New
log.Info("message",
    "user_id", "123",
    "action", "login",
)
  4. Update log levels:
// Old
logrus.DebugLevel -> logging.LevelDebug
logrus.InfoLevel  -> logging.LevelInfo
logrus.WarnLevel  -> logging.LevelWarn
logrus.ErrorLevel -> logging.LevelError

From zap

zap is very fast, but Rivaas logging offers similar performance with a simpler API.

Basic Setup

BEFORE (zap):

import "go.uber.org/zap"

logger, _ := zap.NewProduction()
defer logger.Sync()

logger.Info("User logged in",
    zap.String("user_id", "123"),
    zap.String("action", "login"),
    zap.Int("status", 200),
)

AFTER (rivaas/logging):

import "rivaas.dev/logging"

logger := logging.MustNew(logging.WithJSONHandler())
defer logger.Shutdown(context.Background())

logger.Info("User logged in",
    "user_id", "123",
    "action", "login",
    "status", 200,
)

Key Differences

Feature      | zap                    | rivaas/logging
Typed fields | zap.String("key", val) | Direct values
Shutdown     | logger.Sync()          | logger.Shutdown(ctx)
API style    | Typed wrappers         | Native Go types
Performance  | ~450 ns/op             | ~500 ns/op
Complexity   | High                   | Low

Migration Steps

  1. Replace import:
// Old
import "go.uber.org/zap"

// New
import "rivaas.dev/logging"
  2. Simplify initialization:
// Old
logger, _ := zap.NewProduction()

// New
logger := logging.MustNew(logging.WithJSONHandler())
  3. Remove type wrappers:
// Old
logger.Info("message",
    zap.String("name", name),
    zap.Int("count", count),
    zap.Bool("enabled", enabled),
)

// New
logger.Info("message",
    "name", name,
    "count", count,
    "enabled", enabled,
)
  4. Update shutdown:
// Old
defer logger.Sync()

// New
defer logger.Shutdown(context.Background())

From zerolog

zerolog is very fast, but Rivaas logging offers comparable performance with a simpler API built on the standard library.

Basic Setup

BEFORE (zerolog):

import "github.com/rs/zerolog"

logger := zerolog.New(os.Stdout).With().
    Str("service", "myapp").
    Str("version", "1.0.0").
    Logger()

logger.Info().
    Str("user_id", "123").
    Str("action", "login").
    Msg("User logged in")

AFTER (rivaas/logging):

import "rivaas.dev/logging"

logger := logging.MustNew(
    logging.WithJSONHandler(),
    logging.WithServiceName("myapp"),
    logging.WithServiceVersion("1.0.0"),
)

logger.Info("User logged in",
    "user_id", "123",
    "action", "login",
)

Key Differences

| Feature | zerolog | rivaas/logging |
|---|---|---|
| API style | Chaining | Functional options |
| Context | .With().Str().Logger() | WithServiceName() |
| Fields | .Str("k", v).Msg() | Inline pairs |
| Performance | ~400 ns/op | ~500 ns/op |
| Readability | Medium | High |

Migration Steps

  1. Replace import:
// Old
import "github.com/rs/zerolog"

// New
import "rivaas.dev/logging"
  2. Simplify initialization:
// Old
logger := zerolog.New(os.Stdout).With().
    Str("service", "myapp").
    Str("version", "1.0.0").
    Logger()

// New
logger := logging.MustNew(
    logging.WithJSONHandler(),
    logging.WithServiceName("myapp"),
    logging.WithServiceVersion("1.0.0"),
)
  3. Remove chaining:
// Old
logger.Info().
    Str("user_id", "123").
    Str("action", "login").
    Msg("User logged in")

// New
logger.Info("User logged in",
    "user_id", "123",
    "action", "login",
)

From stdlib log

Standard library log is simple but unstructured. Rivaas logging adds structure while using stdlib slog.

Basic Setup

BEFORE (stdlib log):

import "log"

log.SetOutput(os.Stdout)
log.SetPrefix("[INFO] ")
log.Printf("User %s logged in from %s", userID, ipAddress)

AFTER (rivaas/logging):

import "rivaas.dev/logging"

logger := logging.MustNew(
    logging.WithJSONHandler(),
    logging.WithLevel(logging.LevelInfo),
)

logger.Info("User logged in",
    "user_id", userID,
    "ip_address", ipAddress,
)

Key Benefits

| Feature | stdlib log | rivaas/logging |
|---|---|---|
| Structure | No | Yes |
| Log levels | No | Yes |
| Formats | Text only | JSON, Text, Console |
| Performance | Fast | Fast |
| Parsing | Manual | Automatic |

Migration Steps

  1. Replace import:
// Old
import "log"

// New
import "rivaas.dev/logging"
  2. Update initialization:
// Old
log.SetOutput(os.Stdout)
log.SetPrefix("[INFO] ")

// New
logger := logging.MustNew(
    logging.WithJSONHandler(),
    logging.WithLevel(logging.LevelInfo),
)
  3. Convert Printf to structured:
// Old
log.Printf("User %s logged in from %s", userID, ipAddress)

// New
logger.Info("User logged in",
    "user_id", userID,
    "ip_address", ipAddress,
)

Migration Checklist

Use this checklist when migrating:

  • Replace logger initialization
  • Update all log calls to structured format
  • Replace log level constants
  • Update context/field methods (WithFields → inline)
  • Replace typed field methods (zap.String → direct values)
  • Update error handling (Sync → Shutdown)
  • Test with new logger
  • Update imports
  • Remove old logger dependency from go.mod
  • Update documentation and examples

Gradual Migration

Migrate gradually to minimize risk.

Phase 1: Parallel Logging

Run both loggers side-by-side:

// Keep old logger
var oldLogger = logrus.New()

// Add new logger
var newLogger = logging.MustNew(logging.WithJSONHandler())

// Log to both
func logInfo(msg string, fields map[string]any) {
    // Old logger
    oldLogger.WithFields(logrus.Fields(fields)).Info(msg)
    
    // New logger
    args := make([]any, 0, len(fields)*2)
    for k, v := range fields {
        args = append(args, k, v)
    }
    newLogger.Info(msg, args...)
}

Phase 2: Feature Flag

Use a feature flag to switch between loggers:

// Logger is a small interface defined in your own code; each branch
// returns a value (or thin adapter) that satisfies it.
func getLogger() Logger {
    if os.Getenv("USE_NEW_LOGGER") == "true" {
        return logging.MustNew(logging.WithJSONHandler())
    }
    return logrus.New()
}

Phase 3: Full Migration

Once validated, remove old logger completely.

Performance Comparison

Benchmark results (lower is better):

| Logger | ns/op | allocs/op | Dependencies |
|---|---|---|---|
| stdlib slog | 450 | 0 | 0 |
| rivaas/logging | 500 | 0 | 1 (OTel) |
| zap | 450 | 0 | Many |
| zerolog | 400 | 0 | Several |
| logrus | 2000 | 5 | Many |

Note: rivaas/logging overhead is minimal compared to stdlib slog while adding valuable features.

Common Migration Issues

Issue: Missing Fields

Problem: Fields not appearing in logs.

Solution: Check field names match new format:

// Wrong - using old field format
log.Info("message", logrus.Fields{"key": "value"})

// Right - inline key-value pairs
log.Info("message", "key", "value")

Issue: Log Level Not Working

Problem: Debug logs not appearing.

Solution: Check log level configuration:

logger := logging.MustNew(
    logging.WithJSONHandler(),
    logging.WithDebugLevel(),  // Make sure to set debug level
)

Issue: Performance Regression

Problem: Logging slower than expected.

Solution: Check for common issues:

  • Logging in tight loops
  • Source location enabled in production
  • Not using appropriate log level

Getting Help

If you encounter issues during migration:

  1. Check the Troubleshooting guide
  2. Review Examples for patterns
  3. See Best Practices for recommendations
  4. Consult the API Reference

Next Steps

For complete API details, see the API Reference.

12 - Examples

Complete real-world examples of using the logging package

This guide provides complete, real-world examples of using the logging package in various scenarios.

Basic Application

Simple application with structured logging.

package main

import (
    "context"
    "os"
    "rivaas.dev/logging"
)

func main() {
    // Create logger
    logger := logging.MustNew(
        logging.WithJSONHandler(),
        logging.WithLevel(logging.LevelInfo),
        logging.WithServiceName("myapp"),
        logging.WithServiceVersion("v1.0.0"),
    )
    defer logger.Shutdown(context.Background())

    // Application logic
    logger.Info("application started",
        "port", 8080,
        "environment", os.Getenv("ENV"),
    )

    // Simulate work
    processData(logger)

    logger.Info("application stopped")
}

func processData(logger *logging.Logger) {
    logger.Info("processing data", "items", 100)
    // Process logic...
    logger.Info("data processing completed", "processed", 100)
}

HTTP Server

HTTP server with request logging.

package main

import (
    "context"
    "net/http"
    "time"
    "rivaas.dev/logging"
)

func main() {
    logger := logging.MustNew(
        logging.WithJSONHandler(),
        logging.WithServiceName("api-server"),
    )
    defer logger.Shutdown(context.Background())

    mux := http.NewServeMux()
    
    // Add logging middleware
    mux.HandleFunc("/", loggingMiddleware(logger, handleRoot))
    mux.HandleFunc("/api/users", loggingMiddleware(logger, handleUsers))

    logger.Info("server starting", "port", 8080)
    if err := http.ListenAndServe(":8080", mux); err != nil {
        logger.Error("server stopped", "error", err)
    }
}

func loggingMiddleware(logger *logging.Logger, next http.HandlerFunc) http.HandlerFunc {
    return func(w http.ResponseWriter, r *http.Request) {
        start := time.Now()
        
        // Wrap response writer to capture status
        wrapped := &responseWriter{ResponseWriter: w, statusCode: 200}
        
        next(wrapped, r)
        
        logger.LogRequest(r,
            "status", wrapped.statusCode,
            "duration_ms", time.Since(start).Milliseconds(),
        )
    }
}

type responseWriter struct {
    http.ResponseWriter
    statusCode int
}

func (w *responseWriter) WriteHeader(code int) {
    w.statusCode = code
    w.ResponseWriter.WriteHeader(code)
}

func handleRoot(w http.ResponseWriter, r *http.Request) {
    w.Write([]byte("Hello, World!"))
}

func handleUsers(w http.ResponseWriter, r *http.Request) {
    w.Write([]byte(`{"users": []}`))
}

Router Integration

Full router integration with tracing.

package main

import (
    "context"
    "rivaas.dev/app"
    "rivaas.dev/logging"
    "rivaas.dev/router"
    "rivaas.dev/router/middleware/accesslog"
    "rivaas.dev/tracing"
)

func main() {
    // Create app with full observability
    a, err := app.New(
        app.WithServiceName("user-api"),
        app.WithServiceVersion("v2.0.0"),
        app.WithObservability(
            app.WithLogging(
                logging.WithJSONHandler(),
                logging.WithLevel(logging.LevelInfo),
            ),
            app.WithTracing(
                tracing.WithOTLP("localhost:4317"),
            ),
        ),
    )
    if err != nil {
        panic(err)
    }
    defer a.Shutdown(context.Background())

    router := a.Router()
    logger := a.Logger()

    // Add access log middleware
    router.Use(accesslog.New(
        accesslog.WithExcludePaths("/health"),
    ))

    // Health endpoint
    router.GET("/health", func(c *router.Context) {
        c.JSON(200, map[string]string{"status": "healthy"})
    })

    // API endpoints
    api := router.Group("/api/v1")
    {
        api.GET("/users", getUsers(logger))
        api.POST("/users", createUser(logger))
    }

    logger.Info("server starting", "port", 8080)
    a.Run(":8080")
}

func getUsers(logger *logging.Logger) router.HandlerFunc {
    return func(c *router.Context) {
        log := c.Logger()
        log.Info("fetching users")
        
        users := fetchUsers()
        
        log.Info("users fetched", "count", len(users))
        c.JSON(200, users)
    }
}

func createUser(logger *logging.Logger) router.HandlerFunc {
    return func(c *router.Context) {
        log := c.Logger()
        
        var user User
        if err := c.BindJSON(&user); err != nil {
            log.Error("invalid request", "error", err)
            c.JSON(400, map[string]string{"error": "invalid request"})
            return
        }
        
        if err := saveUser(user); err != nil {
            log.Error("failed to save user", "error", err)
            c.JSON(500, map[string]string{"error": "internal error"})
            return
        }
        
        log.Info("user created", "user_id", user.ID)
        c.JSON(201, user)
    }
}

Multiple Loggers

Different loggers for different purposes.

package main

import (
    "context"
    "os"
    "rivaas.dev/logging"
)

type Application struct {
    appLogger   *logging.Logger
    debugLogger *logging.Logger
    auditLogger *logging.Logger
}

func NewApplication() *Application {
    // Application logger - JSON for production
    appLogger := logging.MustNew(
        logging.WithJSONHandler(),
        logging.WithLevel(logging.LevelInfo),
        logging.WithServiceName("myapp"),
    )

    // Debug logger - Console with source info
    debugLogger := logging.MustNew(
        logging.WithConsoleHandler(),
        logging.WithDebugLevel(),
        logging.WithSource(true),
    )

    // Audit logger - Separate file for compliance
    auditFile, err := os.OpenFile("audit.log",
        os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0666)
    if err != nil {
        panic(err)
    }
    auditLogger := logging.MustNew(
        logging.WithJSONHandler(),
        logging.WithOutput(auditFile),
        logging.WithServiceName("myapp-audit"),
    )

    return &Application{
        appLogger:   appLogger,
        debugLogger: debugLogger,
        auditLogger: auditLogger,
    }
}

func (a *Application) Run() {
    defer a.appLogger.Shutdown(context.Background())
    defer a.debugLogger.Shutdown(context.Background())
    defer a.auditLogger.Shutdown(context.Background())

    // Normal application log
    a.appLogger.Info("application started")

    // Debug information
    a.debugLogger.Debug("initialization complete",
        "config_loaded", true,
        "db_connected", true,
    )

    // Audit event
    a.auditLogger.Info("user action",
        "user_id", "123",
        "action", "login",
        "success", true,
    )
}

func main() {
    app := NewApplication()
    app.Run()
}

Environment-Based Configuration

Configure logging based on environment.

package main

import (
    "context"
    "os"
    "time"
    "rivaas.dev/logging"
)

func createLogger() *logging.Logger {
    env := os.Getenv("ENV")
    
    var opts []logging.Option
    
    switch env {
    case "development":
        opts = []logging.Option{
            logging.WithConsoleHandler(),
            logging.WithDebugLevel(),
            logging.WithSource(true),
        }
    case "staging":
        opts = []logging.Option{
            logging.WithJSONHandler(),
            logging.WithLevel(logging.LevelInfo),
            logging.WithServiceName(os.Getenv("SERVICE_NAME")),
            logging.WithEnvironment("staging"),
        }
    case "production":
        opts = []logging.Option{
            logging.WithJSONHandler(),
            logging.WithLevel(logging.LevelWarn),
            logging.WithServiceName(os.Getenv("SERVICE_NAME")),
            logging.WithServiceVersion(os.Getenv("VERSION")),
            logging.WithEnvironment("production"),
            logging.WithSampling(logging.SamplingConfig{
                Initial:    1000,
                Thereafter: 100,
                Tick:       time.Minute,
            }),
        }
    default:
        opts = []logging.Option{
            logging.WithJSONHandler(),
            logging.WithLevel(logging.LevelInfo),
        }
    }
    
    return logging.MustNew(opts...)
}

func main() {
    logger := createLogger()
    defer logger.Shutdown(context.Background())
    
    logger.Info("application started", "environment", os.Getenv("ENV"))
}

Worker Pool with Per-Worker Logging

Logging in concurrent workers.

package main

import (
    "context"
    "sync"
    "time"
    "rivaas.dev/logging"
)

type Worker struct {
    id     int
    logger *logging.Logger
}

func NewWorker(id int, baseLogger *logging.Logger) *Worker {
    // Derive a worker-specific logger so every record carries worker_id
    workerLogger := baseLogger.With("worker_id", id)

    return &Worker{
        id:     id,
        logger: workerLogger,
    }
}

func (w *Worker) Process(job Job) {
    start := time.Now()

    // worker_id is already attached by With, so only job fields are needed
    w.logger.Info("job started", "job_id", job.ID)

    // Process job
    time.Sleep(100 * time.Millisecond)

    w.logger.LogDuration("job completed", start, "job_id", job.ID)
}

type Job struct {
    ID int
}

func main() {
    logger := logging.MustNew(
        logging.WithJSONHandler(),
        logging.WithServiceName("worker-pool"),
    )
    defer logger.Shutdown(context.Background())

    // Create worker pool
    numWorkers := 4
    jobs := make(chan Job, 100)
    var wg sync.WaitGroup

    // Start workers
    for i := 0; i < numWorkers; i++ {
        wg.Add(1)
        worker := NewWorker(i, logger)
        
        go func() {
            defer wg.Done()
            for job := range jobs {
                worker.Process(job)
            }
        }()
    }

    // Send jobs
    for i := 0; i < 10; i++ {
        jobs <- Job{ID: i}
    }
    close(jobs)

    wg.Wait()
    logger.Info("all jobs completed")
}

Error Handling with Context

Comprehensive error logging.

package main

import (
    "context"
    "errors"
    "time"
    "rivaas.dev/logging"
)

type Service struct {
    logger *logging.Logger
}

func NewService(logger *logging.Logger) *Service {
    return &Service{logger: logger}
}

func (s *Service) ProcessPayment(ctx context.Context, payment Payment) error {
    log := s.logger.With(
        "payment_id", payment.ID,
        "amount", payment.Amount,
    )

    log.Info("processing payment")

    // Validation
    if err := s.validatePayment(payment); err != nil {
        log.LogError(err, "payment validation failed",
            "step", "validation",
        )
        return err
    }

    // Process with retry
    var lastErr error
    for retry := 0; retry < 3; retry++ {
        if err := s.chargePayment(payment); err != nil {
            lastErr = err
            log.LogError(err, "payment charge failed",
                "retry", retry,
                "max_retries", 3,
            )
            time.Sleep(time.Second * time.Duration(retry+1))
            continue
        }
        
        log.Info("payment processed successfully")
        return nil
    }

    // Critical failure - log with stack trace
    s.logger.ErrorWithStack("payment processing failed after retries",
        lastErr, true,
        "payment_id", payment.ID,
        "retries", 3,
    )
    
    return lastErr
}

func (s *Service) validatePayment(payment Payment) error {
    if payment.Amount <= 0 {
        return errors.New("invalid amount")
    }
    return nil
}

func (s *Service) chargePayment(payment Payment) error {
    // Simulate charging
    return nil
}

type Payment struct {
    ID     string
    Amount float64
}

Testing Example

Complete testing setup.

package myservice_test

import (
    "testing"
    "github.com/stretchr/testify/assert"
    "github.com/stretchr/testify/require"
    "rivaas.dev/logging"
)

func TestUserService(t *testing.T) {
    th := logging.NewTestHelper(t)
    
    svc := NewUserService(th.Logger)
    
    t.Run("create user", func(t *testing.T) {
        th.Reset()
        
        user, err := svc.CreateUser("alice", "alice@example.com")
        require.NoError(t, err)
        require.NotNil(t, user)
        
        // Verify logging
        th.AssertLog(t, "INFO", "user created", map[string]any{
            "username": "alice",
            "email":    "alice@example.com",
        })
    })
    
    t.Run("duplicate user", func(t *testing.T) {
        th.Reset()
        
        _, err := svc.CreateUser("alice", "alice@example.com")
        require.Error(t, err)
        
        // Verify error logging
        assert.True(t, th.ContainsLog("user creation failed"))
        assert.True(t, th.ContainsAttr("error", "user already exists"))
    })
}

Next Steps

For more examples, check the examples directory on GitHub.