Package: github.com/oarkflow/log (module)
Version: 1.0.79
Repository: https://github.com/oarkflow/log.git
Documentation: pkg.go.dev

# README

phuslog - Fastest structured logging


Features

  • Dependency Free
  • Simple and Clean Interface
  • Consistent Writer
    • IOWriter, io.Writer wrapper
    • ConsoleWriter, colorful & formatting
    • FileWriter, rotating & effective
    • MultiLevelWriter, multiple level dispatch
    • SyslogWriter, memory efficient syslog
    • JournalWriter, linux systemd logging
    • EventlogWriter, windows system event
    • AsyncWriter, asynchronous writing
  • Stdlib Log Adapter
    • Logger.Std, transform to std log instances
    • Logger.Slog, transform to log/slog instances
  • Third-party Logger Interceptor
    • logr, logr interceptor
    • gin, gin logging middleware
    • gorm, gorm logger interface
    • fiber, fiber logging handler
    • grpc, grpc logger interceptor
    • grpcgateway, grpcgateway logger interceptor
  • Useful utility functions (a short sketch follows this list)
    • Goid(), the goroutine id that matches the stack trace
    • NewXID(), create a tracing id
    • Fastrandn(n uint32), fast pseudorandom uint32 in [0,n)
    • IsTerminal(fd uintptr), isatty for Go
    • Printf(fmt string, a ...interface{}), printf-style logging
  • High Performance
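
A minimal sketch of the utility helpers listed above, assuming the signatures shown in the list; the Int and Xid field helpers mirror the ones used in the examples below, and the output values differ per run.

package main

import (
	"os"

	"github.com/phuslu/log"
)

func main() {
	// Goid returns the current goroutine id, matching the id shown in stack traces.
	// Fastrandn returns a fast pseudorandom uint32 in [0,n).
	log.Info().
		Int("goid", int(log.Goid())).
		Int("dice", int(log.Fastrandn(6))).
		Xid("trace_id", log.NewXID()).
		Msg("utility helpers")

	// IsTerminal reports whether the file descriptor is attached to a terminal.
	if log.IsTerminal(os.Stderr.Fd()) {
		// Printf logs a formatted message via the DefaultLogger.
		log.Printf("stderr is a terminal, pid=%d", os.Getpid())
	}
}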

Interfaces

Logger

// DefaultLogger is the global logger.
var DefaultLogger = Logger{
	Level:      DebugLevel,
	Caller:     0,
	TimeField:  "",
	TimeFormat: "",
	Writer:     &IOWriter{os.Stderr},
}

// Logger represents an active logging object that generates lines of JSON output to an io.Writer.
type Logger struct {
	// Level defines log levels.
	Level Level

	// Caller determines if the file:line of the caller is added as the "caller" key.
	// If Caller is negative, the full /path/to/file:line is added instead.
	Caller int

	// TimeField defines the time field name in output. It uses "time" if empty.
	TimeField string

	// TimeFormat specifies the time format in output. Uses RFC3339 with millisecond if empty.
	// If set to TimeFormatUnix or TimeFormatUnixMs, timestamps are formatted as Unix timestamps.
	TimeFormat string

	// TimeLocation specifies the location used by TimeFormat. Uses time.Local if empty.
	TimeLocation *time.Location

	// Writer specifies the writer of output. It uses a wrapped os.Stderr Writer if empty.
	Writer Writer
}

ConsoleWriter

// ConsoleWriter parses the JSON input and writes it in a colorized, human-friendly format to Writer.
// IMPORTANT: Don't use ConsoleWriter on critical path of a high concurrency and low latency application.
//
// Default output format:
//     {Time} {Level} {Goid} {Caller} > {Message} {Key}={Value} {Key}={Value}
type ConsoleWriter struct {
	// ColorOutput determines if colorized output is used.
	ColorOutput bool

	// QuoteString determines if string values are quoted.
	QuoteString bool

	// EndWithMessage determines if the message is output at the end of the line.
	EndWithMessage bool

	// Writer is the output destination. Uses os.Stderr if empty.
	Writer io.Writer

	// Formatter specifies an optional text formatter for creating a customized output,
	// If it is set, ColorOutput, QuoteString and EndWithMessage will be ignored.
	Formatter func(w io.Writer, args *FormatterArgs) (n int, err error)
}

// FormatterArgs is a struct parsed from the JSON input
type FormatterArgs struct {
	Time      string // "2019-07-10T05:35:54.277Z"
	Message   string // "a structure message"
	Level     string // "info"
	Caller    string // "main.go:123"
	Goid      string // "1"
	Stack     string // "<stack string>"
	KeyValues []struct {
		Key       string // "foo"
		Value     string // "bar"
	}
}

FileWriter

// FileWriter is a Writer that writes to the specified filename.
type FileWriter struct {
	// Filename is the file to write logs to.  Backup log files will be retained
	// in the same directory.
	Filename string

	// FileMode represents the file's mode and permission bits.  The default
	// mode is 0644
	FileMode os.FileMode

	// MaxSize is the maximum size in bytes of the log file before it gets rotated.
	MaxSize int64

	// MaxBackups is the maximum number of old log files to retain.  The default
	// is to retain all old log files
	MaxBackups int

	// TimeFormat specifies the time format of the filename, uses `2006-01-02T15-04-05` as the default format.
	// If set to `TimeFormatUnix` or `TimeFormatUnixMs`, times are formatted as UNIX timestamps.
	TimeFormat string

	// LocalTime determines if the time used for formatting the timestamps in
	// log files is the computer's local time.  The default is to use UTC time.
	LocalTime bool

	// HostName determines if the hostname is used when formatting backup filenames.
	HostName bool

	// ProcessID determines if the pid is used when formatting backup filenames.
	ProcessID bool

	// EnsureFolder ensures the file directory creation before writing.
	EnsureFolder bool

	// Header specifies an optional header function of the log file after rotation
	// (a sketch follows the Highlights list below).
	Header func(fileinfo os.FileInfo) []byte

	// Cleaner specifies an optional cleanup function for log backups after rotation;
	// if not set, the default behavior is to delete log files beyond MaxBackups.
	Cleaner func(filename string, maxBackups int, matches []os.FileInfo)
}

Highlights:

  • FileWriter implements both the log.Writer and io.Writer interfaces and is a recommended alternative to lumberjack.
  • FileWriter creates a symlink to the current log file; this requires administrator privileges on Windows.
  • FileWriter does not rotate on a broad TimeFormat value (daily or monthly) until the file reaches its MaxSize.
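
A minimal, hedged sketch of the Header hook from the struct above: it assumes the bytes returned by Header are written at the top of each file created after a rotation, and the service name in the header line is purely illustrative.

package main

import (
	"fmt"
	"os"

	"github.com/phuslu/log"
)

func main() {
	logger := log.Logger{
		Level: log.InfoLevel,
		Writer: &log.FileWriter{
			Filename:     "logs/main.log",
			MaxSize:      100 * 1024 * 1024,
			MaxBackups:   7,
			EnsureFolder: true,
			// Header is called after rotation; its return value leads the new file.
			Header: func(fileinfo os.FileInfo) []byte {
				return []byte(fmt.Sprintf("# service=example-app file=%s\n", fileinfo.Name()))
			},
		},
	}

	logger.Info().Str("foo", "bar").Msg("hello with a file header")
}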

Getting Started

Simple Logging Example

An out-of-the-box example. playground

package main

import (
	"github.com/phuslu/log"
)

func main() {
	log.Info().Str("foo", "bar").Int("number", 42).Msg("hi, phuslog")
	log.Info().Msgf("foo=%s number=%d error=%+v", "bar", 42, "an error")
}

// Output:
//   {"time":"2020-03-22T09:58:41.828Z","level":"info","foo":"bar","number":42,"message":"hi, phuslog"}
//   {"time":"2020-03-22T09:58:41.828Z","level":"info","message":"foo=bar number=42 error=an error"}

Note: By default log writes to os.Stderr

Customize the configuration and formatting:

To customize the logger field names and formats. playground

package main

import (
	"os"

	"github.com/phuslu/log"
)

func main() {
	log.DefaultLogger = log.Logger{
		Level:      log.InfoLevel,
		Caller:     1,
		TimeField:  "date",
		TimeFormat: "2006-01-02",
		Writer:     &log.IOWriter{os.Stdout},
	}

	log.Info().Str("foo", "bar").Msgf("hello %s", "world")

	logger := log.Logger{
		Level:      log.InfoLevel,
		TimeField:  "ts",
		TimeFormat: log.TimeFormatUnixMs,
	}

	logger.Log().Str("foo", "bar").Msg("")
}

// Output:
//    {"date":"2019-07-04","level":"info","caller":"prog.go:16","foo":"bar","message":"hello world"}
//    {"ts":1257894000000,"foo":"bar"}

Customize the log writer

To allow the use of ordinary functions as log writers, use WriterFunc.

logger := log.Logger{
	Writer: log.WriterFunc(func(e *log.Entry) (int, error) {
		if e.Level >= log.ErrorLevel {
			return os.Stderr.Write(e.Value())
		} else {
			return os.Stdout.Write(e.Value())
		}
	}),
}

logger.Info().Msg("a stdout entry")
logger.Error().Msg("a stderr entry")

Pretty Console Writer

To log a human-friendly, colorized output, use ConsoleWriter. playground

if log.IsTerminal(os.Stderr.Fd()) {
	log.DefaultLogger = log.Logger{
		TimeFormat: "15:04:05",
		Caller:     1,
		Writer: &log.ConsoleWriter{
			ColorOutput:    true,
			QuoteString:    true,
			EndWithMessage: true,
		},
	}
}

log.Debug().Int("everything", 42).Str("foo", "bar").Msg("hello world")
log.Info().Int("everything", 42).Str("foo", "bar").Msg("hello world")
log.Warn().Int("everything", 42).Str("foo", "bar").Msg("hello world")
log.Error().Err(errors.New("an error")).Msg("hello world")

Pretty logging

Note: pretty logging also works on the Windows console

Formatting Console Writer

To log with a user-defined format (e.g. glog style), use ConsoleWriter.Formatter. playground

package main

import (
	"fmt"
	"io"

	"github.com/phuslu/log"
)

type Glog struct {
	Logger log.Logger
}

func (l *Glog) Infof(fmt string, a ...any) { l.Logger.Info().Msgf(fmt, a...) }

func (l *Glog) Warnf(fmt string, a ...any) { l.Logger.Warn().Msgf(fmt, a...) }

func (l *Glog) Errorf(fmt string, a ...any) { l.Logger.Error().Msgf(fmt, a...) }

var glog = &Glog{log.Logger{
	Level:      log.InfoLevel,
	Caller:     2,
	TimeFormat: "0102 15:04:05.999999",
	Writer: &log.ConsoleWriter{Formatter: func(w io.Writer, a *log.FormatterArgs) (int, error) {
		return fmt.Fprintf(w, "%c%s %s %s] %s\n%s", a.Level[0]-32, a.Time, a.Goid, a.Caller, a.Message, a.Stack)
	}},
}}

func main() {
	glog.Infof("hello glog %s", "Info")
	glog.Warnf("hello glog %s", "Warn")
	glog.Errorf("hello glog %s", "Error")
}

// Output:
// I0725 09:59:57.503246 19 console_test.go:183] hello glog Info
// W0725 09:59:57.504247 19 console_test.go:184] hello glog Warn
// E0725 09:59:57.504247 19 console_test.go:185] hello glog Error

Formatting Logfmt output

To log in logfmt format, also use ConsoleWriter.Formatter. playground

package main

import (
	"io"
	"os"

	"github.com/phuslu/log"
)

func main() {
	log.DefaultLogger = log.Logger{
		Level:      log.InfoLevel,
		Caller:     1,
		TimeField:  "ts",
		TimeFormat: log.TimeFormatUnixWithMs,
		Writer: &log.ConsoleWriter{
			Formatter: log.LogfmtFormatter{"ts"}.Formatter,
			Writer:    io.MultiWriter(os.Stdout, os.Stderr),
		},
	}

	log.Info().Str("foo", "bar").Int("no", 42).Msgf("a logfmt %s", "info")
}

// Output:
// ts=1257894000.000 level=info goid=1 caller="prog.go:20" foo="bar" no=42 "a logfmt info"
// ts=1257894000.000 level=info goid=1 caller="prog.go:20" foo="bar" no=42 "a logfmt info"

Rotating File Writer

To log to a daily-rotating file, use FileWriter. playground

package main

import (
	"time"

	"github.com/phuslu/log"
	"github.com/robfig/cron/v3"
)

func main() {
	logger := log.Logger{
		Level: log.ParseLevel("info"),
		Writer: &log.FileWriter{
			Filename:     "logs/main.log",
			FileMode:     0600,
			MaxSize:      100 * 1024 * 1024,
			MaxBackups:   7,
			EnsureFolder: true,
			LocalTime:    true,
		},
	}

	runner := cron.New(cron.WithLocation(time.Local))
	runner.AddFunc("0 0 * * *", func() { logger.Writer.(*log.FileWriter).Rotate() })
	go runner.Run()

	for {
		time.Sleep(time.Second)
		logger.Info().Msg("hello world")
	}
}

Rotating File Writer within a total size

To rotate the log file hourly and keep the total size within a limit, use FileWriter.Cleaner.

package main

import (
	"os"
	"path/filepath"
	"time"

	"github.com/phuslu/log"
	"github.com/robfig/cron/v3"
)

func main() {
	logger := log.Logger{
		Level: log.ParseLevel("info"),
		Writer: &log.FileWriter{
			Filename: "main.log",
			MaxSize:  500 * 1024 * 1024,
			Cleaner:  func(filename string, maxBackups int, matches []os.FileInfo) {
				var dir = filepath.Dir(filename)
				var total int64
				for i := len(matches) - 1; i >= 0; i-- {
					total += matches[i].Size()
					if total > 5*1024*1024*1024 {
						os.Remove(filepath.Join(dir, matches[i].Name()))
					}
				}
			},
		},
	}

	runner := cron.New(cron.WithLocation(time.UTC))
	runner.AddFunc("0 * * * *", func() { logger.Writer.(*log.FileWriter).Rotate() })
	go runner.Run()

	for {
		time.Sleep(time.Second)
		logger.Info().Msg("hello world")
	}
}

Rotating File Writer with compression

To rotate the log file hourly and compress backups after rotation, use FileWriter.Cleaner.

package main

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
	"time"

	"github.com/phuslu/log"
	"github.com/robfig/cron/v3"
)

func main() {
	logger := log.Logger{
		Level: log.ParseLevel("info"),
		Writer: &log.FileWriter{
			Filename: "main.log",
			MaxSize:  500 * 1024 * 1024,
			Cleaner:  func(filename string, maxBackups int, matches []os.FileInfo) {
				var dir = filepath.Dir(filename)
				for i, fi := range matches {
					filename := filepath.Join(dir, fi.Name())
					switch {
					case i > maxBackups:
						os.Remove(filename)
					case !strings.HasSuffix(filename, ".gz"):
						go exec.Command("nice", "gzip", filename).Run()
					}
				}
			},
		},
	}

	runner := cron.New(cron.WithLocation(time.UTC))
	runner.AddFunc("0 * * * *", func() { logger.Writer.(*log.FileWriter).Rotate() })
	go runner.Run()

	for {
		time.Sleep(time.Second)
		logger.Info().Msg("hello world")
	}
}

Random Sample Logger:

To log only 5% of entries, use the idiom below.

if log.Fastrandn(100) < 5 {
	log.Log().Msg("hello world")
}

Multiple Dispatching Writer

To log to different writers by different levels, use MultiLevelWriter.

log.DefaultLogger.Writer = &log.MultiLevelWriter{
	InfoWriter:    &log.FileWriter{Filename: "main.INFO", MaxSize: 100<<20},
	WarnWriter:    &log.FileWriter{Filename: "main.WARNING", MaxSize: 100<<20},
	ErrorWriter:   &log.FileWriter{Filename: "main.ERROR", MaxSize: 100<<20},
	ConsoleWriter: &log.ConsoleWriter{ColorOutput: true},
	ConsoleLevel:  log.ErrorLevel,
}

log.Info().Int("number", 42).Str("foo", "bar").Msg("a info log")
log.Warn().Int("number", 42).Str("foo", "bar").Msg("a warn log")
log.Error().Int("number", 42).Str("foo", "bar").Msg("a error log")

Multiple Entry Writer

To send each entry to multiple writers, use MultiEntryWriter.

log.DefaultLogger.Writer = &log.MultiEntryWriter{
	&log.ConsoleWriter{ColorOutput: true},
	&log.FileWriter{Filename: "main.log", MaxSize: 100<<20},
	&log.EventlogWriter{Source: ".NET Runtime", ID: 1000},
}

log.Info().Int("number", 42).Str("foo", "bar").Msg("a info log")

Multiple IO Writer

To log to multiple io.Writers, similar to io.MultiWriter, use the idiom below. playground

log.DefaultLogger.Writer = &log.MultiIOWriter{
	os.Stdout,
	&log.FileWriter{Filename: "main.log", MaxSize: 100<<20},
}

log.Info().Int("number", 42).Str("foo", "bar").Msg("a info log")

Multiple Combined Logger:

To log to several different loggers as needed, use the idiom below. playground

package main

import (
	"github.com/phuslu/log"
)

var logger = struct {
	Console log.Logger
	Access  log.Logger
	Data    log.Logger
}{
	Console: log.Logger{
		TimeFormat: "15:04:05",
		Caller:     1,
		Writer: &log.ConsoleWriter{
			ColorOutput:    true,
			EndWithMessage: true,
		},
	},
	Access: log.Logger{
		Level: log.InfoLevel,
		Writer: &log.FileWriter{
			Filename:   "access.log",
			MaxSize:    50 * 1024 * 1024,
			MaxBackups: 7,
			LocalTime:  false,
		},
	},
	Data: log.Logger{
		Level: log.InfoLevel,
		Writer: &log.FileWriter{
			Filename:   "data.log",
			MaxSize:    50 * 1024 * 1024,
			MaxBackups: 7,
			LocalTime:  false,
		},
	},
}

func main() {
	logger.Console.Info().Msgf("hello world")
	logger.Access.Log().Msgf("handle request")
	logger.Data.Log().Msgf("some data")
}

SyslogWriter

SyslogWriter is a memory-efficient, cross-platform, dependency-free syslog writer that outperforms other structured logging libraries.

package main

import (
	"net"
	"time"

	"github.com/phuslu/log"
)

func main() {
	go func() {
		ln, _ := net.Listen("tcp", "127.0.0.1:1601")
		for {
			conn, _ := ln.Accept()
			go func(c net.Conn) {
				b := make([]byte, 8192)
				n, _ := c.Read(b)
				println(string(b[:n]))
			}(conn)
		}
	}()

	syslog := log.Logger{
		Level:      log.InfoLevel,
		TimeField:  "ts",
		TimeFormat: log.TimeFormatUnixMs,
		Writer: &log.SyslogWriter{
			Network: "tcp",            // "unixgram",
			Address: "127.0.0.1:1601", // "/run/systemd/journal/syslog",
			Tag:     "",
			Marker:  "@cee:",
			Dial:    net.Dial,
		},
	}

	syslog.Info().Str("foo", "bar").Int("an", 42).Msg("a syslog info")
	syslog.Warn().Str("foo", "bar").Int("an", 42).Msg("a syslog warn")
	time.Sleep(2 * time.Second)
}

// Output:
// <6>2022-07-24T18:48:15+08:00 127.0.0.1:59277 [11516]: @cee:{"ts":1658659695428,"level":"info","foo":"bar","an":42,"message":"a syslog info"}
// <4>2022-07-24T18:48:15+08:00 127.0.0.1:59277 [11516]: @cee:{"ts":1658659695429,"level":"warn","foo":"bar","an":42,"message":"a syslog warn"}

JournalWriter

To log to the Linux systemd journal, use JournalWriter.

log.DefaultLogger.Writer = &log.JournalWriter{
	JournalSocket: "/run/systemd/journal/socket",
}

log.Info().Int("number", 42).Str("foo", "bar").Msg("hello world")

EventlogWriter

To log to the Windows event log, use EventlogWriter.

log.DefaultLogger.Writer = &log.EventlogWriter{
	Source: ".NET Runtime",
	ID:     1000,
}

log.Info().Int("number", 42).Str("foo", "bar").Msg("hello world")

AsyncWriter

To log asynchronously for performance stability, use AsyncWriter.

logger := log.Logger{
	Level:  log.InfoLevel,
	Writer: &log.AsyncWriter{
		ChannelSize: 100,
		Writer:      &log.FileWriter{
			Filename:   "main.log",
			FileMode:   0600,
			MaxSize:    50*1024*1024,
			MaxBackups: 7,
			LocalTime:  false,
		},
	},
}

logger.Info().Int("number", 42).Str("foo", "bar").Msg("a async info log")
logger.Warn().Int("number", 42).Str("foo", "bar").Msg("a async warn log")
logger.Writer.(io.Closer).Close()

Note: To flush data and quit safely, call AsyncWriter.Close() explicitly.

Stdlib Log Adapter

Using wrapped loggers for stdlog. playground

package main

import (
	stdlog "log"
	"os"

	"github.com/phuslu/log"
)

func main() {
	var logger *stdlog.Logger = (&log.Logger{
		Level:      log.InfoLevel,
		TimeField:  "date",
		TimeFormat: "2006-01-02",
		Caller:     1,
		Context:    log.NewContext(nil).Str("logger", "mystdlog").Int("myid", 42).Value(),
		Writer:     &log.IOWriter{os.Stdout},
	}).Std("", 0)

	logger.Print("hello from stdlog Print")
	logger.Println("hello from stdlog Println")
	logger.Printf("hello from stdlog %s", "Printf")
}

slog Adapter

Using wrapped loggers for slog. playground

package main

import (
	"log/slog"

	"github.com/phuslu/log"
)

func main() {
	var logger *slog.Logger = (&log.Logger{
		Level:      log.InfoLevel,
		TimeField:  "date",
		TimeFormat: "2006-01-02",
		Caller:     1,
	}).Slog()

	logger = logger.With("logger", "a_test_slog").With("everything", 42)

	logger.Info("hello from slog Info")
	logger.Warn("hello from slog Warn")
	logger.Error("hello from slog Error")
}

Third-party Logger Interceptor

  • logr: https://github.com/phuslu/log-contrib/tree/master/logr
  • gin: https://github.com/phuslu/log-contrib/tree/master/gin
  • fiber: https://github.com/phuslu/log-contrib/tree/master/fiber
  • gorm: https://github.com/phuslu/log-contrib/tree/master/gorm
  • grpc: https://github.com/phuslu/log-contrib/tree/master/grpc
  • grpcgateway: https://github.com/phuslu/log-contrib/tree/master/grpcgateway

User-defined Data Structure

To log a user-defined struct efficiently, implement MarshalObject. playground

package main

import (
	"github.com/phuslu/log"
)

type User struct {
	ID   int
	Name string
	Pass string
}

func (u *User) MarshalObject(e *log.Entry) {
	e.Int("id", u.ID).Str("name", u.Name).Str("password", "***")
}

func main() {
	log.Info().Object("user", &User{1, "neo", "123456"}).Msg("")
	log.Info().EmbedObject(&User{2, "john", "abc"}).Msg("")
}

// Output:
//   {"time":"2020-07-12T05:03:43.949Z","level":"info","user":{"id":1,"name":"neo","password":"***"}}
//   {"time":"2020-07-12T05:03:43.949Z","level":"info","id":2,"name":"john","password":"***"}

Contextual Fields

To add preserved key:value pairs to each entry, use NewContext. playground

logger := log.Logger{
	Level:   log.InfoLevel,
	Context: log.NewContext(nil).Str("ctx", "some_ctx").Value(),
}

logger.Debug().Int("no0", 0).Msg("zero")
logger.Info().Int("no1", 1).Msg("first")
logger.Info().Int("no2", 2).Msg("second")

// Output:
//   {"time":"2020-07-12T05:03:43.949Z","level":"info","ctx":"some_ctx","no1":1,"message":"first"}
//   {"time":"2020-07-12T05:03:43.949Z","level":"info","ctx":"some_ctx","no2":2,"message":"second"}

You can make a copy of the logger and add contextual fields. playground

package main

import (
	"github.com/phuslu/log"
)

func main() {
	sublogger := log.DefaultLogger
	sublogger.Level = log.InfoLevel
	sublogger.Context = log.NewContext(nil).Str("ctx", "some_ctx").Value()

	sublogger.Debug().Int("no0", 0).Msg("zero")
	sublogger.Info().Int("no1", 1).Msg("first")
	sublogger.Info().Int("no2", 2).Msg("second")
	log.Debug().Int("no3", 3).Msg("no context")
}

// Output:
//   {"time":"2021-06-14T06:36:42.904+02:00","level":"info","ctx":"some_ctx","no1":1,"message":"first"}
//   {"time":"2021-06-14T06:36:42.905+02:00","level":"info","ctx":"some_ctx","no2":2,"message":"second"}
//   {"time":"2021-06-14T06:36:42.906+02:00","level":"debug","no3":3,"message":"no context"}

High Performance

The most common benchmarks (disable/normal/caller/printf/interface) against slog/zap/zerolog:

// go test -v -cpu=4 -run=none -bench=. -benchtime=10s -benchmem bench_test.go
package main

import (
	"io"
	"log"
	"log/slog"
	"testing"

	phuslog "github.com/phuslu/log"
	"github.com/rs/zerolog"
	"go.uber.org/zap"
	"go.uber.org/zap/zapcore"
)

const msg = "The quick brown fox jumps over the lazy dog"
var obj = struct {Rate string; Low int; High float32}{"15", 16, 123.2}

func BenchmarkSlogDisable(b *testing.B) {
	logger := slog.New(slog.NewJSONHandler(io.Discard, nil))
	for i := 0; i < b.N; i++ {
		logger.Debug(msg, "rate", "15", "low", 16, "high", 123.2)
	}
}

func BenchmarkSlogNormal(b *testing.B) {
	logger := slog.New(slog.NewJSONHandler(io.Discard, nil))
	for i := 0; i < b.N; i++ {
		logger.Info(msg, "rate", "15", "low", 16, "high", 123.2)
	}
}

func BenchmarkSlogPrintf(b *testing.B) {
	slog.SetDefault(slog.New(slog.NewJSONHandler(io.Discard, nil)))
	for i := 0; i < b.N; i++ {
		log.Printf("rate=%s low=%d high=%f msg=%s", "15", 16, 123.2, msg)
	}
}

func BenchmarkSlogCaller(b *testing.B) {
	logger := slog.New(slog.NewJSONHandler(io.Discard, &slog.HandlerOptions{AddSource: true}))
	for i := 0; i < b.N; i++ {
		logger.Info(msg, "rate", "15", "low", 16, "high", 123.2)
	}
}

func BenchmarkSlogInterface(b *testing.B) {
	logger := slog.New(slog.NewJSONHandler(io.Discard, nil))
	for i := 0; i < b.N; i++ {
		logger.Info(msg, "object", &obj)
	}
}

func BenchmarkZapDisable(b *testing.B) {
	logger := zap.New(zapcore.NewCore(
		zapcore.NewJSONEncoder(zap.NewProductionEncoderConfig()),
		zapcore.AddSync(io.Discard),
		zapcore.InfoLevel,
	)).Sugar()
	for i := 0; i < b.N; i++ {
		logger.Debugw(msg, "rate", "15", "low", 16, "high", 123.2)
	}
}

func BenchmarkZapNormal(b *testing.B) {
	logger := zap.New(zapcore.NewCore(
		zapcore.NewJSONEncoder(zap.NewProductionEncoderConfig()),
		zapcore.AddSync(io.Discard),
		zapcore.InfoLevel,
	)).Sugar()
	for i := 0; i < b.N; i++ {
		logger.Infow(msg, "rate", "15", "low", 16, "high", 123.2)
	}
}

func BenchmarkZapPrintf(b *testing.B) {
	logger := zap.New(zapcore.NewCore(
		zapcore.NewJSONEncoder(zap.NewProductionEncoderConfig()),
		zapcore.AddSync(io.Discard),
		zapcore.InfoLevel,
	)).Sugar()
	for i := 0; i < b.N; i++ {
		logger.Infof("rate=%s low=%d high=%f msg=%s", "15", 16, 123.2, msg)
	}
}

func BenchmarkZapCaller(b *testing.B) {
	logger := zap.New(zapcore.NewCore(
		zapcore.NewJSONEncoder(zap.NewProductionEncoderConfig()),
		zapcore.AddSync(io.Discard),
		zapcore.InfoLevel),
		zap.AddCaller(),
	).Sugar()
	for i := 0; i < b.N; i++ {
		logger.Infow(msg, "rate", "15", "low", 16, "high", 123.2)
	}
}

func BenchmarkZapInterface(b *testing.B) {
	logger := zap.New(zapcore.NewCore(
		zapcore.NewJSONEncoder(zap.NewProductionEncoderConfig()),
		zapcore.AddSync(io.Discard),
		zapcore.InfoLevel,
	)).Sugar()
	for i := 0; i < b.N; i++ {
		logger.Infow(msg, "object", &obj)
	}
}

func BenchmarkZeroLogDisable(b *testing.B) {
	zerolog.SetGlobalLevel(zerolog.InfoLevel)
	logger := zerolog.New(io.Discard).With().Timestamp().Logger()
	for i := 0; i < b.N; i++ {
		logger.Debug().Str("rate", "15").Int("low", 16).Float32("high", 123.2).Msg(msg)
	}
}

func BenchmarkZeroLogNormal(b *testing.B) {
	logger := zerolog.New(io.Discard).With().Timestamp().Logger()
	for i := 0; i < b.N; i++ {
		logger.Info().Str("rate", "15").Int("low", 16).Float32("high", 123.2).Msg(msg)
	}
}

func BenchmarkZeroLogPrintf(b *testing.B) {
	logger := zerolog.New(io.Discard).With().Timestamp().Logger()
	for i := 0; i < b.N; i++ {
		logger.Info().Msgf("rate=%s low=%d high=%f msg=%s", "15", 16, 123.2, msg)
	}
}

func BenchmarkZeroLogCaller(b *testing.B) {
	logger := zerolog.New(io.Discard).With().Caller().Timestamp().Logger()
	for i := 0; i < b.N; i++ {
		logger.Info().Str("rate", "15").Int("low", 16).Float32("high", 123.2).Msg(msg)
	}
}

func BenchmarkZeroLogInterface(b *testing.B) {
	logger := zerolog.New(io.Discard).With().Timestamp().Logger()
	for i := 0; i < b.N; i++ {
		logger.Info().Interface("object", &obj).Msg(msg)
	}
}

func BenchmarkPhusLogDisable(b *testing.B) {
	logger := phuslog.Logger{Level: phuslog.InfoLevel, Writer: phuslog.IOWriter{io.Discard}}
	for i := 0; i < b.N; i++ {
		logger.Debug().Str("rate", "15").Int("low", 16).Float32("high", 123.2).Msg(msg)
	}
}

func BenchmarkPhusLogNormal(b *testing.B) {
	logger := phuslog.Logger{Writer: phuslog.IOWriter{io.Discard}}
	for i := 0; i < b.N; i++ {
		logger.Info().Str("rate", "15").Int("low", 16).Float32("high", 123.2).Msg(msg)
	}
}

func BenchmarkPhusLogPrintf(b *testing.B) {
	logger := phuslog.Logger{Writer: phuslog.IOWriter{io.Discard}}
	for i := 0; i < b.N; i++ {
		logger.Info().Msgf("rate=%s low=%d high=%f msg=%s", "15", 16, 123.2, msg)
	}
}

func BenchmarkPhusLogCaller(b *testing.B) {
	logger := phuslog.Logger{Caller: 1, Writer: phuslog.IOWriter{io.Discard}}
	for i := 0; i < b.N; i++ {
		logger.Info().Str("rate", "15").Int("low", 16).Float32("high", 123.2).Msg(msg)
	}
}

func BenchmarkPhusLogInterface(b *testing.B) {
	logger := phuslog.Logger{Writer: phuslog.IOWriter{io.Discard}}
	for i := 0; i < b.N; i++ {
		logger.Info().Interface("object", &obj).Msg(msg)
	}
}

A performance result is shown below; for daily benchmark results, see GitHub Actions.

goos: linux
goarch: amd64
cpu: AMD EPYC 7763 64-Core Processor

BenchmarkSlogDisable-4        	1000000000	         8.404 ns/op	       0 B/op	       0 allocs/op
BenchmarkSlogNormal-4         	 9004674	      1332 ns/op	     120 B/op	       3 allocs/op
BenchmarkSlogPrintf-4         	11932806	      1011 ns/op	      80 B/op	       1 allocs/op
BenchmarkSlogCaller-4         	 5491897	      2184 ns/op	     688 B/op	       9 allocs/op
BenchmarkSlogInterface-4      	 9200828	      1299 ns/op	     112 B/op	       2 allocs/op

BenchmarkZapDisable-4         	1000000000	         8.099 ns/op	       0 B/op	       0 allocs/op
BenchmarkZapNormal-4          	13027324	       912.5 ns/op	     384 B/op	       1 allocs/op
BenchmarkZapPrintf-4          	12789286	       949.7 ns/op	      80 B/op	       1 allocs/op
BenchmarkZapCaller-4          	 7093309	      1681 ns/op	     632 B/op	       3 allocs/op
BenchmarkZapInterface-4       	11350916	      1046 ns/op	     224 B/op	       2 allocs/op

BenchmarkZeroLogDisable-4     	1000000000	         9.929 ns/op	       0 B/op	       0 allocs/op
BenchmarkZeroLogNormal-4      	36481122	       327.4 ns/op	       0 B/op	       0 allocs/op
BenchmarkZeroLogPrintf-4      	17880010	       664.8 ns/op	      80 B/op	       1 allocs/op
BenchmarkZeroLogCaller-4      	 9288362	      1293 ns/op	     304 B/op	       4 allocs/op
BenchmarkZeroLogInterface-4   	19508446	       614.7 ns/op	      48 B/op	       1 allocs/op

BenchmarkPhusLogDisable-4     	1000000000	         9.610 ns/op	       0 B/op	       0 allocs/op
BenchmarkPhusLogNormal-4      	51244849	       236.8 ns/op	       0 B/op	       0 allocs/op
BenchmarkPhusLogPrintf-4      	23559231	       509.0 ns/op	       0 B/op	       0 allocs/op
BenchmarkPhusLogCaller-4      	23906091	       508.0 ns/op	       0 B/op	       0 allocs/op
BenchmarkPhusLogInterface-4   	21407361	       558.8 ns/op	       0 B/op	       0 allocs/op

PASS
ok  	bench	246.926s

As slog handlers, comparing with the stdlib/zap/zerolog implementations:

// go test -v -cpu=1 -run=none -bench=. -benchtime=10s -benchmem bench_test.go
package bench

import (
	"io"
	"log/slog"
	"testing"

	"github.com/phsym/zeroslog"
	phuslog "github.com/phuslu/log"
	"go.uber.org/zap"
	"go.uber.org/zap/exp/zapslog"
	"go.uber.org/zap/zapcore"
)

const msg = "The quick brown fox jumps over the lazy dog"

func BenchmarkSlogNormalStd(b *testing.B) {
	logger := slog.New(slog.NewJSONHandler(io.Discard, nil))
	for i := 0; i < b.N; i++ {
		logger.Info(msg, "rate", "15", "low", 16, "high", 123.2)
	}
}

func BenchmarkSlogGroupStd(b *testing.B) {
	logger := slog.New(slog.NewJSONHandler(io.Discard, nil)).With("a", 1).WithGroup("g").With("b", 2)
	for i := 0; i < b.N; i++ {
		logger.Info(msg, "rate", "15", "low", 16, "high", 123.2)
	}
}

func BenchmarkSlogNormalZap(b *testing.B) {
	logcore := zapcore.NewCore(
		zapcore.NewJSONEncoder(zap.NewProductionEncoderConfig()),
		zapcore.AddSync(io.Discard),
		zapcore.InfoLevel,
	)
	logger := slog.New(zapslog.NewHandler(logcore, nil))
	for i := 0; i < b.N; i++ {
		logger.Info(msg, "rate", "15", "low", 16, "high", 123.2)
	}
}

func BenchmarkSlogGroupZap(b *testing.B) {
	logcore := zapcore.NewCore(
		zapcore.NewJSONEncoder(zap.NewProductionEncoderConfig()),
		zapcore.AddSync(io.Discard),
		zapcore.InfoLevel,
	)
	logger := slog.New(zapslog.NewHandler(logcore, nil)).With("a", 1).WithGroup("g").With("b", 2)
	for i := 0; i < b.N; i++ {
		logger.Info(msg, "rate", "15", "low", 16, "high", 123.2)
	}
}

func BenchmarkSlogNormalZerolog(b *testing.B) {
	logger := slog.New(zeroslog.NewJsonHandler(io.Discard, &zeroslog.HandlerOptions{Level: slog.LevelInfo}))
	for i := 0; i < b.N; i++ {
		logger.Info(msg, "rate", "15", "low", 16, "high", 123.2)
	}
}

func BenchmarkSlogGroupZerolog(b *testing.B) {
	logger := slog.New(zeroslog.NewJsonHandler(io.Discard, &zeroslog.HandlerOptions{Level: slog.LevelInfo})).With("a", 1).WithGroup("g").With("b", 2)
	for i := 0; i < b.N; i++ {
		logger.Info(msg, "rate", "15", "low", 16, "high", 123.2)
	}
}

func BenchmarkSlogNormalPhuslog(b *testing.B) {
	logger := slog.New((&phuslog.Logger{Writer: phuslog.IOWriter{io.Discard}}).Slog().Handler())
	for i := 0; i < b.N; i++ {
		logger.Info(msg, "rate", "15", "low", 16, "high", 123.2)
	}
}

func BenchmarkSlogGroupPhuslog(b *testing.B) {
	logger := slog.New((&phuslog.Logger{Writer: phuslog.IOWriter{io.Discard}}).Slog().Handler()).With("a", 1).WithGroup("g").With("b", 2)
	for i := 0; i < b.N; i++ {
		logger.Info(msg, "rate", "15", "low", 16, "high", 123.2)
	}
}

func BenchmarkSlogNormalPhuslogStd(b *testing.B) {
	logger := slog.New(phuslog.SlogNewJSONHandler(io.Discard, nil))
	for i := 0; i < b.N; i++ {
		logger.Info(msg, "rate", "15", "low", 16, "high", 123.2)
	}
}

func BenchmarkSlogGroupPhuslogStd(b *testing.B) {
	logger := slog.New(phuslog.SlogNewJSONHandler(io.Discard, nil)).With("a", 1).WithGroup("g").With("b", 2)
	for i := 0; i < b.N; i++ {
		logger.Info(msg, "rate", "15", "low", 16, "high", 123.2)
	}
}

A performance result is shown below; for daily benchmark results, see GitHub Actions.

goos: linux
goarch: amd64
cpu: AMD EPYC 7763 64-Core Processor

BenchmarkSlogNormalStd        	 4408033	      1383 ns/op	     120 B/op	       3 allocs/op
BenchmarkSlogGroupStd         	 4298301	      1398 ns/op	     120 B/op	       3 allocs/op

BenchmarkSlogNormalZap        	 4753046	      1273 ns/op	     192 B/op	       1 allocs/op
BenchmarkSlogGroupZap         	 4724052	      1257 ns/op	     192 B/op	       1 allocs/op

BenchmarkSlogNormalZerolog    	 7548705	       789.2 ns/op	       0 B/op	       0 allocs/op
BenchmarkSlogGroupZerolog     	 5592763	      1076 ns/op	     288 B/op	       1 allocs/op

BenchmarkSlogNormalPhuslog    	 8318289	       720.1 ns/op	       0 B/op	       0 allocs/op
BenchmarkSlogGroupPhuslog     	 8175591	       737.8 ns/op	       0 B/op	       0 allocs/op

BenchmarkSlogNormalPhuslogStd 	 7978171	       755.0 ns/op	       0 B/op	       0 allocs/op
BenchmarkSlogGroupPhuslogStd  	 7728466	       775.4 ns/op	       0 B/op	       0 allocs/op

PASS
ok  	bench	70.366s

In summary, phuslog offers a blend of low latency, minimal memory usage, and efficient logging across various scenarios, making it an excellent option for high-performance logging in Go applications.

A Real World Example

The example starts a GeoIP HTTP server that supports changing the log level dynamically.

package main

import (
	"encoding/json"
	"fmt"
	"net"
	"net/http"
	"os"

	"github.com/phuslu/iploc"
	"github.com/phuslu/log"
)

type Config struct {
	Listen struct {
		Tcp string
	}
	Log struct {
		Level   string
		Maxsize int64
		Backups int
	}
}

type Handler struct {
	Config       *Config
	AccessLogger log.Logger
}

func (h *Handler) ServeHTTP(rw http.ResponseWriter, req *http.Request) {
	reqID := log.NewXID()
	remoteIP, _, _ := net.SplitHostPort(req.RemoteAddr)
	geo := iploc.Country(net.ParseIP(remoteIP))

	h.AccessLogger.Log().
		Xid("req_id", reqID).
		Str("host", req.Host).
		Bytes("geo", geo).
		Str("remote_ip", remoteIP).
		Str("request_uri", req.RequestURI).
		Str("user_agent", req.UserAgent()).
		Str("referer", req.Referer()).
		Msg("access log")

	switch req.RequestURI {
	case "/debug", "/info", "/warn", "/error":
		log.DefaultLogger.SetLevel(log.ParseLevel(req.RequestURI[1:]))
	default:
		fmt.Fprintf(rw, `{"req_id":"%s","ip":"%s","geo":"%s"}`, reqID, remoteIP, geo)
	}
}

func main() {
	config := new(Config)
	err := json.Unmarshal([]byte(`{
		"listen": {
			"tcp": ":8080"
		},
		"log": {
			"level": "debug",
			"maxsize": 1073741824,
			"backups": 5
		}
	}`), config)
	if err != nil {
		log.Fatal().Msgf("json.Unmarshal error: %+v", err)
	}

	handler := &Handler{
		Config: config,
		AccessLogger: log.Logger{
			Writer: &log.FileWriter{
				Filename:   "access.log",
				MaxSize:    config.Log.Maxsize,
				MaxBackups: config.Log.Backups,
				LocalTime:  true,
			},
		},
	}

	if log.IsTerminal(os.Stderr.Fd()) {
		log.DefaultLogger = log.Logger{
			Level:      log.ParseLevel(config.Log.Level),
			Caller:     1,
			TimeFormat: "15:04:05",
			Writer: &log.ConsoleWriter{
				ColorOutput:    true,
				EndWithMessage: true,
			},
		}
		handler.AccessLogger = log.DefaultLogger
	} else {
		log.DefaultLogger = log.Logger{
			Level: log.ParseLevel(config.Log.Level),
			Writer: &log.FileWriter{
				Filename:   "main.log",
				MaxSize:    config.Log.Maxsize,
				MaxBackups: config.Log.Backups,
				LocalTime:  true,
			},
		}
	}

	server := &http.Server{
		Addr:     config.Listen.Tcp,
		ErrorLog: log.DefaultLogger.Std(log.ErrorLevel, nil, "", 0),
		Handler:  handler,
	}

	log.Fatal().Err(server.ListenAndServe()).Msg("listen failed")
}

Acknowledgment

This log is heavily inspired by zerolog, glog, gjson and lumberjack.

# Packages

No description provided by the author
No description provided by the author
No description provided by the author

# Functions

Debug starts a new message with debug level.
Error starts a new message with error level.
Fastrandn returns a pseudorandom uint32 in [0,n).
Fatal starts a new message with fatal level.
Goid returns the current goroutine id.
Info starts a new message with info level.
IsTerminal returns whether the given file descriptor is a terminal.
NewContext starts a new contextual entry.
No description provided by the author
NewNode returns a new snowflake node that can be used to generate snowflake IDs.
Panic starts a new message with panic level.
ParseBase2 converts a Base2 string into a snowflake ID.
ParseBase32 parses a base32 []byte into a snowflake ID. NOTE: There are many different base32 implementations, so be careful when doing any interoperation.
ParseBase36 converts a Base36 string into a snowflake ID.
ParseBase58 parses a base58 []byte into a snowflake ID.
ParseBase64 converts a base64 string into a snowflake ID.
ParseBytes converts a byte slice into a snowflake ID.
ParseInt64 converts an int64 into a snowflake ID.
ParseIntBytes converts an array of bytes encoded as a big endian integer into a snowflake ID.
ParseLevel converts a level string into a log Level value.
ParseString converts a string into a snowflake ID.
Printf sends a log entry without extra field.
No description provided by the author
Trace starts a new message with trace level.
Warn starts a new message with warning level.
No description provided by the author

# Constants

DebugLevel defines debug log level.
ErrorLevel defines error log level.
FatalLevel defines fatal log level.
InfoLevel defines info log level.
PanicLevel defines panic log level.
TimeFormatUnix defines a time format that makes time fields to be serialized as Unix timestamp integers.
TimeFormatUnixMs defines a time format that makes time fields to be serialized as Unix timestamp integers in milliseconds.
TimeFormatUnixWithMs defines a time format that makes time fields to be serialized as Unix timestamp floats.
TraceLevel defines trace log level.
WarnLevel defines warn log level.

# Variables

DefaultLogger is the global logger.
Epoch is set to the twitter snowflake epoch of Nov 04 2010 01:42:54 UTC in milliseconds. You may customize this to set a different epoch for your application.
ErrInvalidBase32 is returned by ParseBase32 when given an invalid []byte.
ErrInvalidBase58 is returned by ParseBase58 when given an invalid []byte.
NodeBits holds the number of bits to use for Node. Remember, you have a total of 22 bits to share between Node/Step.
StepBits holds the number of bits to use for Step. Remember, you have a total of 22 bits to share between Node/Step.

# Structs

AsyncWriter is a Writer that writes asynchronously.
ConsoleWriter parses the JSON input and writes it in a colorized, human-friendly format to Writer.
Entry
Entry represents a log entry.
FileWriter is a Writer that writes to the specified filename.
FormatterArgs is a struct parsed from the JSON input.
IOWriteCloser wraps an io.WriteCloser to Writer.
IOWriter wraps an io.Writer to Writer.
JournalWriter is a Writer that writes logs to journald.
A JSONSyntaxError is returned from UnmarshalJSON if an invalid ID is provided.
No description provided by the author
A Logger represents an active logging object that generates lines of JSON output to an io.Writer.
MultiLevelWriter is a Writer that logs to different writers by level.
A Node struct holds the basic information needed for a snowflake generator node.
SyslogWriter is a Writer that writes logs to a syslog server.
TSVEntry represents a tsv log entry.
TSVLogger represents an active logging object that generates lines of TSV output to an io.Writer.

# Interfaces

ObjectMarshaler provides a strongly-typed and encoding-agnostic interface to be implemented by types used with Entry's Object methods.
Writer defines an entry writer interface.

# Type aliases

Context represents contextual fields.
Fields type, used to pass to `Fields`.
An ID is a custom type used for a snowflake ID.
Level defines log levels.
MultiEntryWriter is an array of Writers that logs to multiple writers.
MultiIOWriter is an array of io.Writers that logs to multiple writers.
MultiWriter is an alias for MultiLevelWriter.
The WriterFunc type is an adapter to allow the use of ordinary functions as log writers.