fanunmarshal

A high-performance concurrent JSON unmarshalling library for Go that processes slices of byte data using worker pools.

Overview

fanunmarshal is designed for scenarios where you need to unmarshal large amounts of JSON data efficiently. It uses a configurable worker pool pattern to process [][]byte data concurrently, providing significant performance improvements over sequential processing.

Perfect for:

  • Processing Redis MGet responses (see the sketch after this list)
  • Bulk JSON data processing
  • High-throughput API response handling
  • Large dataset transformations
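
For example, here is a minimal sketch of feeding Redis MGet responses into fanunmarshal. It assumes the go-redis v9 client; the key names and the CacheEntry type are illustrative, not part of this library.

package main

import (
    "context"
    "fmt"

    "github.com/redis/go-redis/v9"
    "github.com/thisisdevelopment/fanunmarshal"
)

// CacheEntry is an illustrative struct; shape it after your own cached JSON.
type CacheEntry struct {
    ID   string `json:"id"`
    Name string `json:"name"`
}

func main() {
    ctx := context.Background()
    rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

    // MGet returns []interface{}; hits come back as strings, misses as nil.
    values, err := rdb.MGet(ctx, "entry:1", "entry:2", "entry:3").Result()
    if err != nil {
        panic(err)
    }

    raw := make([][]byte, 0, len(values))
    for _, v := range values {
        if s, ok := v.(string); ok {
            raw = append(raw, []byte(s))
        }
    }

    // Fan the payloads out across the worker pool.
    var expected CacheEntry
    results := fanunmarshal.New().
        WithWorkers(10).
        WithUseJsonIter().
        UnMarshalSlice(raw, &expected)

    for _, r := range results {
        fmt.Printf("Entry: %+v\n", r.(*CacheEntry))
    }
}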

Features

  • 🚀 High Performance: 2-3x faster than sequential JSON unmarshalling
  • ⚙️ Configurable Worker Pools: Set optimal worker count for your workload
  • 📦 Flexible Input: Process [][]byte slices or channels
  • 🔧 JSON Library Choice: Use stdlib json or high-performance jsoniter
  • 📈 Auto-scaling: Automatically adjust workers based on data size
  • 💾 Memory Efficient: Minimal memory overhead with worker pool pattern

Installation

go get github.com/thisisdevelopment/fanunmarshal

Quick Start

Slice Processing

package main

import (
    "fmt"
    "github.com/thisisdevelopment/fanunmarshal"
)

type User struct {
    ID   string `json:"id"`
    Name string `json:"name"`
    Age  int    `json:"age"`
}

func main() {
    // Your JSON data as [][]byte (e.g., from Redis MGet)
    jsonData := [][]byte{
        []byte(`{"id":"1","name":"Alice","age":25}`),
        []byte(`{"id":"2","name":"Bob","age":30}`),
        []byte(`{"id":"3","name":"Charlie","age":35}`),
    }

    // Configure and process
    var expected User
    results := fanunmarshal.New().
        WithWorkers(10).           // Use 10 concurrent workers
        WithUseJsonIter().         // Use jsoniter for better performance
        UnMarshalSlice(jsonData, &expected)

    // Process results
    for _, result := range results {
        user := result.(*User)
        fmt.Printf("User: %+v\n", user)
    }
}

Channel Processing

func processWithChannel() {
    jsonData := [][]byte{
        []byte(`{"id":"1","name":"Alice","age":25}`),
        []byte(`{"id":"2","name":"Bob","age":30}`),
    }

    var expected User
    fm := fanunmarshal.New().
        WithWorkers(5).
        WithUseJsonIter().
        DisableAutoScaleDown()

    // Create input channel
    inputChan := fm.MakeChan(jsonData)

    // Process via channel
    outputChan := fm.UnMarshalChan(inputChan, &expected, nil)

    // Consume results
    for result := range outputChan {
        user := result.(*User)
        fmt.Printf("User: %+v\n", user)
    }
}

API Reference

Core Types

// Main interface
type IFanUnMarshal interface {
    WithWorkers(workers uint) IFanUnMarshal
    DisableAutoScaleDown() IFanUnMarshal
    WithUseJsonIter() IFanUnMarshal
    UnMarshalSlice(data [][]byte, expected interface{}) []interface{}
    MakeChan(data [][]byte) <-chan []byte
    UnMarshalChan(pipe <-chan []byte, expected interface{}, dataLength *int) <-chan interface{}
}

Configuration Methods

  • WithWorkers(n uint): Set number of concurrent workers (default: 2)
  • WithUseJsonIter(): Use jsoniter instead of stdlib json for better performance
  • DisableAutoScaleDown(): Prevent automatic worker count adjustment based on data size

Processing Methods

  • UnMarshalSlice(data [][]byte, expected interface{}) []interface{}: Process slice synchronously
  • UnMarshalChan(pipe <-chan []byte, expected interface{}, dataLength *int) <-chan interface{}: Process a channel asynchronously (see the sketch after this list)
  • MakeChan(data [][]byte) <-chan []byte: Convert slice to channel
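
When the total number of items is known up front, you can pass it via dataLength so the worker pool can auto-scale instead of having scaling disabled. A minimal sketch, reusing the User type and jsonData slice from the Quick Start:

func processWithKnownLength(jsonData [][]byte) {
    fm := fanunmarshal.New().
        WithWorkers(8).
        WithUseJsonIter()

    // Tell the pool how many items to expect so auto-scaling can size the workers.
    n := len(jsonData)

    var expected User
    results := fm.UnMarshalChan(fm.MakeChan(jsonData), &expected, &n)

    for result := range results {
        fmt.Printf("User: %+v\n", result.(*User))
    }
}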

Important Notes

  • Result Order: Results may not maintain the original input order due to concurrent processing
  • Error Handling: Panics on JSON unmarshal errors (consider wrapping calls in recover; see the sketch after this list)
  • Memory Usage: Each worker creates a deep copy of the expected struct
  • Auto-scaling: With channels, provide dataLength or disable auto-scaling
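
The recover advice above can look like the sketch below. safeUnmarshal is an illustrative helper, not part of the library, and it assumes the fmt and fanunmarshal imports from the Quick Start:

// safeUnmarshal converts a panic from a malformed JSON payload into an error.
func safeUnmarshal(data [][]byte, expected interface{}) (results []interface{}, err error) {
    defer func() {
        if r := recover(); r != nil {
            err = fmt.Errorf("fanunmarshal panicked: %v", r)
        }
    }()
    results = fanunmarshal.New().
        WithWorkers(4).
        UnMarshalSlice(data, expected)
    return results, nil
}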

Performance

Benchmarks on 1000 JSON objects (~1.4MB total):

Method                                 Time per Operation   Performance Gain
Sequential stdlib                      ~8.35ms              baseline
fanunmarshal + stdlib (10 workers)     ~3.57ms              2.3x faster
fanunmarshal + jsoniter (10 workers)   ~2.57ms              3.2x faster

# Run benchmarks
go test -bench=. -benchtime=1s

Detailed Benchmark Results

Sequential Processing (stdlib json)

BenchmarkPlainUnMarshal-4                    141           8334686 ns/op

fanunmarshal + stdlib json (10 workers)

BenchmarkWithLibSlice_stdlib_10-4            336           3573922 ns/op

fanunmarshal + jsoniter (10 workers)

BenchmarkWithLibSlice_jsoniter_10-4          501           2469929 ns/op

Use Cases

✅ Ideal For

  • Redis MGet responses: Perfect for processing multiple cached JSON objects
  • Bulk API processing: Transform large datasets from external APIs
  • Log processing: Parse multiple JSON log entries concurrently
  • Data migration: Convert large JSON datasets efficiently
  • Microservice communication: Process batched JSON responses

❌ Not Optimal For

  • Small datasets (< 100 items)
  • Single JSON objects
  • Memory-constrained environments
  • Applications requiring strict result ordering

About This Company

This is Development is a digital agency located in Utrecht, the Netherlands, specializing in crafting high-performance, resilient, and scalable digital solutions, APIs, and microservices. Our multidisciplinary team of designers, developers, and strategists collaborates to deliver innovative technology and exceptional user experiences.

Contributing

Contributions are welcome! Please check out CONTRIBUTING.md for guidelines on how to help improve fanunmarshal.

License

© This is Development BV, 2022~time.Now(). Released under the MIT License.
