Repository: https://github.com/lyteabovenyte/mock_distributed_services.git


# README

Implementing distributed services with Go.
Features:
  • commit log
  • networking with gRPC
  • encrypted connections, mutual TLS authentication, ACL-based authorization using Casbin, and peer-to-peer gRPC connections
  • observability using zap, ctxtags, and OpenCensus for tracing, all in gRPC interceptors ⚭
  • Server-to-Server Service Discovery using Serf
  • Coordination and Consensus with the Raft algorithm, binding the cluster with Serf for discovery integration between servers
Implementation:
  • commit log:
    • using a store-and-index-files approach for each log segment
    • using the go-mmap library to memory-map the index file for performance
    • tests for each segment and its store and index files (a sketch of the store half follows this list)
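
A minimal sketch of the store half of a segment, assuming length-prefixed records; the `store` type, `lenWidth`, and the method shapes below are illustrative rather than the repo's exact identifiers. The byte position returned by Append is what the memory-mapped index would map a record's offset to.

```go
package log

import (
	"bufio"
	"encoding/binary"
	"os"
	"sync"
)

// enc is the byte order used for record length prefixes.
var enc = binary.BigEndian

// lenWidth is the number of bytes used to store each record's length.
const lenWidth = 8

// store appends length-prefixed records to a segment's store file and reads
// them back by byte position; the index file maps record offsets to positions.
type store struct {
	mu   sync.Mutex
	file *os.File
	buf  *bufio.Writer
	size uint64
}

func newStore(f *os.File) (*store, error) {
	fi, err := os.Stat(f.Name())
	if err != nil {
		return nil, err
	}
	return &store{file: f, buf: bufio.NewWriter(f), size: uint64(fi.Size())}, nil
}

// Append writes the record's length and then its bytes, returning the bytes
// written and the position the record starts at (what the index records).
func (s *store) Append(p []byte) (n uint64, pos uint64, err error) {
	s.mu.Lock()
	defer s.mu.Unlock()
	pos = s.size
	if err := binary.Write(s.buf, enc, uint64(len(p))); err != nil {
		return 0, 0, err
	}
	w, err := s.buf.Write(p)
	if err != nil {
		return 0, 0, err
	}
	n = uint64(w) + lenWidth
	s.size += n
	return n, pos, nil
}

// Read flushes buffered writes and returns the record stored at pos.
func (s *store) Read(pos uint64) ([]byte, error) {
	s.mu.Lock()
	defer s.mu.Unlock()
	if err := s.buf.Flush(); err != nil {
		return nil, err
	}
	size := make([]byte, lenWidth)
	if _, err := s.file.ReadAt(size, int64(pos)); err != nil {
		return nil, err
	}
	record := make([]byte, enc.Uint64(size))
	if _, err := s.file.ReadAt(record, int64(pos+lenWidth)); err != nil {
		return nil, err
	}
	return record, nil
}
```
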
  • gRPC Services: v2.0.0
    • using bidirectional streaming APIs on the client and server side to stream content between them.
    • using the status, codes, and errdetails packages to customize error messages between client and server (see the error-details sketch below).
    • Dependency Inversion using interfaces (the DIP principle). --> wanna know more?
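
One common way to combine the status, codes, and errdetails packages is to give a custom error type a GRPCStatus method so the server's error carries structured details to the client. The type name and messages here are hypothetical, only illustrating the pattern.

```go
package api

import (
	"fmt"

	"google.golang.org/genproto/googleapis/rpc/errdetails"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// ErrOffsetOutOfRange is a hypothetical custom error: gRPC picks up its
// GRPCStatus method and sends the attached details across the wire.
type ErrOffsetOutOfRange struct {
	Offset uint64
}

func (e ErrOffsetOutOfRange) GRPCStatus() *status.Status {
	st := status.New(codes.NotFound, fmt.Sprintf("offset out of range: %d", e.Offset))
	d := &errdetails.LocalizedMessage{
		Locale:  "en-US",
		Message: fmt.Sprintf("The requested offset is outside the log's range: %d", e.Offset),
	}
	std, err := st.WithDetails(d)
	if err != nil {
		return st // fall back to the plain status if attaching details fails
	}
	return std
}

func (e ErrOffsetOutOfRange) Error() string {
	return e.GRPCStatus().Err().Error()
}
```

On the client side, status.Convert(err).Details() recovers the LocalizedMessage, so the client can show a friendlier error.
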
  • Security: v3.0.0
    • my approach to securing the system is based on three parts:
      • encrypt data in-flight to protect against man-in-the-middle attacks
      • authenticate to identify clients
      • authorize to determine the permission of the identified clients
    • given how widely mutual TLS authentication is adopted in distributed services, I'm gonna try my best to adopt it too 🤠. Interested? Learn how Cloudflare adopted it.
    • building access-control-list (ACL) based authorization and differentiating between authentication and authorization to handle varying levels of access and permissions.
    • using Cloudflare's open-source Certificate Authority (CA) toolkit, CFSSL, for signing, verifying, and bundling TLS certificates.
    • v3.0.2 (Authentication) completes mutually authenticated communication between client and server, with tests.
    • Authorization: using Casbin. Casbin supports enforcing authorization based on various access control models, including ACLs. Plus, Casbin is extendable and it's exciting to explore.
    • v4.0.0 --> encrypted connections, mutual TLS authentication, and ACL-based authorization using Casbin (see the authorizer sketch below)
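
A rough sketch of what the Casbin-backed authorizer could look like; the Authorizer type, its constructor, and the mapping to a PermissionDenied status are assumptions for illustration, with the model and policy files following Casbin's usual ACL format.

```go
package auth

import (
	"fmt"

	"github.com/casbin/casbin/v2"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// Authorizer wraps a Casbin enforcer configured with an ACL model.
type Authorizer struct {
	enforcer *casbin.Enforcer
}

// New loads the Casbin model and policy files supplied by the caller.
func New(model, policy string) (*Authorizer, error) {
	enforcer, err := casbin.NewEnforcer(model, policy)
	if err != nil {
		return nil, err
	}
	return &Authorizer{enforcer: enforcer}, nil
}

// Authorize checks whether subject may perform action on object and returns
// a gRPC PermissionDenied error when the policy forbids it.
func (a *Authorizer) Authorize(subject, object, action string) error {
	ok, err := a.enforcer.Enforce(subject, object, action)
	if err != nil {
		return err
	}
	if !ok {
		msg := fmt.Sprintf("%s not permitted to %s to %s", subject, action, object)
		return status.New(codes.PermissionDenied, msg).Err()
	}
	return nil
}
```

The subject would typically be the client's identity pulled from its mutual-TLS certificate, which is what ties the authentication and authorization steps together.
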
  • Observability: v4.0.0
    • using zap for structured logs
    • using request context tags (ctxtags) to set values for request tags in the context; it adds a Tag object to the context that other middleware can use to add information about a request.
    • using OpenCensus for tracing (a wiring sketch follows this list)
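
A sketch of how these pieces could be wired into a gRPC server using the go-grpc-middleware interceptors and OpenCensus's stats handler; the always-sample tracing config and default server views are illustrative choices, not necessarily what the repo uses.

```go
package server

import (
	grpc_zap "github.com/grpc-ecosystem/go-grpc-middleware/logging/zap"
	grpc_ctxtags "github.com/grpc-ecosystem/go-grpc-middleware/tags"
	"go.opencensus.io/plugin/ocgrpc"
	"go.opencensus.io/stats/view"
	"go.opencensus.io/trace"
	"go.uber.org/zap"
	"google.golang.org/grpc"
)

// NewGRPCServer chains structured logging and request-tag interceptors and
// registers OpenCensus tracing/metrics via a stats handler.
func NewGRPCServer(logger *zap.Logger) (*grpc.Server, error) {
	// Trace every request and export the default gRPC server views.
	trace.ApplyConfig(trace.Config{DefaultSampler: trace.AlwaysSample()})
	if err := view.Register(ocgrpc.DefaultServerViews...); err != nil {
		return nil, err
	}

	opts := []grpc.ServerOption{
		grpc.ChainStreamInterceptor(
			grpc_ctxtags.StreamServerInterceptor(), // attaches a Tag object to the request context
			grpc_zap.StreamServerInterceptor(logger),
		),
		grpc.ChainUnaryInterceptor(
			grpc_ctxtags.UnaryServerInterceptor(),
			grpc_zap.UnaryServerInterceptor(logger),
		),
		grpc.StatsHandler(&ocgrpc.ServerHandler{}),
	}
	return grpc.NewServer(opts...), nil
}
```
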
  • Service-to-Service Discovery:
    • implementing Membership using Serf on each service instance to discover other service instances (a Membership sketch follows this list).
    • implementing Replication to duplicate each server's data - already thinking about Consensus -_-.
    • after implementing our replicator, membership, log, and server components, we'll implement an Agent type to run and sync these components on each instance, just like HashiCorp Consul.
    • updated in v6.0.0
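
A minimal Membership sketch on top of Serf, assuming a list of existing members to join and an event handler that the replicator would hook into; the names and fields are illustrative.

```go
package discovery

import (
	"net"

	"github.com/hashicorp/serf/serf"
)

// Membership wraps a Serf instance so each service instance can discover the others.
type Membership struct {
	serf   *serf.Serf
	events chan serf.Event
}

// New starts a Serf member bound to bindAddr and joins the cluster through
// startJoinAddrs, the addresses of any existing members.
func New(nodeName, bindAddr string, tags map[string]string, startJoinAddrs []string) (*Membership, error) {
	addr, err := net.ResolveTCPAddr("tcp", bindAddr)
	if err != nil {
		return nil, err
	}
	m := &Membership{events: make(chan serf.Event)}
	config := serf.DefaultConfig()
	config.NodeName = nodeName
	config.Tags = tags
	config.EventCh = m.events
	config.MemberlistConfig.BindAddr = addr.IP.String()
	config.MemberlistConfig.BindPort = addr.Port
	m.serf, err = serf.Create(config)
	if err != nil {
		return nil, err
	}
	go m.eventHandler()
	if len(startJoinAddrs) > 0 {
		if _, err := m.serf.Join(startJoinAddrs, true); err != nil {
			return nil, err
		}
	}
	return m, nil
}

// eventHandler reacts to membership changes; a real handler would notify the
// replicator (and later the Raft layer) when members join or leave.
func (m *Membership) eventHandler() {
	for e := range m.events {
		switch e.EventType() {
		case serf.EventMemberJoin, serf.EventMemberLeave, serf.EventMemberFailed:
			_ = e.(serf.MemberEvent).Members // inspect the affected members here
		}
	}
}
```
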
  • Coordinating the service with Consensus using the Raft algorithm:
    • using my own log library as Raft's log store, satisfying the LogStore interface that Raft requires (see the setup sketch below).
    • using Bolt, an embedded, persistent key-value database for Go, as Raft's stable store to hold the server's current term and important metadata like the candidate the server voted for.
    • implemented Raft snapshots to recover and restore data efficiently when necessary, e.g. if our server's EC2 instance failed and an auto scaling group (Terraform terminology 🥸) brought up another instance for the Raft server. Rather than streaming all the data from the Raft leader, the new server would restore from the snapshot (which you could store in S3 or a similar storage service) and then get the latest changes from the leader.
    • again, using my own log library as Raft's finite state machine (FSM) to replicate the logs across servers in the cluster.
    • Discovery integration: binding Serf and Raft to implement service discovery on the Raft cluster by implementing Join and Leave methods to satisfy Serf's interface, hence having a Membership in the cluster to be discovered.
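
Putting the storage pieces above together, a hedged sketch of the Raft setup: a Bolt-backed store stands in here for the repo's own log library as the LogStore, alongside a Bolt stable store for the term and vote and a file snapshot store; the setupRaft helper, paths, and retention count are assumptions.

```go
package server

import (
	"os"
	"path/filepath"

	"github.com/hashicorp/raft"
	raftboltdb "github.com/hashicorp/raft-boltdb"
)

// setupRaft assembles Raft from its stores, the FSM that applies replicated
// records to the log, and a network transport.
func setupRaft(dataDir, nodeID string, fsm raft.FSM, transport raft.Transport) (*raft.Raft, error) {
	raftDir := filepath.Join(dataDir, "raft")
	if err := os.MkdirAll(raftDir, 0o755); err != nil {
		return nil, err
	}
	// The repo uses its own log library as the LogStore; a BoltStore is used
	// here only to keep the sketch self-contained.
	logStore, err := raftboltdb.NewBoltStore(filepath.Join(raftDir, "log"))
	if err != nil {
		return nil, err
	}
	// Bolt-backed stable store: holds the current term and who we voted for.
	stableStore, err := raftboltdb.NewBoltStore(filepath.Join(raftDir, "stable"))
	if err != nil {
		return nil, err
	}
	// Keep one snapshot so a replacement instance can restore without
	// streaming the entire log from the leader.
	snapshots, err := raft.NewFileSnapshotStore(raftDir, 1, os.Stderr)
	if err != nil {
		return nil, err
	}
	config := raft.DefaultConfig()
	config.LocalID = raft.ServerID(nodeID)
	return raft.NewRaft(config, fsm, logStore, stableStore, snapshots, transport)
}
```
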
    • Multiplexing on our service to run multiple services on one port (a multiplexing sketch follows this list)
      • we distinguish Raft connections from gRPC connections by having each Raft connection write an identifying byte first, multiplex the connections to different listeners to handle, and configure our agents to manage both Raft cluster connections and gRPC connections on our servers over a single port
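
A sketch of that multiplexing, assuming a cmux-style connection multiplexer; the RaftRPC byte value and the helper below are illustrative. Raft connections write one identifying byte before anything else, and every other connection falls through to the gRPC listener.

```go
package agent

import (
	"bytes"
	"io"
	"net"

	"github.com/soheilhy/cmux"
)

// RaftRPC is the byte a Raft connection writes first so the multiplexer can
// tell it apart from a gRPC connection (the value is an assumption here).
const RaftRPC = 1

// setupMux splits a single listener into a Raft listener and a gRPC listener
// so both services share one port.
func setupMux(ln net.Listener) (mux cmux.CMux, raftLn, grpcLn net.Listener) {
	mux = cmux.New(ln)
	// Raft connections identify themselves by their first byte.
	raftLn = mux.Match(func(r io.Reader) bool {
		b := make([]byte, 1)
		if _, err := r.Read(b); err != nil {
			return false
		}
		return bytes.Equal(b, []byte{RaftRPC})
	})
	// Everything else (gRPC) goes to the second listener.
	grpcLn = mux.Match(cmux.Any())
	return mux, raftLn, grpcLn
}
```

After wiring Raft's stream layer to raftLn and serving gRPC on grpcLn, calling mux.Serve() starts routing accepted connections to whichever listener matched.
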