# README

On-chain monitor framework

## Architecture

The Chainlink blueprints contain a description of the architecture of the on-chain monitor framework, the tools available to integrators, and a tutorial for creating new integrations.

## Documentation

Godoc-generated documentation is available here.

## Developing the monitoring framework

### Abstractions

Don't create abstractions for their own sake. Always ask: "What is the simplest thing that can solve the problem?" and "Would this code make sense if I didn't write it but had to change it?" As a rule of thumb, abstractions around I/O - think database or HTTP connections - are almost always a good idea, because they make testing easier. Another rule of thumb is to compose abstractions by means of dependency injection.

### Concurrency

Concurrency is hard. To make it manageable, always use established Go concurrency patterns, e.g. https://go.dev/blog/pipelines or https://talks.golang.org/2012/concurrency.slide. Keep all the concurrent code in one place and extract everything else into functions or interfaces. This will make testing the concurrent code easier - but still not easy!
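The pipeline pattern from the Go blog linked above looks like this in miniature: each stage owns its output channel, closes it when done, and downstream stages terminate by ranging until close.

```go
package main

import "fmt"

// gen emits values on a channel and closes it when done;
// closing is what lets downstream stages terminate.
func gen(nums ...int) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out)
		for _, n := range nums {
			out <- n
		}
	}()
	return out
}

// square is a stage: it ranges over its input until the
// channel is closed, then closes its own output.
func square(in <-chan int) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out)
		for n := range in {
			out <- n * n
		}
	}()
	return out
}

func main() {
	for v := range square(gen(2, 3)) {
		fmt.Println(v) // 4, then 9
	}
}
```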

A tenet of good concurrent code is resource management. Your primary resources are goroutines, channels and contexts (effectively wrappers on top of channels). Make sure that, upon exit, your program cleanly terminates all goroutines, releases all OS resources (file handles, sockets, etc.), no channel is still in use and all contexts are cancelled. This forces you to manage these resources actively in your code and - hopefully - prevents leaks.

Concurrency abstractions are notoriously "leaky". Unless they are very simple layers on top of well-tested solutions - e.g. Subprocesses is a wrapper over sync.WaitGroup - avoid introducing concurrency abstractions!

### Logging

I have yet to find an engineer who likes GBs of logs. Useless logs impose a cognitive load on the person trying to solve an issue. My approach is to log as little as possible, but when you do log, include all the data needed to reproduce and fix the issue! Logging takes time to tune. Try to trigger or simulate errors in development and check whether the log line is useful for debugging.

### Testing

This is controversial, but I'm not a huge fan of testing as much as possible. Most tests I've seen - and written - are brittle and non-deterministic - of course they break only in CI - and not very valuable. The most valuable test is an end-to-end test that checks a use case. The least valuable test is a unit test that checks the implementation of a function.

Another thing that makes writing valuable tests easier is good "interfaces". If a piece of code has clearly defined inputs and outputs, it's easier to test.
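A pure function with clearly defined inputs and outputs is the easy case: it can be table-tested in a few lines. The subject-naming rule below is a simplified stand-in (the real `SubjectFromTopic` in this package may compute something different).

```go
package main

import "fmt"

// subject derives an AVRO schema subject from a Kafka topic name.
// (Hypothetical rule, for illustration only.)
func subject(topic string) string { return topic + "-value" }

func main() {
	// Clear inputs and outputs make table-driven tests trivial.
	cases := []struct{ in, want string }{
		{"ocr2-feeds", "ocr2-feeds-value"},
		{"blocks", "blocks-value"},
	}
	for _, c := range cases {
		fmt.Println(subject(c.in) == c.want) // true, true
	}
}
```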

### Errors

An often overlooked part of a component's interface is the set of errors it can produce. It's easy to just return nil, err! Well-defined errors can be either public values or - when more context is needed to debug - error types. Make sure you consider whether a specific error can be handled by the caller or needs to be pushed up the stack!

### Benchmarks

Run the existing benchmarks whenever a significant change to the system is introduced. While these benchmarks run in an idealised setting - e.g. zero network latency, correctly formatted messages, etc. - they give a reference point for potential performance regressions introduced by new features.

Benchmarks are - arguably - the easiest way to profile your code!
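Normally you would write a `BenchmarkXxx` function in a `_test.go` file and run `go test -bench=. -benchmem` (add `-cpuprofile cpu.out` and inspect it with `go tool pprof` to profile). For a self-contained illustration, `testing.Benchmark` runs the same loop outside the test harness; the `encode` step here is a hypothetical stand-in for one pipeline stage.

```go
package main

import (
	"fmt"
	"testing"
)

// encode stands in for one pipeline step (hypothetical).
func encode(n int) int { return n * n }

func main() {
	// The benchmark body is identical to what a BenchmarkEncode
	// function in a _test.go file would contain.
	res := testing.Benchmark(func(b *testing.B) {
		for i := 0; i < b.N; i++ {
			encode(i)
		}
	})
	fmt.Println(res.N > 0) // true
}
```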

# Packages

Package monitoring contains a small DSL to help write more robust Avro schemas by taking advantage of go's type system.
Package config parses flags, environment variables and JSON objects to build a Config object that's used throughout the monitor.

# Functions

NewExporterFactoryMock creates a new instance of ExporterFactoryMock.
NewExporterMock creates a new instance of ExporterMock.
NewInstrumentedSourceFactory wraps a Source and transparently monitors it.
NewKafkaExporterFactory produces Kafka exporters which consume, format and publish source outputs to kafka.
NewMetricsMock creates a new instance of MetricsMock.
NewMonitor builds a new Monitor instance using dependency injection.
NewSourceFactoryMock creates a new instance of SourceFactoryMock.
NewSourceMock creates a new instance of SourceMock.
NewSourcePoller builds Pollers for Sources.
SubjectFromTopic computes the associated AVRO schema subject name from a kafka topic name.

# Variables

ErrNoUpdate is an error value interpreted by a Poller to mean that the Fetch() was successful but a new value was not found.
Avro schemas to sync with the registry.

# Structs

Envelope contains data that is required from all the chain integrations.
ExporterFactoryMock is an autogenerated mock type for the ExporterFactory type.
ExporterFactoryMock_NewExporter_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'NewExporter'.
ExporterMock is an autogenerated mock type for the Exporter type.
ExporterMock_Cleanup_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'Cleanup'.
ExporterMock_Export_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'Export'.
MetricsMock is an autogenerated mock type for the Metrics type.
MetricsMock_Cleanup_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'Cleanup'.
MetricsMock_HTTPHandler_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'HTTPHandler'.
MetricsMock_IncOffchainAggregatorAnswersTotal_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'IncOffchainAggregatorAnswersTotal'.
MetricsMock_SetFeedContractLinkBalance_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'SetFeedContractLinkBalance'.
MetricsMock_SetFeedContractMetadata_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'SetFeedContractMetadata'.
MetricsMock_SetFeedContractTransactionsFailed_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'SetFeedContractTransactionsFailed'.
MetricsMock_SetFeedContractTransactionsSucceeded_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'SetFeedContractTransactionsSucceeded'.
MetricsMock_SetHeadTrackerCurrentHead_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'SetHeadTrackerCurrentHead'.
MetricsMock_SetLinkAvailableForPayment_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'SetLinkAvailableForPayment'.
MetricsMock_SetNodeMetadata_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'SetNodeMetadata'.
MetricsMock_SetOffchainAggregatorAnswers_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'SetOffchainAggregatorAnswers'.
MetricsMock_SetOffchainAggregatorAnswersLatestTimestamp_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'SetOffchainAggregatorAnswersLatestTimestamp'.
MetricsMock_SetOffchainAggregatorAnswersRaw_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'SetOffchainAggregatorAnswersRaw'.
MetricsMock_SetOffchainAggregatorAnswerStalled_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'SetOffchainAggregatorAnswerStalled'.
MetricsMock_SetOffchainAggregatorJuelsPerFeeCoin_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'SetOffchainAggregatorJuelsPerFeeCoin'.
MetricsMock_SetOffchainAggregatorJuelsPerFeeCoinRaw_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'SetOffchainAggregatorJuelsPerFeeCoinRaw'.
MetricsMock_SetOffchainAggregatorJuelsPerFeeCoinReceivedValues_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'SetOffchainAggregatorJuelsPerFeeCoinReceivedValues'.
MetricsMock_SetOffchainAggregatorRoundID_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'SetOffchainAggregatorRoundID'.
MetricsMock_SetOffchainAggregatorSubmissionReceivedValues_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'SetOffchainAggregatorSubmissionReceivedValues'.
Monitor is the entrypoint for an on-chain monitor integration.
Pipeline represents a succession of transformations on the data coming from a source: source output -> adapt to a map -> encode to AVRO -> send to Kafka.
SourceFactoryMock is an autogenerated mock type for the SourceFactory type.
SourceFactoryMock_GetType_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'GetType'.
SourceFactoryMock_NewSource_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'NewSource'.
SourceMock is an autogenerated mock type for the Source type.
SourceMock_Fetch_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'Fetch'.
TxResults counts the number of successful and failed transactions in a predetermined window of time.

# Interfaces

ChainConfig contains chain-specific configuration.
Exporter methods can be executed out of order and should be thread safe.
ExporterFactory is used to create a new exporter for each feed that needs to be monitored.
FeedConfig is the interface for feed configurations extracted from the RDD.
HTTPServer is the HTTP interface exposed by every monitor.
Manager restarts the multi-feed monitor whenever the feed configuration list has changed.
Metrics is a thin interface on top of the prometheus API.
MultiFeedMonitor manages the flow of data from multiple sources to multiple exporters for each feed in the configuration.
NetworkMonitor manages the flow of data from sources to exporters for non-feed-specific metrics (block height, balances, etc.).
NodeConfig is the subset of on-chain node operator's configuration required by the OM framework.
Poller implements Updater by periodically invoking a Source's Fetch() method.
Producer is an abstraction on top of Kafka to aid with tests.
Schema is an interface for encoding/decoding data structures into the AVRO format.
Source is an abstraction for reading data from a remote API, usually a chain RPC endpoint.
Updater is a generic interface implemented by either polling or subscribing.

# Type aliases

FeedParser is the interface for deserializing feed configuration data for each chain integration.
Logger is a type alias for backwards compatibility.
Mapper is an interface for converting Envelopes into data structures that can be encoded in AVRO and sent to Kafka.
NodesParser extracts multiple nodes' configurations from the configuration server, eg.