
## Log validation service

### Usage

  1. Register a log listener to dynamically detect log changes (log shrinking/rotation is supported); any new log records are queued for validation.
  2. Run log validation: the validator compares expected log records against actual ones, shifting records off the actual logs' pending queue.
  3. Reset: optionally reset the log queues to discard any pending validation records (all three actions are sketched below).
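
A minimal pipeline sketch wiring the three steps together; the log location, type name, and expected record are illustrative placeholders, not part of the service contract:

```yaml
pipeline:
  listen:                          # 1. start watching the log directory
    action: validator/log:listen
    source:
      URL: /tmp/app/logs           # hypothetical location
    types:
      - format: json
        mask: '*.log'
        name: myEvent              # hypothetical log type name
  validate:                        # 2. assert expected against actual records
    action: validator/log:assert
    logTypes:
      - myEvent
    expect:
      - type: myEvent
        records:
          - id: 1
  reset:                           # 3. discard any records still pending validation
    action: validator/log:reset
    logTypes:
      - myEvent
```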

### Supported actions

| Service Id | Action | Description | Request | Response |
|------------|--------|-------------|---------|----------|
| validator/log | listen | start listening for log file changes on the specified location | ListenRequest | ListenResponse |
| validator/log | reset | discard logs detected by the listener | ResetRequest | ResetResponse |
| validator/log | assert | perform validation of provided expected log records against actual log file records | AssertRequest | AssertResponse |

### Validation strategies

Log validation compares records produced by a logger with the desired log records that the user provides in the assert request's `Expect[logTypeIndex].Records`. Any arbitrary data structure can represent a record.

Once a validator/log listener detects data produced by a logger, it places the new records in a pending validation queue. Later, when an assert request takes place, the validator takes (and removes) records from the pending validation queue to match and validate them against the expected records.

This process may use either a position-based or an index-based matching method. In the first strategy, the matcher takes the first record from the pending validation queue (FIFO) for each expected record. The latter strategy requires an indexing expression (provided in the listen request's `IndexRegExpr`, e.g. `"UUID":"([^"]+)"`), which is used to index both pending and expected logs. If the validator is unable to match a record with the indexing expression, it falls back to the position-based method.
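
For example, an index-based listen request could supply the expression as in the sketch below; this assumes a JSON event layout with a `UUID` field, and `indexRegExpr` is the only setting under discussion:

```yaml
listen:
  action: validator/log:listen
  frequencyMs: 500
  source:
    URL: $logLocation
  types:
    - format: json
      mask: '*.log'
      name: event1
      indexRegExpr: '"UUID":"([^"]+)"'   # pairs pending and expected records by the captured UUID
```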

The validator also supports on-the-fly data transformation just before validation with a UDF (user-defined function); see the CSV example below and the condensed sketch that follows.
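
A condensed sketch of that setup, assuming the same `CsvReader` provider as the full CSV example below: the UDF is registered first and then referenced by name in the listen request.

```yaml
register-udf:
  action: udf:register
  udfs:
    - id: UserCsvReader              # name referenced by the listen request below
      provider: CsvReader            # reads each CSV line into a record with the listed fields
      params:
        - id,type,timestamp,user
listen:
  action: validator/log:listen
  source:
    URL: $logLocation
  types:
    - format: json
      mask: '*.csv'
      name: event1
      UDF: UserCsvReader             # transforms each record before it is queued for validation
```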

The actual comparison is delegated to [assertly](https://github.com/viant/assertly).
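
Because of that, expected records may use assertly expressions in addition to literal values. A hedged sketch, assuming assertly's `@indexBy@` directive passes through to the comparison unchanged:

```yaml
expect:
  - type: event1
    records:
      - '@indexBy@': EventID         # assertly directive (assumption): match records by EventID, not position
      - EventID: 333
        X: 22222222
      - EventID: 111
        X: 11111111
```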

### Examples

Standalone testing workflow example:

```bash
endly -r=run
```

@run.yaml

```yaml
init:
  logLocation: /tmp/logs
  target:
    url:  ssh://127.0.0.1/
    credentials: ${env.HOME}/.secret/localhost.json
defaults:
  target: $target
pipeline:
  init:
    action: exec:run
    commands:
      - mkdir -p $logLocation
      - "> ${logLocation}/myevents.log"
      - echo '{"EventID":111, "EventType":"event1", "X":11111111}' >> ${logLocation}/myevents.log
      - echo '{"EventID":222, "EventType":"event2", "X":11141111}' >> ${logLocation}/myevents.log
      - echo '{"EventID":333, "EventType":"event1","X":22222222}' >>  ${logLocation}/myevents.log
  listen:
    action: validator/log:listen
    frequencyMs: 500
    source:
      URL: $logLocation
    types:
      - format: json
        inclusion: event1
        mask: '*.log'
        name: event1
  validate:
    action: validator/log:assert
    logTypes:
      - event1
    description: E-logger event log validation
    expect:
      - type: event1
        records:
         - EventID: 111
           X: 11111111
         - EventID: 333
           X: 22222222
    logWaitRetryCount: 2
    logWaitTimeMs: 5000
```

With request delegation:

```bash
endly -r=test
```

@test.yaml

```yaml
init:
  logLocation: /tmp/logs
  target:
    url:  ssh://127.0.0.1/
    credentials: ${env.HOME}/.secret/localhost.json
defaults:
  target: $target
pipeline:
  init:
    action: exec:run
    request: '@exec.yaml'
  listen:
    action: validator/log:listen
    request: '@listen.yaml'
  validate:
    action: validator/log:assert
    request: '@validate.json'
```

@exec.yaml

```yaml
commands:
  - mkdir -p $logLocation
  - "> ${logLocation}/myevents.log"
  - echo '{"EventID":111, "EventType":"event1", "X":11111111}' >> ${logLocation}/myevents.log
  - echo '{"EventID":222, "EventType":"event2", "X":11141111}' >> ${logLocation}/myevents.log
  - echo '{"EventID":333, "EventType":"event1","X":22222222}' >>  ${logLocation}/myevents.log
```

@listen.yaml

```yaml
frequencyMs: 500
source:
  URL: $logLocation
types:
  - format: json
    inclusion: event1
    mask: '*.log'
    name: event1
```

@validate.json

```json
{
  "Expect": [
    {
      "type": "event1",
      "records": [
        {
          "EventID": 111,
          "X": 11111111
        },
        {
          "EventID": 333,
          "X": 22222222
        }
      ]
    }
  ]
}
```

Workflow with CSV UDF example:

```bash
endly -r=csv
```

@csv.yaml

```yaml
init:
  i: 0
  j: 0
  logLocation: /tmp/logs
  target:
    url:  ssh://127.0.0.1/
    credentials: ${env.HOME}/.secret/localhost.json
defaults:
  target: $target
pipeline:
  init:
    make-dir:
      action: exec:run
      commands:
      - mkdir -p $logLocation
      - "> ${logLocation}/events.csv"
    register-udf:
      action: udf:register
      udfs:
        - id: UserCsvReader
          provider: CsvReader
          params:
            - id,type,timestamp,user
    listen:
      action: validator/log:listen
      frequencyMs: 500
      source:
        URL: $logLocation
      types:
        - format: json
          inclusion: event1
          mask: '*.csv'
          name: event1
          UDF: UserCsvReader
          debug: true
  test:
    multiAction: true
    produce:
      async: true
      repeat: 6
      sleepTimeMs: 400
      action: exec:run
      commands:
        - echo '$i++,event1,${timestamp.now},user $j++' >> ${logLocation}/events.csv

    validate:
      action: validator/log:assert
      logTypes:
        - event1
      description: E-logger event log validation
      expect:
      - type: event1
        records:
          - id: 0
            user: user 0
          - id: 1
            user: user 1
          - id: 2
            user: user 2
          - id: 3
            user: user 3
          - id: 4
            user: user 4
          - id: 5
            user: user 5

      logWaitRetryCount: 10
      logWaitTimeMs: 2000
```

The log validator can also run as part of a larger endly workflow; see the endly repository (https://github.com/viant/endly) for more examples.