Version: 1.6.65
Repository: https://github.com/deepflowio/deepflow.git

# README

Global {#global}

Enabled {#global.enabled}

Tags:

hot_update

FQCN:

global.enabled

Upgrade from old version: enabled

Default value:

global:
  enabled: true

Schema:

KeyValue
Typebool

Description:

Disable / enable the deepflow-agent.

Limits {#global.limits}

Resource limitations

CPU Limit {#global.limits.max_millicpus}

Tags:

hot_update

FQCN:

global.limits.max_millicpus

Upgrade from old version: max_millicpus

Default value:

global:
  limits:
    max_millicpus: 1000

Schema:

KeyValue
Typeint
UnitLogical Milli Cores
Range[1, 100000]

Description:

deepflow-agent uses cgroups to limit CPU usage. 1 millicpu = 1 millicore = 0.001 core.

CPU Limit (Cores) {#global.limits.max_cpus}

Tags:

deprecated

FQCN:

global.limits.max_cpus

Upgrade from old version: max_cpus

Default value:

global:
  limits:
    max_cpus: 1

Schema:

KeyValue
Typeint

Memory Limit {#global.limits.max_memory}

Tags:

hot_update

FQCN:

global.limits.max_memory

Upgrade from old version: max_memory

Default value:

global:
  limits:
    max_memory: 768

Schema:

KeyValue
Typeint
UnitMiB
Range[128, 100000]

Description:

deepflow-agent uses cgroups to limit memory usage.

Maximum Log Backhaul Rate {#global.limits.max_log_backhaul_rate}

Tags:

hot_update

FQCN:

global.limits.max_log_backhaul_rate

Upgrade from old version: log_threshold

Default value:

global:
  limits:
    max_log_backhaul_rate: 300

Schema:

KeyValue
Typeint
UnitLines/Hour
Range[0, 10000]

Description:

The maximum rate at which deepflow-agent sends its own logs to deepflow-server; 0 means no limit.

Maximum Local Log File Size {#global.limits.max_local_log_file_size}

Tags:

hot_update

FQCN:

global.limits.max_local_log_file_size

Upgrade from old version: log_file_size

Default value:

global:
  limits:
    max_local_log_file_size: 1000

Schema:

KeyValue
Typeint
UnitMiB
Range[10, 10000]

Description:

The maximum disk space allowed for deepflow-agent log files.

Local Log Retention {#global.limits.local_log_retention}

Tags:

hot_update

FQCN:

global.limits.local_log_retention

Upgrade from old version: log_retention

Default value:

global:
  limits:
    local_log_retention: 300d

Schema:

KeyValue
Typeduration
Range['10d', '10000d']

Description:

The retention time for deepflow-agent log files.

Alerts {#global.alerts}

Thread Limit {#global.alerts.thread_threshold}

Tags:

hot_update

FQCN:

global.alerts.thread_threshold

Upgrade from old version: thread_threshold

Default value:

global:
  alerts:
    thread_threshold: 500

Schema:

KeyValue
Typeint
Range[1, 1000]

Description:

Maximum number of threads that deepflow-agent is allowed to launch.

Process Limit {#global.alerts.process_threshold}

Tags:

hot_update

FQCN:

global.alerts.process_threshold

Upgrade from old version: process_threshold

Default value:

global:
  alerts:
    process_threshold: 10

Schema:

KeyValue
Typeint
Range[1, 100]

Description:

Maximum number of processes that deepflow-agent is allowed to launch.

Core File Checker {#global.alerts.check_core_file_disabled}

Tags:

agent_restart deprecated

FQCN:

global.alerts.check_core_file_disabled

Upgrade from old version: static_config.check-core-file-disabled

Default value:

global:
  alerts:
    check_core_file_disabled: false

Schema:

KeyValue
Typebool

Description:

When the host has an invalid NFS file system mounted or a Docker container is running, the program may occasionally hang while checking the core file, so the core file check provides a switch to prevent the process from hanging.

Circuit Breakers {#global.circuit_breakers}

System Free Memory Percentage {#global.circuit_breakers.sys_free_memory_percentage}

Calculation Method: (free_memory / total_memory) * 100%

Trigger Threshold {#global.circuit_breakers.sys_free_memory_percentage.trigger_threshold}

Tags:

hot_update

FQCN:

global.circuit_breakers.sys_free_memory_percentage.trigger_threshold

Upgrade from old version: sys_free_memory_limit

Default value:

global:
  circuit_breakers:
    sys_free_memory_percentage:
      trigger_threshold: 0

Schema:

KeyValue
Typeint
Unit%
Range[0, 100]

Description:

Setting trigger_threshold to 0 disables the system free memory percentage check.

  1. When the current system free memory percentage falls below trigger_threshold * 70%, the agent automatically restarts.
  2. When the current system free memory percentage falls below trigger_threshold (but remains above trigger_threshold * 70%), the agent enters the disabled state.
  3. When the current system free memory percentage stays above trigger_threshold * 110%, the agent recovers from the disabled state.
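
For example, with a hypothetical trigger_threshold of 30, the three points above work out as follows:

global:
  circuit_breakers:
    sys_free_memory_percentage:
      trigger_threshold: 30   # agent disabled below 30% free memory,
                              # restarted below 30% * 70% = 21%,
                              # recovered above 30% * 110% = 33%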

Relative System Load {#global.circuit_breakers.relative_sys_load}

Calculation Method: system_load / total_cpu_cores

Trigger Threshold {#global.circuit_breakers.relative_sys_load.trigger_threshold}

Tags:

hot_update

FQCN:

global.circuit_breakers.relative_sys_load.trigger_threshold

Upgrade from old version: system_load_circuit_breaker_threshold

Default value:

global:
  circuit_breakers:
    relative_sys_load:
      trigger_threshold: 1.0

Schema:

KeyValue
Typefloat
Range[0, 10]

Description:

When the load of the Linux system divided by the number of CPU cores exceeds this value, the agent automatically enters the disabled state. It recovers automatically if the load stays below 90% of this value for 5 consecutive minutes. Setting it to 0 disables this feature.

Recovery Threshold {#global.circuit_breakers.relative_sys_load.recovery_threshold}

Tags:

hot_update

FQCN:

global.circuit_breakers.relative_sys_load.recovery_threshold

Upgrade from old version: system_load_circuit_breaker_recover

Default value:

global:
  circuit_breakers:
    relative_sys_load:
      recovery_threshold: 0.9

Schema:

KeyValue
Typefloat
Range[0, 10]

Description:

When the system load of the Linux system divided by the number of CPU cores stays below this value for 5 consecutive minutes, the agent recovers from the circuit-breaker disabled state. Setting it to 0 turns off the circuit breaker feature.
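
As an illustration, the two thresholds are typically configured together; the values below are only an example:

global:
  circuit_breakers:
    relative_sys_load:
      trigger_threshold: 1.5    # disable the agent when system_load / total_cpu_cores exceeds 1.5
      recovery_threshold: 1.2   # recover after the ratio stays below 1.2 for 5 minutes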

Metric {#global.circuit_breakers.relative_sys_load.system_load_circuit_breaker_metric}

Tags:

hot_update

FQCN:

global.circuit_breakers.relative_sys_load.system_load_circuit_breaker_metric

Upgrade from old version: system_load_circuit_breaker_metric

Default value:

global:
  circuit_breakers:
    relative_sys_load:
      system_load_circuit_breaker_metric: load15

Enum options:

ValueNote
load1
load5
load15

Schema:

KeyValue
Typestring

Description:

The system load circuit breaker mechanism uses this metric, and the agent will check this metric every 10 seconds by default.

Tx Throughput {#global.circuit_breakers.tx_throughput}

Trigger Threshold {#global.circuit_breakers.tx_throughput.trigger_threshold}

Tags:

hot_update ee_feature

FQCN:

global.circuit_breakers.tx_throughput.trigger_threshold

Upgrade from old version: max_tx_bandwidth

Default value:

global:
  circuit_breakers:
    tx_throughput:
      trigger_threshold: 0

Schema:

KeyValue
Typeint
UnitMbps
Range[0, 100000]

Description:

When the outbound throughput of the NPB interface reaches or exceeds this threshold, NPB transmission is stopped; it resumes when the throughput stays below (trigger_threshold - outputs.npb.max_npb_throughput) * 90% for 5 consecutive monitoring intervals.

Attention: when configuring this value, it must be greater than outputs.npb.max_npb_throughput. Setting it to 0 disables this feature.
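
A hedged example (the number is illustrative and must stay above outputs.npb.max_npb_throughput):

global:
  circuit_breakers:
    tx_throughput:
      trigger_threshold: 1000   # Mbps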

Throughput Monitoring Interval {#global.circuit_breakers.tx_throughput.throughput_monitoring_interval}

Tags:

hot_update ee_feature

FQCN:

global.circuit_breakers.tx_throughput.throughput_monitoring_interval

Upgrade from old version: bandwidth_probe_interval

Default value:

global:
  circuit_breakers:
    tx_throughput:
      throughput_monitoring_interval: 10s

Schema:

KeyValue
Typeduration
Range['1s', '60s']

Description:

Monitoring interval for outbound traffic rate of NPB interface.

Tunning {#global.tunning}

CPU Affinity {#global.tunning.cpu_affinity}

Tags:

agent_restart

FQCN:

global.tunning.cpu_affinity

Upgrade from old version: static_config.cpu-affinity

Default value:

global:
  tunning:
    cpu_affinity: []

Schema:

KeyValue
Typeint
Range[0, 65536]

Description:

CPU affinity is the tendency of a process to run on a given CPU for as long as possible without being migrated to other processors. Example:

global:
  tunning:
    cpu_affinity: [1, 3, 5, 7, 9]

Process Scheduling Priority {#global.tunning.process_scheduling_priority}

Tags:

agent_restart

FQCN:

global.tunning.process_scheduling_priority

Upgrade from old version: static_config.process-scheduling-priority

Default value:

global:
  tunning:
    process_scheduling_priority: 0

Schema:

KeyValue
Typeint
Range[-20, 19]

Description:

The smaller the value, the higher the scheduling priority of the deepflow-agent process; the larger the value, the lower the priority.

Idle Memory Trimming {#global.tunning.idle_memory_trimming}

Tags:

agent_restart

FQCN:

global.tunning.idle_memory_trimming

Upgrade from old version: static_config.memory-trim-disabled

Default value:

global:
  tunning:
    idle_memory_trimming: false

Schema:

KeyValue
Typebool

Description:

Proactive memory trimming can effectively reduce memory usage, but there may be performance loss.

Resource Monitoring Interval {#global.tunning.resource_monitoring_interval}

Tags:

agent_restart

FQCN:

global.tunning.resource_monitoring_interval

Upgrade from old version: static_config.guard-interval

Default value:

global:
  tunning:
    resource_monitoring_interval: 10s

Schema:

KeyValue
Typeduration
Range['1s', '3600s']

Description:

The agent monitors:

  1. System free memory.
  2. The number of threads of the agent itself, obtained by reading file information under the /proc directory.
  3. The size and number of log files generated by the agent.
  4. System load.
  5. The agent's own memory usage (to check whether memory trimming is needed).

NTP Clock Synchronization {#global.ntp}

This synchronization mechanism does not alter the host's clock; it is only used internally by the deepflow-agent process.

Enabled {#global.ntp.enabled}

Tags:

hot_update

FQCN:

global.ntp.enabled

Upgrade from old version: ntp_enabled

Default value:

global:
  ntp:
    enabled: false

Schema:

KeyValue
Typebool

Description:

Whether to synchronize the clock with deepflow-server. This behavior does not change the time of the deepflow-agent's running environment.

Maximum Drift {#global.ntp.max_drift}

Tags:

agent_restart

FQCN:

global.ntp.max_drift

Upgrade from old version: static_config.ntp-max-interval

Default value:

global:
  ntp:
    max_drift: 300s

Schema:

KeyValue
Typeduration
Range[0, '365d']

Description:

When the clock drift exceeds this value, the agent will restart.

Minimal Drift {#global.ntp.min_drift}

Tags:

agent_restart

FQCN:

global.ntp.min_drift

Upgrade from old version: static_config.ntp-min-interval

Default value:

global:
  ntp:
    min_drift: 10s

Schema:

KeyValue
Typeduration
Range[0, '365d']

Description:

When the clock drift exceeds this value, the timestamp will be corrected.
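
For instance, to tolerate small offsets while restarting on large ones, the two drift settings might be combined as follows (illustrative values):

global:
  ntp:
    enabled: true
    min_drift: 10s    # correct timestamps when drift exceeds 10s
    max_drift: 300s   # restart the agent when drift exceeds 300s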

Communication {#global.communication}

Proactive Request Interval {#global.communication.proactive_request_interval}

Tags:

hot_update

FQCN:

global.communication.proactive_request_interval

Upgrade from old version: sync_interval

Default value:

global:
  communication:
    proactive_request_interval: 60s

Schema:

KeyValue
Typeduration
Range['10s', '3600s']

Description:

The interval at which deepflow-agent proactively requests configuration and tag information from deepflow-server.

Maximum Escape Duration {#global.communication.max_escape_duration}

Tags:

hot_update

FQCN:

global.communication.max_escape_duration

Upgrade from old version: max_escape_seconds

Default value:

global:
  communication:
    max_escape_duration: 3600s

Schema:

KeyValue
Typeduration
Range['600s', '30d']

Description:

The maximum time that the agent is allowed to work normally when it cannot connect to the server. After the timeout, the agent automatically enters the disabled state.

Controller IP Address {#global.communication.controller_ip}

Tags:

hot_update

FQCN:

global.communication.controller_ip

Upgrade from old version: proxy_controller_ip

Default value:

global:
  communication:
    controller_ip: ''

Schema:

KeyValue
Typeip

Description:

When this value is set, deepflow-agent will use this IP to access the control plane port of deepflow-server, which is usually used when deepflow-server uses an external load balancer.

Controller Port {#global.communication.controller_port}

Tags:

hot_update

FQCN:

global.communication.controller_port

Upgrade from old version: proxy_controller_port

Default value:

global:
  communication:
    controller_port: 30035

Schema:

KeyValue
Typeint
Range[1, 65535]

Description:

The control plane port used by deepflow-agent to access deepflow-server. The default port within the same K8s cluster is 20035, and the default port of deepflow-agent outside the cluster is 30035.
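
For example, when deepflow-server sits behind an external load balancer, the control plane address can be overridden as follows (the IP is a placeholder):

global:
  communication:
    controller_ip: 10.1.2.3
    controller_port: 30035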

Ingester IP Address {#global.communication.ingester_ip}

Tags:

hot_update

FQCN:

global.communication.ingester_ip

Upgrade from old version: analyzer_ip

Default value:

global:
  communication:
    ingester_ip: ''

Schema:

KeyValue
Typeip

Description:

When this value is set, deepflow-agent will use this IP to access the data plane port of deepflow-server, which is usually used when deepflow-server uses an external load balancer.

Ingester Port {#global.communication.ingester_port}

Tags:

hot_update

FQCN:

global.communication.ingester_port

Upgrade from old version: analyzer_port

Default value:

global:
  communication:
    ingester_port: 30033

Schema:

KeyValue
Typeint
Range[1, 65535]

Description:

The data plane port used by deepflow-agent to access deepflow-server. The default port within the same K8s cluster is 20033, and the default port of deepflow-agent outside the cluster is 30033.

gRPC Socket Buffer Size {#global.communication.grpc_buffer_size}

Tags:

agent_restart

FQCN:

global.communication.grpc_buffer_size

Upgrade from old version: static_config.grpc-buffer-size

Default value:

global:
  communication:
    grpc_buffer_size: 5

Schema:

KeyValue
Typeint
UnitMiB
Range[5, 1024]

Description:

gRPC socket buffer size.

Request via NAT IP Address {#global.communication.request_via_nat_ip}

Tags:

hot_update

FQCN:

global.communication.request_via_nat_ip

Upgrade from old version: nat_ip_enabled

Default value:

global:
  communication:
    request_via_nat_ip: false

Schema:

KeyValue
Typebool

Description:

Used when deepflow-agent uses an external IP address to access deepflow-server. For example, when deepflow-server is behind a NAT gateway, or the host where deepflow-server is located has multiple node IP addresses and different deepflow-agents need to access different node IPs, you can set an additional NAT IP for each deepflow-server address, and modify this value to true.

Self Monitoring {#global.self_monitoring}

Log {#global.self_monitoring.log}

Log Level {#global.self_monitoring.log.log_level}

Tags:

hot_update

FQCN:

global.self_monitoring.log.log_level

Upgrade from old version: log_level

Default value:

global:
  self_monitoring:
    log:
      log_level: INFO

Enum options:

ValueNote
DEBUG
INFO
WARNING
ERROR

Schema:

KeyValue
Typestring

Description:

Log level of deepflow-agent.

Log File {#global.self_monitoring.log.log_file}

Tags:

agent_restart

FQCN:

global.self_monitoring.log.log_file

Upgrade from old version: static_config.log-file

Default value:

global:
  self_monitoring:
    log:
      log_file: /var/log/deepflow_agent/deepflow_agent.log

Schema:

KeyValue
Typestring

Description:

Note that this configuration is only used in standalone mode.

Log Backhaul Enabled {#global.self_monitoring.log.log_backhaul_enabled}

Tags:

hot_update

FQCN:

global.self_monitoring.log.log_backhaul_enabled

Upgrade from old version: rsyslog_enabled

Default value:

global:
  self_monitoring:
    log:
      log_backhaul_enabled: true

Schema:

KeyValue
Typebool

Description:

When enabled, deepflow-agent will send its own logs to deepflow-server.

Profile {#global.self_monitoring.profile}

Enabled {#global.self_monitoring.profile.enabled}

Tags:

agent_restart deprecated

FQCN:

global.self_monitoring.profile.enabled

Upgrade from old version: static_config.profiler

Default value:

global:
  self_monitoring:
    profile:
      enabled: false

Schema:

KeyValue
Typebool

Description:

Only available for Trident (Golang version of Agent).

Debug {#global.self_monitoring.debug}

Enabled {#global.self_monitoring.debug.enabled}

Tags:

hot_update

FQCN:

global.self_monitoring.debug.enabled

Upgrade from old version: debug_enabled

Default value:

global:
  self_monitoring:
    debug:
      enabled: true

Schema:

KeyValue
Typebool

Description:

Disable / enable the debug function of the deepflow-agent.

Local UDP Port {#global.self_monitoring.debug.local_udp_port}

Tags:

agent_restart

FQCN:

global.self_monitoring.debug.local_udp_port

Upgrade from old version: static_config.debug-listen-port

Default value:

global:
  self_monitoring:
    debug:
      local_udp_port: 0

Schema:

KeyValue
Typeint
Range[0, 65535]

Description:

Default value 0 means use a random client port number. Only available for Trident (Golang version of Agent).

Debug Metrics Enabled {#global.self_monitoring.debug.debug_metrics_enabled}

Tags:

agent_restart deprecated

FQCN:

global.self_monitoring.debug.debug_metrics_enabled

Upgrade from old version: static_config.enable-debug-stats

Default value:

global:
  self_monitoring:
    debug:
      debug_metrics_enabled: false

Schema:

KeyValue
Typebool

Description:

Only available for Trident (Golang version of Agent).

Hostname {#global.self_monitoring.hostname}

Tags:

hot_update

FQCN:

global.self_monitoring.hostname

Upgrade from old version: host

Default value:

global:
  self_monitoring:
    hostname: ''

Schema:

KeyValue
Typestring

Description:

Override statsd host tag.

Standalone Mode {#global.standalone_mode}

Maximum Data File Size {#global.standalone_mode.max_data_file_size}

Tags:

agent_restart

FQCN:

global.standalone_mode.max_data_file_size

Upgrade from old version: static_config.standalone-data-file-size

Default value:

global:
  standalone_mode:
    max_data_file_size: 200

Schema:

KeyValue
Typeint
UnitMiB
Range[1, 1000000]

Description:

When deepflow-agent runs in standalone mode, it will not be controlled by deepflow-server, and the collected data will only be written to the local file. Currently supported data types for writing are l4_flow_log and l7_flow_log. Each type of data is written to a separate file. This configuration can be used to specify the maximum size of the data file, and rotate when it exceeds this size. A maximum of two files are kept for each type of data.
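
A minimal sketch of the related standalone-mode settings, using the documented defaults:

global:
  standalone_mode:
    max_data_file_size: 200                  # MiB, per data type; rotate when exceeded
    data_file_dir: /var/log/deepflow_agent/  # directory the data files are written to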

Data File Directory {#global.standalone_mode.data_file_dir}

Tags:

agent_restart

FQCN:

global.standalone_mode.data_file_dir

Upgrade from old version: static_config.standalone-data-file-dir

Default value:

global:
  standalone_mode:
    data_file_dir: /var/log/deepflow_agent/

Schema:

KeyValue
Typestring

Description:

Directory where data files are written to.

Tags {#global.tags}

Tags related to deepflow-agent.

Region ID {#global.tags.region_id}

Tags:

hot_update

FQCN:

global.tags.region_id

Upgrade from old version: region_id

Default value:

global:
  tags:
    region_id: 0

Schema:

KeyValue
Typeint

Description:

Region ID of the deepflow-agent or Region ID of the data node.

Pod cluster ID {#global.tags.pod_cluster_id}

Tags:

hot_update

FQCN:

global.tags.pod_cluster_id

Upgrade from old version: pod_cluster_id

Default value:

global:
  tags:
    pod_cluster_id: 0

Schema:

KeyValue
Typeint

Description:

Cluster ID of the container where the deepflow-agent is located.

VPC ID {#global.tags.vpc_id}

Tags:

hot_update

FQCN:

global.tags.vpc_id

Upgrade from old version: epc_id

Default value:

global:
  tags:
    vpc_id: 0

Schema:

KeyValue
Typeint

Description:

The ID of the VPC where the deepflow-agent is located; it is meaningful only for Workload-V/P and Pod-V/P agent types.

Agent ID {#global.tags.agent_id}

Tags:

hot_update

FQCN:

global.tags.agent_id

Upgrade from old version: vtap_id

Default value:

global:
  tags:
    agent_id: 0

Schema:

KeyValue
Typeint
Range[0, 64000]

Description:

Agent ID.

Agent Type {#global.tags.agent_type}

Tags:

hot_update

FQCN:

global.tags.agent_type

Upgrade from old version: trident_type

Default value:

global:
  tags:
    agent_type: 0

Enum options:

Value   Note
0       TT_UNKNOWN
1       TT_PROCESS, Agent in KVM
2       TT_VM, Agent in a dedicated VM on ESXi
3       TT_PUBLIC_CLOUD, Agent in Cloud host (VM)
5       TT_PHYSICAL_MACHINE, Agent in Cloud host (BM), or legacy host
6       TT_DEDICATED_PHYSICAL_MACHINE, Agent in a dedicated host to receive mirror traffic
7       TT_HOST_POD, Agent in K8s Node (Cloud BM, or legacy host)
8       TT_VM_POD, Agent in K8s Node (Cloud VM)
9       TT_TUNNEL_DECAPSULATION, Agent in a dedicated host to decap tunnel traffic
10      TT_HYPER_V_COMPUTE, Agent in Hyper-V Compute Node
11      TT_HYPER_V_NETWORK, Agent in Hyper-V Network Node
12      TT_K8S_SIDECAR, Agent in K8s POD

Schema:

KeyValue
Typeint
Range[0, 12]

Description:

Agent Type.

Team ID {#global.tags.team_id}

Tags:

hot_update

FQCN:

global.tags.team_id

Upgrade from old version: team_id

Default value:

global:
  tags:
    team_id: 0

Schema:

KeyValue
Typeint

Description:

The ID of the team where the deepflow-agent is located.

Organize ID {#global.tags.organize_id}

Tags:

hot_update

FQCN:

global.tags.organize_id

Upgrade from old version: organize_id

Default value:

global:
  tags:
    organize_id: 0

Schema:

KeyValue
Typeint

Description:

The ID of the organization where the deepflow-agent is located.

Inputs {#inputs}

Proc {#inputs.proc}

Enabled {#inputs.proc.enabled}

Tags:

agent_restart

FQCN:

inputs.proc.enabled

Upgrade from old version: static_config.os-proc-sync-enabled

Default value:

inputs:
  proc:
    enabled: false

Schema:

KeyValue
Typebool

Description:

Only makes sense when the agent type is one of CHOST_VM, CHOST_BM, K8S_VM, K8S_BM.

Directory of /proc {#inputs.proc.proc_dir_path}

Tags:

agent_restart

FQCN:

inputs.proc.proc_dir_path

Upgrade from old version: static_config.os-proc-root

Default value:

inputs:
  proc:
    proc_dir_path: /proc

Schema:

KeyValue
Typestring

Description:

The /proc fs mount path.

Synchronization Interval {#inputs.proc.sync_interval}

Tags:

agent_restart

FQCN:

inputs.proc.sync_interval

Upgrade from old version: static_config.os-proc-socket-sync-interval

Default value:

inputs:
  proc:
    sync_interval: 10s

Schema:

KeyValue
Typeduration
Range['1s', '1h']

Description:

The interval of socket info sync.

Minimal Lifetime {#inputs.proc.min_lifetime}

Tags:

agent_restart

FQCN:

inputs.proc.min_lifetime

Upgrade from old version: static_config.os-proc-socket-min-lifetime

Default value:

inputs:
  proc:
    min_lifetime: 3s

Schema:

KeyValue
Typeduration
Range['1s', '1h']

Description:

Socket and process uptime threshold.

Tag Extraction {#inputs.proc.tag_extraction}

Script Command {#inputs.proc.tag_extraction.script_command}

Tags:

agent_restart

FQCN:

inputs.proc.tag_extraction.script_command

Upgrade from old version: static_config.os-app-tag-exec

Default value:

inputs:
  proc:
    tag_extraction:
      script_command: []

Schema:

KeyValue
Typestring

Description:

The command is executed each time processes are scanned, and the process tags are expected to be read from its stdout in YAML format. An example of the expected YAML output follows:

- pid: 1
  tags:
  - key: xxx
    value: xxx
- pid: 2
  tags:
  - key: xxx
    value: xxx

Example configuration:

inputs:
  proc:
    tag_extraction:
      script_command: ["cat", "/tmp/tag.yaml"]

Execution Username {#inputs.proc.tag_extraction.exec_username}

Tags:

agent_restart

FQCN:

inputs.proc.tag_extraction.exec_username

Upgrade from old version: static_config.os-app-tag-exec-user

Default value:

inputs:
  proc:
    tag_extraction:
      exec_username: deepflow

Schema:

KeyValue
Typestring

Description:

The user who should execute the os-app-tag-exec command.

Process Matcher {#inputs.proc.process_matcher}

Tags:

agent_restart

FQCN:

inputs.proc.process_matcher

Upgrade from old version: static_config.os-proc-regex

Default value:

inputs:
  proc:
    process_matcher:
    - enabled_features:
      - ebpf.profile.on_cpu
      - ebpf.profile.off_cpu
      match_regex: deepflow-*
      only_in_container: false

Schema:

KeyValue
Typedict

Description:

The entire array is traversed in order, so earlier entries are matched first. When match_type is parent_process_name, the parent process name is matched recursively and the rewrite_name field is ignored. rewrite_name supports regexp capture groups and Windows-style environment variables; for example, $1-py-script-%HOSTNAME% is replaced with regexp capture group 1 and the HOSTNAME environment variable. A process that does not match any regexp is accepted (essentially, `- match_regex: .*` is automatically appended at the end).

Configuration items:

  • match_regex: The regexp used to match the process; the default value is .*
  • match_type: The field the regexp is matched against; the default value is process_name, options are [process_name, cmdline, parent_process_name, tag]
  • ignore: Whether to ignore the process when the regexp matches; the default value is false
  • rewrite_name: The new name that replaces the process name or cmdline, using regexp replacement; the default value "" means no replacement

Example:

inputs:
  proc:
    process_matcher:
    - match_regex: python3 (.*)\.py
      match_type: cmdline
      match_languages: []
      match_usernames: []
      only_in_container: true
      only_with_tag: false
      ignore: false
      rewrite_name: $1-py-script
      enabled_features: [ebpf.socket.uprobe.golang, ebpf.profile.on_cpu]
    - match_regex: (?P<PROC_NAME>nginx)
      match_type: process_name
      rewrite_name: ${PROC_NAME}-%HOSTNAME%
    - match_regex: "nginx"
      match_type: parent_process_name
      ignore: true
    - match_regex: .*sleep.*
      match_type: process_name
      ignore: true
    - match_regex: .+ # match after concatenating a tag key and value pair with a colon,
                      # i.e., a regex `app:.+` can match all processes that have an `app` tag
      match_type: tag

Match Regex {#inputs.proc.process_matcher.match_regex}

Tags:

agent_restart

FQCN:

inputs.proc.process_matcher.match_regex

Upgrade from old version: static_config.os-proc-regex.match-regex

Default value:

inputs:
  proc:
    process_matcher:
    - match_regex: ''

Schema:

KeyValue
Typestring

Description:

The regex of matcher.

Match Type {#inputs.proc.process_matcher.match_type}

Tags:

agent_restart

FQCN:

inputs.proc.process_matcher.match_type

Upgrade from old version: static_config.os-proc-regex.match-regex

Default value:

inputs:
  proc:
    process_matcher:
    - match_type: ''

Enum options:

ValueNote
process_name
cmdline
parent_process_name
tag

Schema:

KeyValue
Typestring

Description:

The type of matcher.

Match Languages {#inputs.proc.process_matcher.match_languages}

Tags:

agent_restart

FQCN:

inputs.proc.process_matcher.match_languages

Default value:

inputs:
  proc:
    process_matcher:
    - match_languages: []

Enum options:

ValueNote
java
golang
python
nodejs
dotnet

Schema:

KeyValue
Typestring

Description:

The default value [] matches all languages.

Match Usernames {#inputs.proc.process_matcher.match_usernames}

Tags:

agent_restart

FQCN:

inputs.proc.process_matcher.match_usernames

Default value:

inputs:
  proc:
    process_matcher:
    - match_usernames: []

Schema:

KeyValue
Typestring

Description:

The default value [] matches all usernames.

Only in Container {#inputs.proc.process_matcher.only_in_container}

Tags:

agent_restart

FQCN:

inputs.proc.process_matcher.only_in_container

Default value:

inputs:
  proc:
    process_matcher:
    - only_in_container: true

Schema:

KeyValue
Typebool

Description:

The default value true means only processes running in containers are matched.

Only with Tag {#inputs.proc.process_matcher.only_with_tag}

Tags:

agent_restart

FQCN:

inputs.proc.process_matcher.only_with_tag

Upgrade from old version: static_config.os-proc-sync-tagged-only

Default value:

inputs:
  proc:
    process_matcher:
    - only_with_tag: false

Schema:

KeyValue
Typebool

Description:

The default value false means processes are matched whether or not they have tags.

Ignore {#inputs.proc.process_matcher.ignore}

Tags:

agent_restart

FQCN:

inputs.proc.process_matcher.ignore

Upgrade from old version: static_config.os-proc-regex.action

Default value:

inputs:
  proc:
    process_matcher:
    - ignore: false

Schema:

KeyValue
Typebool

Description:

Whether to ignore matched processes.

Rewrite Name {#inputs.proc.process_matcher.rewrite_name}

Tags:

agent_restart

FQCN:

inputs.proc.process_matcher.rewrite_name

Upgrade from old version: static_config.os-proc-regex.rewrite-name

Default value:

inputs:
  proc:
    process_matcher:
    - rewrite_name: ''

Schema:

KeyValue
Typestring

Description:

The new process name used after a match.

Enabled Features {#inputs.proc.process_matcher.enabled_features}

Tags:

agent_restart

FQCN:

inputs.proc.process_matcher.enabled_features

Upgrade from old version: static_config.ebpf.on-cpu-profile.regex, static_config.ebpf.off-cpu-profile.regex

Default value:

inputs:
  proc:
    process_matcher:
    - enabled_features: []

Enum options:

ValueNote
proc.socket_list
proc.symbol_table
proc.proc_event
ebpf.socket.uprobe.golang
ebpf.socket.uprobe.tls
ebpf.socket.uprobe.rdma
ebpf.file.io_event
ebpf.file.management_event
ebpf.profile.on_cpu
ebpf.profile.off_cpu
ebpf.profile.memory
ebpf.profile.cuda
ebpf.profile.hbm

Schema:

KeyValue
Typestring

Description:

Enabled feature list.

Symbol Table {#inputs.proc.symbol_table}

Golang-specific {#inputs.proc.symbol_table.golang_specific}

Enabled {#inputs.proc.symbol_table.golang_specific.enabled}

Tags:

agent_restart

FQCN:

inputs.proc.symbol_table.golang_specific.enabled

Upgrade from old version: static_config.ebpf.uprobe-process-name-regexs.golang-symbol

Default value:

inputs:
  proc:
    symbol_table:
      golang_specific:
        enabled: false

Schema:

KeyValue
Typestring

Description:

Whether to enable Golang-specific symbol table parsing.

This feature acts on Golang processes that have trimmed the standard symbol table. When this feature is enabled, for processes with Golang version >= 1.13 and < 1.18, the Golang-specific symbol table will be parsed to complete uprobe data collection when the standard symbol table is missing. Note that enabling this feature may cause the eBPF initialization process to take ten minutes. The golang-symbol configuration item depends on the golang configuration item; golang-symbol is a subset of golang.

Example:

  • Ensure that the regular expression matching for the 'golang' configuration item is enabled, for example: golang: .*
  • You've encountered the following warning log:
    [eBPF] WARNING: func resolve_bin_file() [user/go_tracer.c:558] Go process pid 1946
    [path: /proc/1946/root/usr/local/bin/kube-controller-manager] (version: go1.16). Not find any symbols!
    
    Suppose there is a Golang process with a process ID of '1946.'
  • To initially confirm whether the executable file for this process has a symbol table:
    • Retrieve the executable file's path using the process ID:
      # ls -al /proc/1946/exe
      /proc/1946/exe -> /usr/local/bin/kube-controller-manager
      
    • Check if there is a symbol table:
      # nm /proc/1946/root/usr/local/bin/kube-controller-manager
      nm: /proc/1946/root/usr/local/bin/kube-controller-manager: no symbols
      
  • If "no symbols" is encountered, it indicates the absence of a symbol table. In such a scenario, we need to configure the "golang-symbol" setting.
  • During the agent startup process, you will observe the following log information: (The entry address for the function crypto/tls.(*Conn).Write has already been resolved, i.e., entry:0x25fca0).
    [eBPF] INFO Uprobe [/proc/1946/root/usr/local/bin/kube-controller-manager] pid:1946 go1.16.0
    entry:0x25fca0 size:1952 symname:crypto/tls.(*Conn).Write probe_func:uprobe_go_tls_write_enter rets_count:0
    
    The logs indicate that the Golang program has been successfully hooked.

Java {#inputs.proc.symbol_table.java}

Refresh Defer Duration {#inputs.proc.symbol_table.java.refresh_defer_duration}

Tags:

agent_restart

FQCN:

inputs.proc.symbol_table.java.refresh_defer_duration

Upgrade from old version: static_config.ebpf.java-symbol-file-refresh-defer-interval

Default value:

inputs:
  proc:
    symbol_table:
      java:
        refresh_defer_duration: 60s

Schema:

KeyValue
Typeduration
Range['5s', '3600s']

Description:

When deepflow-agent finds that an unresolved function name appears in the function call stack of a Java process, it will trigger the regeneration of the symbol file of the process. Because Java utilizes the Just-In-Time (JIT) compilation mechanism, to obtain more symbols for Java processes, the regeneration will be deferred for a period of time.

At the startup of a Java program, the JVM and JIT compiler are in a "warm-up" phase. During this period, symbol changes are typically frequent due to the dynamic compilation and optimization processes. Therefore, deepflow-agent delays symbol collection for one minute after the Java program starts, allowing the JVM and JIT to "warm up" and symbol churn to settle before proceeding with the collection.

Maximum Symbol File Size {#inputs.proc.symbol_table.java.max_symbol_file_size}

Tags:

agent_restart

FQCN:

inputs.proc.symbol_table.java.max_symbol_file_size

Upgrade from old version: static_config.ebpf.java-symbol-file-max-space-limit

Default value:

inputs:
  proc:
    symbol_table:
      java:
        max_symbol_file_size: 10

Schema:

KeyValue
Typeint
UnitMiB
Range[2, 100]

Description:

All Java symbol files are stored in the '/tmp' directory mounted by the deepflow-agent. To prevent excessive occupation of host node space due to large Java symbol files, a maximum size limit is set for each generated Java symbol file.

cBPF {#inputs.cbpf}

Common {#inputs.cbpf.common}

Packet Capture Mode {#inputs.cbpf.common.capture_mode}

Tags:

hot_update

FQCN:

inputs.cbpf.common.capture_mode

Upgrade from old version: tap_mode

Default value:

inputs:
  cbpf:
    common:
      capture_mode: 0

Enum options:

Value   Note
0       Local
1       Virtual Mirror
2       Physical Mirror

Schema:

KeyValue
Typeint

Description:

Mirror mode is used when deepflow-agent cannot directly capture the traffic from the source. For example:

  • in the K8s macvlan environment, capture the Pod traffic through the Node NIC
  • in the Hyper-V environment, capture the VM traffic through the Hypervisor NIC
  • in the ESXi environment, capture traffic through VDS/VSS local SPAN
  • in the DPDK environment, capture traffic through DPDK ring buffer

Use Physical Mirror mode when deepflow-agent captures traffic through physical switch mirroring.

Physical Mirror is only supported in the Enterprise Edition.

Capture via AF_PACKET {#inputs.cbpf.af_packet}

Interface Regex {#inputs.cbpf.af_packet.interface_regex}

Tags:

hot_update

FQCN:

inputs.cbpf.af_packet.interface_regex

Upgrade from old version: tap_interface_regex

Default value:

inputs:
  cbpf:
    af_packet:
      interface_regex: ^(tap.*|cali.*|veth.*|eth.*|en[osipx].*|lxc.*|lo|[0-9a-f]+_h)$

Schema:

KeyValue
Typestring
Range[0, 65535]

Description:

Regular expression of NIC name for collecting traffic.

Explanation of the default configuration:

Localhost:     lo
Common NIC:    eth.*|en[osipx].*
QEMU VM NIC:   tap.*
Flannel:       veth.*
Calico:        cali.*
Cilium:        lxc.*
Kube-OVN:      [0-9a-f]+_h$

When this configuration is empty, no NIC traffic is collected.
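
For instance, to restrict capture to common host NICs only, the regular expression could be narrowed as follows (illustrative):

inputs:
  cbpf:
    af_packet:
      interface_regex: ^(eth.*|en[osipx].*)$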

Bond Interfaces {#inputs.cbpf.af_packet.bond_interfaces}

Tags:

agent_restart

FQCN:

inputs.cbpf.af_packet.bond_interfaces

Upgrade from old version: static_config.tap-interface-bond-groups

Default value:

inputs:
  cbpf:
    af_packet:
      bond_interfaces: []

Schema:

KeyValue
Typedict

Description:

Packets from interfaces in the same group can be aggregated together. Only effective when capture_mode is 0.

Example:

inputs:
  cbpf:
    af_packet:
      bond_interfaces:
      - slave_interfaces: [eth0, eth1]
      - slave_interfaces: [eth2, eth3]

Slave Interfaces {#inputs.cbpf.af_packet.bond_interfaces.slave_interfaces}

Tags:

agent_restart

FQCN:

inputs.cbpf.af_packet.bond_interfaces.slave_interfaces

Upgrade from old version: static_config.tap-interface-bond-groups.tap-interfaces

Default value:

inputs:
  cbpf:
    af_packet:
      bond_interfaces:
      - slave_interfaces: []

Schema:

KeyValue
Typestring

Description:

The slave interfaces of one bond interface.

Extra Network Namespace Regex {#inputs.cbpf.af_packet.extra_netns_regex}

Tags:

hot_update ee_feature

FQCN:

inputs.cbpf.af_packet.extra_netns_regex

Upgrade from old version: extra_netns_regex

Default value:

inputs:
  cbpf:
    af_packet:
      extra_netns_regex: ''

Schema:

KeyValue
Typestring

Description:

Packet will be captured in regex matched namespaces besides the default namespace. NICs captured in extra namespaces are also filtered with tap_interface_regex.

Default value "" means no extra network namespace (default namespace only).

Extra BPF Filter {#inputs.cbpf.af_packet.extra_bpf_filter}

Tags:

hot_update

FQCN:

inputs.cbpf.af_packet.extra_bpf_filter

Upgrade from old version: capture_bpf

Default value:

inputs:
  cbpf:
    af_packet:
      extra_bpf_filter: ''

Schema:

KeyValue
Typestring
Range[0, 512]

Description:

If not configured, all traffic will be collected. Please refer to BPF syntax: https://biot.com/capstats/bpf.html
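
For example, to exclude SSH traffic from capture, a standard BPF expression can be supplied (illustrative filter):

inputs:
  cbpf:
    af_packet:
      extra_bpf_filter: not port 22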

TAP Interfaces {#inputs.cbpf.af_packet.src_interfaces}

Tags:

deprecated

FQCN:

inputs.cbpf.af_packet.src_interfaces

Upgrade from old version: static_config.src-interfaces

Default value:

inputs:
  cbpf:
    af_packet:
      src_interfaces: []

Schema:

KeyValue
Typestring

VLAN PCP in Physical Mirror Traffic {#inputs.cbpf.af_packet.vlan_pcp_in_physical_mirror_traffic}

Tags:

agent_restart ee_feature

FQCN:

inputs.cbpf.af_packet.vlan_pcp_in_physical_mirror_traffic

Upgrade from old version: static_config.mirror-traffic-pcp

Default value:

inputs:
  cbpf:
    af_packet:
      vlan_pcp_in_physical_mirror_traffic: 0

Schema:

KeyValue
Typeint
Range[0, 8]

Description:

When this value is <= 7, the TAP value is calculated from the VLAN tag only if the VLAN PCP matches this value. When this value is 8, the TAP value is calculated from the outer VLAN tag; when it is 9, from the inner VLAN tag.

BPF Filter Disabled {#inputs.cbpf.af_packet.bpf_filter_disabled}

Tags:

agent_restart

FQCN:

inputs.cbpf.af_packet.bpf_filter_disabled

Upgrade from old version: static_config.bpf-disabled

Default value:

inputs:
  cbpf:
    af_packet:
      bpf_filter_disabled: false

Schema:

KeyValue
Typebool

Description:

It is found that there may be bugs in BPF traffic filtering under some versions of Linux Kernel. After this configuration is enabled, deepflow-agent will not use the filtering capabilities of BPF, and will filter by itself after capturing full traffic. Note that this may significantly increase the resource overhead of deepflow-agent.

Tunning {#inputs.cbpf.af_packet.tunning}

Socket Version {#inputs.cbpf.af_packet.tunning.socket_version}

Tags:

hot_update

FQCN:

inputs.cbpf.af_packet.tunning.socket_version

Upgrade from old version: capture_socket_type

Default value:

inputs:
  cbpf:
    af_packet:
      tunning:
        socket_version: 0

Enum options:

Value   Note
0       Adaptive
2       AF_PACKET V2
3       AF_PACKET V3

Schema:

KeyValue
Typeint

Description:

AF_PACKET socket version in Linux environment.

Ring Blocks Config Enabled {#inputs.cbpf.af_packet.tunning.ring_blocks_enabled}

Tags:

agent_restart

FQCN:

inputs.cbpf.af_packet.tunning.ring_blocks_enabled

Upgrade from old version: static_config.afpacket-blocks-enabled

Default value:

inputs:
  cbpf:
    af_packet:
      tunning:
        ring_blocks_enabled: false

Schema:

KeyValue
Typebool

Description:

When capture_mode != 2, you need to explicitly turn on this switch to configure 'afpacket-blocks'.

Ring Blocks {#inputs.cbpf.af_packet.tunning.ring_blocks}

Tags:

agent_restart

FQCN:

inputs.cbpf.af_packet.tunning.ring_blocks

Upgrade from old version: static_config.afpacket-blocks

Default value:

inputs:
  cbpf:
    af_packet:
      tunning:
        ring_blocks: 128

Schema:

KeyValue
Typeint
Range[8, 1000000]

Description:

deepflow-agent will automatically calculate the number of blocks used by AF_PACKET according to max_memory, which can also be specified using this configuration item. The size of each block is fixed at 1MB.

Packet Fanout Count {#inputs.cbpf.af_packet.tunning.packet_fanout_count}

Tags:

agent_restart

FQCN:

inputs.cbpf.af_packet.tunning.packet_fanout_count

Upgrade from old version: static_config.local-dispatcher-count

Default value:

inputs:
  cbpf:
    af_packet:
      tunning:
        packet_fanout_count: 1

Schema:

KeyValue
Typeint
Range[1, 64]

Description:

This configuration takes effect when capture_mode is 0 and extra_netns_regex is empty. PACKET_FANOUT enables load balancing and parallel processing, which can improve the performance and scalability of network applications. When packet_fanout_count is greater than 1, multiple dispatcher threads are launched, consuming more CPU and memory. Increasing packet_fanout_count helps reduce the operating system's software interrupts on multi-core CPU servers.

Attention: only valid when capture_mode is 0 (Local).

Packet Fanout Mode {#inputs.cbpf.af_packet.tunning.packet_fanout_mode}

Tags:

agent_restart

FQCN:

inputs.cbpf.af_packet.tunning.packet_fanout_mode

Upgrade from old version: static_config.packet-fanout-mode

Default value:

inputs:
  cbpf:
    af_packet:
      tunning:
        packet_fanout_mode: 0

Enum options:

Value   Note
0       PACKET_FANOUT_HASH
1       PACKET_FANOUT_LB
2       PACKET_FANOUT_CPU
3       PACKET_FANOUT_ROLLOVER
4       PACKET_FANOUT_RND
5       PACKET_FANOUT_QM
6       PACKET_FANOUT_CBPF
7       PACKET_FANOUT_EBPF

Schema:

KeyValue
Typeint

Description:

This configuration is a parameter used with the PACKET_FANOUT feature in the Linux kernel to specify the desired packet distribution algorithm. Refer to the Linux kernel documentation on PACKET_FANOUT for details.

Special Network {#inputs.cbpf.special_network}

DPDK {#inputs.cbpf.special_network.dpdk}

Enabled {#inputs.cbpf.special_network.dpdk.enabled}

Tags:

agent_restart ee_feature

FQCN:

inputs.cbpf.special_network.dpdk.enabled

Upgrade from old version: static_config.dpdk-enabled

Default value:

inputs:
  cbpf:
    special_network:
      dpdk:
        enabled: false

Schema:

KeyValue
Typebool

Description:

The DPDK RecvEngine is only started when this configuration item is turned on. Note that you also need to set capture_mode to 1. Please refer to https://dpdk-docs.readthedocs.io/en/latest/prog_guide/multi_proc_support.html

Libpcap {#inputs.cbpf.special_network.libpcap}

Enabled {#inputs.cbpf.special_network.libpcap.enabled}

Tags:

agent_restart ee_feature

FQCN:

inputs.cbpf.special_network.libpcap.enabled

Upgrade from old version: static_config.libpcap-enabled

Default value:

inputs:
  cbpf:
    special_network:
      libpcap:
        enabled: false

Schema:

KeyValue
Typebool

Description:

Supported on Windows and Linux; performance is low when capturing from multiple interfaces. Defaults to true on Windows and false on Linux.

vHost User {#inputs.cbpf.special_network.vhost_user}

vHost Socket Path {#inputs.cbpf.special_network.vhost_user.vhost_socket_path}

Tags:

agent_restart ee_feature

FQCN:

inputs.cbpf.special_network.vhost_user.vhost_socket_path

Upgrade from old version: static_config.vhost-socket-path

Default value:

inputs:
  cbpf:
    special_network:
      vhost_user:
        vhost_socket_path: ''

Schema:

KeyValue
Typebool

Description:

Supports running on Linux with mirror mode.

Physical Switch {#inputs.cbpf.special_network.physical_switch}

sFlow Receiving Ports {#inputs.cbpf.special_network.physical_switch.sflow_ports}

Tags:

agent_restart ee_feature

FQCN:

inputs.cbpf.special_network.physical_switch.sflow_ports

Upgrade from old version: static_config.xflow-collector.sflow-ports

Default value:

inputs:
  cbpf:
    special_network:
      physical_switch:
        sflow_ports: []

Schema:

KeyValue
Typeint
Range[1, 65535]

Description:

This feature is only supported by the Enterprise Edition of Trident. In general, sFlow uses port 6343. Default value [] means that no sFlow data will be collected.
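
An illustrative Enterprise Edition configuration listening on the conventional sFlow port:

inputs:
  cbpf:
    special_network:
      physical_switch:
        sflow_ports: [6343]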

NetFlow Receiving Ports {#inputs.cbpf.special_network.physical_switch.netflow_ports}

Tags:

agent_restart ee_feature

FQCN:

inputs.cbpf.special_network.physical_switch.netflow_ports

Upgrade from old version: static_config.xflow-collector.netflow-ports

Default value:

inputs:
  cbpf:
    special_network:
      physical_switch:
        netflow_ports: []

Schema:

KeyValue
Typeint
Range[1, 65535]

Description:

This feature is only supported by the Enterprise Edition of Trident. Additionally, only NetFlow v5 is currently supported. In general, NetFlow uses port 2055. Default value [] means that no NetFlow data will be collected.

Tunning {#inputs.cbpf.tunning}

Dispatcher Queue Enabled {#inputs.cbpf.tunning.dispatcher_queue_enabled}

Tags:

agent_restart

FQCN:

inputs.cbpf.tunning.dispatcher_queue_enabled

Upgrade from old version: static_config.dispatcher-queue

Default value:

inputs:
  cbpf:
    tunning:
      dispatcher_queue_enabled: false

Schema:

KeyValue
Typebool

Description:

This configuration takes effect when capture_mode is 0 or 2; the dispatcher queue is always enabled when capture_mode is 2.

Available for all recv_engines.

Maximum Capture Packet Size {#inputs.cbpf.tunning.max_capture_packet_size}

Tags:

hot_update

FQCN:

inputs.cbpf.tunning.max_capture_packet_size

Upgrade from old version: capture_packet_size

Default value:

inputs:
  cbpf:
    tunning:
      max_capture_packet_size: 65535

Schema:

KeyValue
Typeint
Unitbyte
Range[128, 65535]

Description:

DPDK environment does not support this configuration.

Raw Packet Buffer Block Size {#inputs.cbpf.tunning.raw_packet_buffer_block_size}

Tags:

agent_restart ee_feature

FQCN:

inputs.cbpf.tunning.raw_packet_buffer_block_size

Upgrade from old version: static_config.analyzer-raw-packet-block-size

Default value:

inputs:
  cbpf:
    tunning:
      raw_packet_buffer_block_size: 65536

Schema:

KeyValue
Typeint
Range[65536, 16000000]

Description:

A larger value reduces memory allocations for raw packets, but also delays the release of memory.

Raw Packet Queue Size {#inputs.cbpf.tunning.raw_packet_queue_size}

Tags:

agent_restart ee_feature

FQCN:

inputs.cbpf.tunning.raw_packet_queue_size

Upgrade from old version: static_config.analyzer-queue-size

Default value:

inputs:
  cbpf:
    tunning:
      raw_packet_queue_size: 131072

Schema:

KeyValue
Typeint
Range[65536, 64000000]

Description:

The length of the following queues (only for capture_mode = 2):

  • 0.1-bytes-to-parse
  • 0.2-packet-to-flowgenerator
  • 0.3-packet-to-pipeline

Max Capture PPS {#inputs.cbpf.tunning.max_capture_pps}

Tags:

hot_update

FQCN:

inputs.cbpf.tunning.max_capture_pps

Upgrade from old version: max_collect_pps

Default value:

inputs:
  cbpf:
    tunning:
      max_capture_pps: 200

Schema:

KeyValue
Typeint
UnitKpps
Range[1, 1000000]

Description:

Maximum packet rate allowed for collection.

Available for all recv_engines.

Preprocess {#inputs.cbpf.preprocess}

Tunnel Decap Protocols {#inputs.cbpf.preprocess.tunnel_decap_protocols}

Tags:

hot_update

FQCN:

inputs.cbpf.preprocess.tunnel_decap_protocols

Upgrade from old version: decap_type

Default value:

inputs:
  cbpf:
    preprocess:
      tunnel_decap_protocols:
      - 1
      - 2

Enum options:

Value   Note
1       VXLAN
2       IPIP
3       GRE
4       Geneve

Schema:

KeyValue
Typeint

Description:

Decapsulation tunnel protocols.
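
For example, to additionally decapsulate GRE and Geneve tunnels on top of the defaults (illustrative):

inputs:
  cbpf:
    preprocess:
      tunnel_decap_protocols: [1, 2, 3, 4]   # VXLAN, IPIP, GRE, Geneve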

Tunnel Trim Protocols {#inputs.cbpf.preprocess.tunnel_trim_protocols}

Tags:

agent_restart

FQCN:

inputs.cbpf.preprocess.tunnel_trim_protocols

Upgrade from old version: static_config.trim-tunnel-types

Default value:

inputs:
  cbpf:
    preprocess:
      tunnel_trim_protocols: []

Enum options:

ValueNote
ERSPAN
VXLAN
TEB

Schema:

KeyValue
Typestring

Description:

Whether to remove the tunnel header in mirrored traffic.

Physical Mirror Traffic {#inputs.cbpf.physical_mirror}

Default Capture Network Type {#inputs.cbpf.physical_mirror.default_capture_network_type}

Tags:

agent_restart ee_feature

FQCN:

inputs.cbpf.physical_mirror.default_capture_network_type

Upgrade from old version: static_config.default-tap-type

Default value:

inputs:
  cbpf:
    physical_mirror:
      default_capture_network_type: 3

Enum options:

Value             Note
3                 Cloud Network
DYNAMIC_OPTIONS   DYNAMIC_OPTIONS

Schema:

KeyValue
Typeint

Description:

deepflow-agent marks the TAP (Traffic Access Point) location according to the outer VLAN tag in the mirrored traffic of the physical switch. When the VLAN tag has no corresponding TAP value, or the VLAN PCP does not match vlan_pcp_in_physical_mirror_traffic, the TAP value of this configuration item is assigned. The default value 3 means Cloud Network.

Packet Dedup Disabled {#inputs.cbpf.physical_mirror.packet_dedup_disabled}

Tags:

agent_restart ee_feature

FQCN:

inputs.cbpf.physical_mirror.packet_dedup_disabled

Upgrade from old version: static_config.analyzer-dedup-disabled

Default value:

inputs:
  cbpf:
    physical_mirror:
      packet_dedup_disabled: false

Schema:

KeyValue
Typebool

Description:

Whether to enable mirror traffic deduplication when capture_mode = 2.

Gateway Traffic of Private Cloud {#inputs.cbpf.physical_mirror.private_cloud_gateway_traffic}

Tags:

agent_restart ee_feature

FQCN:

inputs.cbpf.physical_mirror.private_cloud_gateway_traffic

Upgrade from old version: static_config.cloud-gateway-traffic

Default value:

inputs:
  cbpf:
    physical_mirror:
      private_cloud_gateway_traffic: false

Schema:

KeyValue
Typebool

Description:

Whether it is the mirrored traffic of NFVGW (cloud gateway).

eBPF {#inputs.ebpf}

Disabled {#inputs.ebpf.disabled}

Tags:

agent_restart

FQCN:

inputs.ebpf.disabled

Upgrade from old version: static_config.ebpf.disabled

Default value:

inputs:
  ebpf:
    disabled: false

Schema:

KeyValue
Typebool

Description:

Whether to enable eBPF features.

Socket {#inputs.ebpf.socket}

Uprobe {#inputs.ebpf.socket.uprobe}

Golang {#inputs.ebpf.socket.uprobe.golang}

Enabled {#inputs.ebpf.socket.uprobe.golang.enabled}

Tags:

agent_restart

FQCN:

inputs.ebpf.socket.uprobe.golang.enabled

Upgrade from old version: static_config.ebpf.uprobe-process-name-regexs.golang

Default value:

inputs:
  ebpf:
    socket:
      uprobe:
        golang:
          enabled: false

Schema:

KeyValue
Typestring

Description:

Whether Golang processes enable HTTP2/HTTPS protocol data collection and auto-tracing. Golang auto-tracing also depends on tracing_timeout (go-tracing-timeout).

Tracing Timeout {#inputs.ebpf.socket.uprobe.golang.tracing_timeout}

Tags:

agent_restart

FQCN:

inputs.ebpf.socket.uprobe.golang.tracing_timeout

Upgrade from old version: static_config.ebpf.go-tracing-timeout

Default value:

inputs:
  ebpf:
    socket:
      uprobe:
        golang:
          tracing_timeout: 120s

Schema:

KeyValue
Typeduration
Range[0, '1d']

Description:

The expected maximum time interval between the server receiving the request and returning the response. If the value is 0, this feature is disabled and tracing only considers the thread number.

TLS {#inputs.ebpf.socket.uprobe.tls}

Enabled {#inputs.ebpf.socket.uprobe.tls.enabled}

Tags:

agent_restart

FQCN:

inputs.ebpf.socket.uprobe.tls.enabled

Upgrade from old version: static_config.ebpf.uprobe-process-name-regexs.openssl

Default value:

inputs:
  ebpf:
    socket:
      uprobe:
        tls:
          enabled: false

Schema:

KeyValue
Typestring

Description:

Whether processes that use the OpenSSL library enable HTTPS protocol data collection. The following method can be used to determine whether an application process can use uprobe to hook the OpenSSL library and access encrypted data:

Use the command cat /proc/<PID>/maps | grep "libssl.so" to check whether it contains information about OpenSSL. If it does, the process is using the OpenSSL library. After this option is enabled, deepflow-agent retrieves process information that matches the regular expression and hooks the corresponding encryption/decryption interfaces of the OpenSSL library.

In the logs, you will encounter a message similar to the following:

[eBPF] INFO openssl uprobe, pid:1005, path:/proc/1005/root/usr/lib64/libssl.so.1.0.2k

Kprobe {#inputs.ebpf.socket.kprobe}

Blacklist {#inputs.ebpf.socket.kprobe.blacklist}

Port Numbers {#inputs.ebpf.socket.kprobe.blacklist.ports}

Tags:

agent_restart

FQCN:

inputs.ebpf.socket.kprobe.blacklist.ports

Upgrade from old version: static_config.ebpf.kprobe-blacklist.port-list

Default value:

inputs:
  ebpf:
    socket:
      kprobe:
        blacklist:
          ports: ''

Schema:

KeyValue
Typestring

Description:

TCP & UDP port blacklist. Its priority is higher than the kprobe whitelist.

Example: ports: 80,1000-2000
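
The full configuration path for the inline example above looks like this (the ports are illustrative):

inputs:
  ebpf:
    socket:
      kprobe:
        blacklist:
          ports: "80,1000-2000"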

Whitelist {#inputs.ebpf.socket.kprobe.whitelist}

Port Numbers {#inputs.ebpf.socket.kprobe.whitelist.port}

Tags:

agent_restart

FQCN:

inputs.ebpf.socket.kprobe.whitelist.port

Upgrade from old version: static_config.ebpf.kprobe-whitelist.port-list

Default value:

inputs:
  ebpf:
    socket:
      kprobe:
        whitelist:
          port: ''

Schema:

KeyValue
Typestring

Description:

TCP & UDP port whitelist. Its priority is lower than the kprobe blacklist.

Example: ports: 80,1000-2000

Tunning {#inputs.ebpf.socket.tunning}

Max Capture Rate {#inputs.ebpf.socket.tunning.max_capture_rate}

Tags:

hot_update

FQCN:

inputs.ebpf.socket.tunning.max_capture_rate

Upgrade from old version: static_config.ebpf.global-ebpf-pps-threshold

Default value:

inputs:
  ebpf:
    socket:
      tunning:
        max_capture_rate: 0

Schema:

KeyValue
Typeint
UnitPer Second
Range[0, 64000000]

Description:

Default value 0 means no limitation.

Syscall_trace_id Disabled {#inputs.ebpf.socket.tunning.syscall_trace_id_disabled}

Tags:

agent_restart

FQCN:

inputs.ebpf.socket.tunning.syscall_trace_id_disabled

Default value:

inputs:
  ebpf:
    socket:
      tunning:
        syscall_trace_id_disabled: false

Schema:

KeyValue
Typebool

Description:

When the trace_id is injected into all requests, the computation logic for all syscall_trace_id can be turned off. This will significantly reduce the impact of the eBPF hook on the CPU consumption of the application process.

Preprocess {#inputs.ebpf.socket.preprocess}

OOOR Cache Size {#inputs.ebpf.socket.preprocess.out_of_order_reassembly_cache_size}

Tags:

agent_restart ee_feature

FQCN:

inputs.ebpf.socket.preprocess.out_of_order_reassembly_cache_size

Upgrade from old version: static_config.ebpf.syscall-out-of-order-cache-size

Default value:

inputs:
  ebpf:
    socket:
      preprocess:
        out_of_order_reassembly_cache_size: 16

Schema:

KeyValue
Typeint
Range[8, 1024]

Description:

OOOR: Out Of Order Reassembly

When syscall-out-of-order-reassembly is enabled, up to syscall-out-of-order-cache-size eBPF socket events (each event consuming up to l7_log_packet_size bytes) will be cached in each TCP/UDP flow to prevent out-of-order events from impacting application protocol parsing. Since eBPF socket events are sent to user space in batches, out-of-order scenarios mainly occur when requests and responses within a single session are processed by different CPUs, causing the response to reach user space before the request.

OOOR Protocols {#inputs.ebpf.socket.preprocess.out_of_order_reassembly_protocols}

Tags:

agent_restart ee_feature

FQCN:

inputs.ebpf.socket.preprocess.out_of_order_reassembly_protocols

Upgrade from old version: static_config.ebpf.syscall-out-of-order-reassembly

Default value:

inputs:
  ebpf:
    socket:
      preprocess:
        out_of_order_reassembly_protocols: []

Enum options:

Value             Note
DYNAMIC_OPTIONS   DYNAMIC_OPTIONS

Schema:

KeyValue
Typestring

Description:

OOOR: Out Of Order Reassembly

When this capability is enabled for a specific application protocol, the agent will add out-of-order-reassembly processing for it. Note that the agent will consume more memory in this case, so please adjust the syscall-out-of-order-cache-size accordingly and monitor the agent's memory usage.

Supported protocols: https://www.deepflow.io/docs/features/l7-protocols/overview/

Attention: use HTTP2 for gRPC Protocol.
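
For example, to enable out-of-order reassembly for HTTP and Dubbo while enlarging the cache (the protocol names and cache size below are illustrative):

inputs:
  ebpf:
    socket:
      preprocess:
        out_of_order_reassembly_cache_size: 32
        out_of_order_reassembly_protocols:
        - HTTP
        - Dubbo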

SR Protocols {#inputs.ebpf.socket.preprocess.segmentation_reassembly_protocols}

Tags:

agent_restart ee_feature

FQCN:

inputs.ebpf.socket.preprocess.segmentation_reassembly_protocols

Upgrade from old version: static_config.ebpf.syscall-segmentation-reassembly

Default value:

inputs:
  ebpf:
    socket:
      preprocess:
        segmentation_reassembly_protocols: []

Enum options:

ValueNote
DYNAMIC_OPTIONSDYNAMIC_OPTIONS

Schema:

KeyValue
Typestring

Description:

SR: Segmentation Reassembly

When this capability is enabled for a specific application protocol, the agent will add segmentation-reassembly processing to merge application protocol content spread across multiple syscalls before parsing it. This enhances the success rate of application protocol parsing. Note that syscall-out-of-order-reassembly must also be enabled for this feature to be effective.

Supported protocols: https://www.deepflow.io/docs/features/l7-protocols/overview/

Attention: use HTTP2 for gRPC Protocol.
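
As a sketch, enabling segmentation reassembly for HTTP2 also requires listing the same protocol for out-of-order reassembly (the protocol choice is illustrative):

inputs:
  ebpf:
    socket:
      preprocess:
        out_of_order_reassembly_protocols:
        - HTTP2
        segmentation_reassembly_protocols:
        - HTTP2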

File {#inputs.ebpf.file}

IO Event {#inputs.ebpf.file.io_event}

Collect Mode {#inputs.ebpf.file.io_event.collect_mode}

Tags:

agent_restart

FQCN:

inputs.ebpf.file.io_event.collect_mode

Upgrade from old version: static_config.ebpf.io-event-collect-mode

Default value:

inputs:
  ebpf:
    file:
      io_event:
        collect_mode: 1

Enum options:

ValueNote
0Disabled
1Request Life Cycle
2All

Schema:

KeyValue
Typeint

Description:

Collection modes:

  • 0: Indicates that no IO events are collected.
  • 1: Indicates that only IO events within the request life cycle are collected.
  • 2: Indicates that all IO events are collected.

Minimal Duration {#inputs.ebpf.file.io_event.minimal_duration}

Tags:

agent_restart

FQCN:

inputs.ebpf.file.io_event.minimal_duration

Upgrade from old version: static_config.ebpf.io-event-minimal-duration

Default value:

inputs:
  ebpf:
    file:
      io_event:
        minimal_duration: 1ms

Schema:

KeyValue
Typeduration
Range['1ns', '1s']

Description:

Only collect IO events with delay exceeding this threshold.
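
For example, to collect only IO events within the request life cycle that take longer than 10ms (the threshold here is illustrative):

inputs:
  ebpf:
    file:
      io_event:
        collect_mode: 1
        minimal_duration: 10ms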

Profile {#inputs.ebpf.profile}

On-CPU {#inputs.ebpf.profile.on_cpu}

Disabled {#inputs.ebpf.profile.on_cpu.disabled}

Tags:

agent_restart

FQCN:

inputs.ebpf.profile.on_cpu.disabled

Upgrade from old version: static_config.ebpf.on-cpu-profile.disabled

Default value:

inputs:
  ebpf:
    profile:
      on_cpu:
        disabled: false

Schema:

KeyValue
Typebool

Description:

eBPF On-CPU profile switch.

Sampling Frequency {#inputs.ebpf.profile.on_cpu.sampling_frequency}

Tags:

agent_restart

FQCN:

inputs.ebpf.profile.on_cpu.sampling_frequency

Upgrade from old version: static_config.ebpf.on-cpu-profile.frequency

Default value:

inputs:
  ebpf:
    profile:
      on_cpu:
        sampling_frequency: 99

Schema:

KeyValue
Typeint
Range[1, 1000]

Description:

eBPF On-CPU profile sampling frequency.

Aggregate by CPU {#inputs.ebpf.profile.on_cpu.aggregate_by_cpu}

Tags:

agent_restart

FQCN:

inputs.ebpf.profile.on_cpu.aggregate_by_cpu

Upgrade from old version: static_config.ebpf.on-cpu-profile.cpu

Default value:

inputs:
  ebpf:
    profile:
      on_cpu:
        aggregate_by_cpu: false

Schema:

KeyValue
Typebool

Description:

Whether to obtain the CPU ID and include it when aggregating stack trace data.

  • Set to true: the CPU ID is obtained and included in the aggregation of stack trace data.
  • Set to false: the CPU ID is not included in the aggregation; the CPU value reported with stack trace data is the special value CPU_INVALID (0xfff), which indicates an invalid value.

Off-CPU {#inputs.ebpf.profile.off_cpu}

Disabled {#inputs.ebpf.profile.off_cpu.disabled}

Tags:

agent_restart ee_feature

FQCN:

inputs.ebpf.profile.off_cpu.disabled

Upgrade from old version: static_config.ebpf.off-cpu-profile.disabled

Default value:

inputs:
  ebpf:
    profile:
      off_cpu:
        disabled: true

Schema:

KeyValue
Typebool

Description:

eBPF Off-CPU profile switch.

Aggregate by CPU {#inputs.ebpf.profile.off_cpu.aggregate_by_cpu}

Tags:

agent_restart ee_feature

FQCN:

inputs.ebpf.profile.off_cpu.aggregate_by_cpu

Upgrade from old version: static_config.ebpf.off-cpu-profile.cpu

Default value:

inputs:
  ebpf:
    profile:
      off_cpu:
        aggregate_by_cpu: false

Schema:

KeyValue
Typebool

Description:

Whether to obtain the CPU ID and include it when aggregating stack trace data.

  • Set to true: the CPU ID is obtained and included in the aggregation of stack trace data.
  • Set to false: the CPU ID is not included in the aggregation; the CPU value reported with stack trace data is the special value CPU_INVALID (0xfff), which indicates an invalid value.

Minimum Blocking Time {#inputs.ebpf.profile.off_cpu.min_blocking_time}

Tags:

agent_restart ee_feature

FQCN:

inputs.ebpf.profile.off_cpu.min_blocking_time

Upgrade from old version: static_config.ebpf.off-cpu-profile.minblock

Default value:

inputs:
  ebpf:
    profile:
      off_cpu:
        min_blocking_time: 50us

Schema:

KeyValue
Typeduration
Range[0, '1h']

Description:

If set to 0, there will be no minimum value limitation. Scheduler events are still high-frequency events, as their rate may exceed 1 million events per second, so caution should still be exercised.

If overhead remains an issue, you can tune this parameter: if the off-CPU time is less than the configured value, the data is discarded. If your goal is to trace longer blocking events, increasing this value filters out shorter blocking events and further reduces overhead. Additionally, events with a blocking time exceeding 1 hour are not collected.
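
For instance, to enable off-CPU profiling while discarding blocking events shorter than 1ms (the threshold is illustrative):

inputs:
  ebpf:
    profile:
      off_cpu:
        disabled: false
        min_blocking_time: 1ms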

Memory {#inputs.ebpf.profile.memory}

Disabled {#inputs.ebpf.profile.memory.disabled}

Tags:

agent_restart ee_feature

FQCN:

inputs.ebpf.profile.memory.disabled

Upgrade from old version: static_config.ebpf.memory-profile.disabled

Default value:

inputs:
  ebpf:
    profile:
      memory:
        disabled: true

Schema:

KeyValue
Typebool

Description:

eBPF memory profile switch.

Preprocess {#inputs.ebpf.profile.preprocess}

Stack Compression {#inputs.ebpf.profile.preprocess.stack_compression}

Tags:

agent_restart

FQCN:

inputs.ebpf.profile.preprocess.stack_compression

Default value:

inputs:
  ebpf:
    profile:
      preprocess:
        stack_compression: true

Schema:

KeyValue
Typebool

Description:

Compress the call stack before sending data. Compression can effectively reduce the agent's memory usage, data transmission bandwidth consumption, and ingester's CPU overhead. However, it also increases the CPU usage of the agent. Tests have shown that compressing the on-cpu function call stack of the deepflow-agent can reduce bandwidth consumption by x times, but it will result in an additional y% CPU usage for the agent.

Tunning {#inputs.ebpf.tunning}

Collector Queue Size {#inputs.ebpf.tunning.collector_queue_size}

Tags:

agent_restart

FQCN:

inputs.ebpf.tunning.collector_queue_size

Upgrade from old version: static_config.ebpf-collector-queue-size

Default value:

inputs:
  ebpf:
    tunning:
      collector_queue_size: 65535

Schema:

KeyValue
Typeint
Range[4096, 64000000]

Description:

The length of the following queues:

  • 0-ebpf-to-ebpf-collector
  • 1-proc-event-to-sender
  • 1-profile-to-sender

Userspace Worker Threads {#inputs.ebpf.tunning.userspace_worker_threads}

Tags:

agent_restart

FQCN:

inputs.ebpf.tunning.userspace_worker_threads

Upgrade from old version: static_config.ebpf.thread-num

Default value:

inputs:
  ebpf:
    tunning:
      userspace_worker_threads: 1

Schema:

KeyValue
Typeint
Range[1, 1024]

Description:

The number of worker threads, i.e., how many threads participate in user-space data processing. The effective maximum is the number of logical CPU cores on the host.

Perf Pages Count {#inputs.ebpf.tunning.perf_pages_count}

Tags:

agent_restart

FQCN:

inputs.ebpf.tunning.perf_pages_count

Upgrade from old version: static_config.ebpf.perf-pages-count

Default value:

inputs:
  ebpf:
    tunning:
      perf_pages_count: 128

Schema:

KeyValue
Typeint
Range[32, 8192]

Description:

The number of pages occupied by the kernel's shared memory, used for perf data transfer. The value must be a power of two, 2^n (5 <= n <= 13). If the configured value falls between 2^n and 2^(n+1), the eBPF configurator automatically rounds it down to 2^n.
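
For example, setting the value to 256 (2^8) keeps it a valid power of two; a value such as 300 would be rounded down to 256 (the numbers here are illustrative):

inputs:
  ebpf:
    tunning:
      perf_pages_count: 256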

Kernel Ring Size {#inputs.ebpf.tunning.kernel_ring_size}

Tags:

agent_restart

FQCN:

inputs.ebpf.tunning.kernel_ring_size

Upgrade from old version: static_config.ebpf.ring-size

Default value:

inputs:
  ebpf:
    tunning:
      kernel_ring_size: 65536

Schema:

KeyValue
Typeint
Range[8192, 131072]

Description:

The size of the ring cache queue. The value must be a power of two, 2^n (13 <= n <= 17). If the configured value falls between 2^n and 2^(n+1), the eBPF configurator automatically rounds it down to 2^n.

Maximum Socket Entries {#inputs.ebpf.tunning.max_socket_entries}

Tags:

agent_restart

FQCN:

inputs.ebpf.tunning.max_socket_entries

Upgrade from old version: static_config.ebpf.max-socket-entries

Default value:

inputs:
  ebpf:
    tunning:
      max_socket_entries: 131072

Schema:

KeyValue
Typeint
Range[100000, 2000000]

Description:

Sets the maximum number of hash table entries for socket tracking. Size it according to the number of concurrent requests in the actual scenario.

Socket Map Reclaim Threshold {#inputs.ebpf.tunning.socket_map_reclaim_threshold}

Tags:

agent_restart

FQCN:

inputs.ebpf.tunning.socket_map_reclaim_threshold

Upgrade from old version: static_config.ebpf.socket-map-max-reclaim

Default value:

inputs:
  ebpf:
    tunning:
      socket_map_reclaim_threshold: 120000

Schema:

KeyValue
Typeint
Range[100000, 2000000]

Description:

The threshold for cleaning socket map table entries.

Maximum Trace Entries {#inputs.ebpf.tunning.max_trace_entries}

Tags:

agent_restart

FQCN:

inputs.ebpf.tunning.max_trace_entries

Upgrade from old version: static_config.ebpf.max-trace-entries

Default value:

inputs:
  ebpf:
    tunning:
      max_trace_entries: 131072

Schema:

KeyValue
Typeint
Range[100000, 2000000]

Description:

Sets the maximum number of hash table entries for thread/coroutine tracking sessions.

Resources {#inputs.resources}

Push Interval {#inputs.resources.push_interval}

Tags:

hot_update

FQCN:

inputs.resources.push_interval

Upgrade from old version: platform_sync_interval

Default value:

inputs:
  resources:
    push_interval: 10s

Schema:

KeyValue
Typeduration
Range['10s', '3600s']

Description:

The interval at which deepflow-agent actively reports resource information to deepflow-server.

Collect Private Cloud Resource {#inputs.resources.private_cloud}

Hypervisor Resource Enabled {#inputs.resources.private_cloud.hypervisor_resource_enabled}

Tags:

hot_update

FQCN:

inputs.resources.private_cloud.hypervisor_resource_enabled

Upgrade from old version: platform_enabled

Default value:

inputs:
  resources:
    private_cloud:
      hypervisor_resource_enabled: false

Schema:

KeyValue
Typebool

Description:

When enabled, deepflow-agent will automatically synchronize virtual machine and network information on the KVM (or Host) to deepflow-server.

VM MAC Source {#inputs.resources.private_cloud.vm_mac_source}

Tags:

hot_update

FQCN:

inputs.resources.private_cloud.vm_mac_source

Upgrade from old version: if_mac_source

Default value:

inputs:
  resources:
    private_cloud:
      vm_mac_source: 0

Enum options:

ValueNote
0Interface MAC Address
1Interface Name
2Qemu XML File

Schema:

KeyValue
Typeint

Description:

How to extract the real MAC address of the virtual machine when the agent runs on the KVM host.

Explanation of the options:

  • 0: extracted from tap interface MAC address
  • 1: extracted from tap interface name
  • 2: extracted from the XML file of the virtual machine
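
For example, a KVM host configuration that reads VM MAC addresses from the libvirt XML files (a sketch; adjust the values to your environment):

inputs:
  resources:
    private_cloud:
      hypervisor_resource_enabled: true
      vm_mac_source: 2
      vm_xml_directory: /etc/libvirt/qemu/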

VM XML Directory {#inputs.resources.private_cloud.vm_xml_directory}

Tags:

hot_update

FQCN:

inputs.resources.private_cloud.vm_xml_directory

Upgrade from old version: vm_xml_path

Default value:

inputs:
  resources:
    private_cloud:
      vm_xml_directory: /etc/libvirt/qemu/

Schema:

KeyValue
Typestring
Range[0, 100]

Description:

VM XML file directory.

VM MAC Mapping Script {#inputs.resources.private_cloud.vm_mac_mapping_script}

Tags:

agent_restart

FQCN:

inputs.resources.private_cloud.vm_mac_mapping_script

Upgrade from old version: static_config.tap-mac-script

Default value:

inputs:
  resources:
    private_cloud:
      vm_mac_mapping_script: ''

Schema:

KeyValue
Typestring
Range[0, 100]

Description:

In complex environments, the MAC address mapping of TAP NICs can be constructed by a script. The following conditions must be met to use this script:

  1. if_mac_source = 2
  2. tap_mode = 0
  3. The name of the TAP NIC is the same as in the virtual machine XML file
  4. The format of the script output is as follows:
    • tap2d283dfe,11:22:33:44:55:66
    • tap2d283223,aa:bb:cc:dd:ee:ff
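
A sketch of pointing the agent at such a script; the script path below is hypothetical:

inputs:
  resources:
    private_cloud:
      vm_mac_mapping_script: /usr/local/bin/tap-mac-mapping.sh  # hypothetical script path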

Collect K8s Resource {#inputs.resources.kubernetes}

Enabled {#inputs.resources.kubernetes.enabled}

Tags:

hot_update

FQCN:

inputs.resources.kubernetes.enabled

Upgrade from old version: kubernetes_api_enabled

Default value:

inputs:
  resources:
    kubernetes:
      enabled: false

Schema:

KeyValue
Typebool

Description:

When there are multiple deepflow-agents in the same K8s cluster, only one deepflow-agent will be enabled to collect K8s resources.

K8s Namespace {#inputs.resources.kubernetes.kubernetes_namespace}

Tags:

agent_restart

FQCN:

inputs.resources.kubernetes.kubernetes_namespace

Upgrade from old version: static_config.kubernetes-namespace

Default value:

inputs:
  resources:
    kubernetes:
      kubernetes_namespace: null

Schema:

KeyValue
Typestring

Description:

TODO

K8s API Resources {#inputs.resources.kubernetes.api_resources}

Tags:

agent_restart

FQCN:

inputs.resources.kubernetes.api_resources

Upgrade from old version: static_config.kubernetes-resources

Default value:

inputs:
  resources:
    kubernetes:
      api_resources:
      - name: namespaces
      - name: nodes
      - name: pods
      - name: replicationcontrollers
      - name: services
      - name: daemonsets
      - name: deployments
      - name: replicasets
      - name: statefulsets
      - name: ingresses

Schema:

KeyValue
Typedict

Description:

Specify kubernetes resources to watch.

To disable a resource, add an entry to the list with disabled: true:

inputs:
  resources:
    kubernetes:
      api_resources:
      - name: services
        disabled: true

To enable a resource, add an entry for it to the list. Be advised that this setting overrides the default for the same resource. For example, enabling statefulsets in both the apps group (the default) and apps.kruise.io requires two entries:

inputs:
  resources:
    kubernetes:
      api_resources:
      - name: statefulsets
        group: apps
      - name: statefulsets
        group: apps.kruise.io
        version: v1beta1

To watch routes in OpenShift, you can use the following settings:

inputs:
  resources:
    kubernetes:
      api_resources:
      - name: ingresses
        disabled: true
      - name: routes

Name {#inputs.resources.kubernetes.api_resources.name}

Tags:

agent_restart

FQCN:

inputs.resources.kubernetes.api_resources.name

Upgrade from old version: static_config.kubernetes-resources.name

Default value:

inputs:
  resources:
    kubernetes:
      api_resources:
      - name: ''

Enum options:

ValueNote
namespaces
nodes
pods
replicationcontrollers
services
daemonsets
deployments
replicasets
statefulsets
ingresses

Schema:

KeyValue
Typestring

Description:

K8s API resource name.

Group {#inputs.resources.kubernetes.api_resources.group}

Tags:

agent_restart

FQCN:

inputs.resources.kubernetes.api_resources.group

Upgrade from old version: static_config.kubernetes-resources.group

Default value:

inputs:
  resources:
    kubernetes:
      api_resources:
      - group: ''

Schema:

KeyValue
Typestring

Description:

K8s API resource group.

Version {#inputs.resources.kubernetes.api_resources.version}

Tags:

agent_restart

FQCN:

inputs.resources.kubernetes.api_resources.version

Upgrade from old version: static_config.kubernetes-resources.version

Default value:

inputs:
  resources:
    kubernetes:
      api_resources:
      - version: ''

Schema:

KeyValue
Typestring

Description:

K8s API version.

Disabled {#inputs.resources.kubernetes.api_resources.disabled}

Tags:

agent_restart

FQCN:

inputs.resources.kubernetes.api_resources.disabled

Upgrade from old version: static_config.kubernetes-resources.disabled

Default value:

inputs:
  resources:
    kubernetes:
      api_resources:
      - disabled: false

Schema:

KeyValue
Typebool

Description:

K8s API resource disabled.

Field Selector {#inputs.resources.kubernetes.api_resources.field_selector}

Tags:

agent_restart

FQCN:

inputs.resources.kubernetes.api_resources.field_selector

Upgrade from old version: static_config.kubernetes-resources.field-selector

Default value:

inputs:
  resources:
    kubernetes:
      api_resources:
      - field_selector: ''

Schema:

KeyValue
Typestring

Description:

K8s API resource field selector.

K8s API List Page Size {#inputs.resources.kubernetes.api_list_page_size}

Tags:

agent_restart

FQCN:

inputs.resources.kubernetes.api_list_page_size

Upgrade from old version: static_config.kubernetes-api-list-limit

Default value:

inputs:
  resources:
    kubernetes:
      api_list_page_size: 1000

Schema:

KeyValue
Typeint
Range[10, 4294967295]

Description:

Limits the page size (number of entries) used when listing K8s API resources.

K8s API List Maximum Interval {#inputs.resources.kubernetes.api_list_max_interval}

Tags:

agent_restart

FQCN:

inputs.resources.kubernetes.api_list_max_interval

Upgrade from old version: static_config.kubernetes-api-list-interval

Default value:

inputs:
  resources:
    kubernetes:
      api_list_max_interval: 10m

Schema:

KeyValue
Typeduration
Range['10m', '30d']

Description:

The interval at which resources are re-listed when the watcher is idle.

Ingress Flavour {#inputs.resources.kubernetes.ingress_flavour}

Tags:

deprecated

FQCN:

inputs.resources.kubernetes.ingress_flavour

Upgrade from old version: static_config.ingress-flavour

Default value:

inputs:
  resources:
    kubernetes:
      ingress_flavour: kubernetes

Schema:

KeyValue
Typestring

Pod MAC Collection Method {#inputs.resources.kubernetes.pod_mac_collection_method}

Tags:

agent_restart

FQCN:

inputs.resources.kubernetes.pod_mac_collection_method

Upgrade from old version: static_config.kubernetes-poller-type

Default value:

inputs:
  resources:
    kubernetes:
      pod_mac_collection_method: adaptive

Enum options:

ValueNote
adaptive
active
passive

Schema:

KeyValue
Typestring

Description:

In active mode, deepflow-agent enters the netns of other Pods via the setns syscall to query their MAC and IP addresses; the setns operation requires the SYS_ADMIN permission. In passive mode, deepflow-agent derives the MAC and IP addresses used by Pods by capturing ARP/ND traffic. When set to adaptive, active mode is tried first.
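
For example, to avoid the SYS_ADMIN requirement by relying on ARP/ND capture only (an illustrative choice):

inputs:
  resources:
    kubernetes:
      pod_mac_collection_method: passive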

Pull Resource From Controller {#inputs.resources.pull_resource_from_controller}

Domain Filter {#inputs.resources.pull_resource_from_controller.domain_filter}

Tags:

hot_update

FQCN:

inputs.resources.pull_resource_from_controller.domain_filter

Upgrade from old version: domains

Default value:

inputs:
  resources:
    pull_resource_from_controller:
      domain_filter:
      - 0

Enum options:

ValueNote
DYNAMIC_OPTIONSDYNAMIC_OPTIONS

Schema:

KeyValue
Typeint

Description:

The default value 0 means all domains. Alternatively, it can be set to a list of domain lcuuids; lcuuids can be obtained via 'deepflow-ctl domain list'.

Note: The list of MAC and IP addresses is used by deepflow-agent to inject tags into data. This configuration can reduce the number and frequency of MAC and IP addresses delivered by deepflow-server to deepflow-agent. When there is no cross-domain service request, deepflow-server can be configured to only deliver the information in the local domain to deepflow-agent.
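
A sketch of restricting the filter to two specific domains; the lcuuid placeholders below are illustrative and must be replaced with values from 'deepflow-ctl domain list':

inputs:
  resources:
    pull_resource_from_controller:
      domain_filter:
      - <lcuuid-of-domain-1>  # placeholder lcuuid
      - <lcuuid-of-domain-2>  # placeholder lcuuid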

Only K8s Pod IP in Local Cluster {#inputs.resources.pull_resource_from_controller.only_kubernetes_pod_ip_in_local_cluster}

Tags:

hot_update

FQCN:

inputs.resources.pull_resource_from_controller.only_kubernetes_pod_ip_in_local_cluster

Upgrade from old version: pod_cluster_internal_ip

Default value:

inputs:
  resources:
    pull_resource_from_controller:
      only_kubernetes_pod_ip_in_local_cluster: false

Schema:

KeyValue
Typebool

Description:

The list of MAC and IP addresses is used by deepflow-agent to inject tags into data. This configuration can reduce the number and frequency of MAC and IP addresses delivered by deepflow-server to deepflow-agent. When the Pod IP is not used for direct communication between the K8s cluster and the outside world, deepflow-server can be configured to only deliver the information in the local K8s cluster to deepflow-agent.

Integration {#inputs.integration}

Enabled {#inputs.integration.enabled}

Tags:

hot_update

FQCN:

inputs.integration.enabled

Upgrade from old version: external_agent_http_proxy_enabled

Default value:

inputs:
  integration:
    enabled: true

Schema:

KeyValue
Typebool

Description:

Whether to enable receiving external data sources such as Prometheus, Telegraf, OpenTelemetry, and SkyWalking.

Listen Port {#inputs.integration.listen_port}

Tags:

hot_update

FQCN:

inputs.integration.listen_port

Upgrade from old version: external_agent_http_proxy_port

Default value:

inputs:
  integration:
    listen_port: 38086

Schema:

KeyValue
Typeint
Range[1, 65535]

Description:

Listen port of the data integration socket.

Compression {#inputs.integration.compression}

Trace {#inputs.integration.compression.trace}

Tags:

agent_restart

FQCN:

inputs.integration.compression.trace

Upgrade from old version: static_config.external-agent-http-proxy-compressed

Default value:

inputs:
  integration:
    compression:
      trace: true

Schema:

KeyValue
Typebool

Description:

Whether to compress the integrated trace data received by deepflow-agent. The compression ratio is about 5:1~10:1. Turning on this feature will result in higher CPU consumption of deepflow-agent.

Profile {#inputs.integration.compression.profile}

Tags:

agent_restart

FQCN:

inputs.integration.compression.profile

Upgrade from old version: static_config.external-agent-http-proxy-compressed

Default value:

inputs:
  integration:
    compression:
      profile: true

Schema:

KeyValue
Typebool

Description:

Whether to compress the integrated profile data received by deepflow-agent. The compression ratio is about 5:1~10:1. Turning on this feature will result in higher CPU consumption of deepflow-agent.

Prometheus Extra Labels {#inputs.integration.prometheus_extra_labels}

Support for getting extra labels from headers in http requests from RemoteWrite.

Enabled {#inputs.integration.prometheus_extra_labels.enabled}

Tags:

agent_restart

FQCN:

inputs.integration.prometheus_extra_labels.enabled

Upgrade from old version: static_config.prometheus-extra-config.enabled

Default value:

inputs:
  integration:
    prometheus_extra_labels:
      enabled: false

Schema:

KeyValue
Typebool

Description:

Prometheus extra labels switch.

Extra Labels {#inputs.integration.prometheus_extra_labels.extra_labels}

Tags:

agent_restart

FQCN:

inputs.integration.prometheus_extra_labels.extra_labels

Upgrade from old version: static_config.prometheus-extra-config.labels

Default value:

inputs:
  integration:
    prometheus_extra_labels:
      extra_labels: []

Schema:

KeyValue
Typestring

Description:

The list of extra labels to send. Each label is a string matching the regular expression [a-zA-Z_][a-zA-Z0-9_]*.
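
For example, to forward two extra labels taken from RemoteWrite request headers (the label names are illustrative):

inputs:
  integration:
    prometheus_extra_labels:
      enabled: true
      extra_labels:
      - tenant_id
      - cluster_name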

Label Length Limit {#inputs.integration.prometheus_extra_labels.label_length}

Tags:

agent_restart

FQCN:

inputs.integration.prometheus_extra_labels.label_length

Upgrade from old version: static_config.prometheus-extra-config.labels-limit

Default value:

inputs:
  integration:
    prometheus_extra_labels:
      label_length: 1024

Schema:

KeyValue
Typeint
Unitbyte
Range[1024, 1048576]

Description:

The size limit of the parsed key.

Value Length Limit {#inputs.integration.prometheus_extra_labels.value_length}

Tags:

agent_restart

FQCN:

inputs.integration.prometheus_extra_labels.value_length

Upgrade from old version: static_config.prometheus-extra-config.values-limit

Default value:

inputs:
  integration:
    prometheus_extra_labels:
      value_length: 4096

Schema:

KeyValue
Typeint
Unitbyte
Range[4096, 4194304]

Description:

The size limit of the parsed value.

Feature Control {#inputs.integration.feature_control}

Profile Integration Disabled {#inputs.integration.feature_control.profile_integration_disabled}

Tags:

agent_restart

FQCN:

inputs.integration.feature_control.profile_integration_disabled

Upgrade from old version: static_config.external-profile-integration-disabled

Default value:

inputs:
  integration:
    feature_control:
      profile_integration_disabled: false

Schema:

KeyValue
Typebool

Trace Integration Disabled {#inputs.integration.feature_control.trace_integration_disabled}

Tags:

agent_restart

FQCN:

inputs.integration.feature_control.trace_integration_disabled

Upgrade from old version: static_config.external-trace-integration-disabled

Default value:

inputs:
  integration:
    feature_control:
      trace_integration_disabled: false

Schema:

KeyValue
Typebool

Metric Integration Disabled {#inputs.integration.feature_control.metric_integration_disabled}

Tags:

agent_restart

FQCN:

inputs.integration.feature_control.metric_integration_disabled

Upgrade from old version: static_config.external-metric-integration-disabled

Default value:

inputs:
  integration:
    feature_control:
      metric_integration_disabled: false

Schema:

KeyValue
Typebool

Log Integration Disabled {#inputs.integration.feature_control.log_integration_disabled}

Tags:

agent_restart

FQCN:

inputs.integration.feature_control.log_integration_disabled

Upgrade from old version: static_config.external-log-integration-disabled

Default value:

inputs:
  integration:
    feature_control:
      log_integration_disabled: false

Schema:

KeyValue
Typebool

Processors {#processors}

Packet {#processors.packet}

Policy {#processors.packet.policy}

Fast-path Map Size {#processors.packet.policy.fast_path_map_size}

Tags:

agent_restart

FQCN:

processors.packet.policy.fast_path_map_size

Upgrade from old version: static_config.fast-path-map-size

Default value:

processors:
  packet:
    policy:
      fast_path_map_size: 0

Schema:

KeyValue
Typeint

Description:

When set to 0, deepflow-agent will automatically adjust the map size according to max_memory.

Fast-path Disabled {#processors.packet.policy.fast_path_disabled}

Tags:

agent_restart

FQCN:

processors.packet.policy.fast_path_disabled

Upgrade from old version: static_config.fast-path-disabled

Default value:

processors:
  packet:
    policy:
      fast_path_disabled: false

Schema:

KeyValue
Typebool

Description:

When set to true, deepflow-agent will not use fast path.

Forward Table Capacity {#processors.packet.policy.forward_table_capacity}

Tags:

agent_restart

FQCN:

processors.packet.policy.forward_table_capacity

Upgrade from old version: static_config.forward-capacity

Default value:

processors:
  packet:
    policy:
      forward_table_capacity: 16384

Schema:

KeyValue
Typeint
Range[16384, 64000000]

Description:

The larger this value, the more memory may be used.

Max First-path Level {#processors.packet.policy.max_first_path_level}

Tags:

agent_restart

FQCN:

processors.packet.policy.max_first_path_level

Upgrade from old version: static_config.first-path-level

Default value:

processors:
  packet:
    policy:
      max_first_path_level: 8

Schema:

KeyValue
Typeint
Range[1, 16]

Description:

When this value is larger, the memory overhead is smaller, but the performance of policy matching is worse.

TCP Header {#processors.packet.tcp_header}

Block Size {#processors.packet.tcp_header.block_size}

Tags:

agent_restart ee_feature

FQCN:

processors.packet.tcp_header.block_size

Upgrade from old version: static_config.packet-sequence-block-size

Default value:

processors:
  packet:
    tcp_header:
      block_size: 256

Schema:

KeyValue
Typeint
Range[16, 8192]

Description:

When generating TCP header data, each flow uses one block to compress and store multiple TCP headers, and the block size can be set here.

Sender Queue Size {#processors.packet.tcp_header.sender_queue_size}

Tags:

agent_restart ee_feature

FQCN:

processors.packet.tcp_header.sender_queue_size

Upgrade from old version: static_config.packet-sequence-queue-size

Default value:

processors:
  packet:
    tcp_header:
      sender_queue_size: 65536

Schema:

KeyValue
Typeint
Range[65536, 64000000]

Description:

The length of the following queues (to UniformCollectSender):

  • 1-packet-sequence-block-to-uniform-collect-sender

Sender Queue Count {#processors.packet.tcp_header.sender_queue_count}

Tags:

agent_restart ee_feature

FQCN:

processors.packet.tcp_header.sender_queue_count

Upgrade from old version: static_config.packet-sequence-queue-count

Default value:

processors:
  packet:
    tcp_header:
      sender_queue_count: 1

Schema:

KeyValue
Typeint
Range[1, 64]

Description:

The number of replicas for each output queue of the PacketSequence.

Header Fields Flag {#processors.packet.tcp_header.header_fields_flag}

Tags:

agent_restart ee_feature

FQCN:

processors.packet.tcp_header.header_fields_flag

Upgrade from old version: static_config.packet-sequence-flag

Default value:

processors:
  packet:
    tcp_header:
      header_fields_flag: 0

Schema:

KeyValue
Typeint
Range[0, 255]

Description:

header_fields_flag determines which TCP header fields are reported. The default value 0 disables the feature, and 255 reports all fields. Each bit corresponds to one field:

| FLAG | SEQ | ACK | PAYLOAD_SIZE | WINDOW_SIZE | OPT_MSS | OPT_WS | OPT_SACK |
|   7  |  6  |  5  |      4       |      3      |    2    |    1   |     0    |
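
For example, reporting only SEQ (bit 6 = 64) and ACK (bit 5 = 32) corresponds to the value 96:

processors:
  packet:
    tcp_header:
      header_fields_flag: 96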

PCAP Stream {#processors.packet.pcap_stream}

Receiver Queue Size {#processors.packet.pcap_stream.receiver_queue_size}

Tags:

agent_restart ee_feature

FQCN:

processors.packet.pcap_stream.receiver_queue_size

Upgrade from old version: static_config.pcap.queue-size

Default value:

processors:
  packet:
    pcap_stream:
      receiver_queue_size: 65536

Schema:

KeyValue
Typeint
Range[65536, 64000000]

Description:

The length of the following queues:

  • 1-mini-meta-packet-to-pcap

Buffer Size Per Flow {#processors.packet.pcap_stream.buffer_size_per_flow}

Tags:

agent_restart ee_feature

FQCN:

processors.packet.pcap_stream.buffer_size_per_flow

Upgrade from old version: static_config.pcap.flow-buffer-size

Default value:

processors:
  packet:
    pcap_stream:
      buffer_size_per_flow: 65536

Schema:

KeyValue
Typeint
Range[64, 64000000]

Description:

The buffer flushes when any single flow reaches this limit.

Total Buffer Size {#processors.packet.pcap_stream.total_buffer_size}

Tags:

agent_restart ee_feature

FQCN:

processors.packet.pcap_stream.total_buffer_size

Upgrade from old version: static_config.pcap.buffer-size

Default value:

processors:
  packet:
    pcap_stream:
      total_buffer_size: 88304

Schema:

KeyValue
Typeint
Range[65536, 64000000]

Description:

The buffer flushes when the total data size reaches this limit. It cannot exceed the sender buffer size of 128K.

Flush Interval {#processors.packet.pcap_stream.flush_interval}

Tags:

agent_restart ee_feature

FQCN:

processors.packet.pcap_stream.flush_interval

Upgrade from old version: static_config.pcap.flush-interval

Default value:

processors:
  packet:
    pcap_stream:
      flush_interval: 1m

Schema:

KeyValue
Typeduration
Range['1s', '10m']

Description:

A flow is flushed if its first packet is older than this interval.

TOA (TCP Option Address) {#processors.packet.toa}

Sender Queue Size {#processors.packet.toa.sender_queue_size}

Tags:

agent_restart

FQCN:

processors.packet.toa.sender_queue_size

Upgrade from old version: static_config.toa-sender-queue-size

Default value:

processors:
  packet:
    toa:
      sender_queue_size: 65536

Schema:

KeyValue
Typeint
Range[65536, 64000000]

Description:

TODO

Cache Size {#processors.packet.toa.cache_size}

Tags:

agent_restart

FQCN:

processors.packet.toa.cache_size

Upgrade from old version: static_config.toa-lru-cache-size

Default value:

processors:
  packet:
    toa:
      cache_size: 65536

Schema:

KeyValue
Typeint
Range[1, 64000000]

Description:

Size of the TCP option address (TOA) info cache.

Request Log {#processors.request_log}

Application Protocol Inference {#processors.request_log.application_protocol_inference}

Inference Maximum Retries {#processors.request_log.application_protocol_inference.inference_max_retries}

Tags:

agent_restart

FQCN:

processors.request_log.application_protocol_inference.inference_max_retries

Upgrade from old version: static_config.l7-protocol-inference-max-fail-count

Default value:

processors:
  request_log:
    application_protocol_inference:
      inference_max_retries: 5

Schema:

KeyValue
Typeint
Range[0, 10000]

Description:

deepflow-agent marks the long-lived stream and application protocol for each <vpc, ip, protocol, port> tuple. When the traffic of a tuple fails to be identified this many times (across multiple packets, Socket Data, and Function Data), the tuple is marked as an unknown type so that deepflow-agent stops trying (which would incur significant computational overhead) until the duration exceeds the inference result TTL (l7-protocol-inference-ttl).

Inference Result TTL {#processors.request_log.application_protocol_inference.inference_result_ttl}

Tags:

agent_restart

FQCN:

processors.request_log.application_protocol_inference.inference_result_ttl

Upgrade from old version: static_config.l7-protocol-inference-ttl

Default value:

processors:
  request_log:
    application_protocol_inference:
      inference_result_ttl: 60

Schema:

KeyValue
Typeduration
Range[0, '1d']

Description:

deepflow-agent will mark the application protocol for each <vpc, ip, protocol, port> tuple. In order to avoid misidentification caused by IP changes, the validity period after successfully identifying the protocol will be limited to this value.

Enabled Protocols {#processors.request_log.application_protocol_inference.enabled_protocols}

Tags:

agent_restart

FQCN:

processors.request_log.application_protocol_inference.enabled_protocols

Upgrade from old version: static_config.l7-protocol-enabled

Default value:

processors:
  request_log:
    application_protocol_inference:
      enabled_protocols:
      - HTTP
      - HTTP2
      - Dubbo
      - SofaRPC
      - FastCGI
      - bRPC
      - MySQL
      - PostgreSQL
      - Oracle
      - Redis
      - MongoDB
      - Kafka
      - MQTT
      - AMQP
      - OpenWire
      - NATS
      - Pulsar
      - ZMTP
      - DNS
      - TLS
      - Custom

Enum options:

ValueNote
DYNAMIC_OPTIONSDYNAMIC_OPTIONS

Schema:

KeyValue
Typestring

Description:

Turning off some protocol identification can reduce deepflow-agent resource consumption.

Supported protocols: https://www.deepflow.io/docs/features/l7-protocols/overview/

Oracle and TLS are only supported in the Enterprise Edition.

Protocol Special Config {#processors.request_log.application_protocol_inference.protocol_special_config}

Oracle {#processors.request_log.application_protocol_inference.protocol_special_config.oracle}

Integer Byte Order {#processors.request_log.application_protocol_inference.protocol_special_config.oracle.is_be}

Tags:

agent_restart

FQCN:

processors.request_log.application_protocol_inference.protocol_special_config.oracle.is_be

Upgrade from old version: static_config.oracle-parse-config.is-be

Default value:

processors:
  request_log:
    application_protocol_inference:
      protocol_special_config:
        oracle:
          is_be: true

Schema:

KeyValue
Typebool

Description:

Whether the Oracle integer encoding is big-endian.

Integer Compressed {#processors.request_log.application_protocol_inference.protocol_special_config.oracle.int_compressed}

Tags:

agent_restart

FQCN:

processors.request_log.application_protocol_inference.protocol_special_config.oracle.int_compressed

Upgrade from old version: static_config.oracle-parse-config.int-compress

Default value:

processors:
  request_log:
    application_protocol_inference:
      protocol_special_config:
        oracle:
          int_compressed: true

Schema:

KeyValue
Typebool

Description:

Whether the Oracle integer encoding is compressed.

Response 0x04 with Extra Byte {#processors.request_log.application_protocol_inference.protocol_special_config.oracle.resp_0x04_extra_byte}

Tags:

agent_restart

FQCN:

processors.request_log.application_protocol_inference.protocol_special_config.oracle.resp_0x04_extra_byte

Upgrade from old version: static_config.oracle-parse-config.resp-0x04-extra-byte

Default value:

processors:
  request_log:
    application_protocol_inference:
      protocol_special_config:
        oracle:
          resp_0x04_extra_byte: false

Schema:

KeyValue
Typebool

Description:

Because the response with data ID 0x04 has a different structure in different versions, it may have one extra byte before the affected-rows field.

Filters {#processors.request_log.filters}

Port Number Pre-filters {#processors.request_log.filters.port_number_prefilters}

Tags:

agent_restart

FQCN:

processors.request_log.filters.port_number_prefilters

Upgrade from old version: static_config.l7-protocol-ports

Default value:

processors:
  request_log:
    filters:
      port_number_prefilters:
        AMQP: 1-65535
        Custom: 1-65535
        DNS: 53,5353
        Dubbo: 1-65535
        FastCGI: 1-65535
        HTTP: 1-65535
        HTTP2: 1-65535
        Kafka: 1-65535
        MQTT: 1-65535
        MongoDB: 1-65535
        MySQL: 1-65535
        NATS: 1-65535
        OpenWire: 1-65535
        Oracle: 1521
        PostgreSQL: 1-65535
        Pulsar: 1-65535
        Redis: 1-65535
        SofaRPC: 1-65535
        TLS: 443,6443
        ZMTP: 1-65535
        bRPC: 1-65535

Enum options:

ValueNote
DYNAMIC_OPTIONS

Schema:

KeyValue
Typedict

Description:

Port-list example: 80,1000-2000

The HTTP2 and TLS port settings are only used for kprobe and are not applicable to uprobe; data obtained through uprobe is not subject to port restrictions.

Supported protocols: https://www.deepflow.io/docs/features/l7-protocols/overview/

Oracle and TLS are only supported in the Enterprise Edition.

Attention: use HTTP2 for gRPC Protocol.

Tag Filters {#processors.request_log.filters.tag_filters}

Tags:

agent_restart

FQCN:

processors.request_log.filters.tag_filters

Upgrade from old version: static_config.l7-log-blacklist

Default value:

processors:
  request_log:
    filters:
      tag_filters:
        AMQP: []
        DNS: []
        Dubbo: []
        FastCGI: []
        HTTP: []
        HTTP2: []
        Kafka: []
        MQTT: []
        MongoDB: []
        MySQL: []
        NATS: []
        OpenWire: []
        Oracle: []
        PostgreSQL: []
        Pulsar: []
        Redis: []
        SOFARPC: []
        TLS: []
        ZMTP: []
        bRPC: []
        gRPC: []

Enum options:

ValueNote
DYNAMIC_OPTIONS

Schema:

KeyValue
Typedict

Description:

Tag filter example:

processors:
  request_log:
    filters:
      tag_filters:
        HTTP:
          - field-name: request_resource  # endpoint, request_type, request_domain, request_resource
            operator: equal               # equal, prefix
            value: somevalue
        HTTP2: []
        # other protocols

A l7_flow_log blacklist can be configured for each protocol, preventing request logs matching the blacklist from being collected by the agent or included in application performance metrics. It's recommended to only place non-business request logs like heartbeats or health checks in this blacklist. Including business request logs might lead to breaks in the distributed tracing tree.

Supported protocols: https://www.deepflow.io/docs/features/l7-protocols/overview/

Oracle and TLS are only supported in the Enterprise Edition.

$HTTP Tag Filters {#processors.request_log.filters.tag_filters.HTTP}

Tags:

agent_restart

FQCN:

processors.request_log.filters.tag_filters.HTTP

Upgrade from old version: static_config.l7-log-blacklist.$protocol

Default value:

processors:
  request_log:
    filters:
      tag_filters:
        HTTP: []

Schema:

KeyValue
Typedict

Description:

HTTP Tag filter example:

processors:
  request_log:
    filters:
      tag_filters:
        HTTP:
          - field-name: request_resource  # endpoint, request_type, request_domain, request_resource
            operator: equal               # equal, prefix
            value: somevalue

A l7_flow_log tag_filter can be configured for each protocol, preventing request logs matching the blacklist from being collected by the agent or included in application performance metrics. It's recommended to only place non-business request logs like heartbeats or health checks in this blacklist. Including business request logs might lead to breaks in the distributed tracing tree.

Supported protocols: https://www.deepflow.io/docs/features/l7-protocols/overview/

Oracle and TLS are only supported in the Enterprise Edition.

Field Name {#processors.request_log.filters.tag_filters.HTTP.field_name}

Tags:

agent_restart

FQCN:

processors.request_log.filters.tag_filters.HTTP.field_name

Upgrade from old version: static_config.l7-log-blacklist.$protocol.field-name

Default value:

processors:
  request_log:
    filters:
      tag_filters:
        HTTP:
        - field_name: ''

Enum options:

ValueNote
endpoint
request_type
request_domain
request_resource

Schema:

KeyValue
Typestring

Description:

Match field name.

Operator {#processors.request_log.filters.tag_filters.HTTP.operator}

Tags:

agent_restart

FQCN:

processors.request_log.filters.tag_filters.HTTP.operator

Upgrade from old version: static_config.l7-log-blacklist.$protocol.operator

Default value:

processors:
  request_log:
    filters:
      tag_filters:
        HTTP:
        - operator: ''

Enum options:

ValueNote
equal
prefix

Schema:

KeyValue
Typestring

Description:

Match operator.

Field Value {#processors.request_log.filters.tag_filters.HTTP.field_value}

Tags:

agent_restart

FQCN:

processors.request_log.filters.tag_filters.HTTP.field_value

Upgrade from old version: static_config.l7-log-blacklist.$protocol.value

Default value:

processors:
  request_log:
    filters:
      tag_filters:
        HTTP:
        - field_value: ''

Schema:

KeyValue
Typestring

Description:

Match field value.

Unconcerned DNS NXDOMAIN {#processors.request_log.filters.unconcerned_dns_nxdomain_response_suffixes}

Tags:

agent_restart

FQCN:

processors.request_log.filters.unconcerned_dns_nxdomain_response_suffixes

Upgrade from old version: static_config.l7-protocol-advanced-features.unconcerned-dns-nxdomain-response-suffixes

Default value:

processors:
  request_log:
    filters:
      unconcerned_dns_nxdomain_response_suffixes: []

Schema:

KeyValue
Typestring

Description:

You might not be concerned about certain DNS NXDOMAIN errors and may wish to ignore them. For example, when a K8s Pod tries to resolve an external domain name, it first concatenates it with the internal domain suffix of the cluster and attempts to resolve it. All these attempts will receive an NXDOMAIN reply before it finally requests the original domain name directly, and these errors may not be of concern to you. In such cases, you can configure their response_result suffix here, so that the corresponding response_status in the l7_flow_log is forcibly set to Success.
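
For example, to ignore the NXDOMAIN replies produced by a typical K8s search-domain expansion (the suffixes below are illustrative):

processors:
  request_log:
    filters:
      unconcerned_dns_nxdomain_response_suffixes:
      - svc.cluster.local
      - cluster.local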

Timeouts {#processors.request_log.timeouts}

TCP Request Timeout {#processors.request_log.timeouts.tcp_request_timeout}

Tags:

agent_restart

FQCN:

processors.request_log.timeouts.tcp_request_timeout

Upgrade from old version: static_config.rrt-tcp-timeout

Default value:

processors:
  request_log:
    timeouts:
      tcp_request_timeout: 1800s

Schema:

KeyValue
Typeduration
Range['10s', '3600s']

Description:

The timeout for calculating the RRT of L7 log entries. When the RRT exceeds this value it is treated as a timeout: it is excluded from the sum and average, and the request and response are not merged during session aggregation. The value must be greater than the session aggregation SLOT_TIME (a constant 10s) and at most 3600s for TCP.

UDP Request Timeout {#processors.request_log.timeouts.udp_request_timeout}

Tags:

agent_restart

FQCN:

processors.request_log.timeouts.udp_request_timeout

Upgrade from old version: static_config.rrt-udp-timeout

Default value:

processors:
  request_log:
    timeouts:
      udp_request_timeout: 150s

Schema:

KeyValue
Typeduration
Range['10s', '300s']

Description:

The timeout for calculating the RRT of L7 log entries. When the RRT exceeds this value it is treated as a timeout: it is excluded from the sum and average, and the request and response are not merged during session aggregation. The value must be greater than the session aggregation SLOT_TIME (a constant 10s) and at most 300s for UDP.

Session Aggregate Window Duration {#processors.request_log.timeouts.session_aggregate_window_duration}

Tags:

agent_restart

FQCN:

processors.request_log.timeouts.session_aggregate_window_duration

Upgrade from old version: static_config.l7-log-session-aggr-timeout

Default value:

processors:
  request_log:
    timeouts:
      session_aggregate_window_duration: 120s

Schema:

KeyValue
Typeduration
Range['20s', '300s']

Description:

l7_flow_log aggregate window.

Tag Extraction {#processors.request_log.tag_extraction}

Tracing Tag {#processors.request_log.tag_extraction.tracing_tag}

HTTP Real Client {#processors.request_log.tag_extraction.tracing_tag.http_real_client}

Tags:

hot_update

FQCN:

processors.request_log.tag_extraction.tracing_tag.http_real_client

Upgrade from old version: http_log_proxy_client

Default value:

processors:
  request_log:
    tag_extraction:
      tracing_tag:
        http_real_client: X_Forwarded_For

Schema:

KeyValue
Typestring

Description:

It is used to extract the real client IP field in the HTTP header, such as X-Forwarded-For, etc. Leave it empty to disable this feature.

X-Request-ID {#processors.request_log.tag_extraction.tracing_tag.x_request_id}

Tags:

hot_update

FQCN:

processors.request_log.tag_extraction.tracing_tag.x_request_id

Upgrade from old version: http_log_x_request_id

Default value:

processors:
  request_log:
    tag_extraction:
      tracing_tag:
        x_request_id: X_Request_ID

Schema:

KeyValue
Typestring

Description:

It is used to extract the fields in the HTTP header that are used to uniquely identify the same request before and after the gateway, such as X-Request-ID, etc. This feature can be turned off by setting it to empty.

APM TraceID {#processors.request_log.tag_extraction.tracing_tag.apm_trace_id}

Tags:

hot_update

FQCN:

processors.request_log.tag_extraction.tracing_tag.apm_trace_id

Upgrade from old version: http_log_trace_id

Default value:

processors:
  request_log:
    tag_extraction:
      tracing_tag:
        apm_trace_id:
        - traceparent
        - sw8

Schema:

KeyValue
Typestring

Description:

Used to extract the TraceID field in HTTP and RPC headers, supports filling in multiple values separated by commas. This feature can be turned off by setting it to empty.

APM SpanID {#processors.request_log.tag_extraction.tracing_tag.apm_span_id}

Tags:

hot_update

FQCN:

processors.request_log.tag_extraction.tracing_tag.apm_span_id

Upgrade from old version: http_log_span_id

Default value:

processors:
  request_log:
    tag_extraction:
      tracing_tag:
        apm_span_id:
        - traceparent
        - sw8

Schema:

KeyValue
Typestring

Description:

Used to extract the SpanID field in HTTP and RPC headers, supports filling in multiple values separated by commas. This feature can be turned off by setting it to empty.

HTTP Endpoint {#processors.request_log.tag_extraction.http_endpoint}

Extraction Disabled {#processors.request_log.tag_extraction.http_endpoint.extraction_disabled}

Tags:

agent_restart

FQCN:

processors.request_log.tag_extraction.http_endpoint.extraction_disabled

Upgrade from old version: static_config.l7-protocol-advanced-features.http-endpoint-extraction.disabled

Default value:

processors:
  request_log:
    tag_extraction:
      http_endpoint:
        extraction_disabled: false

Schema:

KeyValue
Typebool

Description:

HTTP endpoint extraction is enabled by default.

Match Rules {#processors.request_log.tag_extraction.http_endpoint.match_rules}

Tags:

agent_restart

FQCN:

processors.request_log.tag_extraction.http_endpoint.match_rules

Upgrade from old version: static_config.l7-protocol-advanced-features.http-endpoint-extraction.match-rules

Default value:

processors:
  request_log:
    tag_extraction:
      http_endpoint:
        match_rules:
        - keep_segments: 2
          url_prefix: ''

Schema:

KeyValue
Typedict

Description:

Extract the endpoint according to the following rules:

  • Find the longest matching prefix, following the principle of longest-prefix matching.
  • Take the first several segments of the URL path (the content between two / characters is one segment) as the endpoint.

By default, two segments are extracted from the URL. For example, the URL /a/b/c?query=xxx has 3 segments, so /a/b is extracted as the endpoint.
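
For instance, a rule that keeps three segments for URLs under /api (the prefix and segment count are illustrative), so that /api/v1/users/123 yields the endpoint /api/v1/users:

processors:
  request_log:
    tag_extraction:
      http_endpoint:
        match_rules:
        - url_prefix: /api
          keep_segments: 3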

URL Prefix {#processors.request_log.tag_extraction.http_endpoint.match_rules.url_prefix}

Tags:

agent_restart

FQCN:

processors.request_log.tag_extraction.http_endpoint.match_rules.url_prefix

Upgrade from old version: static_config.l7-protocol-advanced-features.http-endpoint-extraction.match-rules.prefix

Default value:

processors:
  request_log:
    tag_extraction:
      http_endpoint:
        match_rules:
        - url_prefix: ''

Schema:

KeyValue
Typestring

Description:

HTTP URL prefix.

Keep Segments {#processors.request_log.tag_extraction.http_endpoint.match_rules.keep_segments}

Tags:

agent_restart

FQCN:

processors.request_log.tag_extraction.http_endpoint.match_rules.keep_segments

Upgrade from old version: static_config.l7-protocol-advanced-features.http-endpoint-extraction.match-rules.keep-segments

Default value:

processors:
  request_log:
    tag_extraction:
      http_endpoint:
        match_rules:
        - keep_segments: 0

Schema:

KeyValue
Typeint

Description:

How many URL segments to keep.

Custom Fields {#processors.request_log.tag_extraction.custom_fields}

Tags:

agent_restart

FQCN:

processors.request_log.tag_extraction.custom_fields

Upgrade from old version: static_config.l7-protocol-advanced-features.extra-log-fields

Default value:

processors:
  request_log:
    tag_extraction:
      custom_fields:
        HTTP: []
        HTTP2: []

Enum options:

ValueNote
HTTP
HTTP2

Schema:

KeyValue
Typedict

Description:

Configuration to extract the customized header fields of HTTP, HTTP2, gRPC protocol etc.

Example:

processors:
  request_log:
    tag_extraction:
      custom_fields:
        HTTP:
        - field-name: "user-agent"
        - field-name: "cookie"

Attention: use HTTP2 for gRPC Protocol.

$HTTP Custom Fields {#processors.request_log.tag_extraction.custom_fields.HTTP}

Tags:

agent_restart

FQCN:

processors.request_log.tag_extraction.custom_fields.HTTP

Upgrade from old version: static_config.l7-protocol-advanced-features.extra-log-fields.$protocol

Default value:

processors:
  request_log:
    tag_extraction:
      custom_fields:
        HTTP: []

Schema:

KeyValue
Typedict

Description:

Configuration to extract the customized header fields of HTTP, HTTP2, gRPC protocol etc.

Example:

processors:
  request_log:
    tag_extraction:
      custom_fields:
        HTTP:
        - field-name: "user-agent"
        - field-name: "cookie"

Attention: use HTTP2 for gRPC Protocol.

Field Name {#processors.request_log.tag_extraction.custom_fields.HTTP.field_name}

Tags:

agent_restart

FQCN:

processors.request_log.tag_extraction.custom_fields.HTTP.field_name

Upgrade from old version: static_config.l7-protocol-advanced-features.extra-log-fields.$protocol.field-name

Default value:

processors:
  request_log:
    tag_extraction:
      custom_fields:
        HTTP:
        - field_name: ''

Schema:

KeyValue
Typestring

Description:

Field name.

Obfuscate Protocols {#processors.request_log.tag_extraction.obfuscate_protocols}

Tags:

agent_restart

FQCN:

processors.request_log.tag_extraction.obfuscate_protocols

Upgrade from old version: static_config.l7-protocol-advanced-features.obfuscate-enabled-protocols

Default value:

processors:
  request_log:
    tag_extraction:
      obfuscate_protocols:
      - Redis

Enum options:

ValueNote
MySQL
PostgreSQL
HTTP
HTTP2
Redis

Schema:

KeyValue
Typestring

Description:

For the sake of data security, the protocols whose data needs to be desensitized are configured here; data of other protocols is not processed by default.

Tunning {#processors.request_log.tunning}

Payload Truncation {#processors.request_log.tunning.payload_truncation}

Tags:

hot_update

FQCN:

processors.request_log.tunning.payload_truncation

Upgrade from old version: l7_log_packet_size

Default value:

processors:
  request_log:
    tunning:
      payload_truncation: 1024

Schema:

KeyValue
Typeint
Unitbyte
Range[256, 65535]

Description:

The maximum data length used for application protocol identification. Note that the effective value is less than or equal to capture_packet_size.

NOTE: For eBPF data, the largest valid value is 16384.

Session Aggregate Slot Capacity {#processors.request_log.tunning.session_aggregate_slot_capacity}

Tags:

agent_restart

FQCN:

processors.request_log.tunning.session_aggregate_slot_capacity

Upgrade from old version: static_config.l7-log-session-slot-capacity

Default value:

processors:
  request_log:
    tunning:
      session_aggregate_slot_capacity: 1024

Schema:

KeyValue
Typeint
Range[1024, 1000000]

Description:

By default, unidirectional l7_flow_log entries are aggregated into bidirectional request_log (session) entries with a caching time window of 2 minutes. During this period, every 5 seconds is treated as a time slot (i.e., an LRU). This configuration specifies the maximum number of unidirectional l7_flow_log entries that can be cached in each time slot.

If the number of l7_flow_log entries cached in a time slot exceeds this configuration, 10% of the data in that time slot will be evicted based on the LRU strategy to reduce memory consumption. Note that the evicted data will not be discarded; instead, they will be sent to the deepflow-server as unidirectional request_log.

The following metrics can be used as reference data for adjusting this configuration:

  • Metric deepflow_system.deepflow_agent_l7_session_aggr.cached-request-resource Used to record the total memory occupied by the request_resource field of the unidirectional l7_flow_log cached in all time slots at the current moment, in bytes.
  • Metric deepflow_system.deepflow_agent_l7_session_aggr.over-limit Used to record the number of times eviction is triggered due to reaching the LRU capacity limit.

Consistent Timestamp in L7 Metrics {#processors.request_log.tunning.consistent_timestamp_in_l7_metrics}

Tags:

agent_restart

FQCN:

processors.request_log.tunning.consistent_timestamp_in_l7_metrics

Default value:

processors:
  request_log:
    tunning:
      consistent_timestamp_in_l7_metrics: false

Schema:

KeyValue
Typebool

Description:

When this configuration is enabled, for the same session, response-related metrics (such as response count, latency, exceptions) are recorded in the time slot corresponding to when the request occurred, rather than the time slot of the response itself. This means that when calculating metrics for requests and responses within a session, a consistent timestamp based on the time of the request occurrence is used.

Flow Log {#processors.flow_log}

Time Window {#processors.flow_log.time_window}

Maximum Tolerable Packet Delay {#processors.flow_log.time_window.max_tolerable_packet_delay}

Tags:

agent_restart

FQCN:

processors.flow_log.time_window.max_tolerable_packet_delay

Upgrade from old version: static_config.packet-delay

Default value:

processors:
  flow_log:
    time_window:
      max_tolerable_packet_delay: 1s

Schema:

KeyValue
Typeduration
Range['1s', '10s']

Description:

Extra tolerance for QuadrupleGenerator receiving 1s-FlowLog.

Extra Tolerable Flow Delay {#processors.flow_log.time_window.extra_tolerable_flow_delay}

Tags:

agent_restart

FQCN:

processors.flow_log.time_window.extra_tolerable_flow_delay

Upgrade from old version: static_config.second-flow-extra-delay-second

Default value:

processors:
  flow_log:
    time_window:
      extra_tolerable_flow_delay: 0s

Schema:

KeyValue
Typeduration
Range['1s', '10s']

Description:

Extra tolerance for QuadrupleGenerator receiving 1s-FlowLog.

Conntrack (a.k.a. Flow Map) {#processors.flow_log.conntrack}

Flow Flush Interval {#processors.flow_log.conntrack.flow_flush_interval}

Tags:

agent_restart

FQCN:

processors.flow_log.conntrack.flow_flush_interval

Upgrade from old version: static_config.flow.flush-interval

Default value:

processors:
  flow_log:
    conntrack:
      flow_flush_interval: 1s

Schema:

KeyValue
Typeduration
Range['1s', '1m']

Description:

Flush interval of the queue connected to the collector.

Flow Generation {#processors.flow_log.conntrack.flow_generation}

Server Ports {#processors.flow_log.conntrack.flow_generation.server_ports}

Tags:

agent_restart

FQCN:

processors.flow_log.conntrack.flow_generation.server_ports

Upgrade from old version: static_config.server-ports

Default value:

processors:
  flow_log:
    conntrack:
      flow_generation:
        server_ports: []

Schema:

KeyValue
Typeint
Range[1, 65535]

Description:

List of server ports; this configuration has a lower priority than TCP SYN flags when identifying the server side of a flow.
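
A minimal sketch listing two illustrative service ports (any ports within [1, 65535] can be configured):

processors:
  flow_log:
    conntrack:
      flow_generation:
        server_ports:
        - 8080
        - 9000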

Cloud Traffic Ignore MAC {#processors.flow_log.conntrack.flow_generation.cloud_traffic_ignore_mac}

Tags:

agent_restart

FQCN:

processors.flow_log.conntrack.flow_generation.cloud_traffic_ignore_mac

Upgrade from old version: static_config.flow.ignore-tor-mac

Default value:

processors:
  flow_log:
    conntrack:
      flow_generation:
        cloud_traffic_ignore_mac: false

Schema:

KeyValue
Typebool

Description:

When the MAC addresses of the two directions of traffic collected at the same location are asymmetric, the traffic cannot be aggregated into a Flow; set this option in that case. Valid only for Cloud (not IDC) traffic.

Ignore L2End {#processors.flow_log.conntrack.flow_generation.ignore_l2_end}

Tags:

agent_restart

FQCN:

processors.flow_log.conntrack.flow_generation.ignore_l2_end

Upgrade from old version: static_config.flow.ignore-l2-end

Default value:

processors:
  flow_log:
    conntrack:
      flow_generation:
        ignore_l2_end: false

Schema:

KeyValue
Typebool

Description:

For Cloud traffic, only the MAC address corresponding to the side with L2End = true is matched when generating the flow. Set this value to true to force a double-sided MAC address match and only aggregate traffic with exactly equal MAC addresses.

IDC Traffic Ignore VLAN {#processors.flow_log.conntrack.flow_generation.idc_traffic_ignore_vlan}

Tags:

agent_restart ee_feature

FQCN:

processors.flow_log.conntrack.flow_generation.idc_traffic_ignore_vlan

Upgrade from old version: static_config.flow.ignore-idc-vlan

Default value:

processors:
  flow_log:
    conntrack:
      flow_generation:
        idc_traffic_ignore_vlan: false

Schema:

KeyValue
Typebool

Description:

When the VLANs of the two directions of traffic collected at the same location are asymmetric, the traffic cannot be aggregated into a Flow; set this option in that case. Valid only for IDC (not Cloud) traffic.

Timeouts {#processors.flow_log.conntrack.timeouts}

Established {#processors.flow_log.conntrack.timeouts.established}

Tags:

agent_restart

FQCN:

processors.flow_log.conntrack.timeouts.established

Upgrade from old version: static_config.flow.established-timeout

Default value:

processors:
  flow_log:
    conntrack:
      timeouts:
        established: 300s

Schema:

KeyValue
Typeduration
Range['1s', '1d']

Description:

Timeouts for TCP State Machine - Established.

Closing RST {#processors.flow_log.conntrack.timeouts.closing_rst}

Tags:

agent_restart

FQCN:

processors.flow_log.conntrack.timeouts.closing_rst

Upgrade from old version: static_config.flow.closing-rst-timeout

Default value:

processors:
  flow_log:
    conntrack:
      timeouts:
        closing_rst: 35s

Schema:

KeyValue
Typeduration
Range['1s', '1d']

Description:

Timeouts for TCP State Machine - Closing Reset.

Opening RST {#processors.flow_log.conntrack.timeouts.opening_rst}

Tags:

agent_restart

FQCN:

processors.flow_log.conntrack.timeouts.opening_rst

Upgrade from old version: static_config.flow.opening-rst-timeout

Default value:

processors:
  flow_log:
    conntrack:
      timeouts:
        opening_rst: 1s

Schema:

KeyValue
Typeduration
Range['1s', '1d']

Description:

Timeouts for TCP State Machine - Opening Reset.

Others {#processors.flow_log.conntrack.timeouts.others}

Tags:

agent_restart

FQCN:

processors.flow_log.conntrack.timeouts.others

Upgrade from old version: static_config.flow.others-timeout

Default value:

processors:
  flow_log:
    conntrack:
      timeouts:
        others: 5s

Schema:

KeyValue
Typeduration
Range['1s', '1d']

Description:

Timeouts for TCP State Machine - Others.
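
The four timeouts above share the same conntrack.timeouts block; a sketch that shortens some of them (durations are illustrative, within the documented ['1s', '1d'] range):

processors:
  flow_log:
    conntrack:
      timeouts:
        established: 120s
        closing_rst: 15s
        opening_rst: 1s
        others: 5s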

Tunning {#processors.flow_log.tunning}

FlowMap Hash Slots {#processors.flow_log.tunning.flow_map_hash_slots}

Tags:

agent_restart

FQCN:

processors.flow_log.tunning.flow_map_hash_slots

Upgrade from old version: static_config.flow.flow-slots-size

Default value:

processors:
  flow_log:
    tunning:
      flow_map_hash_slots: 131072

Schema:

KeyValue
Typeint
Range[1024, 64000000]

Description:

Since FlowAggregator is the first step in all processing, this value is also widely used in other hash tables such as QuadrupleGenerator, Collector, etc.

Concurrent Flow Limit {#processors.flow_log.tunning.concurrent_flow_limit}

Tags:

agent_restart

FQCN:

processors.flow_log.tunning.concurrent_flow_limit

Upgrade from old version: static_config.flow.flow-count-limit

Default value:

processors:
  flow_log:
    tunning:
      concurrent_flow_limit: 65535

Schema:

KeyValue
Typeint
Range[1024, 64000000]

Description:

Maximum number of flows that can be stored in FlowMap. This value also determines the capacity of the RRT cache (rrt-cache-capacity = flow-count-limit); when rrt-cache-capacity is insufficient, the RRT of L7 requests cannot be calculated.
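
A sketch that scales the FlowMap for a busier host by raising the hash slots and the concurrent flow limit together (262144 is an illustrative value within both documented ranges):

processors:
  flow_log:
    tunning:
      flow_map_hash_slots: 262144
      concurrent_flow_limit: 262144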

Memory Pool Size {#processors.flow_log.tunning.memory_pool_size}

Tags:

agent_restart

FQCN:

processors.flow_log.tunning.memory_pool_size

Upgrade from old version: static_config.flow.memory-pool-size

Default value:

processors:
  flow_log:
    tunning:
      memory_pool_size: 65536

Schema:

KeyValue
Typeint
Range[1024, 64000000]

Description:

This value sets the maximum length of the memory pool in FlowMap. Memory pools are used for objects that are frequently created and destroyed, such as FlowNode, FlowLog, etc.

Maximum Size of Batched Buffer {#processors.flow_log.tunning.max_batched_buffer_size}

Tags:

agent_restart

FQCN:

processors.flow_log.tunning.max_batched_buffer_size

Upgrade from old version: static_config.batched-buffer-size-limit

Default value:

processors:
  flow_log:
    tunning:
      max_batched_buffer_size: 131072

Schema:

KeyValue
Typeint
Range[1024, 64000000]

Description:

Only TaggedFlow allocation is affected at the moment. Structs are allocated in batches to minimize malloc calls, and the total memory size of a batch will not exceed this limit. A value larger than 128K is not recommended: the default MMAP_THRESHOLD is 128K, so allocating chunks larger than 128K results in mmap calls and more page faults.

FlowAggregator Queue Size {#processors.flow_log.tunning.flow_aggregator_queue_size}

Tags:

agent_restart

FQCN:

processors.flow_log.tunning.flow_aggregator_queue_size

Upgrade from old version: static_config.flow.flow-aggr-queue-size

Default value:

processors:
  flow_log:
    tunning:
      flow_aggregator_queue_size: 65535

Schema:

KeyValue
Typeint
Range[65536, 64000000]

Description:

The length of the following queues:

  • 2-second-flow-to-minute-aggrer

FlowGenerator Queue Size {#processors.flow_log.tunning.flow_generator_queue_size}

Tags:

agent_restart

FQCN:

processors.flow_log.tunning.flow_generator_queue_size

Upgrade from old version: static_config.flow-queue-size

Default value:

processors:
  flow_log:
    tunning:
      flow_generator_queue_size: 65536

Schema:

KeyValue
Typeint
Range[65536, 64000000]

Description:

The length of the following queues:

  • 1-tagged-flow-to-quadruple-generator
  • 1-tagged-flow-to-app-protocol-logs
  • 0-{flow_type}-{port}-packet-to-tagged-flow (flow_type: sflow, netflow)

QuadrupleGenerator Queue Size {#processors.flow_log.tunning.quadruple_generator_queue_size}

Tags:

agent_restart

FQCN:

processors.flow_log.tunning.quadruple_generator_queue_size

Upgrade from old version: static_config.quadruple-queue-size

Default value:

processors:
  flow_log:
    tunning:
      quadruple_generator_queue_size: 262144

Schema:

KeyValue
Typeint
Range[262144, 64000000]

Description:

The length of the following queues:

  • 2-flow-with-meter-to-second-collector
  • 2-flow-with-meter-to-minute-collector

Outputs {#outputs}

Socket {#outputs.socket}

Data Socket Type {#outputs.socket.data_socket_type}

Tags:

hot_update

FQCN:

outputs.socket.data_socket_type

Upgrade from old version: collector_socket_type

Default value:

outputs:
  socket:
    data_socket_type: TCP

Enum options:

ValueNote
TCP
UDP
FILE

Schema:

KeyValue
Typestring

Description:

It can only be set to FILE in standalone mode, in which case l4_flow_log and l7_flow_log will be written to local files.
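
A sketch for standalone mode, where l4_flow_log and l7_flow_log are written to local files instead of being sent over a socket:

outputs:
  socket:
    data_socket_type: FILE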

PCAP Socket Type {#outputs.socket.pcap_socket_type}

Tags:

hot_update

FQCN:

outputs.socket.pcap_socket_type

Upgrade from old version: compressor_socket_type

Default value:

outputs:
  socket:
    pcap_socket_type: TCP

Enum options:

ValueNote
TCP
UDP
RAW_UDP

Schema:

KeyValue
Typestring

Description:

RAW_UDP uses RawSocket to send UDP packets, which has the highest performance, but there may be compatibility issues in some environments.

NPB Socket Type {#outputs.socket.npb_socket_type}

Tags:

hot_update ee_feature

FQCN:

outputs.socket.npb_socket_type

Upgrade from old version: npb_socket_type

Default value:

outputs:
  socket:
    npb_socket_type: RAW_UDP

Enum options:

ValueNote
UDP
RAW_UDP

Schema:

KeyValue
Typestring

Description:

RAW_UDP uses RawSocket to send UDP packets, which has the highest performance, but there may be compatibility issues in some environments.

RAW_UDP QoS Bypass {#outputs.socket.raw_udp_qos_bypass}

Tags:

agent_restart

FQCN:

outputs.socket.raw_udp_qos_bypass

Upgrade from old version: static_config.enable-qos-bypass

Default value:

outputs:
  socket:
    raw_udp_qos_bypass: false

Schema:

KeyValue
Typebool

Description:

When sender uses RAW_UDP to send data, this feature can be enabled to improve performance. Linux Kernel >= 3.14 is required. Note that the data sent when this feature is enabled cannot be captured by tcpdump.

Multiple Sockets To Ingester {#outputs.socket.multiple_sockets_to_ingester}

Tags:

hot_update

FQCN:

outputs.socket.multiple_sockets_to_ingester

Upgrade from old version: static_config.multiple-sockets-to-ingester

Default value:

outputs:
  socket:
    multiple_sockets_to_ingester: false

Schema:

KeyValue
Typebool

Description:

When set to true, deepflow-agent sends data to the Ingester over multiple sockets, which yields higher performance but has a greater impact on firewalls.

Flow Log and Request Log {#outputs.flow_log}

Filters {#outputs.flow_log.filters}

Capture Network Types for L4 {#outputs.flow_log.filters.l4_capture_network_types}

Tags:

hot_update

FQCN:

outputs.flow_log.filters.l4_capture_network_types

Upgrade from old version: l4_log_tap_types

Default value:

outputs:
  flow_log:
    filters:
      l4_capture_network_types:
      - 0

Enum options:

ValueNote
-1: Disabled
0: All TAPs
DYNAMIC_OPTIONS

Schema:

KeyValue
Typeint

Description:

The list of TAPs (capture network types) from which l4_flow_log is collected; besides the special values above, an explicit list of TAPs to be collected can also be configured.
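
For example, a sketch that disables l4_flow_log collection entirely using the special value -1 (use 0 to collect from all TAPs):

outputs:
  flow_log:
    filters:
      l4_capture_network_types:
      - -1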

Capture Network Types for L7 {#outputs.flow_log.filters.l7_capture_network_types}

Tags:

hot_update

FQCN:

outputs.flow_log.filters.l7_capture_network_types

Upgrade from old version: l7_log_store_tap_types

Default value:

outputs:
  flow_log:
    filters:
      l7_capture_network_types: []

Enum options:

ValueNote
-1: Disabled
0: All TAPs
DYNAMIC_OPTIONS

Schema:

KeyValue
Typeint

Description:

The list of TAPs (capture network types) from which l7_flow_log is collected; besides the special values above, an explicit list of TAPs to be collected can also be configured.

Ignored Observation Points for L4 {#outputs.flow_log.filters.l4_ignored_observation_points}

Tags:

hot_update

FQCN:

outputs.flow_log.filters.l4_ignored_observation_points

Upgrade from old version: l4_log_ignore_tap_sides

Default value:

outputs:
  flow_log:
    filters:
      l4_ignored_observation_points: []

Enum options:

ValueNote
0: rest, Other NIC
1: c, Client NIC
2: s, Server NIC
4: local, Local NIC
9: c-nd, Client K8s Node
10: s-nd, Server K8s Node
17: c-hv, Client VM Hypervisor
18: s-hv, Server VM Hypervisor
25: c-gw-hv, Client-side Gateway Hypervisor
26: s-gw-hv, Server-side Gateway Hypervisor
33: c-gw, Client-side Gateway
34: s-gw, Server-side Gateway
41: c-p, Client Process
42: s-p, Server Process

Schema:

KeyValue
Typeint

Description:

Use the value of tap_side to control which l4_flow_log should be ignored for collection. This configuration also applies to tcp_sequence and pcap data in the Enterprise Edition. Default value [] means store everything.
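
A sketch that skips storing l4_flow_log observed at the client and server process observation points, using values 41 and 42 from the table above:

outputs:
  flow_log:
    filters:
      l4_ignored_observation_points:
      - 41
      - 42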

Ignored Observation Points for L7 {#outputs.flow_log.filters.l7_ignored_observation_points}

Tags:

hot_update

FQCN:

outputs.flow_log.filters.l7_ignored_observation_points

Upgrade from old version: l7_log_ignore_tap_sides

Default value:

outputs:
  flow_log:
    filters:
      l7_ignored_observation_points: []

Enum options:

ValueNote
0: rest, Other NIC
1: c, Client NIC
2: s, Server NIC
4: local, Local NIC
9: c-nd, Client K8s Node
10: s-nd, Server K8s Node
17: c-hv, Client VM Hypervisor
18: s-hv, Server VM Hypervisor
25: c-gw-hv, Client-side Gateway Hypervisor
26: s-gw-hv, Server-side Gateway Hypervisor
33: c-gw, Client-side Gateway
34: s-gw, Server-side Gateway
41: c-p, Client Process
42: s-p, Server Process

Schema:

KeyValue
Typeint

Description:

Use the value of tap_side to control which l7_flow_log should be ignored for collection.

Throttles {#outputs.flow_log.throttles}

L4 Throttle {#outputs.flow_log.throttles.l4_throttle}

Tags:

hot_update

FQCN:

outputs.flow_log.throttles.l4_throttle

Upgrade from old version: l4_log_collect_nps_threshold

Default value:

outputs:
  flow_log:
    throttles:
      l4_throttle: 10000

Schema:

KeyValue
Typeint
UnitPer Second
Range[100, 1000000]

Description:

The maximum number of rows of l4_flow_log sent per second; when the actual number of rows exceeds this value, sampling is triggered.

L7 Throttle {#outputs.flow_log.throttles.l7_throttle}

Tags:

hot_update

FQCN:

outputs.flow_log.throttles.l7_throttle

Upgrade from old version: l7_log_collect_nps_threshold

Default value:

outputs:
  flow_log:
    throttles:
      l7_throttle: 10000

Schema:

KeyValue
Typeint
UnitPer Second
Range[100, 1000000]

Description:

The maximum number of rows of l7_flow_log sent per second; when the actual number of rows exceeds this value, sampling is triggered.
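
A sketch that lowers both throttles (the values are illustrative, within the documented [100, 1000000] range):

outputs:
  flow_log:
    throttles:
      l4_throttle: 5000
      l7_throttle: 5000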

Tunning {#outputs.flow_log.tunning}

Collector Queue Size {#outputs.flow_log.tunning.collector_queue_size}

Tags:

agent_restart

FQCN:

outputs.flow_log.tunning.collector_queue_size

Upgrade from old version: static_config.flow-sender-queue-size

Default value:

outputs:
  flow_log:
    tunning:
      collector_queue_size: 65536

Schema:

KeyValue
Typeint
Range[65536, 64000000]

Description:

The length of the following queues:

  • 3-flow-to-collector-sender
  • 3-protolog-to-collector-sender

Collector Queue Count {#outputs.flow_log.tunning.collector_queue_count}

Tags:

agent_restart

FQCN:

outputs.flow_log.tunning.collector_queue_count

Upgrade from old version: static_config.flow-sender-queue-count

Default value:

outputs:
  flow_log:
    tunning:
      collector_queue_count: 1

Schema:

KeyValue
Typeint
Range[1, 64]

Description:

The number of replicas for each output queue of the FlowAggregator/SessionAggregator.

Flow Metrics {#outputs.flow_metrics}

Enabled {#outputs.flow_metrics.enabled}

Tags:

hot_update

FQCN:

outputs.flow_metrics.enabled

Upgrade from old version: collector_enabled

Default value:

outputs:
  flow_metrics:
    enabled: true

Schema:

KeyValue
Typebool

Description:

When disabled, deepflow-agent will not send metrics and logging data collected using eBPF and cBPF.

Attention: setting this to false will also disable l4_flow_log and l7_flow_log.

Filters {#outputs.flow_metrics.filters}

Inactive Server Port Aggregation {#outputs.flow_metrics.filters.inactive_server_port_aggregation}

Tags:

hot_update

FQCN:

outputs.flow_metrics.filters.inactive_server_port_aggregation

Upgrade from old version: inactive_server_port_enabled

Default value:

outputs:
  flow_metrics:
    filters:
      inactive_server_port_aggregation: false

Schema:

KeyValue
Typebool

Description:

When enabled, deepflow-agent will not generate detailed metrics for each inactive port (ports that only receive data, not send data), and the data of all inactive ports will be aggregated into the metrics with a tag 'server_port = 0'.

Inactive IP Aggregation {#outputs.flow_metrics.filters.inactive_ip_aggregation}

Tags:

hot_update

FQCN:

outputs.flow_metrics.filters.inactive_ip_aggregation

Upgrade from old version: inactive_ip_enabled

Default value:

outputs:
  flow_metrics:
    filters:
      inactive_ip_aggregation: false

Schema:

KeyValue
Typebool

Description:

When enabled, deepflow-agent will not generate detailed metrics for each inactive IP address (IP addresses that only receive data, not send data), and the data of all inactive IP addresses will be aggregated into the metrics with a tag 'ip = 0'.
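
A sketch that enables both inactive-port and inactive-IP aggregation to reduce metric cardinality:

outputs:
  flow_metrics:
    filters:
      inactive_server_port_aggregation: true
      inactive_ip_aggregation: true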

NPM Metrics {#outputs.flow_metrics.filters.npm_metrics}

Tags:

hot_update

FQCN:

outputs.flow_metrics.filters.npm_metrics

Upgrade from old version: l4_performance_enabled

Default value:

outputs:
  flow_metrics:
    filters:
      npm_metrics: true

Schema:

KeyValue
Typebool

Description:

When disabled, deepflow-agent only collects some basic throughput metrics.

APM Metrics {#outputs.flow_metrics.filters.apm_metrics}

Tags:

hot_update

FQCN:

outputs.flow_metrics.filters.apm_metrics

Upgrade from old version: l7_metrics_enabled

Default value:

outputs:
  flow_metrics:
    filters:
      apm_metrics: true

Schema:

KeyValue
Typebool

Description:

When disabled, deepflow-agent will not collect RED (request/error/delay) metrics.

Second Metrics {#outputs.flow_metrics.filters.second_metrics}

Tags:

hot_update

FQCN:

outputs.flow_metrics.filters.second_metrics

Upgrade from old version: vtap_flow_1s_enabled

Default value:

outputs:
  flow_metrics:
    filters:
      second_metrics: true

Schema:

KeyValue
Typebool

Description:

Whether to generate metrics at second granularity.

Tunning {#outputs.flow_metrics.tunning}

Sender Queue Size {#outputs.flow_metrics.tunning.sender_queue_size}

Tags:

agent_restart

FQCN:

outputs.flow_metrics.tunning.sender_queue_size

Upgrade from old version: static_config.collector-sender-queue-size

Default value:

outputs:
  flow_metrics:
    tunning:
      sender_queue_size: 65536

Schema:

KeyValue
Typeint
Range[65536, 64000000]

Description:

The length of the following queues:

  • 2-doc-to-collector-sender

Sender Queue Count {#outputs.flow_metrics.tunning.sender_queue_count}

Tags:

agent_restart

FQCN:

outputs.flow_metrics.tunning.sender_queue_count

Upgrade from old version: static_config.collector-sender-queue-count

Default value:

outputs:
  flow_metrics:
    tunning:
      sender_queue_count: 1

Schema:

KeyValue
Typeint
Range[1, 64]

Description:

The number of replicas for each output queue of the collector.

NPB (Network Packet Broker) {#outputs.npb}

Maximum MTU {#outputs.npb.max_mtu}

Tags:

hot_update ee_feature

FQCN:

outputs.npb.max_mtu

Upgrade from old version: mtu

Default value:

outputs:
  npb:
    max_mtu: 1500

Schema:

KeyValue
Typeint
Unitbyte
Range[500, 10000]

Description:

Maximum MTU allowed when using UDP to transfer data.

Attention: Public cloud service providers may modify the content of the tail of the UDP packet whose packet length is close to 1500 bytes. When using UDP transmission, it is recommended to set a slightly smaller value.
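
Following the recommendation above, a sketch that leaves some headroom below 1500 bytes (1450 is an illustrative value):

outputs:
  npb:
    max_mtu: 1450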

RAW_UDP VLAN Tag {#outputs.npb.raw_udp_vlan_tag}

Tags:

hot_update ee_feature

FQCN:

outputs.npb.raw_udp_vlan_tag

Upgrade from old version: output_vlan

Default value:

outputs:
  npb:
    raw_udp_vlan_tag: 0

Schema:

KeyValue
Typeint
Range[0, 4095]

Description:

When using RAW_UDP Socket to transmit UDP data, this value can be used to set the VLAN tag. Default value 0 means no VLAN tag.

Extra VLAN Header {#outputs.npb.extra_vlan_header}

Tags:

hot_update ee_feature

FQCN:

outputs.npb.extra_vlan_header

Upgrade from old version: npb_vlan_mode

Default value:

outputs:
  npb:
    extra_vlan_header: 0

Enum options:

ValueNote
0: None
1: 802.1Q
2: QinQ

Schema:

KeyValue
Typeint

Description:

Whether to add an extra 802.1Q (or QinQ) header to NPB traffic. When this option is set, deepflow-agent inserts a VLAN tag into the NPB traffic header; the tag value is the lower 12 bits of the TunnelID in the VXLAN header.

Traffic Global Dedup {#outputs.npb.traffic_global_dedup}

Tags:

hot_update ee_feature

FQCN:

outputs.npb.traffic_global_dedup

Upgrade from old version: npb_dedup_enabled

Default value:

outputs:
  npb:
    traffic_global_dedup: true

Schema:

KeyValue
Typebool

Description:

Whether to enable global (distributed) traffic deduplication for the NPB feature.

Target Port {#outputs.npb.target_port}

Tags:

agent_restart ee_feature

FQCN:

outputs.npb.target_port

Upgrade from old version: static_config.npb-port

Default value:

outputs:
  npb:
    target_port: 4789

Schema:

KeyValue
Typeint
Range[1, 65535]

Description:

Server port for NPB.

Custom VXLAN Flags {#outputs.npb.custom_vxlan_flags}

Tags:

agent_restart ee_feature

FQCN:

outputs.npb.custom_vxlan_flags

Upgrade from old version: static_config.vxlan-flags

Default value:

outputs:
  npb:
    custom_vxlan_flags: 255

Schema:

KeyValue
Typeint
Range[0, 255]

Description:

NPB uses the first byte of the VXLAN Flag to identify the sending traffic to prevent the traffic sent by NPB from being collected by deepflow-agent.

Attention: To ensure that the VNI bit is set, the value configured here will be used after |= 0b1000_0000. Therefore, this value cannot be directly configured as 0b1000_0000.

Overlay VLAN Header Trimming {#outputs.npb.overlay_vlan_header_trimming}

Tags:

agent_restart ee_feature

FQCN:

outputs.npb.overlay_vlan_header_trimming

Upgrade from old version: static_config.ignore-overlay-vlan

Default value:

outputs:
  npb:
    overlay_vlan_header_trimming: false

Schema:

KeyValue
Typebool

Description:

This configuration only ignores the VLAN header in the original captured packet and does not affect the configuration item npb_vlan_mode.

Maximum Tx Throughput {#outputs.npb.max_tx_throughput}

Tags:

hot_update ee_feature

FQCN:

outputs.npb.max_tx_throughput

Upgrade from old version: max_npb_bps

Default value:

outputs:
  npb:
    max_tx_throughput: 1000

Schema:

KeyValue
Typeint
UnitMbps
Range[1, 100000]

Description:

Maximum traffic rate allowed for npb sender.

Plugins {#plugins}

Wasm Plugins {#plugins.wasm_plugins}

Tags:

hot_update

FQCN:

plugins.wasm_plugins

Upgrade from old version: wasm_plugins

Default value:

plugins:
  wasm_plugins: []

Enum options:

ValueNote
DYNAMIC_OPTIONS

Schema:

KeyValue
Typestring

Description:

The Wasm plugins to be loaded by the agent.

SO Plugins {#plugins.so_plugins}

Tags:

hot_update

FQCN:

plugins.so_plugins

Upgrade from old version: so_plugins

Default value:

plugins:
  so_plugins: []

Enum options:

ValueNote
DYNAMIC_OPTIONS

Schema:

KeyValue
Typestring

Description:

The .so plugins to be loaded by the agent. The agent opens each .so file via dlopen with the RTLD_LOCAL and RTLD_LAZY flags, which means the .so must resolve its link dependencies by itself.
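
A sketch loading one plugin of each kind; the plugin names are hypothetical placeholders:

plugins:
  wasm_plugins:
  - my_wasm_plugin   # hypothetical plugin name
  so_plugins:
  - my_so_plugin     # hypothetical plugin name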

Dev {#dev}

Feature Flags {#dev.feature_flags}

Tags:

agent_restart

FQCN:

dev.feature_flags

Upgrade from old version: static_config.feature-flags

Default value:

dev:
  feature_flags: []

Schema:

KeyValue
Typestring

Description:

Unreleased deepflow-agent features can be turned on by setting this switch.