package
0.38.0-preview.0
Repository: https://github.com/onflow/flow-go.git
Documentation: pkg.go.dev

# README

Tracking result approvals

Context

Among other things, an ExecutionResult specifies

  • BlockID: the ID of the executed block
  • PreviousResultID: the ID of the result that was used as starting state to run the computation

This implies the ExecutionResults form a tree. We call this the Execution Tree. However, nodes do not need to retain the entire tree, but can prune levels of lower height. By pruning lower levels, the full tree decomposes into a forest of subtrees.

Multiple verifier assignments

For a single Execution Result, there can be multiple assignments of Verification Nodes to the individual chunks. This is because:

  • Execution results are incorporated in blocks.
  • The block that incorporates the result is used to determine the verifier assignment.
  • The same result r can be incorporated in different forks.
  • The different blocks incorporating r will have different Sources of Randomness. Hence, the verifier assignments differ for the different forks.

At some point, verifier assignments can be discarded, for example when the block incorporating the result is orphaned.

Assignment Collectors

From a computational perspective, it makes sense to group all assignments for the same result:

  • A ResultApproval is for a specific execution result. There is nothing in the approval that ties it to a specific assignment. So theoretically, the same approval could count towards multiple different assignments.
  • The dominant computational cost when processing a ResultApproval is verifying its correctness (cryptographic signature and SPoCK proof). In contrast, determining which assignments the approval can be counted towards is negligible. In other words, tracking more than the minimally necessary assignments adds no noteworthy computational overhead.

Hence, it makes sense to define the AssignmentCollector. It processes the ResultApprovals for one particular execution result:

  • Specifically, it memorizes all known verifier assignments for the result.
  • When it receives a ResultApproval, it verifies the approval once and then adds the approval to eligible assignments.
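The verify-once, count-many idea above can be sketched as follows. This is a hedged illustration only: the types `ResultApproval`, `assignment`, and `AssignmentCollector` below are drastically simplified stand-ins for the real flow-go structures (which carry signatures, SPoCK proofs, and chunk bodies), and the expensive cryptographic verification is stubbed out.

```go
package main

import "fmt"

// Hypothetical, simplified approval: the real ResultApproval also carries
// a signature and a SPoCK proof.
type ResultApproval struct {
	ResultID   string
	VerifierID string
	ChunkIndex int
}

// assignment records which verifiers are assigned to which chunks for one
// specific incorporating block (one fork).
type assignment struct {
	incorporatingBlockID string
	chunks               map[string]map[int]bool // verifier ID -> assigned chunk indices
}

// AssignmentCollector tracks all known verifier assignments for one result.
type AssignmentCollector struct {
	resultID    string
	assignments []*assignment
	verified    map[string]bool // approvals already verified once
}

// ProcessApproval verifies the approval once (stubbed) and then counts it
// towards every assignment where the verifier is assigned to the chunk.
func (c *AssignmentCollector) ProcessApproval(a ResultApproval) int {
	key := fmt.Sprintf("%s/%s/%d", a.ResultID, a.VerifierID, a.ChunkIndex)
	if !c.verified[key] {
		// expensive step: signature + SPoCK verification (omitted here)
		c.verified[key] = true
	}
	counted := 0
	for _, asgn := range c.assignments {
		if asgn.chunks[a.VerifierID][a.ChunkIndex] {
			counted++ // cheap step: count towards this fork's assignment
		}
	}
	return counted
}

func main() {
	// the same result incorporated in two forks -> two assignments
	c := &AssignmentCollector{
		resultID: "r1",
		assignments: []*assignment{
			{incorporatingBlockID: "B1", chunks: map[string]map[int]bool{"v1": {0: true}}},
			{incorporatingBlockID: "B2", chunks: map[string]map[int]bool{"v1": {0: true}}},
		},
		verified: map[string]bool{},
	}
	fmt.Println(c.ProcessApproval(ResultApproval{"r1", "v1", 0})) // prints 2
}
```

Note how a single verification lets the approval count towards both forks' assignments, matching the cost argument above.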

Assignment Collector Tree

It is beneficial to arrange the Assignment Collectors as a tree (or a forest, depending on pruning). This allows us to draw conclusions about entire subtrees, e.g. orphan or prune them as a whole.

As illustrated by the figure above, the AssignmentCollectors form a tree. Formally, we define:

  • All verification assignments for the same result form an equivalence class. Each equivalence class represents one vertex in the AssignmentCollectorTree. It is implemented as AssignmentCollector.

  • The edges are placed exactly like in the Execution Tree:

    • Let r[B] be an execution result for block B. We denote the corresponding AssignmentCollector as c[r[B]].
    • Let r_parent := r[B].PreviousResultID be the parent result and c[r_parent] the corresponding AssignmentCollector.

    In the Execution Tree, there is an edge r[B] --> r_parent. Correspondingly, we place the edge c[r[B]] --> c[r_parent] in the Assignment Collector Tree.

  • Lastly, for an AssignmentCollector, we define a level as the height of the executed block.
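The edge placement described above can be sketched as a small mempool-like structure. This is a hedged illustration, not the actual AssignmentCollectorTree implementation: the `ExecutionResult`, `vertex`, and `CollectorTree` types below are simplified stand-ins, and block heights are supplied via a plain lookup map.

```go
package main

import "fmt"

// Hypothetical, simplified result: the real ExecutionResult also carries
// chunks, service events, etc.
type ExecutionResult struct {
	ID               string
	BlockID          string
	PreviousResultID string
}

// vertex is one equivalence class of assignments (one AssignmentCollector).
type vertex struct {
	resultID string
	level    uint64 // the height of the executed block
	children []*vertex
}

// CollectorTree mirrors the Execution Tree: one vertex per result, with an
// edge from the parent result's vertex to the child result's vertex.
type CollectorTree struct {
	vertices    map[string]*vertex
	blockHeight map[string]uint64 // executed block ID -> height
}

// Add inserts a vertex for the result and links it to its parent vertex,
// if the parent is already known (the tree is populated concurrently).
func (t *CollectorTree) Add(r ExecutionResult) *vertex {
	v := &vertex{resultID: r.ID, level: t.blockHeight[r.BlockID]}
	t.vertices[r.ID] = v
	if parent, ok := t.vertices[r.PreviousResultID]; ok {
		parent.children = append(parent.children, v)
	}
	return v
}

func main() {
	t := &CollectorTree{
		vertices:    map[string]*vertex{},
		blockHeight: map[string]uint64{"A": 10, "B": 11},
	}
	t.Add(ExecutionResult{ID: "rA", BlockID: "A", PreviousResultID: "r0"})
	vB := t.Add(ExecutionResult{ID: "rB", BlockID: "B", PreviousResultID: "rA"})
	fmt.Println(vB.level, len(t.vertices["rA"].children)) // prints: 11 1
}
```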

[Figure: Assignment Collector Tree]

Orphaning forks in the Assignment Collector Tree

There are two scenarios when we can orphan entire forks of the Assignment Collector Tree:

  1. The executed blocks were orphaned. This happens as a consequence of block finalization by the local HotStuff instance.
  2. If an execution result conflicts with a finalized result seal, the result itself as well as all the derived results are orphaned.

In addition, we can prune all levels in the Assignment Collector Tree that are below the latest finalized seal.

When an AssignmentCollector is orphaned, we can stop processing the corresponding approvals. However, for the following reasons, we should not immediately discard the respective vertices in the Assignment Collector Tree:

  • We might later learn about derived results (e.g. results for child blocks, or different execution results for the same blocks).
  • In order to make the determination whether a new AssignmentCollector should be orphaned, we retain the orphaned forks in the Assignment Collector Tree. When adding an AssignmentCollector that extends an already orphaned fork, it is also orphaned.

Orphaned forks will eventually be pruned by height, when sealing processes beyond the fork's height.
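The two mechanisms above, orphan propagation when extending an orphaned fork and pruning by height, can be sketched as follows. This is a hedged illustration under simplified assumptions: the `collector` and `forest` types are hypothetical stand-ins, and collectors are indexed by level (executed block height) to make height-based pruning cheap.

```go
package main

import "fmt"

// collector is a hypothetical, minimal AssignmentCollector stand-in.
type collector struct {
	resultID string
	parentID string
	orphaned bool
}

type forest struct {
	byLevel map[uint64][]*collector // level -> collectors at that level
	byID    map[string]*collector
}

// add inserts a collector; if it extends an already-orphaned fork,
// it is orphaned as well.
func (f *forest) add(level uint64, resultID, parentID string) *collector {
	c := &collector{resultID: resultID, parentID: parentID}
	if p, ok := f.byID[parentID]; ok && p.orphaned {
		c.orphaned = true
	}
	f.byLevel[level] = append(f.byLevel[level], c)
	f.byID[resultID] = c
	return c
}

// pruneUpToLevel drops all levels below the latest finalized seal.
func (f *forest) pruneUpToLevel(sealed uint64) {
	for level, cs := range f.byLevel {
		if level < sealed {
			for _, c := range cs {
				delete(f.byID, c.resultID)
			}
			delete(f.byLevel, level)
		}
	}
}

func main() {
	f := &forest{byLevel: map[uint64][]*collector{}, byID: map[string]*collector{}}
	f.add(10, "rA", "r0")
	f.byID["rA"].orphaned = true // e.g. rA conflicts with a finalized seal
	child := f.add(11, "rB", "rA")
	fmt.Println(child.orphaned) // prints: true (extends an orphaned fork)
	f.pruneUpToLevel(11)
	_, stillThere := f.byID["rA"]
	fmt.Println(stillThere) // prints: false (pruned by height)
}
```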

The Assignment Collector's States

We already have discussed two different modes an assignment collector could operate in:

  • VerifyingApprovals: it is processing incoming ResultApprovals
  • Orphaned: it has been orphaned and discards all ResultApprovals

The Assignment Collector Tree is populated in a fully concurrent manner. Hence, when adding an AssignmentCollector, it can happen that the parent is not yet present (see the following figure for illustration). For this case, there is a third mode:

  • CachingApprovals: the collector caches all approvals without processing them. This mode is relatively rare and largely occurs as an edge case when the node is behind and catching up (i.e. processing many blocks in a relatively short time). If the node is behind, it is likely that the result is already sealed by the time it has caught up. Hence, we permit a simple implementation that does not keep track of the order in which the approvals were received.

[Figure: Assignment Collector Tree]

In the future, we will also add further states for 'Extensive Checking Mode':

  • ExtensiveCheckingMode: the AssignmentCollector has not received sufficient approvals and sealing is lagging behind (exceeding a certain threshold). In this case, the collector should trigger extensive checking mode, i.e. request additional verifiers to check the result.
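The states discussed above could be modeled as a small enumeration. The constants CachingApprovals, VerifyingApprovals, and Orphaned exist in the package (see the Constants section below); the sketch here is a hedged illustration of the shape such a `ProcessingStatus` type could take, including the planned ExtensiveCheckingMode, and does not reflect the actual constant values.

```go
package main

import "fmt"

// ProcessingStatus is a state descriptor for the AssignmentCollector.
type ProcessingStatus int

const (
	CachingApprovals      ProcessingStatus = iota // parent collector not yet present
	VerifyingApprovals                            // normal operation: verify and count approvals
	ExtensiveCheckingMode                         // planned: request additional verifiers
	Orphaned                                      // fork orphaned: discard all approvals
)

func (s ProcessingStatus) String() string {
	switch s {
	case CachingApprovals:
		return "CachingApprovals"
	case VerifyingApprovals:
		return "VerifyingApprovals"
	case ExtensiveCheckingMode:
		return "ExtensiveCheckingMode"
	case Orphaned:
		return "Orphaned"
	}
	return "Unknown"
}

func main() {
	fmt.Println(CachingApprovals, Orphaned) // prints: CachingApprovals Orphaned
}
```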

Implementation Comments

Compare And Repeat Pattern

In multiple places we employ the Compare And Repeat Pattern. This pattern is applicable when we have:

  • an atomically-updatable state value
  • some business logic that requires an up-to-date value of the state

With the Compare And Repeat Pattern, we guarantee that the business logic was executed with the most recent state.

Pattern Details

  1. Atomically read the state before the operation.
  2. Execute the operation on the retrieved state.
  3. Atomically read the state after the operation.
    • If the state changed, we updated a stale state. In this case we go back to step 1.
    • If the state remained unchanged, we updated the most recent state. Hence, we are done.

Key for caching ResultApprovals

In case we don't process a ResultApproval right away, we generally cache it. If done naively, the caching could be exploited by byzantine nodes: they could send lots of approvals for the same chunk but vary some field in their approvals that is not validated right away (e.g. the SPoCK proof). If consensus nodes used the ID of the full approval as key for the cache, the malicious approvals would have all different keys, which would allow a malicious node to flood the cache.

Instead, we only want to cache one approval per Verification node for each specific chunk. Therefore, we use ResultApproval.PartialID(), which is only computed over the chunk-identifier plus the verifier's node ID.
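The key-derivation idea can be sketched as follows. This is a hedged illustration of the principle behind `ResultApproval.PartialID()`, not its actual implementation: the `ResultApproval` struct and the hashing scheme below are simplified stand-ins, chosen only to show that unvalidated fields (like the SPoCK proof) must not influence the cache key.

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// Hypothetical, simplified approval; the real ResultApproval carries
// signatures and a SPoCK proof, which are NOT part of the cache key.
type ResultApproval struct {
	ResultID   string
	ChunkIndex uint64
	VerifierID string
	SpockProof []byte // not validated right away -> must not affect the key
}

// cacheKey hashes only the chunk identifier plus the verifier's node ID,
// so a byzantine verifier varying unvalidated fields still maps to a
// single cache slot instead of flooding the cache.
func cacheKey(a ResultApproval) [32]byte {
	return sha256.Sum256([]byte(fmt.Sprintf("%s/%d/%s", a.ResultID, a.ChunkIndex, a.VerifierID)))
}

func main() {
	a1 := ResultApproval{"r1", 0, "v1", []byte{1}}
	a2 := ResultApproval{"r1", 0, "v1", []byte{2}} // same chunk + verifier, different SPoCK
	fmt.Println(cacheKey(a1) == cacheKey(a2))      // prints: true (both hit one cache slot)
}
```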

Emergency Sealing

Emergency sealing is a temporary fallback to maintain liveness in case there are problems with verification. In a nutshell, emergency sealing means that (selected) consensus nodes can seal blocks without sufficient approvals from verification nodes.

Conceptually, emergency sealing follows a similar process as regular sealing: we consider verification assignments that have not yet been sealed (or orphaned). We inspect each assignment individually whether it satisfies the emergency sealing conditions (details below). If this is the case, we generate a candidate seal and place it in the consensus node's local IncorporatedResultSeals mempool. The block builder will then pick out a continuous sequence of seals from the mempool, if one exists. This simplifies the sealing logic (happy path as well as emergency sealing fallback), because we can evaluate sealing conditions for each incorporated Execution Result [ER] individually.

Consider block A, whose Execution Result [ER] was incorporated in block B (illustrated below). The decision whether the ER can be emergency sealed is governed by two protocol parameters: DefaultEmergencySealingThresholdForFinalization and DefaultEmergencySealingThresholdForVerification. For an ER to be emergency sealed, all of the following conditions have to be satisfied:

[Figure: Emergency sealing]

  1. Let Δh1 be the height difference between the latest finalized block and block A. We require that Δh1 > DefaultEmergencySealingThresholdForFinalization.
    • This means that block A must have at least DefaultEmergencySealingThresholdForFinalization unsealed but finalized descendants.
    • This condition is enforced by the VerifyingAssignmentCollector.
    • We also use this condition in sealing.Core to evaluate which height range we need to check for emergency sealing.
  2. Let Δh2 be the height difference between the latest finalized block and block B, which incorporates the ER for block A. We require that Δh2 > DefaultEmergencySealingThresholdForVerification.
    • This means that the block incorporating the ER must have at least DefaultEmergencySealingThresholdForVerification finalized descendants.
    • This condition is enforced by the VerifyingAssignmentCollector.
  3. There must be consistent execution results from at least two different Execution Nodes to even consider a result for emergency sealing.
    • This condition is enforced by the consensus.IncorporatedResultSeals. This mempool stores all candidate seals, but hides them from the block builder until at least two consistent execution results are known.
    • Comment: as a temporary safety measure, requiring at least two consistent results must hold for any block, no matter whether the seal was generated through the happy path or via emergency sealing.
  4. The parent block has to be sealed.
    • This condition is enforced by the consensus.Builder.
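The height-based conditions 1 and 2 can be sketched as a single predicate. This is a hedged illustration: the threshold values below are illustrative, not the protocol's actual defaults, and conditions 3 and 4 (two consistent Execution Nodes, sealed parent) are deliberately omitted since they are enforced elsewhere (mempool and block builder).

```go
package main

import "fmt"

// Illustrative values only; not the protocol's actual defaults.
const (
	DefaultEmergencySealingThresholdForFinalization = 100
	DefaultEmergencySealingThresholdForVerification = 25
)

// heightSatisfiesEmergencySealing checks conditions 1 and 2 for the result
// of executed block A (height heightA), incorporated in block B (height
// heightB), given the height of the latest finalized block.
func heightSatisfiesEmergencySealing(finalizedHeight, heightA, heightB uint64) bool {
	d1 := finalizedHeight - heightA // Δh1: finalized descendants of block A
	d2 := finalizedHeight - heightB // Δh2: finalized descendants of block B
	return d1 > DefaultEmergencySealingThresholdForFinalization &&
		d2 > DefaultEmergencySealingThresholdForVerification
}

func main() {
	// A at height 500, incorporated in B at height 510, finalized head at 640:
	fmt.Println(heightSatisfiesEmergencySealing(640, 500, 510)) // prints: true
	// sealing only slightly behind -> no emergency sealing:
	fmt.Println(heightSatisfiesEmergencySealing(640, 600, 610)) // prints: false
}
```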

# Packages

No description provided by the author

# Functions

NewAggregatedSignatures instantiates an AggregatedSignatures.
No description provided by the author
NewRequestTracker instantiates a new RequestTracker with blackout periods between min and max seconds.
NewRequestTrackerItem instantiates a new RequestTrackerItem where the NextTimeout is evaluated to the current time plus a random blackout period contained between min and max.
NewSignatureCollector instantiates a new SignatureCollector.
NewVerifyingAssignmentCollector instantiates a new VerifyingAssignmentCollector.

# Constants

CachingApprovals is a state descriptor for the AssignmentCollector.
DefaultEmergencySealingThresholdForFinalization is the minimal number of unsealed but finalized descendants that a block must have in order to be eligible for emergency sealing (further conditions apply for emergency sealing).
DefaultEmergencySealingThresholdForVerification is the minimal number of finalized descendants that the block _incorporating_ an Execution Result [ER] must have for the ER to be eligible for emergency sealing (further conditions apply for emergency sealing).
Orphaned is a state descriptor for the AssignmentCollector.
VerifyingApprovals is a state descriptor for the AssignmentCollector.

# Variables

No description provided by the author

# Structs

AggregatedSignatures is a utility struct that provides concurrency-safe access to a map of aggregated signatures indexed by chunk index.
ApprovalCollector is responsible for distributing work to chunk collectors, collecting aggregated signatures for chunks that reached the seal-construction threshold, and creating and submitting seal candidates once signatures for every chunk are aggregated.
ApprovalsCache encapsulates a map for storing result approvals indexed by approval partial ID to provide concurrent access.
AssignmentCollectorBase holds common dependencies and immutable values that are shared by the different states of an AssignmentCollector.
AssignmentCollectorStateMachine implements the `AssignmentCollector` interface.
AssignmentCollectorTree is a mempool holding assignment collectors, which is aware of the tree structure formed by the execution results.
BaseApprovalsTestSuite is a base suite for testing approvals-processing functionality. In a nutshell, it generates mock data that can be used to create approvals, and provides all data needed to validate those approvals for the respective execution result.
BaseAssignmentCollectorTestSuite is a base suite for testing assignment collectors; it contains mocks for all classes used by the base assignment collector and can be reused in different test suites.
CachingAssignmentCollector is an AssignmentCollectorState with the fixed `ProcessingStatus` of `CachingApprovals`.
ChunkApprovalCollector implements logic for checking chunks against assignments as well as accumulating signatures of already checked approvals.
IncorporatedResultsCache encapsulates a map for storing IncorporatedResults to provide concurrent access.
LazyInitCollector is a helper structure used to return a lazily-initialized collector.
LruCache is a wrapper over `simplelru.LRUCache` that provides the API needed for processing result approvals. It extends `simplelru.LRUCache` by introducing an additional index for quicker access.
OrphanAssignmentCollector is an AssignmentCollectorState with the fixed `ProcessingStatus` of `Orphaned`.
RequestTracker is an index of RequestTrackerItems, indexed by result ID, incorporated block ID, and chunk index.
RequestTrackerItem is an object that keeps track of how many times a request has been made, as well as the time until a new request can be made.
SignatureCollector contains a set of signatures from verifiers attesting to the validity of an execution result chunk.
VerifyingAssignmentCollector processes approvals for one execution result. Context: when the same result is incorporated in multiple different forks, a unique verifier assignment is determined for each fork.

# Interfaces

AssignmentCollector tracks the known verifier assignments for a particular execution result.
AssignmentCollectorState represents an AssignmentCollector in one specific state (without any knowledge about state transitions).

# Type aliases

NewCollectorFactoryMethod is a factory method to generate an AssignmentCollector for an execution result.
ProcessingStatus is a state descriptor for the AssignmentCollector.