package
Version: 0.0.0-20181214184433-b04c0947ad2f
Repository: https://github.com/cbron/pkg.git
Documentation: pkg.go.dev

# README

Test

This directory contains tests and testing docs.

Running unit tests

To run all unit tests:

go test ./...

Test library

You can use the test library in this directory to:

Use common test flags

These flags are useful for running against an existing cluster, making use of your existing environment setup.

By importing github.com/knative/pkg/test you get access to a global variable called test.Flags, which holds the values of the command line flags.

logger.Infof("Using namespace %s", test.Flags.Namespace)

See e2e_flags.go.
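
The flag machinery can be sketched with the standard library. This is a minimal sketch of the pattern e2e_flags.go follows, assuming a package-level struct populated at init time; the field names and defaults below are illustrative, not the library's exact set:

```go
package main

import (
	"flag"
	"fmt"
)

// EnvironmentFlags mirrors the shape of the struct behind test.Flags.
// These fields are illustrative; see e2e_flags.go for the real set.
type EnvironmentFlags struct {
	Cluster    string // name of the cluster to run against
	Namespace  string // namespace to create test resources in
	LogVerbose bool   // emit debug logs to stdout
}

// Flags is populated from the command line, the way importing the
// test package registers its flags before tests run.
var Flags = initializeFlags()

func initializeFlags() *EnvironmentFlags {
	f := new(EnvironmentFlags)
	flag.StringVar(&f.Cluster, "cluster", "", "Cluster to run tests against")
	flag.StringVar(&f.Namespace, "namespace", "serving-tests", "Namespace for test resources")
	flag.BoolVar(&f.LogVerbose, "logverbose", false, "Emit verbose logs")
	return f
}

func main() {
	flag.Parse()
	fmt.Printf("Using namespace %s\n", Flags.Namespace)
}
```

Because the flags are registered when the package is imported, any test binary that imports the library accepts them automatically.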

Output logs

When tests are run with --logverbose option, debug logs will be emitted to stdout.

The tests use a common e2e logging library backed by the Knative structured logging library, which in turn is built on top of zap. Tests should initialize a logger with a test-specific context using logging.GetContextLogger:

// The convention is for the name of the logger to match the name of the test.
logger := logging.GetContextLogger("TestHelloWorld")

Logs can then be emitted using the logger object which is required by many functions in the test library. To emit logs:

logger.Infof("Creating a new Route %s and Configuration %s", route, configuration)
logger.Debugf("The LogURL is %s, not yet verifying", logURL)

See logging.go.

Emit metrics

You can emit metrics from your tests using the opencensus library, which Knative itself also uses. These metrics are emitted only when the test is run with the --emitmetrics option.

You can record arbitrary metrics with stats.Record or measure latency with trace.StartSpan:

ctx, span := trace.StartSpan(context.Background(), "MyMetric")
defer span.End() // the measurement is recorded when the span ends

Metric format

When a trace metric is emitted, it has the format metric <name> <startTime> <endTime> <duration>. The name of the metric is arbitrary and can be any string. The fields are:

  • metric - Indicates this log is a metric
  • <name> - Arbitrary string identifying the metric
  • <startTime> - Unix time in nanoseconds when measurement started
  • <endTime> - Unix time in nanoseconds when measurement ended
  • <duration> - The difference between startTime and endTime, formatted as a duration (e.g. 73.949212ms)

For example:

metric WaitForConfigurationState/prodxiparjxt/ConfigurationUpdatedWithRevision 1529980772357637397 1529980772431586609 73.949212ms

The Wait methods (which poll resources) will prefix the metric names with the name of the function, and if applicable, the name of the resource, separated by /. In the example above, WaitForConfigurationState is the name of the function, and prodxiparjxt is the name of the configuration resource being polled. ConfigurationUpdatedWithRevision is the string passed to WaitForConfigurationState by the caller to identify what state is being polled for.

Check Knative Serving resources

WARNING: this code also exists in knative/serving.

After creating Knative Serving resources or making changes to them, you will need to wait for the system to realize those changes. You can use the Knative Serving CRD check and polling methods to check that the resources either are already in, or eventually reach, the desired state.

The WaitFor* functions use the Kubernetes wait package. They poll using PollImmediate, and the function you provide follows the ConditionFunc contract: it returns a bool indicating whether polling should stop, and an error indicating whether polling should abort.

For example, you can poll a Configuration object to find the name of the Revision that was created for it:

var revisionName string
err := test.WaitForConfigurationState(clients.ServingClient, configName, func(c *v1alpha1.Configuration) (bool, error) {
    if c.Status.LatestCreatedRevisionName != "" {
        revisionName = c.Status.LatestCreatedRevisionName
        return true, nil
    }
    return false, nil
}, "ConfigurationUpdatedWithRevision")

Metrics will be emitted for these Wait methods, tracking how long tests poll for.

See kube_checks.go.

Ensure test cleanup

To ensure your test cleans up after itself, defer the cleanup function so it runs when the test completes, and also register it to run if the test is interrupted:

defer tearDown(clients)
test.CleanupOnInterrupt(func() { tearDown(clients) })

See cleanup.go.
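
The interrupt half of this can be sketched with the standard signal package. This is a minimal sketch of the idea behind CleanupOnInterrupt, not the library's implementation (the real helper also coordinates multiple registered cleanups):

```go
package main

import (
	"fmt"
	"os"
	"os/signal"
)

// cleanupOnInterrupt runs the given cleanup function if the process
// receives an interrupt signal, then exits with a non-zero status.
func cleanupOnInterrupt(cleanup func()) {
	c := make(chan os.Signal, 1)
	signal.Notify(c, os.Interrupt)
	go func() {
		<-c
		cleanup()
		os.Exit(1)
	}()
}

func tearDown() { fmt.Println("deleting test resources") }

func main() {
	// Deferred cleanup covers normal completion; the signal handler
	// covers Ctrl-C while the test is still running.
	defer tearDown()
	cleanupOnInterrupt(tearDown)
	fmt.Println("running test")
}
```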

Flags

Importing the test library adds flags that are useful for end-to-end tests that need to run against a cluster.

Tests importing github.com/knative/pkg/test recognize these flags:

Specifying kubeconfig

By default the tests will use the kubeconfig file at ~/.kube/config. If the current user cannot be determined, the relative path kubeconfig is used as the default instead. You can specify a different config file with the --kubeconfig argument.

To run tests with a non-default kubeconfig file:

go test ./test --kubeconfig /my/path/kubeconfig
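
The default-path logic described above can be sketched as follows; defaultKubeconfig is a hypothetical helper name:

```go
package main

import (
	"fmt"
	"os/user"
	"path/filepath"
)

// defaultKubeconfig mirrors the default described above: use
// ~/.kube/config when the current user can be determined, otherwise
// fall back to the relative path "kubeconfig".
func defaultKubeconfig() string {
	usr, err := user.Current()
	if err != nil {
		return "kubeconfig"
	}
	return filepath.Join(usr.HomeDir, ".kube", "config")
}

func main() {
	fmt.Println("default kubeconfig:", defaultKubeconfig())
}
```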

Specifying cluster

The --cluster argument lets you use a cluster other than your kubeconfig's active context. If not specified, this defaults to the value of the K8S_CLUSTER_OVERRIDE environment variable.

go test ./test --cluster your-cluster-name

The current cluster names can be obtained by running:

kubectl config get-clusters

Specifying namespace

The --namespace argument lets you specify the namespace to use for the tests. By default, tests will use serving-tests.

go test ./test --namespace your-namespace-name

Output verbose logs

The --logverbose argument lets you see verbose test logs and k8s logs.

go test ./test --logverbose

Metrics flag

Running tests with the --emitmetrics argument will cause latency metrics to be emitted by the tests.

go test ./test --emitmetrics

Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License.

# Packages

No description provided by the author

# Functions

BuildClientConfig builds the client config specified by the config path and the cluster name.
CleanupOnInterrupt will execute the function cleanup if an interrupt signal is caught.
EventuallyMatchesBody checks that the response body *eventually* matches the expected body.
MatchesAny is a NOP matcher.
MatchesBody checks that the *first* response body matches the "expected" body, otherwise failing.
NewKubeClient instantiates and returns several clientsets required for making requests to the kube client specified by the combination of clusterName and configPath.
NewSpoofingClient returns a spoofing client to make requests.
Retrying modifies a ResponseChecker to retry certain response codes.
WaitForDeploymentState polls the status of the Deployment called name from client every interval until inState returns `true` indicating it is done, returns an error or timeout.
WaitForEndpointState will poll an endpoint until inState indicates the state is achieved.
WaitForPodListState polls the status of the PodList from client every interval until inState returns `true` indicating it is done, returns an error or timeout.

# Variables

Flags holds the command line flags or defaults for settings in the user's environment.

# Structs

EnvironmentFlags define the flags that are needed to run the e2e tests.
KubeClient holds instances of interfaces for making requests to kubernetes client.