Version: 99.0.0+incompatible
Repository: https://github.com/poy/knative-pkg.git

# README

Test

This directory contains tests and testing docs.

Running unit tests

To run all unit tests:

go test ./...

Test library

You can use the test library in this directory to:

  • Use common test flags
  • Output logs
  • Emit metrics
  • Check Knative Serving resources
  • Ensure test cleanup

Use common test flags

These flags are useful for running against an existing cluster, making use of your existing environment setup.

By importing github.com/knative/pkg/test you get access to a global variable called test.Flags, which holds the values of the command line flags.

logger.Infof("Using namespace %s", test.Flags.Namespace)

See e2e_flags.go.
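
A minimal sketch of reading the shared flags from inside a test; the pkgTest import alias follows this README's other examples, and TestAgainstCluster is a hypothetical test:

import (
    "testing"

    pkgTest "github.com/knative/pkg/test"
)

func TestAgainstCluster(t *testing.T) {
    // Flags is populated from the command line flags registered
    // when the package is imported.
    t.Logf("Using namespace %s", pkgTest.Flags.Namespace)
}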

Output logs

When tests are run with --logverbose option, debug logs will be emitted to stdout.

We use a generic FormatLogger so that any existing logger satisfying its signature can be passed in. Tests can use the generic logging methods to log info and error messages. All the common methods accept a FormatLogger as a parameter, so tests can pass in t.Logf like this:

_, err = pkgTest.WaitForEndpointState(
    clients.KubeClient,
    t.Logf,
    ...)

See logging.go.
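
Any function with the right shape can serve as the logger; a minimal sketch, assuming FormatLogger is declared in logging.go as shown:

import "fmt"

// FormatLogger is the signature the helpers accept; t.Logf and
// logger.Infof both satisfy it.
type FormatLogger func(template string, args ...interface{})

// logToStdout is a custom logger that also satisfies FormatLogger.
func logToStdout(template string, args ...interface{}) {
    fmt.Printf(template+"\n", args...)
}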

Emit metrics

You can emit metrics from your tests using the opencensus library, which is also used inside Knative. These metrics will be emitted by the test if the test is run with the --emitmetrics option.

You can record arbitrary metrics with stats.Record, or measure latency by creating an instance of trace.Span using the helper method logging.GetEmitableSpan():

span := logging.GetEmitableSpan(context.Background(), "MyMetric")
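// ... perform the operation being measured ...
span.End() // ends the span; its trace metric is emitted when run with --emitmetrics

For arbitrary measurements, a minimal stats.Record sketch, assuming the standard opencensus stats API (the measure name my_measure is illustrative, and a view must be registered for the data to be exported):

import (
    "context"

    "go.opencensus.io/stats"
)

var myMeasure = stats.Int64("my_measure", "an example measurement", stats.UnitDimensionless)

func recordExample(ctx context.Context) {
    // record the value 42 against my_measure
    stats.Record(ctx, myMeasure.M(42))
}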

Metric format

When a trace metric is emitted, the format is metric <name> <startTime> <endTime> <duration>. The name of the metric is arbitrary and can be any string. The values are:

  • metric - Indicates this log is a metric
  • <name> - Arbitrary string identifying the metric
  • <startTime> - Unix time in nanoseconds when measurement started
  • <endTime> - Unix time in nanoseconds when measurement ended
  • <duration> - The difference in ms between the startTime and endTime

For example:

metric WaitForConfigurationState/prodxiparjxt/ConfigurationUpdatedWithRevision 1529980772357637397 1529980772431586609 73.949212ms

The Wait methods (which poll resources) will prefix the metric names with the name of the function, and if applicable, the name of the resource, separated by /. In the example above, WaitForConfigurationState is the name of the function, and prodxiparjxt is the name of the configuration resource being polled. ConfigurationUpdatedWithRevision is the string passed to WaitForConfigurationState by the caller to identify what state is being polled for.

Check Knative Serving resources

WARNING: this code also exists in knative/serving.

After creating Knative Serving resources or making changes to them, you will need to wait for the system to realize those changes. You can use the Knative Serving CRD check and polling methods to check that the resources are in, or eventually reach, the desired state.

The WaitFor* functions use the kubernetes wait package. To poll, they use PollImmediate, and the return values of the function you provide behave like ConditionFunc: a bool to indicate whether polling should stop or continue, and an error to indicate whether an error has occurred.

For example, you can poll a Configuration object to find the name of the Revision that was created for it:

var revisionName string
err := test.WaitForConfigurationState(
    clients.ServingClient, configName, func(c *v1alpha1.Configuration) (bool, error) {
        if c.Status.LatestCreatedRevisionName != "" {
            revisionName = c.Status.LatestCreatedRevisionName
            return true, nil
        }
        return false, nil
    }, "ConfigurationUpdatedWithRevision")

Metrics will be emitted for these Wait methods, tracking how long the tests poll.

See kube_checks.go.

Ensure test cleanup

To ensure your test is cleaned up, you should defer cleanup to execute after your test completes and also ensure the cleanup occurs if the test is interrupted:

defer tearDown(clients)
test.CleanupOnInterrupt(func() { tearDown(clients) })

See cleanup.go.
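
In a test function the pattern typically looks like the sketch below; setup and tearDown are hypothetical helpers, and the calls follow the snippet above:

func TestWithCleanup(t *testing.T) {
    clients := setup(t) // hypothetical helper that creates the test's clients
    // Register the interrupt handler first so cleanup also runs if the test
    // binary is interrupted, then defer it for normal completion.
    test.CleanupOnInterrupt(func() { tearDown(clients) })
    defer tearDown(clients)
    // ... test body ...
}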

Flags

Importing the test library adds flags that are useful for end to end tests that need to run against a cluster.

Tests importing github.com/knative/pkg/test recognize these flags:

Specifying kubeconfig

By default the tests will use the kubeconfig file at ~/.kube/config. If there is an error getting the current user, kubeconfig is used as the default value instead. You can specify a different config file with the --kubeconfig argument.

To run tests with a non-default kubeconfig file:

go test ./test --kubeconfig /my/path/kubeconfig

Specifying cluster

The --cluster argument lets you use a different cluster than your specified kubeconfig's active context.

go test ./test --cluster your-cluster-name

The current cluster names can be obtained by running:

kubectl config get-clusters

Specifying ingress endpoint

The --ingressendpoint argument lets you specify a static URL to use as the ingress server during tests. This is useful for Kubernetes configurations which do not provide external IPs.

go test ./test --ingressendpoint <k8s-controller-ip>:32380

Specifying namespace

The --namespace argument lets you specify the namespace to use for the tests. By default, tests will use serving-tests.

go test ./test --namespace your-namespace-name

Output verbose logs

The --logverbose argument lets you see verbose test logs and k8s logs.

go test ./test --logverbose

Metrics flag

Running tests with the --emitmetrics argument will cause latency metrics to be emitted by the tests.

go test ./test --emitmetrics
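
These flags can be combined in a single invocation, for example:

go test ./test --kubeconfig /my/path/kubeconfig --cluster your-cluster-name --namespace your-namespace-name --emitmetrics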


# Packages

Package logstream lets end-to-end tests incorporate controller logs into the error output of tests.
Package monitoring provides common methods for all the monitoring components used in the tests. This package exposes the following methods:

  • CheckPortAvailability(port int) error - Checks if the given port is available
  • GetPods(kubeClientset *kubernetes.Clientset, app string) (*v1.PodList, error) - Gets the list of pods that satisfy the label selector app=<app>
  • Cleanup(pid int) error - Kills the current port forwarding process running in the background
  • PortForward(logf logging.FormatLogger, podList *v1.PodList, localPort, remotePort int) (int, error) - Creates a background process that port forwards the first pod from the local to the remote port, and returns the process id of the background process created
Package zipkin adds Zipkin tracing support that can be used in conjunction with SpoofingClient to log zipkin traces for requests that have encountered server errors, i.e. HTTP requests with an HTTP status between 500 and 600.

# Functions

BuildClientConfig builds the client config specified by the config path and the cluster name.
CleanupOnInterrupt will execute the function cleanup if an interrupt signal is caught.
ClusterRoleBinding returns ClusterRoleBinding for given subject and role.
CoreV1ObjectReference returns a corev1.ObjectReference for the given name, kind and apiversion.
DeploymentScaledToZeroFunc returns a func that evaluates if a deployment has scaled to 0 pods.
EventuallyMatchesBody checks that the response body *eventually* matches the expected body.
GetConfigMap gets the configmaps for a given namespace.
ImagePath is a helper function to prefix image name with repo and suffix with tag.
IsOneOfStatusCodes checks that the response code matches one of the given codes.
IsStatusOK checks that the response code is a 200.
MatchesAllOf combines multiple ResponseCheckers to one ResponseChecker with a logical AND.
MatchesBody checks that the *first* response body matches the "expected" body, otherwise failing.
NewKubeClient instantiates and returns several clientsets required for making requests to the kube client specified by the combination of clusterName and configPath.
NewSpoofingClient returns a spoofing client to make requests.
NginxPod returns nginx pod defined in given namespace.
PodRunning will check the status conditions of the pod and return true if it's Running.
PodsRunning will check the status conditions of the pod list and return true if all pods are Running.
Retrying modifies a ResponseChecker to retry certain response codes.
ServiceAccount returns ServiceAccount object in given namespace.
WaitForAllPodsRunning waits for all the pods to be in running state.
WaitForDeploymentState polls the status of the Deployment called name from client every interval until inState returns `true` indicating it is done, inState returns an error, or the timeout is reached.
WaitForEndpointState will poll an endpoint until inState indicates the state is achieved, or default timeout is reached.
WaitForEndpointStateWithTimeout will poll an endpoint until inState indicates the state is achieved or the provided timeout is reached.
WaitForLogContent waits until logs for given Pod/Container include the given content.
WaitForPodListState polls the status of the PodList from client every interval until inState returns `true` indicating it is done, inState returns an error, or the timeout is reached.
WaitForPodRunning waits for the given pod to be in running state.
WithHeader will add the provided headers to the request.

# Variables

Flags holds the command line flags or defaults for settings in the user's environment.

# Structs

EnvironmentFlags define the flags that are needed to run the e2e tests.
KubeClient holds instances of interfaces for making requests to kubernetes client.

# Type aliases

RequestOption enables configuration of requests when polling for endpoint states.