Upgrade Tests

In order to get coverage for the upgrade process from an operator’s perspective, we need an additional suite of tests that perform a complete knative upgrade. Running these tests on every commit will ensure that we don’t introduce any non-upgradeable changes, so every commit should be releasable.

This is inspired by kubernetes upgrade testing.

These tests are a pretty big hammer in that they cover more than just version changes, but for now they're one of the only ways to make sure we don't accidentally introduce breaking changes.

Flow

We’d like to validate that the upgrade doesn’t break any resources (they still propagate events) and doesn't break our installation (we can still update resources).

At a high level, we want to do this:

  1. Install the latest knative release.
  2. Create some resources.
  3. Install knative at HEAD.
  4. Run any post-install jobs that apply to the release being installed.
  5. Test those resources, verify that we didn’t break anything.

To achieve that, we use three separate build tags (preupgrade, postupgrade, and postdowngrade) and run the following steps (a sketch of how a tag gates a test file follows the list):

  1. Install the latest release from GitHub.
  2. Run the preupgrade tests in this directory.
  3. Install at HEAD (ko apply -f config/).
  4. Run the post-install job. For v0.15 we need to migrate storage versions.
  5. Run the postupgrade tests in this directory.
  6. Install the latest release from GitHub.
  7. Run the postdowngrade tests in this directory.
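
For illustration only, here is a minimal sketch of how one of those build tags gates a test file. The file layout, package name, and test name below are assumptions rather than the actual files in this directory, but the mechanism is the standard one: go test -tags=preupgrade compiles and runs only the tests guarded by that tag.

//go:build preupgrade
// +build preupgrade

// Package upgrade (hypothetical layout) holds the phase-gated tests.
package upgrade

import "testing"

// TestPreUpgrade is compiled into the test binary only when the
// preupgrade build tag is set, for example:
//
//	go test -tags=preupgrade ./test/upgrade/...
//
// Sibling files guarded by the postupgrade and postdowngrade tags hold
// the tests for the other two phases, so each step runs only its own tests.
func TestPreUpgrade(t *testing.T) {
	t.Log("running pre-upgrade checks against the currently installed release")
	// A real test would create resources here and verify event delivery.
}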

Tests

Smoke test

This was stolen from the e2e tests as one of the simplest cases.

preupgrade, postupgrade, postdowngrade

Run the selected smoke test.
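
As a rough sketch of how one smoke-test body can serve all three phases, the helper below is a placeholder (the name and the delivery check are made up, not the real helpers in this directory); each phase's wrapper, guarded by its own build tag, would call the same function:

package upgrade

import (
	"testing"
	"time"
)

// runSmokeTest is a hypothetical shared helper: each phase wrapper
// (guarded by its own build tag) calls the same body, so the exact same
// checks run before the upgrade, after it, and after the downgrade.
func runSmokeTest(t *testing.T) {
	t.Helper()
	// Placeholder for the real flow: create a Broker/Trigger and a consumer,
	// send an event, then wait for it to arrive.
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		if eventDelivered() { // hypothetical check standing in for polling the consumer
			return
		}
		time.Sleep(time.Second)
	}
	t.Fatal("event was not delivered before the deadline")
}

// eventDelivered stands in for checking whether the test event reached the consumer.
func eventDelivered() bool { return true }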

Probe test

In order to verify that we don't have data-plane unavailability during our control-plane outages (when we're upgrading the knative/eventing installation), we run a prober test that continually sends events to a service during the entire upgrade/downgrade process. When the upgrade completes, we make sure that all of those events propagated just once.

To achieve that, a tool was prepared (https://github.com/cardil/wathola). It consists of 3 components: sender, forwarder, and receiver. The sender is an ordinary Kubernetes pod that publishes events to the default broker at a given interval. When it shuts down (on either SIGTERM or SIGINT), a finished event is generated. The forwarder is a knative serving service that scales from zero to receive the requested traffic; it receives events and forwards them to a given target. The receiver is an ordinary pod that collects events from multiple forwarders and exposes a /report endpoint that can be polled to get the status of the sent events.

The diagram below describes the setup:

  (pod)              (ksvc)             (pod)
+--------+       +-----------+       +----------+
|        |       |           ++      |          |
| Sender |   +-->| Forwarder ||----->+ Receiver |
|        |   |   |           ||      |          |
+---+----+   |   +-----------+|      +----------+
    |        |    +-----------+
    |        |
    |        |
    |     +--+-----+       +---------+
    +----->        |       |         +-+
          | Broker | < - - | Trigger | |
          |        |       |         | |
          +--------+       +---------+ |
           (default)        +----------+
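
As a rough illustration of the final verification, the sketch below fetches the receiver's /report endpoint and fails if any event was lost or delivered more than once. The JSON field names and report shape are assumptions made for this example; the actual wathola report format may differ.

package upgrade

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// report mirrors an assumed shape of the receiver's /report payload;
// the real wathola format may differ.
type report struct {
	Events     int    `json:"events"`
	Missing    []int  `json:"missing"`
	Duplicated []int  `json:"duplicated"`
	State      string `json:"state"`
}

// fetchReport polls the receiver's /report endpoint once and decodes the result.
func fetchReport(url string) (*report, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	var r report
	if err := json.NewDecoder(resp.Body).Decode(&r); err != nil {
		return nil, err
	}
	return &r, nil
}

// verifyExactlyOnce fails when any event was lost or delivered more than once.
func verifyExactlyOnce(url string) error {
	r, err := fetchReport(url)
	if err != nil {
		return err
	}
	if len(r.Missing) > 0 || len(r.Duplicated) > 0 {
		return fmt.Errorf("events missing: %v, duplicated: %v", r.Missing, r.Duplicated)
	}
	return nil
}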
