docs: Replace Casper docs with Puppeteer docs.

This commit also has a few tweaks to the Node testing documentation to
improve its introductory section.
Tim Abbott
2020-08-30 18:39:34 -07:00
parent 2f5f5d7749
commit 0b2854f010
8 changed files with 175 additions and 201 deletions


@@ -51,7 +51,7 @@ different flows:
When making changes to the hashchange system, it is **essential** to
test all of these flows, since we don't have great automated tests for
all of this (would be a good project to add them to the
[Casper suite][testing-with-casper]) and there's enough complexity
[Puppeteer suite][testing-with-puppeteer]) and there's enough complexity
that it's easy to accidentally break something.
The main external API is below:
@@ -119,6 +119,6 @@ browser, Zulip also does a few bookkeeping things on page reload (like
cleaning up its event queue, and saving any text in an open compose
box as a draft).
[testing-with-casper]: ../testing/testing-with-casper.md
[testing-with-puppeteer]: ../testing/testing-with-puppeteer.md
[self-server-reloads]: #server-initiated-reloads
[events-system]: ../subsystems/events-system.md


@@ -9,7 +9,7 @@ Code testing
linters
testing-with-django
testing-with-node
testing-with-casper
testing-with-puppeteer
mypy
typescript
continuous-integration


@@ -1,186 +0,0 @@
# Web frontend black-box CasperJS tests
These live in `frontend_tests/casper_tests/`. These are "black box"
integration tests; we load the frontend in a real (headless) browser,
from a real (development) server, and simulate UI interactions like
sending messages, narrowing, etc., by actually clicking around the UI
and waiting for things to change before doing the next step. These
tests are fantastic for ensuring the overall health of the project,
but are also costly to maintain and keep free of nondeterministic
failures, so we usually prefer to write a Node test instead when
possible.
Since the Casper tests interact with a real dev server, they can often
catch backend bugs as well.
You can run the Casper tests with `./tools/test-js-with-casper` or as
`./tools/test-js-with-casper 06-settings.js` to run a single test file
from `frontend_tests/casper_tests/`.
## Debugging CasperJS
When a Casper test fails, the first things to check (before you
bother trying to use the Casper debugging tools) are:
* Does your branch actually work if you just open the webapp and try
to follow the flow being tested? Often the answer is no, and you'll
find the debugging experience in your browser to be a much more
convenient way to fix the issue.
* Does your branch use ES6 syntax like arrow functions in a context
that isn't transpiled (i.e. non-TypeScript code)? Casper uses the
PhantomJS browser, which doesn't support ES6 syntax, so use of
non-transpiled ES6 syntax will generally be first discovered via the
Casper tests failing.
* Are there any backend errors (printed inline) while running the tests?
* You can check the screenshots of what the UI looked like at the time
of failures at `var/casper/casper-failure*.png`.
### Print debugging
If you need to use print debugging in Casper, you can do so using
`casper.log`; see <https://web.archive.org/web/20200108115113if_/https://docs.casperjs.org/en/latest/logging.html> for
details.
You can also enable Casper's verbose logging mode using the `--verbose` flag. This
can sometimes give insight into exactly what's happening.
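For example, a log statement inside a test step might look like this
minimal sketch (`casper.log` takes a message and a log level):

```
casper.then(function () {
    // This output is shown when the suite runs in verbose mode.
    casper.log("about to click the submit button", "debug");
    casper.click("#btn-submit");
});
```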
### Remote debugging
CasperJS (via PhantomJS) has support for remote debugging. However, it
is not perfect. Here are some steps for using it and gotchas you might
want to know; you'll likely also want to read the section on writing
tests (below) if you get stuck, since the advice on how to write
correct Casper selectors will likely be relevant.
This is a pain to set up with Vagrant because ports `7777` and `9981`
aren't forwarded to the host by default, but can be pretty useful in
rare difficult cases.
To turn on remote debugging, pass `--remote-debug` to the
`./frontend_tests/run-casper` script. This will run the tests with port
`7777` open for remote debugging. You can now connect to
`localhost:7777` in a Webkit browser. Somewhat recent versions of Chrome
or Safari might be required.
- When connecting to the remote debugger, you will see a list of
pages, probably 2. One page called `about:blank` is the headless
page in which the CasperJS test itself is actually running. This
is where your test code is.
- The other page, probably `localhost:9981`, is the Zulip page being
tested, i.e., the page running our app that the test is exercising.
Since the tests are now running, you can open the `about:blank` page,
switch to the Scripts tab, and open the running `0x-foo.js` test. If you
set a breakpoint and it is hit, the inspector will pause and you can do
your normal JS debugging. You can also put breakpoints in the Zulip
webpage itself if you wish to inspect the state of the Zulip frontend.
### Reproducing races only seen in Travis CI
We've sometimes found it useful to reproduce race conditions in
Casper tests that mostly only happen in Travis CI by running the
suite on a really cheap VPS server (e.g. Scaleway's 2GB x86). This
works because an ultra slow machine is more likely to have things
happen in an order similar to what happens in Travis CI's very slow
containers.
## Writing Casper tests
Probably the easiest way to learn how to write Casper tests is to study
some of the existing test files. There are a few tips that can be useful
for writing Casper tests, in addition to the debugging notes above:
- Run just the file containing your new tests as described above to
have a fast debugging cycle.
- With frontend tests in general, it's very important to write your
code to wait for the right events. Before essentially every action
you take on the page, you'll want to use `waitUntilVisible`,
`waitWhileVisible`, or a similar function to make sure the page
or element is ready before you interact with it. For instance, if
you want to click a button that you can select via `#btn-submit`,
and then check that it causes `#success-elt` to appear, you'll want
to write something like:
casper.waitUntilVisible("#btn-submit", function () {
casper.click('#btn-submit')
casper.test.assertExists("#success-elt");
});
In many cases, you will actually need to wait for the UI to update
after clicking the button before doing asserts or the next step. This
will ensure that the UI has finished updating from the previous
step before Casper attempts the next step. The various wait
functions supported in Casper are documented here:
<https://web.archive.org/web/20200108100925if_/https://docs.casperjs.org/en/latest/modules/casper.html#waitforselector>
and the various assert statements available are documented here:
<https://web.archive.org/web/20190814204845if_/https://docs.casperjs.org/en/latest/modules/tester.html#the-tester-prototype>
- The `casper.wait` style functions (`waitWhileVisible`,
`waitUntilVisible`, etc.) cannot be chained together in certain
conditions without creating race conditions where the test may
fail nondeterministically. For example, don't do this:
casper.waitUntilVisible('tag 1');
casper.click('#btn-submit');
casper.waitUntilVisible('tag 2');
Instead, if you want to avoid race conditions, wrap each wait in
its own `then` step, like this:
casper.then(function () {
    casper.waitUntilVisible('tag 1', function () {
        casper.click('#btn-submit');
    });
});
casper.then(function () {
    casper.waitUntilVisible('tag 2', function () {
        casper.test.assertExists('#success-elt');
    });
});
(You'll also want to use selectors that are as explicit as
possible, to avoid accidentally clicking multiple buttons or the
wrong button in your test, which can cause nondeterministic failures.)
- Generally `casper.waitUntilVisible` is preferable to
e.g. `casper.waitForSelector`, since the former will confirm the
thing is actually on screen. E.g. if you're waiting to switch
from one panel of the settings overlay to another by waiting
for a particular widget to appear, `casper.waitForSelector` may
not actually wait (since the widget is probably in the DOM, just
not visible), but `casper.waitUntilVisible` will wait until it's
actually shown.
- The selectors (i.e. things you put inside
`casper.waitUntilVisible()` and friends) appearing in Casper tests
are CSS3 selectors, which is a slightly different syntax from the
jQuery selectors used in the rest of the Zulip codebase; in
particular, some expressions that work with jQuery (and thus
normal Zulip JavaScript code) won't work with CSS3. It's often
helpful to debug selectors interactively, which you can do in the
Chrome JavaScript console. The way to do it is
`$$("#settings-dropdown")`; that queries CSS3 selectors, so you
can debug your selector in the console and then paste it into your
Casper test once it's working. For other browsers like Firefox,
you can use `document.querySelectorAll("#settings-dropdown")`; the
bare `$$` shorthand is only available in the browser's JavaScript console.
You can learn more about these selectors and other JavaScript console tools
[here](https://developers.google.com/web/tools/chrome-devtools/console/command-line-reference).
- The test suite uses a smaller set of default user accounts and other
data initialized in the database than the development environment;
to see what differs check out the section related to
`options["test_suite"]` in
`zilencer/management/commands/populate_db.py`.
- Casper effectively runs your test file in two phases -- first it
runs the code in the test file, which for most test files will just
collect a series of steps (each being a `casper.then` or
`casper.wait...` call). Then, usually at the end of the test file,
you'll have a `casper.run` call which actually runs that series of
steps. This means that if you write code in your test file outside a
`casper.then` or `casper.wait...` method, it will actually run
before all the Casper test steps that are declared in the file,
which can lead to confusing failures where the new code you write in
between two `casper.then` blocks actually runs before either of
them. See the CasperJS FAQ for more details on how Casper's step stack works:
<https://web.archive.org/web/20200107035425if_/https://docs.casperjs.org/en/latest/faq.html#how-does-then-and-the-step-stack-work>
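As a minimal sketch of this two-phase behavior (the URL is the
development server mentioned in the remote debugging notes above):

```
casper.start("http://localhost:9981/");
casper.then(function () {
    casper.echo("Printed second: steps only run once casper.run() executes the step stack.");
});
casper.echo("Printed first: this line runs while steps are still being collected.");
casper.run();
```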


@@ -37,7 +37,7 @@ There are many command line options for running Zulip tests, such
as a `--verbose` option. The
best way to learn the options is to use the online help:
./tools/test-backend -h
./tools/test-backend --help
We also have ways to instrument our tests for finding code coverage,
URL coverage, and slow tests. Use the `-h` option to discover these


@@ -1,16 +1,20 @@
# JavaScript/TypeScript unit tests
Our node-based unit testing system is the preferred way to test
JavaScript/TypeScript code in Zulip. We prefer it over the [Casper
black-box whole-app testing](../testing/testing-with-casper.md),
JavaScript/TypeScript code in Zulip. We prefer it over the [Puppeteer
black-box whole-app testing](../testing/testing-with-puppeteer.md),
system since it is much (>100x) faster and also easier to do correctly
than the Casper system.
than the Puppeteer system.
You can run tests as follow:
You can run this test suite as follows:
```
tools/test-js-with-node
```
See `test-js-with-node --help` for useful options; even though the
whole suite is quite fast, it still saves time to run a single test by
name when debugging something.
The JS unit tests are written to work with node. You can find them
in `frontend_tests/node_tests`. Here is an example test from
`frontend_tests/node_tests/stream_data.js`:
@@ -44,7 +48,7 @@ A good first test to read is
## How the node tests work
Unlike the [Puppeteer unit tests](../testing/testing-with-casper.md),
Unlike the [Puppeteer tests](../testing/testing-with-puppeteer.md),
which use a headless Chromium browser connected to a running Zulip
development server, our node unit tests don't have a browser, don't
talk to a server, and generally don't use a complete virtual DOM (a


@@ -0,0 +1,156 @@
# Web frontend black-box Puppeteer tests
While our [node test suite](../testing/testing-with-node.md) is the
preferred way to test most frontend code, because such tests are easy
to write and maintain, some code is best tested in a real browser,
either because it involves navigation (e.g. login) or because we want
to verify the interaction between Zulip logic and browser behavior
(e.g. copy/paste, keyboard shortcuts, etc.).
## Running tests
You can run this test suite as follows:
```
tools/test-js-with-puppeteer
```
See `tools/test-js-with-puppeteer --help` for useful options,
especially running specific subsets of the tests to save time when
debugging.
The test files live in `frontend_tests/puppeteer_tests` and make use
of various useful helper functions defined in
`frontend_tests/puppeteer_lib/common.js`.
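A new test file typically has a shape like the following minimal
sketch; this assumes the `run_test` helper mentioned in the debugging
notes below is the entry point, and the selector here is purely
illustrative:

```
const common = require("../puppeteer_lib/common");

async function example_test(page) {
    // Wait for the element to actually be shown before interacting
    // with it (see the notes on {visible: true} below).
    await page.waitForSelector("#settings-dropdown", {visible: true});
    await page.click("#settings-dropdown");
}

common.run_test(example_test);
```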
## How Puppeteer tests work
The Puppeteer tests use a real Chromium browser (powered by
[puppeteer](https://github.com/puppeteer/puppeteer)), connected to a
real Zulip development server. These are black-box tests: Steps in a
Puppeteer test are largely things one might do as a user of the Zulip
webapp, like "Type this key", "Wait until this HTML element
appears/disappears", or "Click on this HTML element".
For example, this function might test the `x` keyboard shortcut to
open the compose box for a new private message:
```
async function test_private_message_compose_shortcut(page) {
await page.keyboard.press("KeyX");
await page.waitForSelector("#private_message_recipient", {visible: true});
await common.pm_recipient.expect(page, "");
await close_compose_box(page);
}
```
The test function presses the `x` key, waits for the
`#private_message_recipient` input element to appear, verifies its
content is empty, and then closes the compose box. The
`waitForSelector` step here (and in most tests) is critical; tests
that don't wait properly often fail nondeterministically, because the
test will work or not depending on whether the browser updates the UI
before or after executing the next step in the test.
Black-box tests are fantastic for ensuring the overall health of the
project, but are also slow, costly to maintain, and require care to
avoid nondeterministic failures, so we usually prefer to write a Node
test instead when both are options.
They can also be a bit tricky to understand for contributors not
familiar with [async/await][learn-async-await].
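If the async/await style is new to you, the key point is that each
`await` pauses the test function until that browser operation
completes, so steps run strictly in order (a generic sketch, not code
from the suite):

```
async function example_flow(page) {
    // The click must resolve before the wait below even starts.
    await page.click("#btn-submit");
    await page.waitForSelector("#success-elt", {visible: true});
}
```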
## Debugging Puppeteer tests
The following questions are useful when debugging Puppeteer test
failures you might see in [continuous
integration](../testing/continuous-integration.md):
* Does the flow being tested work properly in the Zulip browser UI?
Test failures can reflect real bugs, and often it's easier and more
interactive to debug an issue in the normal Zulip development
environment than in the Puppeteer test suite.
* Does the change being tested adjust the HTML structure in a way that
affects any of the selectors used in the tests? If so, the test may
just need to be updated for your changes.
* Does the test fail deterministically when you run it locally using
e.g., `./tools/test-js-with-puppeteer 03`? If so, you can
iteratively debug to see the failure.
* Does the test fail nondeterministically? If so, the problem is
likely that a `waitForSelector` statement is either missing or not
waiting for the right thing. Tests fail nondeterministically much
more often on very slow systems like those used for Continuous
Integration (CI) services because small races are amplified in those
environments; this often explains failures in CI that cannot be
easily reproduced locally.
These tools/features are often useful when debugging:
* You can use `console.log` statements both in Puppeteer tests and the
code being tested to print-debug.
* Zulip's Puppeteer tests are configured to generate screenshots of
the state of the test browser when an assert statement fails; these
are stored under `var/puppeteer/*.png` and are extremely helpful for
debugging test failures.
* TODO: Mention how to access Puppeteer screenshots in CI.
* TODO: Add an option for using the `headless: false` debugging mode
of puppeteer so you can watch what's happening, and document how to
make that work with Vagrant.
* TODO: Document `--interactive`.
* TODO: Document how to run 100x in CI to check for nondeterministic
failures.
* TODO: Document any other techniques/ideas that were helpful when porting
the Casper suite.
* The Zulip server powering these tests is just `run-dev.py` with some
extra [Django settings](../subsystems/settings.md) from
`zproject/test_extra_settings.py` to configure an isolated database
so that the tests will not interfere/interact with a normal
development environment. The console output while running the tests
includes the console output for the server; any Python exceptions
are likely actual bugs in the changes being tested.
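Beyond the automatic failure screenshots, you can capture your own
screenshot at any point in a test using Puppeteer's standard
`page.screenshot` API (the output path here is an arbitrary choice):

```
// Save the browser's current state to a file for later inspection.
await page.screenshot({path: "var/puppeteer/debug.png", fullPage: true});
```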
See also [puppeteer upstream's debugging
tips](https://github.com/puppeteer/puppeteer#debugging-tips); some
tips may require temporary patches to functions like `run_test` or
`ensure_browser` in `frontend_tests/puppeteer_lib/common.js`.
## Writing Puppeteer tests
Probably the easiest way to learn how to write Puppeteer tests is to
study some of the existing test files. There are a few tips that can
be useful for writing Puppeteer tests in addition to the debugging
notes above:
- Run just the file containing your new tests as described above to
have a fast debugging cycle.
- When you're done writing a test, run it 100 times in a loop to
verify it does not fail nondeterministically (see above for notes on
how to get CI to do it for you); this is important to avoid
introducing extremely annoying nondeterministic failures into
master.
- With black-box browser tests like these, it's very important to
write your code to wait for the browser's UI to update before taking
any action that assumes the last step was processed by the browser
(e.g. after you click on a user's avatar, you need an explicit wait
for the profile popover to appear before you can try to click on a
menu item in that popover). This means that before essentially every
action in your Puppeteer tests, you'll want to use `waitForSelector`
or a similar wait function to make sure the page or element is ready
before you interact with it; see the sketch after this list. The
[puppeteer docs site](https://pptr.dev/) is a useful reference for
the available wait functions.
- When using `waitForSelector`, you always want to use the `{visible:
true}` option; otherwise the test will stop waiting as soon as the
target selector is present in the DOM, even if it's hidden. For the
common UI pattern of an element that is always present in the DOM
and whose visibility is managed via show/hide rather than by
adding/removing it from the DOM, `waitForSelector` without
`visible: true` won't wait at all.
- The test suite uses a smaller set of default user accounts and other
data initialized in the database than the normal development
environment; specifically, it uses the same setup as the [backend
tests](../testing/testing-with-django.md). To see what differs from
the development environment, check out the conditions on
`options["test_suite"]` in
`zilencer/management/commands/populate_db.py`.
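Here is the sketch promised in the notes above on waiting; the avatar
and popover selectors are illustrative assumptions, not the real ones
from the Zulip UI:

```
// Open a user's profile popover and click a menu item inside it.
await page.click(".user-avatar");
// Wait for the popover to actually be shown (not merely present in
// the DOM) before interacting with its contents.
await page.waitForSelector(".user-profile-popover", {visible: true});
await page.click(".user-profile-popover .menu-item");
```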
[learn-async-await]: https://developer.mozilla.org/en-US/docs/Learn/JavaScript/Asynchronous/Async_await


@@ -35,7 +35,7 @@ typically involve running subsets of the tests with commands like these:
./tools/lint zerver/lib/actions.py # Lint the file you just changed
./tools/test-backend zerver.tests.test_markdown.MarkdownTest.test_inline_youtube
./tools/test-backend MarkdownTest # Run `test-backend --help` for more options
./tools/test-js-with-casper 09-navigation.js
./tools/test-js-with-puppeteer 07-navigation.js
./tools/test-js-with-node utils.js
```
@@ -52,8 +52,8 @@ eventually work with, each with its own page detailing how it works:
- [Django](../testing/testing-with-django.md): Server/backend Python tests.
- [Node](../testing/testing-with-node.md): JavaScript tests for the
frontend run via node.js.
- [Casper (deprecated)](../testing/testing-with-casper.md): End-to-end
UI tests run via a browser.
- [Puppeteer](../testing/testing-with-puppeteer.md): End-to-end
UI tests run via a Chromium browser.
## Other test suites


@@ -134,8 +134,8 @@ or JavaScript/TypeScript code that generates user-facing strings, be sure to
**Testing:** There are two types of frontend tests: node-based unit
tests and blackbox end-to-end tests. The blackbox tests are run in a
headless browser using CasperJS and are located in
`frontend_tests/casper_tests/`. The unit tests use Node's `assert`
headless Chromium browser using Puppeteer and are located in
`frontend_tests/puppeteer_tests/`. The unit tests use Node's `assert`
module and are located in `frontend_tests/node_tests/`. For more
information on writing and running tests, see the
[testing documentation](../testing/testing.md).
@@ -625,7 +625,7 @@ Here are few important cases you should consider when testing your changes:
A great next step is to write frontend tests. There are two types of
frontend tests: [node-based unit tests](../testing/testing-with-node.md) and
[Casper end-to-end tests](../testing/testing-with-casper.md).
[Puppeteer end-to-end tests](../testing/testing-with-puppeteer.md).
At the minimum, if you created a new function to update UI in
`settings_org.js`, you will need to mock that function in