commit 915884bff7
parent 6145fdf678
Author: Anders Kaseorg <anders@zulip.com>
Date:   2021-08-20 12:45:39 -07:00

    docs: Apply bullet style changes from Prettier.

    Signed-off-by: Anders Kaseorg <anders@zulip.com>

 93 changed files with 1667 additions and 1667 deletions


@@ -13,10 +13,10 @@ many possible future bugs as possible, while minimizing both latency
and false positives, both of which can waste a lot of developer time.
There are a few implications of this overall goal:
-* If a test is failing nondeterministically in CI, we consider that to
+- If a test is failing nondeterministically in CI, we consider that to
be an urgent problem.
-* If the tests become a lot slower, that is also an urgent problem.
-* Everything we do in CI should also have a way to run it quickly
+- If the tests become a lot slower, that is also an urgent problem.
+- Everything we do in CI should also have a way to run it quickly
(under 1 minute, preferably under 3 seconds), in order to iterate fast
in development. Except when working on the CI configuration itself, a
developer should never have to repeatedly wait 10 minutes for a full CI
@@ -26,16 +26,16 @@ run to iteratively debug something.
### Useful debugging tips and tools
-* GitHub Actions stores timestamps for every line in the logs. They
+- GitHub Actions stores timestamps for every line in the logs. They
are hidden by default; you can see them by toggling the
`Show timestamps` option in the menu on any job's log page. (You can
get this sort of timestamp in a development environment by piping
output to `ts`).
-* GitHub Actions runs on every branch you push on your Zulip fork.
+- GitHub Actions runs on every branch you push on your Zulip fork.
This is helpful when debugging something complicated.
-* You can also ssh into a container to debug failures. SSHing into
+- You can also ssh into a container to debug failures. SSHing into
the containers can be helpful, especially in rare cases where the
tests are passing on your computer but failing in CI. There are
various
@@ -106,8 +106,8 @@ between jobs the various caches that live under `/srv/` in a Zulip
development or production environment. In particular, we cache the
following:
-* Python virtualenvs
-* node_modules directories
+- Python virtualenvs
+- node_modules directories
This has a huge impact on the performance of running tests in GitHub Actions
CI; without these caches, the average test time would be several times


@@ -52,12 +52,12 @@ You can also run them individually or pass specific files:
`./tools/lint` has many useful options; you can read about them in its
internal documentation using `./tools/lint --help`. Of particular
note are:
-* `--fix`: Several of our linters support automatically fixing basic
+- `--fix`: Several of our linters support automatically fixing basic
issues; this option will ask `tools/lint` to run those.
-* `--verbose`: Provides detailed information on how to fix many common
+- `--verbose`: Provides detailed information on how to fix many common
linter errors not covered by `--fix`.
-* `--skip` and `--only`: Only run certain linters.
-* `-m`: Only check modified files.
+- `--skip` and `--only`: Only run certain linters.
+- `-m`: Only check modified files.
Finally, you can rely on our continuous integration setup to run linters for you,
but it is good practice to run lint checks locally.


@@ -16,7 +16,7 @@ def get_user(email: str, realm: Realm) -> UserProfile:
You can learn more about it at:
-* The
+- The
[mypy cheat sheet for Python 3](https://mypy.readthedocs.io/en/latest/cheat_sheet_py3.html)
is the best resource for quickly understanding how to write the PEP
484 type annotations used by mypy correctly. The
@@ -24,12 +24,12 @@ You can learn more about it at:
is useful for understanding the type comment syntax needed for our
few modules that need to support both Python 2 and 3.
-* The
+- The
[Python type annotation spec in PEP 484](https://www.python.org/dev/peps/pep-0484/).
-* Our [blog post on being an early adopter of mypy][mypy-blog-post] from 2016.
+- Our [blog post on being an early adopter of mypy][mypy-blog-post] from 2016.
-* Our [best practices](#best-practices) section below.
+- Our [best practices](#best-practices) section below.
The mypy type checker is run automatically as part of Zulip's Travis
CI testing process in the `backend` build.
@@ -76,12 +76,12 @@ errors, it's important to get to the bottom of the issue, not just do
something quick to silence the warnings, before we merge the changes.
Possible explanations include:
-* A bug in any new type annotations you added.
-* A bug in the existing type annotations.
-* A bug in Zulip!
-* Some Zulip code is correct but confusingly reuses variables with
+- A bug in any new type annotations you added.
+- A bug in the existing type annotations.
+- A bug in Zulip!
+- Some Zulip code is correct but confusingly reuses variables with
different types.
-* A bug in mypy (though this is increasingly rare as mypy is now
+- A bug in mypy (though this is increasingly rare as mypy is now
fairly mature as a project).
Each explanation has its own solution, but in every case the result
@@ -260,27 +260,27 @@ print(y.strip()) # runtime error
If you think you need to use `Any`, consider the following safer
alternatives first:
-* To annotate a dictionary where different keys correspond to values
+- To annotate a dictionary where different keys correspond to values
of different types, instead of writing `Dict[str, Any]`, try
declaring a
[**`dataclass`**](https://mypy.readthedocs.io/en/stable/additional_features.html#dataclasses)
or a
[**`TypedDict`**](https://mypy.readthedocs.io/en/stable/more_types.html#typeddict).
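As a minimal sketch of this alternative (the `UserDict` name and its fields are hypothetical, not from Zulip's codebase), a `TypedDict` gives each key its own checked type where `Dict[str, Any]` would check nothing:

```python
from typing import TypedDict  # Python 3.8+; use typing_extensions on older versions

class UserDict(TypedDict):
    email: str
    age: int

user: UserDict = {"email": "iago@zulip.com", "age": 30}
# mypy now rejects user["age"] = "thirty", which Dict[str, Any] would allow.
```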
-* If you're annotating a class or function that might be used with
+- If you're annotating a class or function that might be used with
different data types at different call sites, similar to the builtin
`List` type or the `sorted` function, [**generic
types**](https://mypy.readthedocs.io/en/stable/generics.html) with
`TypeVar` might be what you need.
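For instance (a minimal sketch, not from the Zulip codebase), a `TypeVar` lets mypy carry the element type through to each call site:

```python
from typing import List, TypeVar

T = TypeVar("T")

def first(items: List[T]) -> T:
    # mypy infers first([3, 1, 2]) as int and first(["a", "b"]) as str.
    return items[0]
```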
-* If you need to accept data of several specific possible types at a
+- If you need to accept data of several specific possible types at a
single site, you may want a [**`Union`
type**](https://mypy.readthedocs.io/en/stable/kinds_of_types.html#union-types).
`Union` is checked: before using `value: Union[str, int]` as a
`str`, mypy requires that you validate it with an
`isinstance(value, str)` test.
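A small illustration of that narrowing (the function name is hypothetical):

```python
from typing import Union

def describe(value: Union[str, int]) -> str:
    if isinstance(value, str):
        # Inside this branch, mypy knows value is a str.
        return value.strip()
    return str(value)
```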
-* If you really have no information about the type of a value, use the
+- If you really have no information about the type of a value, use the
**`object` type**. Since every type is a subtype of `object`, you
can correctly annotate any value as `object`. The [difference
between `Any` and
@@ -289,7 +289,7 @@ alternatives first:
`isinstance` before using it in a way that expects a more specific
type.
-* A common way for `Any` annotations to sneak into your code is the
+- A common way for `Any` annotations to sneak into your code is the
interaction with untyped third-party libraries. Mypy treats any
value imported from an untyped library as annotated with `Any`, and
treats any type imported from an untyped library as equivalent to
@@ -310,7 +310,7 @@ print(x.strip()) # runtime error
Instead of using `cast`:
-* You can use a [variable
+- You can use a [variable
annotation](https://mypy.readthedocs.io/en/stable/type_inference_and_annotations.html#explicit-types-for-variables)
to be explicit or to disambiguate types that mypy can check but
cannot infer.
@@ -319,7 +319,7 @@ Instead of using `cast`:
l: List[int] = []
```
-* You can use an [`isinstance`
+- You can use an [`isinstance`
test](https://mypy.readthedocs.io/en/stable/common_issues.html#complex-type-tests)
to safely verify that a value really has the type you expect.
@@ -341,12 +341,12 @@ issue.
### Avoid other unchecked constructs
-* As mentioned
+- As mentioned
[above](#using-overload-to-accurately-describe-variations), we
**discourage writing overloaded functions** because their bodies are
not checked against the `@overload` signatures.
-* **Avoid `Callable[..., T]`** (with literal ellipsis `...`), since
+- **Avoid `Callable[..., T]`** (with literal ellipsis `...`), since
mypy cannot check the types of arguments passed to it. Provide the
specific argument types (`Callable[[int, str], T]`) in simple cases,
or use [callback
@@ -373,9 +373,9 @@ A collection such as `List` should only be `Optional` if `None` would
have a different meaning than the natural meaning of an empty
collection. For example:
-* An include list where the default is to include everything should be
+- An include list where the default is to include everything should be
`Optional` with default `None`.
-* An exclude list where the default is to exclude nothing should be
+- An exclude list where the default is to exclude nothing should be
non-`Optional` with default `[]`.
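A sketch of how these two defaults might look side by side (the function and its parameters are hypothetical):

```python
from typing import List, Optional, Sequence

def matching_users(
    users: Sequence[str],
    include: Optional[Sequence[str]] = None,  # None: include everything
    exclude: Sequence[str] = (),  # empty: exclude nothing
) -> List[str]:
    included = users if include is None else [u for u in users if u in include]
    return [u for u in included if u not in exclude]
```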
Don't test an `Optional` value using truthiness (`if value:`,
@@ -401,11 +401,11 @@ that was passed to it as an argument, especially by accident. To
avoid this, prefer annotating function parameters with read-only
types:
-* [`Sequence`](https://docs.python.org/3/library/typing.html#typing.Sequence)
+- [`Sequence`](https://docs.python.org/3/library/typing.html#typing.Sequence)
instead of `List`,
-* [`Mapping`](https://docs.python.org/3/library/typing.html#typing.Mapping)
+- [`Mapping`](https://docs.python.org/3/library/typing.html#typing.Mapping)
instead of `Dict`,
-* [`AbstractSet`](https://docs.python.org/3/library/typing.html#typing.AbstractSet)
+- [`AbstractSet`](https://docs.python.org/3/library/typing.html#typing.AbstractSet)
instead of `Set`.
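As a quick sketch of the payoff (the function is hypothetical), annotating a parameter with `Sequence` lets callers pass a list or a tuple, while mypy rejects any mutation inside the function:

```python
from typing import Sequence

def total(nums: Sequence[int]) -> int:
    # Sequence has no .append()/.sort(), so mypy flags accidental mutation here.
    return sum(nums)
```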
This is especially important for parameters with default arguments,


@@ -61,19 +61,19 @@ suite (`test-js-with-node`) to run in under 10 seconds.
It'd be a long blog post to summarize everything we do to help achieve
these goals, but a few techniques are worth highlighting:
-* Our test suites are designed to not access the Internet, since the
+- Our test suites are designed to not access the Internet, since the
Internet might be down or unreliable in the test environment. Where
outgoing HTTP requests are required to test something, we mock the
responses with libraries like `responses`.
-* We carefully avoid the potential for contamination of data inside
+- We carefully avoid the potential for contamination of data inside
services like PostgreSQL, Redis, and memcached from different tests.
-* Every test case prepends a unique random prefix to all keys it
+- Every test case prepends a unique random prefix to all keys it
uses when accessing Redis and memcached.
-* Every test case runs inside a database transaction, which is
+- Every test case runs inside a database transaction, which is
aborted after the test completes. Each test process interacts
only with a fresh copy of a special template database used for
server tests that is destroyed after the process completes.
-* We rigorously investigate non-deterministically failing tests as though
+- We rigorously investigate non-deterministically failing tests as though
they were priority bugs in the product.
## Integration testing or unit testing?
@@ -131,36 +131,36 @@ in any documentation for that interface.
Some examples of this philosophy:
-* If you have a web service that's mainly an API, you want to write
+- If you have a web service that's mainly an API, you want to write
your tests for that API.
-* If you have a CLI program, you want to write your tests against the
+- If you have a CLI program, you want to write your tests against the
CLI.
-* If you have a compiler, an interpreter, etc., you want essentially
+- If you have a compiler, an interpreter, etc., you want essentially
all your tests to be example programs, with a bit of metadata for
things like "should give an error at this line" or "should build and
run, and produce this output".
In the Zulip context:
-* Zulip uses the same API for our web app as for our mobile clients and
+- Zulip uses the same API for our web app as for our mobile clients and
third-party API clients, and most of our server tests are written
against the Zulip API.
-* The tests for Zulip's incoming webhooks work by sending actual
+- The tests for Zulip's incoming webhooks work by sending actual
payloads captured from the real third-party service to the webhook
endpoints, and verifying that the webhook produces the expected Zulip
message as output, to test the actual interface.
So, to summarize our approach to integration vs. unit testing:
-* While we aim to achieve test coverage of every significant code path
+- While we aim to achieve test coverage of every significant code path
in the Zulip server, which is commonly associated with unit testing,
most of our tests are integration tests in the sense of sending a
complete HTTP API query to the Zulip server and checking that the
HTTP response and the internal state of the server following the request
are both correct.
-* Following the end-to-end principle in system design, where possible
+- Following the end-to-end principle in system design, where possible
we write tests that execute a complete flow (e.g. registering a new
Zulip account) rather than testing the implementations of individual
functions.
-* We invest in the performance of Zulip in part to give users a great
+- We invest in the performance of Zulip in part to give users a great
experience, but just as much to make our test suite fast enough
that we can write our tests this way.


@@ -124,19 +124,19 @@ Here are some example action methods that tests may use for data setup:
Some tests need to access the filesystem (e.g. `test_upload.py` tests
for `LocalUploadBackend` and the data import tests). Doing
this correctly requires care to avoid problems like:
-* Leaking files after every test (which are clutter and can eventually
+- Leaking files after every test (which are clutter and can eventually
run the development environment out of disk) or
-* Interacting with other parallel processes of this `test-backend` run
+- Interacting with other parallel processes of this `test-backend` run
(or another `test-backend` run), or with later tests run by this
process.
To avoid these problems, you can do the following:
-* Use a subdirectory of `settings.TEST_WORKER_DIR`; this is a
+- Use a subdirectory of `settings.TEST_WORKER_DIR`; this is a
subdirectory of `/var/<uuid>/test-backend` that is unique to the
test worker thread and will be automatically deleted when the
relevant `test-backend` process finishes.
-* Delete any files created by the test in the test class's `tearDown`
+- Delete any files created by the test in the test class's `tearDown`
method (which runs even if the test fails); this is valuable to
avoid conflicts with other tests run later by the same test process.
@@ -192,9 +192,9 @@ def greet(name_key: str) -> str:
return "Hello" + name
```
-* You want to test `greet()`.
+- You want to test `greet()`.
-* In your test, you want to call `greet("Mario")` and verify that it returns the correct greeting:
+- In your test, you want to call `greet("Mario")` and verify that it returns the correct greeting:
```python
from greetings import greet
@@ -208,9 +208,9 @@ def greet(name_key: str) -> str:
a database. *You haven't created that database for your tests, so your test would fail, even though
the code is correct.*
-* Luckily, you know that `fetch_database("Mario")` should return "Mr. Mario Mario".
+- Luckily, you know that `fetch_database("Mario")` should return "Mr. Mario Mario".
-* *Hint*: Sometimes, you might not know the exact return value, but one that is equally valid and works
+- *Hint*: Sometimes, you might not know the exact return value, but one that is equally valid and works
with the rest of the code. In that case, just use this one.
-> **Solution**: You mock `fetch_database()`. This is also referred to as "mocking out" `fetch_database()`.
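Sketching that solution in one self-contained file (here `fetch_database` is a stub and `greet` is lightly adapted, since the real module isn't shown in full):

```python
from unittest import mock

def fetch_database(name_key: str) -> str:
    # Stand-in for the real database lookup, unavailable in tests.
    raise RuntimeError("no database in tests")

def greet(name_key: str) -> str:
    name = fetch_database(name_key)
    return "Hello " + name

# "Mock out" fetch_database so greet() never touches a real database.
with mock.patch(f"{__name__}.fetch_database", return_value="Mr. Mario Mario"):
    greeting = greet("Mario")
```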
@@ -281,13 +281,13 @@ On the other hand, if we had used `import os.urandom`, we would need to call `mo
#### Boilerplate code
-* Including the Python mocking library:
+- Including the Python mocking library:
```python
from unittest import mock
```
-* Mocking a class with a context manager:
+- Mocking a class with a context manager:
```python
with mock.patch('module.ClassName', foo=42, return_value='I am a mock') as my_mock:
@@ -296,7 +296,7 @@ On the other hand, if we had used `import os.urandom`, we would need to call `mo
# var = module.ClassName() will assign 'I am a mock' to var.
```
-* Mocking a class with a decorator:
+- Mocking a class with a decorator:
```python
@mock.patch('module.ClassName', foo=42, return_value='I am a mock')
@@ -305,7 +305,7 @@ On the other hand, if we had used `import os.urandom`, we would need to call `mo
# In here, 'module.ClassName' will behave as in the previous example.
```
-* Mocking a class attribute:
+- Mocking a class attribute:
```python
with mock.patch.object(module.ClassName, 'class_method', return_value=42)
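A runnable version of that pattern (the `Widget` class is a hypothetical stand-in for `module.ClassName`):

```python
from unittest import mock

class Widget:
    def class_method(self) -> int:
        return 7

with mock.patch.object(Widget, "class_method", return_value=42):
    mocked = Widget().class_method()  # the mock answers instead of the real method

restored = Widget().class_method()  # the patch is undone on exiting the block
```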


@@ -191,15 +191,15 @@ These instructions assume you're using the Vagrant development environment.
2. In WebStorm, navigate to `Preferences -> Tools -> Vagrant` and
configure the following:
-* `Instance folder` should be the root of the `zulip` repository on
+- `Instance folder` should be the root of the `zulip` repository on
your host (where the Vagrantfile is located).
-* `Provider` should be `virtualbox` on macOS and Docker on Linux
-* In `Boxes`, choose the one used for Zulip (unless you use
+- `Provider` should be `virtualbox` on macOS and Docker on Linux
+- In `Boxes`, choose the one used for Zulip (unless you use
Virtualbox for other things, there should only be one option).
You shouldn't need to set these additional settings:
-* `Vagrant executable` should already be correctly `vagrant`.
-* `Environment Variables` is not needed.
+- `Vagrant executable` should already be correctly `vagrant`.
+- `Environment Variables` is not needed.
3. You'll now need to set up a WebStorm "Debug Configuration". Open
the `Run/Debug Configuration` menu and create a new `Node.js` config:


@@ -65,24 +65,24 @@ The following questions are useful when debugging Puppeteer test
failures you might see in [continuous
integration](../testing/continuous-integration.md):
-* Does the flow being tested work properly in the Zulip browser UI?
+- Does the flow being tested work properly in the Zulip browser UI?
Test failures can reflect real bugs, and often it's easier and more
interactive to debug an issue in the normal Zulip development
environment than in the Puppeteer test suite.
-* Does the change being tested adjust the HTML structure in a way that
+- Does the change being tested adjust the HTML structure in a way that
affects any of the selectors used in the tests? If so, the test may
just need to be updated for your changes.
-* Does the test fail deterministically when you run it locally using
+- Does the test fail deterministically when you run it locally using
e.g. `./tools/test-js-with-puppeteer compose.ts`? If so, you can
iteratively debug to see the failure.
-* Does the test fail nondeterministically? If so, the problem is
+- Does the test fail nondeterministically? If so, the problem is
likely that a `waitForSelector` statement is either missing or not
waiting for the right thing. Tests fail nondeterministically much
more often on very slow systems like those used for Continuous
Integration (CI) services because small races are amplified in those
environments; this often explains failures in CI that cannot be
easily reproduced locally.
-* Does the test fail when you are typing (filling the form) on a modal
+- Does the test fail when you are typing (filling the form) on a modal
or other just-opened UI widget? Puppeteer starts typing after focusing on
the text field, sending keystrokes one after another. So, if
application code explicitly focuses the modal after it
@@ -96,22 +96,22 @@ integration](../testing/continuous-integration.md):
These tools/features are often useful when debugging:
-* You can use `console.log` statements both in Puppeteer tests and the
+- You can use `console.log` statements both in Puppeteer tests and the
code being tested to print-debug.
-* Zulip's Puppeteer tests are configured to generate screenshots of
+- Zulip's Puppeteer tests are configured to generate screenshots of
the state of the test browser when an assert statement fails; these
are stored under `var/puppeteer/*.png` and are extremely helpful for
debugging test failures.
-* TODO: Mention how to access Puppeteer screenshots in CI.
-* TODO: Add an option for using the `headless: false` debugging mode
+- TODO: Mention how to access Puppeteer screenshots in CI.
+- TODO: Add an option for using the `headless: false` debugging mode
of Puppeteer so you can watch what's happening, and document how to
make that work with Vagrant.
-* TODO: Document `--interactive`.
-* TODO: Document how to run 100x in CI to check for nondeterministic
+- TODO: Document `--interactive`.
+- TODO: Document how to run 100x in CI to check for nondeterministic
failures.
-* TODO: Document any other techniques/ideas that were helpful when porting
+- TODO: Document any other techniques/ideas that were helpful when porting
the Casper suite.
-* The Zulip server powering these tests is just `run-dev.py` with some
+- The Zulip server powering these tests is just `run-dev.py` with some
extra [Django settings](../subsystems/settings.md) from
`zproject/test_extra_settings.py` to configure an isolated database
so that the tests will not interfere/interact with a normal


@@ -107,9 +107,9 @@ As a policy matter, the Zulip test suites should never make outgoing
HTTP or other network requests. This is important for 2 major
reasons:
-* Tests that make outgoing Internet requests will fail when the user
+- Tests that make outgoing Internet requests will fail when the user
isn't on the Internet.
-* Tests that make outgoing Internet requests often have a hidden
+- Tests that make outgoing Internet requests often have a hidden
dependency on the uptime of a third-party service, and will fail
nondeterministically if that service has a temporary outage.
Nondeterministically failing tests can be a big waste of


@@ -27,7 +27,7 @@ setdefault(key: K, value: V): V {
The following resources are valuable for learning TypeScript:
-* The main documentation on [TypeScript syntax][typescript-handbook].
+- The main documentation on [TypeScript syntax][typescript-handbook].
## Type checking
@@ -49,11 +49,11 @@ we plan to start with a style that is closer to the existing
JavaScript code, so that we can easily migrate individual modules
without too much code churn. A few examples:
-* TypeScript generally prefers explicit `return undefined;`, whereas
+- TypeScript generally prefers explicit `return undefined;`, whereas
our existing JavaScript style uses just `return;`.
-* With TypeScript, we expect to make heavy use of `let` and `const`
+- With TypeScript, we expect to make heavy use of `let` and `const`
rather than `var`.
-* With TypeScript/ES6, we may no longer need to use `_.each()` as our
+- With TypeScript/ES6, we may no longer need to use `_.each()` as our
recommended way to do loop iteration.
For each of the details, we will either want to bulk-migrate the
@@ -71,13 +71,13 @@ converts them from JS to TS.
Our plan is to order which modules we migrate carefully, starting with
those that:
-* Appear frequently as reverse dependencies of other modules
+- Appear frequently as reverse dependencies of other modules
(e.g. `people.js`). These are most valuable to do first because
then we have types on the data being interacted with by other
modules when we migrate those.
-* Don't have large open pull requests (to avoid merge conflicts); one
+- Don't have large open pull requests (to avoid merge conflicts); one
can scan for these using [TinglingGit](https://github.com/zulip/TinglingGit).
-* Have good unit test coverage, which limits the risk of breaking
+- Have good unit test coverage, which limits the risk of breaking
correctness through refactoring. Use
`tools/test-js-with-node --coverage` to get a coverage report.
@@ -85,14 +85,14 @@ When migrating a module, we want to be especially thoughtful about
putting together a commit structure that makes mistakes unlikely and
the changes easy to verify. E.g.:
-* First a commit that just converts the language to TypeScript adding
+- First a commit that just converts the language to TypeScript adding
types. The result may potentially have some violations of the
long-term style we want (e.g. not using `const`). Depending on how
we're handling linting, we set some override eslint rules at the top
of the module at this stage so CI still passes.
-* Then a commit just migrating use of `var` to `const/let` without
+- Then a commit just migrating use of `var` to `const/let` without
other changes (other than removing any relevant linter overrides).
-* Etc.
+- Etc.
With this approach, we should be able to produce a bunch of really
simple commits that can be merged the same day they're written without