docs: Extend documentation of event system testing.

All the heavy lifting that pertains to `apply_events` happens within the
call to `verify_action`, which is a test helper in the `BaseAction` class
within `test_events.py`.

The `verify_action` function simulates the possible race condition in
order to verify that the `apply_events` logic works correctly in the
context of some action function. To use our concrete example above,
we are seeing that applying the events from the
`do_remove_default_stream` action inside of `apply_events` to a stale
copy of your state results in the same state dictionary as doing the
action and then fetching a fresh copy of the state.

In particular, `verify_action` does the following:

* Call `fetch_initial_state_data` to get the current state.
* Call the action function (e.g. `do_add_default_stream`).
* Capture the events generated by the action function.
* Check that the generated events are documented in the [OpenAPI
  schema](../documentation/api.md) defined in `zerver/openapi/zulip.yaml`.
* Call `apply_events(state, events)` to get the resulting "hybrid state".
* Call `fetch_initial_state_data` again to get the "normal state".
* Compare the two results.

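The steps above can be sketched as a tiny self-contained simulation. The names `fetch_initial_state_data`, `apply_events`, and `do_add_default_stream` mirror the real Zulip functions, but these stand-in implementations are purely illustrative:

```python
# Illustrative stand-ins for the verify_action flow described above;
# this is NOT Zulip's actual code, just the shape of the check.

def fetch_initial_state_data(db):
    # Build a state dictionary from the "database".
    return {"realm_default_streams": sorted(db["default_streams"])}

def do_add_default_stream(db, stream):
    # Action function: mutate the database and return the events it generates.
    db["default_streams"].add(stream)
    return [
        {"type": "default_streams",
         "default_streams": sorted(db["default_streams"])}
    ]

def apply_events(state, events):
    # The logic under test: update a stale state in place from events.
    for event in events:
        if event["type"] == "default_streams":
            state["realm_default_streams"] = event["default_streams"]

def verify_action(db, action):
    state = fetch_initial_state_data(db)         # stale copy of the state
    events = action()                            # run the action, capture events
    apply_events(state, events)                  # the "hybrid state"
    normal_state = fetch_initial_state_data(db)  # the "normal state"
    assert state == normal_state, (state, normal_state)
    return events

db = {"default_streams": {"general"}}
events = verify_action(db, lambda: do_add_default_stream(db, "design"))
```

If the stand-in `apply_events` forgot to handle the `default_streams` event type, the final assertion would fail, which is exactly the failure mode the real helper is designed to catch.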
In the event that you wrote the `apply_events` logic correctly the
first time, the two states will be identical, and the `verify_action`
call will succeed and return the events that came from the action.

Often you will get the `apply_events` logic wrong at first, which will
cause `verify_action` to fail. To help you debug, it will print a diff

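Zulip's actual diff output is more polished, but conceptually a failing comparison boils down to something like this hypothetical helper (the function name and format here are made up for illustration):

```python
# Hypothetical helper (not Zulip's real output format): show which keys
# differ between the "hybrid state" and the freshly fetched "normal state".

def state_diff(hybrid_state, normal_state):
    keys = sorted(set(hybrid_state) | set(normal_state))
    return {
        key: (hybrid_state.get(key), normal_state.get(key))
        for key in keys
        if hybrid_state.get(key) != normal_state.get(key)
    }

diff = state_diff(
    {"realm_default_streams": ["general"], "zulip_version": "5.0"},
    {"realm_default_streams": ["design", "general"], "zulip_version": "5.0"},
)
# Only the mismatched key survives, paired as (hybrid value, normal value).
```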
There are some notable optional parameters for `verify_action`:

* `state_change_expected` must be set to `False` if your action
  doesn't actually require state changes for some reason; otherwise,
  `verify_action` will complain that your test doesn't really
  exercise any `apply_events` logic. Typing notifications (which
  are ephemeral) are a common place where we use this.

* `num_events` will tell `verify_action` how many events the
  `hamlet` user will receive after the action (the default is 1).

* parameters such as `client_gravatar` and `slim_presence` get
  passed along to `fetch_initial_state_data` (and it's important
  to test both boolean values of these parameters for relevant
  actions).

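As a simplified sketch (the function name and signature below are hypothetical, not Zulip's actual code), the `state_change_expected` check amounts to comparing the state before and after the action:

```python
# Hypothetical sketch of the state_change_expected check described above.

def check_state_change(initial_state, normal_state, state_change_expected):
    changed = initial_state != normal_state
    if state_change_expected and not changed:
        raise AssertionError(
            "Test does not really exercise any apply_events logic"
        )
    if not state_change_expected and changed:
        raise AssertionError("State unexpectedly changed")

# A typing notification is ephemeral: it produces an event but no state
# change, so such a test must pass state_change_expected=False.
state = {"realm_default_streams": ["general"]}
check_state_change(state, state, state_change_expected=False)  # passes
```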
For advanced use cases of `verify_action`, we highly recommend reading
the code itself in `BaseAction` (in `test_events.py`).

#### Schema checking

The `test_events.py` system has two forms of schema checking. The
first is verifying that you've updated the [GET /events API
documentation](https://zulip.com/api/get-events) to document your new
event's format for the benefit of the developers of Zulip's mobile
app, terminal app, and other API clients. See the [API documentation
docs](../documentation/api.md) for details on the OpenAPI
documentation.

The second is a higher-detail check inside `test_events` that this
specific test generated the expected series of events. Let's look at
the last line of our example test snippet:

    # ...
    events = self.verify_action(lambda: do_add_default_stream(stream))

The best way to understand how to write schema checkers is to read
the file, and then you can skim the rest of the file to see the
patterns.

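To make the pattern concrete, here is a minimal sketch of what a schema checker does; the `make_checker` helper is hypothetical, and Zulip's real checkers are considerably richer:

```python
# Hypothetical, minimal version of an event schema checker: verify that
# an event has the expected type and fields with the expected Python types.

def make_checker(event_type, field_types):
    def checker(var_name, event):
        assert event.get("type") == event_type, f"{var_name}: wrong event type"
        for field, expected in field_types.items():
            assert field in event, f"{var_name}: missing field {field!r}"
            assert isinstance(event[field], expected), (
                f"{var_name}[{field!r}]: expected {expected.__name__}, "
                f"got {type(event[field]).__name__}"
            )
    return checker

check_default_streams = make_checker(
    "default_streams", {"default_streams": list}
)
check_default_streams(
    "events[0]",
    {"type": "default_streams", "default_streams": []},
)  # passes silently
```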
When you create a new schema checker for a new event, you not only
make the `test_events` test more rigorous, you also allow our other
tools to use the same schema checker to validate event formats in our
node test fixtures and our OpenAPI documentation.

#### Node testing

Once you've completed backend testing, be sure to add an example event
in `frontend_tests/node_tests/lib/events.js`, a test of the
`server_events_dispatch.js` code for that event in
`frontend_tests/node_tests/dispatch.js`, and verify your example
against the two versions of the schema that you declared above using
`tools/check-node-fixtures`.

#### Code coverage