Fix bug where we failed to delete old push actions (#13194)

This happened if we encountered a stream ordering in `event_push_actions` that had more rows than the batch size of the delete: if we don't delete any rows in an iteration, then the next time round we pick the exact same stream ordering and get stuck.
Erik Johnston 2022-07-06 12:09:19 +01:00 committed by GitHub
parent 68db233f0c
commit a0f51b059c
2 changed files with 5 additions and 2 deletions

changelog.d/13194.bugfix (new file)

@@ -0,0 +1 @@
+Fix bug where rows were not deleted from `event_push_actions` table on large servers. Introduced in v1.62.0.

synapse/storage/databases/main/event_push_actions.py

@@ -1114,7 +1114,7 @@ class EventPushActionsWorkerStore(ReceiptsWorkerStore, StreamWorkerStore, SQLBas
             txn.execute(
                 """
                 SELECT stream_ordering FROM event_push_actions
-                WHERE stream_ordering < ? AND highlight = 0
+                WHERE stream_ordering <= ? AND highlight = 0
                 ORDER BY stream_ordering ASC LIMIT 1 OFFSET ?
                 """,
                 (
@@ -1129,10 +1129,12 @@ class EventPushActionsWorkerStore(ReceiptsWorkerStore, StreamWorkerStore, SQLBas
             else:
                 stream_ordering = max_stream_ordering_to_delete
 
+            # We need to use an inclusive bound here to handle the case where a
+            # single stream ordering has more than `batch_size` rows.
             txn.execute(
                 """
                 DELETE FROM event_push_actions
-                WHERE stream_ordering < ? AND highlight = 0
+                WHERE stream_ordering <= ? AND highlight = 0
                 """,
                 (stream_ordering,),
             )
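
For illustration, below is a minimal, self-contained sketch of the failure mode and the fix. It is not the Synapse code: `pick_cutoff`, `delete_batch` and the in-memory row list are made up for this example, standing in for the `SELECT ... LIMIT 1 OFFSET ?` query and the bounded `DELETE` in the diff above.

```python
# Hypothetical simulation of the batched delete; plain ints stand in for rows
# of the `event_push_actions` table keyed by stream ordering.

def pick_cutoff(rows: list[int], max_stream_ordering: int, batch_size: int) -> int:
    """Mimics SELECT ... ORDER BY stream_ordering ASC LIMIT 1 OFFSET batch_size:
    returns the stream ordering `batch_size` rows in, or the maximum otherwise."""
    candidates = sorted(r for r in rows if r <= max_stream_ordering)
    return candidates[batch_size] if len(candidates) > batch_size else max_stream_ordering

def delete_batch(rows: list[int], cutoff: int, inclusive: bool) -> list[int]:
    """Mimics the DELETE: drops rows below (or at, when inclusive) the cut-off."""
    if inclusive:
        return [r for r in rows if r > cutoff]   # WHERE stream_ordering <= ?
    return [r for r in rows if r >= cutoff]      # WHERE stream_ordering < ?

# One stream ordering (100) has more rows than the batch size of 3.
rows = [100, 100, 100, 100, 100]
cutoff = pick_cutoff(rows, max_stream_ordering=200, batch_size=3)  # -> 100

# Old exclusive bound: nothing is deleted, so the next iteration picks the
# same cut-off again and the loop is stuck.
assert delete_batch(rows, cutoff, inclusive=False) == rows

# New inclusive bound: the whole stream ordering is deleted and the loop
# makes progress (possibly deleting slightly more than `batch_size` rows).
assert delete_batch(rows, cutoff, inclusive=True) == []
```

With the inclusive bound a single iteration may delete somewhat more than `batch_size` rows, but every iteration is then guaranteed to make progress.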