The Postgres logical decoding protocol is poorly documented and unclear
about keepalive behavior when `wal_sender_timeout` is set to 0
(disabled). We disable it so that Postgres doesn't terminate our
connection for falling too far behind.
What we failed to take into account is that on some installations,
Postgres _never_ requests an immediate reply (a keepalive with the
reply-now bit set) if `wal_sender_timeout` is disabled. This means we
would always reply with the empty message, never advancing the
acknowledged LSN position.
In this PR, we fix that by responding to every keepalive message with a
Standby Status Update so that the LSN position keeps advancing.
Relevant documentation:
https://www.postgresql.org/docs/current/protocol-replication.html#PROTOCOL-REPLICATION-STANDBY-STATUS-UPDATE
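For reference, a minimal sketch of the fix on top of
`Postgrex.ReplicationConnection` (module context and exact LSN
bookkeeping simplified):

```elixir
# Inside our Postgrex.ReplicationConnection-based module: a primary keepalive ('k')
# is now always answered with a Standby Status Update ('r'), even when the
# "reply now" byte is 0, so the acknowledged LSN keeps advancing.
@impl true
def handle_data(<<?k, wal_end::64, _clock::64, _reply_requested>>, state) do
  lsn = wal_end + 1

  # write LSN, flush LSN, apply LSN, timestamp, and "no reply requested" flag
  reply = <<?r, lsn::64, lsn::64, lsn::64, current_time()::64, 0>>

  {:noreply, [reply], state}
end

# Microseconds since the Postgres epoch (2000-01-01), as the protocol expects.
@epoch DateTime.to_unix(~U[2000-01-01 00:00:00Z], :microsecond)
defp current_time, do: System.os_time(:microsecond) - @epoch
```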
Firezone uses ICMP errors to signal to client applications that e.g. a
certain IP is not reachable. This happens, for example, if a DNS
resource only resolves to IPv4 addresses yet the client application
attempts to use an IPv6 proxy address to connect to it.
In the presence of traffic filters for such a resource that do _not_
allow ICMP, we currently filter out these ICMP errors because - well -
ICMP traffic is not allowed! However, even when ICMP traffic _is_
allowed, we would fail to evaluate the filter because an ICMP error
packet is not an ICMP echo reply and therefore doesn't carry an ICMP
identifier. We require this identifier in the DNS resource NAT to
identify "connections" and NAT them correctly, and the same L4 component
is used to evaluate the traffic filters.
ICMP errors are critical to many usage scenarios and algorithms like
Happy Eyeballs. Dropping them usually results in weird behaviour because
client applications can then only react to timeouts.
In #9733, we changed the replies to `handle_data` messages, which seems
to have caused Postgres to stop respecting the acknowledgements we send
in the keepalive reply.
To fix this, we revert to sending an empty message in response to write
messages.
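Sketched against the same `handle_data/2` callback (the decoding helper
is hypothetical), the write path goes back to returning an empty reply
list:

```elixir
# XLogData ('w') messages are processed but answered with nothing; the LSN is
# acknowledged only via the keepalive reply.
def handle_data(<<?w, _wal_start::64, _wal_end::64, _clock::64, payload::binary>>, state) do
  state = process_wal_payload(payload, state)

  {:noreply, [], state}
end
```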
Inserting a change log incurs some minor overhead for sending the query
over the network and reacting to its response. In many cases, this
overhead makes up the bulk of the time it takes to run the change log
insert.
To reduce this overhead and avoid any kind of processing delay in the
WAL consumers, we introduce batch-insert functionality with a batch size
of `500` and a timeout of `30` seconds. If either of those limits is
hit, we flush the batch using `insert_all`.
`insert_all` does not use `Ecto.Changeset`, so we need to be a bit more
careful about the data we insert, and we check the inserted LSNs to
determine what to advance the acknowledged LSN pointer to.
The functionality that determines when to call the new `on_flush/1`
callback lives in the replication_connection module, but the actual
behavior of `on_flush/1` is left to the child modules to implement. The
`Events.ReplicationConnection` module does not use the flush behavior,
so it keeps the defaults, which disable the flush mechanism.
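A rough sketch of the batching logic, compressed into one place rather
than the replication_connection/`on_flush/1` split described above
(module and schema names, the state shape, and the assumption that each
row carries the WAL `lsn` it came from are illustrative; a periodic
timer is assumed to send `:flush_timeout` every 30 seconds):

```elixir
@flush_count 500

# Buffer a decoded change; flush as soon as the batch reaches 500 entries.
def buffer(change, %{buffer: buffer} = state) do
  buffer = [change | buffer]

  if length(buffer) >= @flush_count do
    flush(%{state | buffer: buffer})
  else
    %{state | buffer: buffer}
  end
end

# Flush on the 30-second timer even if the batch is not full.
def handle_info(:flush_timeout, state) do
  {:noreply, flush(state)}
end

defp flush(%{buffer: []} = state), do: state

defp flush(%{buffer: buffer} = state) do
  attrs = Enum.reverse(buffer)

  # insert_all/3 bypasses Ecto.Changeset, so attrs must already be validated.
  {_count, rows} =
    Domain.Repo.insert_all(Domain.ChangeLogs.ChangeLog, attrs, returning: [:lsn])

  # Advance the acknowledged LSN to the highest LSN that was actually inserted.
  acked_lsn =
    case rows do
      [] -> state.acked_lsn
      rows -> rows |> Enum.map(& &1.lsn) |> Enum.max()
    end

  %{state | buffer: [], acked_lsn: acked_lsn}
end
```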
Related: #949
Bumps [phoenix_ecto](https://github.com/phoenixframework/phoenix_ecto)
from 4.6.4 to 4.6.5.
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/phoenixframework/phoenix_ecto/blob/main/CHANGELOG.md">phoenix_ecto's
changelog</a>.</em></p>
<blockquote>
<h2>v4.6.5</h2>
<ul>
<li>Bug fixes
<ul>
<li>Unallow existing allowances when attempting to allow a Plug to
access a connection</li>
</ul>
</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li>See full diff in <a
href="https://github.com/phoenixframework/phoenix_ecto/commits/v4.6.5">compare
view</a></li>
</ul>
</details>
<br />
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Why:
* We were previously only catching the `:rate_limited` error when
sending welcome emails. This update adds a catch-all case to gracefully
handle the error and alert us.
---------
Signed-off-by: Brian Manifold <bmanifold@users.noreply.github.com>
Co-authored-by: Jamil <jamilbk@users.noreply.github.com>
Bumps [nimble_csv](https://github.com/dashbitco/nimble_csv) from 1.2.0
to 1.3.0.
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/dashbitco/nimble_csv/blob/master/CHANGELOG.md">nimble_csv's
changelog</a>.</em></p>
<blockquote>
<h2>v1.3.0 (2025-06-24)</h2>
<ul>
<li>Require Elixir 1.15+</li>
<li>Add <code>generated: true</code> to <code>newlines_separator</code>
macro</li>
<li>Fix warnings on Elixir 1.20+</li>
<li>Document OWASP official recommendations for CSV injections</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="2fc3cbf4b5"><code>2fc3cbf</code></a>
Release v1.3.0</li>
<li><a
href="1c3b383dd9"><code>1c3b383</code></a>
Update ex_doc</li>
<li><a
href="da38c6ac6b"><code>da38c6a</code></a>
Document OWASP official recommendations for CSV injections (<a
href="https://redirect.github.com/dashbitco/nimble_csv/issues/86">#86</a>)</li>
<li><a
href="2b4a5e0da6"><code>2b4a5e0</code></a>
Support Elixir 1.18 in CI (<a
href="https://redirect.github.com/dashbitco/nimble_csv/issues/84">#84</a>)</li>
<li><a
href="020ea8297e"><code>020ea82</code></a>
Update docs</li>
<li><a
href="106e298ec7"><code>106e298</code></a>
Fix warnings on Elixir 1.20+ (<a
href="https://redirect.github.com/dashbitco/nimble_csv/issues/83">#83</a>)</li>
<li><a
href="57e0858100"><code>57e0858</code></a>
CI housekeeping (<a
href="https://redirect.github.com/dashbitco/nimble_csv/issues/82">#82</a>)</li>
<li><a
href="8cf912e204"><code>8cf912e</code></a>
Add generated to newlines_separator macro (<a
href="https://redirect.github.com/dashbitco/nimble_csv/issues/81">#81</a>)</li>
<li><a
href="3b2e2f7f2a"><code>3b2e2f7</code></a>
Document parse error, closes <a
href="https://redirect.github.com/dashbitco/nimble_csv/issues/78">#78</a></li>
<li><a
href="0b2390440a"><code>0b23904</code></a>
Add benchmarks using benchee (<a
href="https://redirect.github.com/dashbitco/nimble_csv/issues/76">#76</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/dashbitco/nimble_csv/compare/v1.2.0...v1.3.0">compare
view</a></li>
</ul>
</details>
<br />
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Bumps [mix_audit](https://github.com/mirego/mix_audit) from 2.1.4 to
2.1.5.
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/mirego/mix_audit/blob/main/CHANGELOG.md">mix_audit's
changelog</a>.</em></p>
<blockquote>
<h2>2.1.5 (2025-06-09)</h2>
<ul>
<li>Update dependencies</li>
<li>Use <code>System.stop/1</code> instead of
<code>System.halt/1</code></li>
<li>Use <code>System.user_home()</code> instead of
<code>System.get_env("HOME")</code></li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="a3a6c1cbf4"><code>a3a6c1c</code></a>
v2.1.5</li>
<li><a
href="aa630f7e91"><code>aa630f7</code></a>
Update CHANGELOG.md</li>
<li><a
href="14f3511a92"><code>14f3511</code></a>
Bump ex_doc from 0.38.1 to 0.38.2</li>
<li><a
href="14698645b0"><code>1469864</code></a>
refactor: use <code>System.stop/1</code> to enable caller to rescue
tasks</li>
<li><a
href="5b43bcb989"><code>5b43bcb</code></a>
Bump ex_doc from 0.37.3 to 0.38.1</li>
<li><a
href="572a0bfdb3"><code>572a0bf</code></a>
Bump ex_doc from 0.34.2 to 0.37.3</li>
<li><a
href="7247db4543"><code>7247db4</code></a>
Bump jason from 1.4.3 to 1.4.4</li>
<li><a
href="11580d4125"><code>11580d4</code></a>
Change home to platform independent function</li>
<li><a
href="41c96c476a"><code>41c96c4</code></a>
Revert a commit that was temporary to test something locally 🤦♂️</li>
<li><a
href="876645af75"><code>876645a</code></a>
Use latest Ubuntu and support Elixir 1.18 and 1.17 in CI</li>
<li>Additional commits viewable in <a
href="https://github.com/mirego/mix_audit/compare/v2.1.4...v2.1.5">compare
view</a></li>
</ul>
</details>
<br />
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Bumps [ecto_sql](https://github.com/elixir-ecto/ecto_sql) from 3.12.1 to
3.13.2.
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/elixir-ecto/ecto_sql/blob/master/CHANGELOG.md">ecto_sql's
changelog</a>.</em></p>
<blockquote>
<h2>v3.13.2 (2025-06-24)</h2>
<h3>Enhancements</h3>
<ul>
<li>[sandbox] Allow passing through opts in
<code>Ecto.Adapters.SQL.Sandbox.allow/4</code> calls</li>
<li>[sql] Add support for <code>ON DELETE SET DEFAULT</code></li>
</ul>
<h3>Bug fixes</h3>
<ul>
<li>[postgres] Fix nested array generated time columns</li>
</ul>
<h2>v3.13.1 (2025-06-20)</h2>
<h3>Bug fixes</h3>
<ul>
<li>[postgres] Fix nested array generated columns</li>
</ul>
<h2>v3.13.0 (2025-06-18)</h2>
<h3>Enhancements</h3>
<ul>
<li>[Ecto.Migration] Add support for index directions</li>
<li>[sql] Support <code>:log_stacktrace_mfa</code> for filtering or
modifying stacktrace-derived info in query logs</li>
<li>[mysql] Support arrays using JSON for MariaDB</li>
<li>[mysql] Allow to specify <code>:prepare</code> per operation</li>
<li>[postgres] Add support for collations in Postgres</li>
<li>[postgres] Allow source fields in
<code>json_extract_path</code></li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="cf5080c1a4"><code>cf5080c</code></a>
Release v3.13.2</li>
<li><a
href="b87638180f"><code>b876381</code></a>
Refactor generated handling in column_type</li>
<li><a
href="62603f88b6"><code>62603f8</code></a>
Fix generated nested time array (<a
href="https://redirect.github.com/elixir-ecto/ecto_sql/issues/680">#680</a>)</li>
<li><a
href="701c99e97f"><code>701c99e</code></a>
Add support for <code>ON DELETE SET DEFAULT</code> (<a
href="https://redirect.github.com/elixir-ecto/ecto_sql/issues/677">#677</a>)</li>
<li><a
href="79590224dc"><code>7959022</code></a>
Allow passing through opts in Ecto.Adapters.SQL.Sandbox.allow/4 calls
(<a
href="https://redirect.github.com/elixir-ecto/ecto_sql/issues/678">#678</a>)</li>
<li><a
href="22c71121b7"><code>22c7112</code></a>
Release v3.13.1</li>
<li><a
href="35e27985ec"><code>35e2798</code></a>
Fix nested array generated columns (<a
href="https://redirect.github.com/elixir-ecto/ecto_sql/issues/676">#676</a>)</li>
<li><a
href="955f0fbf8f"><code>955f0fb</code></a>
Release v3.13.0</li>
<li><a
href="aa9a3291f7"><code>aa9a329</code></a>
Remove unused argument from private helper (<a
href="https://redirect.github.com/elixir-ecto/ecto_sql/issues/672">#672</a>)</li>
<li><a
href="3084d7150d"><code>3084d71</code></a>
Better docs for Repos that use <code>Ecto.Adapters.SQL.Adapter</code>
(<a
href="https://redirect.github.com/elixir-ecto/ecto_sql/issues/671">#671</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/elixir-ecto/ecto_sql/compare/v3.12.1...v3.13.2">compare
view</a></li>
</ul>
</details>
<br />
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
The domain nodes that run background jobs aren't user-facing and, as
such, don't need short `queue_interval` times. They can afford to wait a
few seconds for a connection from the pool to become available.
This should be a relatively low-risk change: it does not increase the
resources used on the app server and database, it only relaxes how long
we're willing to wait for a connection to become available.
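As a sketch, the knobs involved are DBConnection's
`queue_target`/`queue_interval` pool options; the app/repo names and the
values below are illustrative, not the ones we ship:

```elixir
import Config

# Background-job (domain) nodes: tolerate waiting longer for a pooled connection
# instead of tuning for low latency like the user-facing web nodes do.
config :domain, Domain.Repo,
  # If checkouts keep exceeding queue_target within a queue_interval window,
  # the pool considers itself overloaded and starts shedding queries.
  queue_target: 500,
  queue_interval: 5_000
```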
This has been dead code for a long time. The feature this was meant to
support, #8353, will require a different domain model, views, and user
flows.
Related: #8353
When the ReplicationConnection dies, the Managers on all other nodes die
too, and the domain Application supervisors on all nodes will attempt to
restart them. This allows the connection to migrate to a healthy node
automagically.
However, the default Supervisor behavior is to allow 3 restarts in 5
seconds before the whole tree is taken down. To prevent this, we trap
the exit in the ReplicationManager and attempt to reconnect right away,
beginning the backoff process.
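A minimal sketch of that manager behavior (module names, state shape,
and the backoff policy are illustrative):

```elixir
# In the ReplicationManager GenServer: trap exits so a crashing connection does
# not count against the supervisor's restart intensity, and reconnect with
# backoff instead of letting the exit bubble up the supervision tree.
def init(opts) do
  Process.flag(:trap_exit, true)
  {:ok, start_connection(%{opts: opts, conn: nil, backoff: 100})}
end

def handle_info({:EXIT, pid, _reason}, %{conn: pid} = state) do
  Process.send_after(self(), :connect, state.backoff)
  {:noreply, %{state | conn: nil, backoff: min(state.backoff * 2, :timer.minutes(1))}}
end

def handle_info(:connect, state) do
  {:noreply, start_connection(state)}
end
```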
We had an old bug in one of our acceptance tests that is just now being
hit again due to the faster runners.
- We need to wait for the dropdown to become visible before clicking it.
- We fix a minor timer issue that calculated elapsed time incorrectly
when determining when to time out while finding an element.
The `expires_at` column on the `flows` table was never used outside of
the context in which the flow was created in the Client Channel. This
ephemeral state, which is created in the `Domain.Flows.authorize_flow/4`
function, is never read from the DB in any meaningful capacity, so it
can be safely removed.
The `expire_flows_for` family of functions now simply reads the needed
fields from the flows table in order to broadcast `{:expire_flow,
flow_id, client_id, resource_id}` directly to the subscribed entities.
This PR is step 1 in removing the reliance on `Flows` to manage
ephemeral access state. In a subsequent PR we will actually change the
structure of what state is kept in the channel PIDs such that reliance
on this Flows table will no longer be necessary.
Additionally, in a few places, we were referencing a Flows.Show view
that was never available in production, so this dead code has been
removed.
Lastly, the `flows` table subscription and its associated hook
processing have been completely removed, as they are no longer needed.
In #9667 we implemented logic to remove dropped table subscriptions from
the publication, so we can expect a couple of ingest warnings when we
deploy this: the `Hooks.Flows` processor no longer exists and the WAL
may still have lingering flows records in the queue. These can be safely
ignored.
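For illustration, the new shape of the `expire_flows_for` functions
looks roughly like this (the query, PubSub server, and topic are
assumptions, not the exact production code; `import Ecto.Query` is
assumed to be in scope):

```elixir
def expire_flows_for_resource_id(resource_id) do
  Domain.Flows.Flow
  |> where([f], f.resource_id == ^resource_id)
  |> select([f], {f.id, f.client_id, f.resource_id})
  |> Domain.Repo.all()
  |> Enum.each(fn {flow_id, client_id, resource_id} ->
    # Broadcast directly to the subscribed channel processes; nothing else is
    # read back from the flows table.
    Phoenix.PubSub.broadcast(
      Domain.PubSub,
      "flows:client:#{client_id}",
      {:expire_flow, flow_id, client_id, resource_id}
    )
  end)
end
```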
Now that we know the bypass system works, it might be a good idea to
allow it to lag data by up to 30 minutes so that events accrued during
deploys are not lost.
Also, this PR fixes a small bug where we triggered the threshold _after_
a transaction had already committed (`COMMIT`) instead of before the
data came through (`BEGIN`). Since the timestamps are identical (see
below), it is more accurate to read the timestamp of the transaction
before acting on the data contained within.
```
[(domain 0.1.0+dev) lib/domain/change_logs/replication_connection.ex:4: Domain.ChangeLogs.ReplicationConnection.handle_message/3]
"BEGIN #{commit_timestamp}" #=> "BEGIN 2025-06-26 04:22:45.283151Z"
[(domain 0.1.0+dev) lib/domain/change_logs/replication_connection.ex:4: Domain.ChangeLogs.ReplicationConnection.handle_message/3]
"END #{commit_timestamp}" #=> "END 2025-06-26 04:22:45.283151Z"
```
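A sketch of the corrected ordering (struct and field names are
hypothetical): the lag check now runs against the `BEGIN` message's
commit timestamp, before any of the transaction's data is processed:

```elixir
# On BEGIN: decide whether to bypass based on how far behind this transaction's
# commit timestamp is, using the new 30-minute allowance.
defp maybe_toggle_bypass(%{commit_timestamp: commit_timestamp}, state) do
  lag_seconds = DateTime.diff(DateTime.utc_now(), commit_timestamp, :second)

  %{state | bypass?: lag_seconds > 30 * 60}
end
```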
---------
Signed-off-by: Jamil <jamilbk@users.noreply.github.com>
Co-authored-by: Brian Manifold <bmanifold@users.noreply.github.com>
When a client is updated, we may need to re-initialize it if "breaking"
fields are updated. If non-breaking fields change, such as the name, we
don't need to re-initialize the client.
This PR also adds a helper, `struct_from_params/2`, which creates a
schema struct from WAL data in order to type-cast any needed data for
convenience. This avoids a DB hit - we _already have the data from the
DB_ - we just need to format and send it.
Related: #9501
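A possible shape for such a helper, sketched with `Ecto.Changeset`
(embedded fields skipped for brevity; the real implementation may
differ):

```elixir
# Build a typed schema struct straight from WAL params, without touching the DB.
def struct_from_params(schema_module, params) when is_atom(schema_module) and is_map(params) do
  # Skip embeds; they would need cast_embed/3.
  fields = schema_module.__schema__(:fields) -- schema_module.__schema__(:embeds)

  schema_module
  |> struct()
  |> Ecto.Changeset.cast(params, fields)
  |> Ecto.Changeset.apply_changes()
end
```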
Table publications (and the associated replication slot) are sticky:
they will outlive the process that created them.
We don't want to remove them on shutdown, because this will pause WAL
writing to disk.
However, when starting the _new_ application, it's possible
`table_subscriptions` has changed (for example, if we decide we no
longer want events for a certain table). We weren't updating the created
publication(s) with these added/removed tables, so this PR updates the
replication connection setup state machine to pass through a few
conditionals that update the publication with the diff of old vs. new
tables.
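Conceptually, the reconciliation boils down to the following (the
publication name, and how the existing tables are fetched - e.g. from
`pg_publication_tables` - are simplified):

```elixir
@publication "my_publication"

# Diff what the publication currently contains against what table_subscriptions
# now asks for, and emit the ALTER statements needed to reconcile them.
defp publication_reconcile_statements(existing_tables, desired_tables) do
  to_add = desired_tables -- existing_tables
  to_drop = existing_tables -- desired_tables

  Enum.map(to_add, &"ALTER PUBLICATION #{@publication} ADD TABLE #{&1}") ++
    Enum.map(to_drop, &"ALTER PUBLICATION #{@publication} DROP TABLE #{&1}")
end
```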
Building on the WAL consumer that's been in development over the past
several weeks, we introduce a new `change_logs` table that stores very
lightly up-fitted data decoded from the WAL (a sketch of a matching
migration follows the list):
- `account_id` (indexed): a foreign key reference to an account.
- `inserted_at` (indexed): the timestamp of insert, used for truncating
rows later.
- `table`: the table where the op took place.
- `op`: the operation performed (insert/update/delete).
- `old_data`: a nullable map of the old row data (update/delete).
- `data`: a nullable map of the new row data (insert/update).
- `vsn`: an integer version field we can bump to signify schema changes
in the data, in case we need to apply operations to only new or only old
data.
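A migration sketch matching the columns above (types, defaults, and
constraint details are assumptions and may differ from the actual
migration):

```elixir
defmodule Domain.Repo.Migrations.CreateChangeLogs do
  use Ecto.Migration

  def change do
    create table(:change_logs, primary_key: false) do
      add(:id, :uuid, primary_key: true)
      add(:account_id, references(:accounts, type: :binary_id), null: false)
      add(:table, :string, null: false)
      add(:op, :string, null: false)
      add(:old_data, :map)
      add(:data, :map)
      add(:vsn, :integer, null: false, default: 0)
      add(:inserted_at, :utc_datetime_usec, null: false)
    end

    create(index(:change_logs, [:account_id]))
    create(index(:change_logs, [:inserted_at]))
  end
end
```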
Judging from our prod metrics, we currently average about 1,000 write
operations a minute, which will generate about 1-2 dozen change logs per
second. Doing the math, 30 days at our current volume will yield about
50M rows per month, which should be OK for some time, since this is an
append-only table that is rarely (if ever) read from.
The one aspect of this we may need to handle sooner rather than later is
batch-inserting these rows. That raises an issue, though: currently, in
this PR, we process each WAL event serially, ending with the final
acknowledgement `:ok`, which signals to Postgres our status in
processing the WAL.
If we do anything async here, this processing "cursor" becomes
inaccurate, so we may need to think about what to track and what data we
care about.
Related: #7124
It's confusing that we clear this field upon sync failure. Instead, we
let it track the time of the last sync.
This will be cleaned up in #6294, so we're just applying a minimal fix
now.
Fixes #7715
Adds the `account_slug` to the gateway's `init` message. When the
account slug is changed, the gateway's socket is disconnected using the
same mechanism as gateway deletion, which causes the gateway to
reconnect immediately and receive a new `init`.
Related: #9545
Why:
* After updating the Auth Provider changesets to trim all whitespace
from user-editable string fields, we realized we needed to do the same
for all forms/entities within Firezone. This commit updates all entities
to trim whitespace on string fields.
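A sketch of the kind of shared changeset helper this implies (the name
and call sites are hypothetical):

```elixir
# Trim leading/trailing whitespace from any string fields that changed.
def trim_change(%Ecto.Changeset{} = changeset, fields) when is_list(fields) do
  Enum.reduce(fields, changeset, fn field, changeset ->
    Ecto.Changeset.update_change(changeset, field, fn
      value when is_binary(value) -> String.trim(value)
      value -> value
    end)
  end)
end

# Usage inside a changeset pipeline:
#   changeset |> trim_change([:name, :description])
```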
Fixes: #9579
Unfortunately, #9608 did not handle the case where we receive more than
200 compressed metrics in a single call. To fix this, we ensure we
`flush` the metrics buffer inside the `reduce` so that the accumulated
metrics buffer never grows larger than 200 points.
The log string was updated to roll the issue over in Sentry, and the old
issue was set to delete and destroy to prevent issue spam.
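Roughly, the bounded buffer inside the reduce looks like this (the
function names and flush mechanism are illustrative):

```elixir
@max_points 200

def push_compressed_metrics(compressed_metrics) do
  rest =
    Enum.reduce(compressed_metrics, [], fn point, buffer ->
      buffer = [point | buffer]

      # Flush as soon as we hit 200 points so the accumulator never exceeds it.
      if length(buffer) >= @max_points do
        flush(Enum.reverse(buffer))
        []
      else
        buffer
      end
    end)

  # Flush whatever is left over after the reduce.
  if rest != [], do: flush(Enum.reverse(rest))
  :ok
end
```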
---------
Signed-off-by: Jamil <jamilbk@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Instead of checking whether the buffer limit has been exceeded _after_
adding new timeseries to it, we now check before.
Variables were renamed to make it clearer what they represent.