This fixes a simple logic bug where we mistakenly reacted to a flow
deletion event by sending `reject_access` even though flows still
existed in the cache. It also adds more comprehensive logging to help
diagnose issues like this more quickly in the future.
This PR also fixes the following issues found during the investigation:
- We were redundantly reacting to Token deletion in the channel pids.
This is unnecessary: we send a global socket disconnect from the Token
hook module instead.
- We had a bug that would crash the WAL consumer if a "global" token
(i.e. relay) was deleted or expired - these have no `account_id`.
- We now always use `min(max(all_conforming_policies_expiration),
token.expires_at)` when setting the expiration on a new flow, to minimize
the possibility of access churn (see the sketch after this list).
- We now check that the token and gateway are still undeleted when
re-authorizing a given flow. This prevents us from failing to send
`reject_access` when the token or gateway corresponding to a flow is
deleted but the other entities would still have granted access.
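As a rough sketch of that expiry rule (illustrative names and plain Unix
timestamps; the actual logic lives in the portal):

```rust
/// Illustrative sketch with made-up names: a flow expires at the latest
/// conforming-policy expiration, but never later than the token itself.
fn flow_expires_at(conforming_policy_expirations: &[i64], token_expires_at: i64) -> i64 {
    conforming_policy_expirations
        .iter()
        .copied()
        .max()
        .map_or(token_expires_at, |policy_max| policy_max.min(token_expires_at))
}
```

Clamping to `token.expires_at` ensures a flow never outlives its token,
while taking the max over conforming policies avoids expiring access that
another policy would still grant.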
Related: https://firezone.statuspage.io/incidents/xrsm13tml3dh
Related: #10068
Related: #9501
In #7548, we added a feature to Firezone where TURN channels get bound
on-demand as they are needed. To ensure many communication paths work,
we also proactively bind them as soon as we receive a candidate from a
remote.
When a new remote candidate gets added, str0m forms pairs with all the
existing local candidates and starts testing these candidate pairs. For
local relay candidates, this means sending a channel data message from
the allocation.
At the moment, this results in the following pattern in the logs:
```
Received candidate from remote cid=20af9d29-c973-4d77-909a-abed5d7a0234 candidate=Candidate(relay=[3231E680683CFC98E69A12A60F426AA5E5F110CB]:62759/udp raddr=[59A533B0D4D3CB3717FD3D655E1D419E1C9C0772]:0 prio=37492735)
No channel to peer, binding new one active_socket=462A7A508E3C99875E69C2519CA020330A6004EC:3478 peer=[3231E680683CFC98E69A12A60F426AA5E5F110CB]:62759
Already binding a channel to peer active_socket=Some(462A7A508E3C99875E69C2519CA020330A6004EC:3478) peer=[3231E680683CFC98E69A12A60F426AA5E5F110CB]:62759
class=success response from=462A7A508E3C99875E69C2519CA020330A6004EC:3478 method=channel bind rtt=9.928424ms tid=042F52145848D6C1574BB997
```
What happens here is:
1. We receive a new candidate and proactively bind a channel (this is a
silent operation and therefore not visible in the logs).
2. str0m forms new pairs for these candidates and starts testing them,
triggering a new channel binding because the previous one hasn't
completed yet.
3. We refuse to make another channel binding because we see that we
already have one in-flight.
4. The channel binding succeeds.
What we do now is:
If we want to send data to a peer through a channel, we check whether we
have a connected OR an in-flight channel and send the data in both
cases. If the channel binding is still in-flight, we simply pipeline the
channel data message right behind the binding request. Chances are that,
assuming no packet re-ordering on the network, by the time our channel
data message arrives at the relay, the binding is active and the data
can be relayed.
This allows the very first binding attempt from str0m to already succeed
instead of waiting for the timeout and sending another binding request.
In addition, it makes these logs less confusing.
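As a sketch of the new send path (types are simplified; the framing
follows RFC 8656 channel data messages):

```rust
/// Simplified sketch: a channel that is merely in-flight is treated the
/// same as a bound one when sending.
enum Channel {
    InFlight { number: u16 }, // ChannelBind sent, success response pending
    Bound { number: u16 },
}

fn encode_channel_data(channel: &Channel, payload: &[u8]) -> Vec<u8> {
    // Send in both states; a message behind an in-flight binding arrives
    // at the relay just after the binding completes.
    let number = match channel {
        Channel::InFlight { number } | Channel::Bound { number } => *number,
    };

    let mut msg = Vec::with_capacity(4 + payload.len());
    msg.extend_from_slice(&number.to_be_bytes()); // channel number
    msg.extend_from_slice(&(payload.len() as u16).to_be_bytes()); // payload length
    msg.extend_from_slice(payload);
    msg
}
```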
Our TURN traffic is minimal enough for this to be okay on DEBUG (instead
of TRACE). However, it can be quite noisy when one is just scanning
through the logs. Putting it on a separate target allows us to filter
those messages out later.
Note that these only concern the TURN control protocol. Channel data
messages are separate from this and **not** logged.
When a Mac goes to sleep, it typically does not turn off the WiFi
radio. If the Mac never leaves the network it was on when it went to
sleep, it will never receive a path update upon wake, and we would not
have performed a connlib network reset.
To fix this, we now properly detect a wake from sleep event and issue a
network reset.
Fixes #10056
When starting a local client with a local portal, this URL is hit and
times out, causing noise in the local gateway log.
In order to develop against this API in local dev, it might be better to
use the local website URL as well.
This fixes an issue introduced when we moved to GHCR hosting where the
`latest` tag was being applied to each `main` build of the `client` and
`gateway` instead of publish builds only.
Spans only attach to logs of equal or lower severity, i.e. a DEBUG span
is only visible for DEBUG and TRACE statements. In order to see the
connection ID here with our INFO statements, we need to make it an INFO
span.
Also, a span does nothing unless it is entered 🤦‍♂️
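A minimal sketch of both fixes (span name and field are illustrative):

```rust
fn handle_connection(connection_id: u64) {
    // 1. The span must be INFO so that INFO statements can attach to it.
    let span = tracing::info_span!("connection", cid = connection_id);

    // 2. The span must actually be entered to take effect.
    let _guard = span.enter();

    tracing::info!("connection established"); // now carries `cid`
}
```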
These don't really tell us much. It appears that Windows sometimes fails
to access the pipe but then succeeds on the next attempt, hence the
retry loop in the first place. Logging a warning here just spams Sentry
unnecessarily.
With the removal of the NAT64/46 modules, we can now simplify the
internals of our `IpPacket` struct. Its requirements are somewhat
delicate.
On the one hand, we don't want to be overly restrictive in our parsing /
validation code because there is a lot of broken software out there that
doesn't necessarily follow RFCs. Hence, we want to be as lenient as
possible in what we accept.
On the other hand, we do need to verify certain aspects of the packet,
like the payload lengths. At the moment, we are somewhat too lenient
there, which causes errors on the Gateway where we have to NAT or
otherwise manipulate the packets. See #9567 or #9552 for example.
To fix this, we make the parsing in the `IpPacket` constructor more
restrictive. If it is a UDP, TCP or ICMP packet, we attempt to fully
parse its headers and validate the payload lengths.
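For illustration, a self-contained sketch of the kind of length
validation this adds (the real constructor fully parses the transport
headers too):

```rust
/// Sketch of the stricter IPv4 checks: reject packets whose Total Length
/// or IHL fields disagree with the buffer we actually hold.
fn validate_ipv4(buf: &[u8]) -> Result<(), &'static str> {
    if buf.len() < 20 {
        return Err("buffer shorter than minimal IPv4 header");
    }

    let total_len = u16::from_be_bytes([buf[2], buf[3]]) as usize;
    if total_len > buf.len() {
        return Err("IPv4 Total Length exceeds buffer");
    }

    let header_len = ((buf[0] & 0x0f) as usize) * 4; // IHL is in 32-bit words
    if header_len < 20 || header_len > total_len {
        return Err("invalid IHL");
    }

    Ok(())
}
```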
This parsing allows us to then rely on the integrity of the packet as
part of the implementation. It does create several code paths that can,
in theory, panic but should be impossible to hit in practice. To ensure
that this does in fact not happen, we also tackle an issue that is long
overdue: fuzzing.
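The fuzz target itself can be as small as this sketch (the
`IpPacket::new` constructor is a placeholder for the real entry point):

```rust
// fuzz/fuzz_targets/ip_packet.rs — sketch of a cargo-fuzz target.
#![no_main]

use libfuzzer_sys::fuzz_target;

fuzz_target!(|data: &[u8]| {
    // Parsing arbitrary bytes must return an error instead of panicking.
    let _ = ip_packet::IpPacket::new(data.to_vec());
});
```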
Resolves: #6667
Resolves: #9567
Resolves: #9552
Rust 1.88 shipped `HashMap::extract_if`, a new standard-library function
to conditionally extract elements from a `HashMap`. This is handy for
the time-based expiry of resources on the Gateway.
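For example, a sketch of time-based expiry with `extract_if` (names are
illustrative):

```rust
use std::collections::HashMap;
use std::time::Instant;

/// Drop all entries whose deadline has passed, in a single pass.
fn expire_resources<Id, R>(resources: &mut HashMap<Id, (Instant, R)>, now: Instant)
where
    Id: Eq + std::hash::Hash,
{
    for (_id, (_deadline, _resource)) in
        resources.extract_if(|_, (deadline, _)| *deadline <= now)
    {
        // Expired entries are moved out of the map and dropped here.
    }
}
```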
When filtering through logs in Sentry, it is useful to narrow them down
by context of a client, gateway or resource. Currently, these fields are
sometimes called `client`, `cid`, `client_id` etc and the same for the
Gateway and Resources.
To make this filtering easier, we now name all of them `cid` for Client
IDs, `gid` for Gateway IDs and `rid` for Resource IDs.
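Sketch of the convention (field values are illustrative):

```rust
fn log_flow_authorized(client_id: &str, gateway_id: &str, resource_id: &str) {
    // Uniform field names make Sentry filters like `cid:<uuid>` work
    // across Client, Gateway and Resource logs alike.
    tracing::info!(cid = %client_id, gid = %gateway_id, rid = %resource_id, "Flow authorized");
}
```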
These appear to happen on systems that e.g. don't have IPv6 support or
where the destination cannot be reached. It is a bit of a catch-all, but
all the ones I am seeing in Sentry are false positives. To reduce the
noise a bit, we log these on DEBUG now.
Packet loss is a reality on the modern internet. Ideally, Firezone
should be able to handle some level of packet loss and still function
reliably, especially considering all of the UDP-based protocols we rely
on.
To test this, we set an extreme packet loss of 20% and perform a 10 MB
download through Firezone. Doing so actually exposed a bug:
For DNS resources, we need to set up the DNS resource NAT on the
Gateway, which happens through the p2p control protocol. This packet is
resent at most every 2s, but only if there are any other DNS queries. If
we don't receive another DNS query but keep getting traffic for the
resource, we buffer those packets indefinitely without ever trying to
re-send the `AssignedIp`s packet.
Bumps the com-android group in /kotlin/android with 1 update:
com.android.application.
Updates `com.android.application` from 8.10.1 to 8.11.1
We use several buffer pools across `connlib` that are all backed by the
same buffer-pool library. Within that library, we currently use another
object-pool library to provide the actual pooling functionality.
Benchmarking has shown that we spend quite a bit of time (a few % of
total CPU time) fighting for the lock to either add or remove a buffer
from the pool. This is unnecessary. By using a queue, we can remove
buffers from the front and add buffers at the back, both of which can be
implemented in a lock-free way such that they don't contend.
Using the well-known `crossbeam-queue` library, we have such a queue
directly available.
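A minimal sketch of the idea (the real pool has more bookkeeping):

```rust
use crossbeam_queue::ArrayQueue;

/// Lock-free buffer pool: `pop` takes from the front, `push` returns to
/// the back, and neither operation contends with the other.
struct BufferPool {
    buffers: ArrayQueue<Vec<u8>>,
    buffer_size: usize,
}

impl BufferPool {
    fn new(capacity: usize, buffer_size: usize) -> Self {
        Self {
            buffers: ArrayQueue::new(capacity),
            buffer_size,
        }
    }

    fn pull(&self) -> Vec<u8> {
        // Allocate a fresh buffer if the pool is currently empty.
        self.buffers
            .pop()
            .unwrap_or_else(|| vec![0; self.buffer_size])
    }

    fn put_back(&self, buf: Vec<u8>) {
        // If the pool is full, simply drop the buffer.
        let _ = self.buffers.push(buf);
    }
}
```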
I wasn't able to directly measure a performance gain in terms of
throughput. What we can measure, though, is how much time we spend
dealing with our buffer pool vs everything else. If we compare the
`perf` outputs, each recorded during an `iperf` run, we can see that we
spend about 60% less time dealing with the buffer pool than we did
before.
|Before|After|
|---|---|
|<img width="1982" height="553" alt="Screenshot From 2025-07-24
20-27-50"
src="https://github.com/user-attachments/assets/1698f28b-5821-456f-95fa-d6f85d901920"
/>|<img width="1982" height="553" alt="Screenshot From 2025-07-24
20-27-53"
src="https://github.com/user-attachments/assets/4f26a2d1-03e3-4c0d-84da-82c53b9761dd"
/>|
The number in the thousands on the left is how often the respective
function was the currently executing function during the profiling run.
Resolves: #9972
Presently, for each UDP packet that we process in `snownet`, we check if
we have already seen this local address of ours and, if not, add it to
our list of host candidates. This is a safe way of ensuring that we
consider all addresses we receive data on as ones that we tell our peers
they should try and contact us on.
Performance profiling has shown that hashing the socket address of each
packet that is coming in is quite wasteful. We spend about 4-5% of our
main thread time doing this. For comparison, decrypting packets is only
about 30%.
Most of the time, we will already know about this address and therefore
spending all this CPU time is completely pointless. At the same time
though, we need to be sure that we discover our local address correctly.
Inspired by STUN, we therefore move this responsibility to the
`allocation` module. The `allocation` module is responsible for
interacting with our TURN servers and will yield server-reflexive and
relay candidates as a result. It also knows what local address it
received traffic on, so we simply extend it to yield host candidates in
addition to the server-reflexive and relay ones.
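Conceptually (with simplified, made-up types), the allocation now does
something like this:

```rust
use std::net::SocketAddr;

/// Sketch: the local address is learned once, from TURN traffic, instead
/// of being hashed for every single incoming packet.
struct Allocation {
    local_address: Option<SocketAddr>,
    new_candidates: Vec<SocketAddr>, // host candidates to signal to peers
}

impl Allocation {
    fn handle_input(&mut self, local: SocketAddr, _packet: &[u8]) {
        if self.local_address.is_none() {
            self.local_address = Some(local);
            self.new_candidates.push(local); // yield a host candidate once
        }

        // ... continue processing the TURN message
    }
}
```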
On my local machine, this bumps us across the 3.5 Gbits/sec mark:
```
Connecting to host 172.20.0.110, port 5201
[ 5] local 100.93.174.92 port 57890 connected to 172.20.0.110 port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 319 MBytes 2.67 Gbits/sec 18 548 KBytes
[ 5] 1.00-2.00 sec 413 MBytes 3.46 Gbits/sec 4 884 KBytes
[ 5] 2.00-3.00 sec 417 MBytes 3.50 Gbits/sec 4 1.10 MBytes
[ 5] 3.00-4.00 sec 425 MBytes 3.56 Gbits/sec 415 785 KBytes
[ 5] 4.00-5.00 sec 430 MBytes 3.60 Gbits/sec 154 820 KBytes
[ 5] 5.00-6.00 sec 434 MBytes 3.64 Gbits/sec 251 793 KBytes
[ 5] 6.00-7.00 sec 436 MBytes 3.66 Gbits/sec 123 811 KBytes
[ 5] 7.00-8.00 sec 435 MBytes 3.65 Gbits/sec 2 788 KBytes
[ 5] 8.00-9.00 sec 423 MBytes 3.55 Gbits/sec 0 1.06 MBytes
[ 5] 9.00-10.00 sec 433 MBytes 3.63 Gbits/sec 8 1017 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-20.00 sec 8.21 GBytes 3.53 Gbits/sec 1728 sender
[ 5] 0.00-20.00 sec 8.21 GBytes 3.53 Gbits/sec receiver
iperf Done.
```
These are otherwise hit pretty often in the hot path and slow down
packet routing because tracing needs to evaluate whether it should log
the statement.
Despite still being in development, the `tauri-specta` project already
proves to be quite useful. It allows us to generate TypeScript bindings
for our commands and events, creating a type-safe contract between the
frontend and the backend.
For example, this ensures that the TypeScript code actually calls a
command with the required parameters and thus avoids runtime failures.
Similarly, the frontend can listen for type-safe events without having
to use any magic strings.
The full `account` struct is only used to render the client's interface,
and doesn't need to be stored in the `client` struct when the `subject`
struct already tracks it.
Oban includes its own configuration validation, which seems to prevent
`runtime.exs` from overriding any compile-time options. This prevents us
from using ENV vars to configure it, such as restricting job execution
to `domain` nodes by setting `queues: []`. To fix that, we make sure to
set Oban configuration in env-specific files `config/dev.exs` and
`config/test.exs`, and at runtime for prod with `config/runtime.exs`.
Fixes #10016
Whilst entering and leaving a span for every packet is very expensive,
doing the same whenever we make timeout-related changes is just fine.
Thus, we re-introduce a span removed in #9949, but only for the
`handle_timeout` function.
This gives us the context of the connection ID for not just our own
logs, but also the ones from `boringtun`.
Unfortunately, this seems to be a race condition with reading or setting
the PATH properly for this exec block. I've verified `cargo` is in the
PATH, and have tried this on a fresh Mac with Android Studio (latest
release version).
Spawning cargo from `sh -c` fixes the issue.
In 45466e3b78, the `networkSettings`
variable was no longer saved on the `adapter` instance, causing all
calls of the iOS-specific version of getting system resolvers to return
the connlib sentinels after the tunnel first came up.
This PR fixes that logic bug and also cleans this area of the codebase
up just a tiny bit so it's easier to follow.
Lastly, we also fix a bug where if the tunnel came up while Firezone was
already running, `networkSettings` would be `nil`, and we would read the
default system resolvers, which were the connlib sentinels.
Fixes https://github.com/firezone/firezone/issues/10017
On iOS, we were using the tempfile directory to stage the log export,
and then moving this into place from the share sheet presented to the
user.
For some reason, this has stopped working in iOS 18.5.0, and we need to
stage the file in the standard documents directory instead.
Fixes #10014
By chance, I've discovered in a CI failure that we won't be able to
handshake a new session if the `preshared_key` changes. This makes a lot
of sense. The `preshared_key` needs to be the same on both ends as it is
a shared secret that gets mixed into the Noise handshake.
In the following sequence of events, we would thus previously run into a
"failed to decrypt handshake packet" scenario:
1. Client requests a connection.
2. Gateway authorizes the connection.
3. Portal restarts / gets deployed. To my knowledge, this rotates the
`preshared_key` to a new secret. Restarting the portal also cuts all
WebSockets and therefore, the Gateway's response never arrives.
4. Client reconnects to the WebSocket, requests a new connection.
5. Gateway reuses the local connection but this connection still uses
the old `preshared_key`!
6. Client needs to wait for the Gateway's ICE timeout before it can
establish a new connection.
How exactly (3) happens doesn't matter. There are probably other
conditions as to where the WebSocket connections get cut and we cannot
complete our connection handshake.
Time-based policy conditions are tricky. When they authorize a flow, we
correctly tell the Gateway to remove access when the time window
expires.
However, we do nothing on the client to reset the connectivity state.
This means that whenever the window of time granting access was
re-entered, the client would essentially never be able to connect to the
resource again until it was toggled.
To fix this, we add a 1-minute check in the client channel that
re-checks allowed resources, and updates the client state with the
difference. This means that policies that have time-based conditions are
only accurate to the minute, but this is how they're presented anyhow.
For good measure, we also add a periodic job that runs every minute to
delete expired Flows. This propagates to the Gateway, which will receive
`reject_access` if the access for a particular client-resource pair is
determined to be actually gone.
Zooming out a bit, this PR furthers the theme that:
- Client channels react to underlying resource / policy / membership
changes directly, while
- Gateway channels react primarily to flows being deleted, or the
downstream effects of a prior client authorization
Whenever a client requests a connection to a gateway, we need to
generate a preshared key that will be used for the underlying WireGuard
tunnel.
When the connection setup broke or otherwise was lost _after_ the
gateway received the `authorize_flow` call, but _before_ the client
could receive the response (and initiate a tunnel), we would have to
wait until an ICE timeout occurred in order to reset state on the
gateway.
This is because the psk was not used to determine if this was a _new_
flow authorization. So the old authorization would be matched, and the
client would never be able to connect, since its tunnel was using the
new psk and the gateway the old one.
To fix this, we generate a secure random 32-byte `psk_base` on each
client and gateway. When a client wishes to connect to a gateway, we
compute the WireGuard preshared key as an HMAC over these two inputs.
This fixes the issue by ensuring that subsequent flow authorization
requests from a particular client to a particular gateway will yield the
same psk.
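A sketch of the derivation using the `hmac` and `sha2` crates (the exact
inputs and their ordering are assumptions here, not the production
scheme):

```rust
use hmac::{Hmac, Mac};
use sha2::Sha256;

/// Illustrative sketch: derive a deterministic WireGuard preshared key
/// from the two stable 32-byte `psk_base` values, so repeated flow
/// authorizations for the same client-gateway pair yield the same psk.
fn derive_psk(client_psk_base: &[u8; 32], gateway_psk_base: &[u8; 32]) -> [u8; 32] {
    let mut mac = Hmac::<Sha256>::new_from_slice(client_psk_base)
        .expect("HMAC accepts keys of any length");
    mac.update(gateway_psk_base);

    let mut psk = [0u8; 32];
    psk.copy_from_slice(&mac.finalize().into_bytes());
    psk
}
```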
Related: #9999
Related: https://github.com/firezone/infra/issues/99
Previously, our idle timer was only driven by incoming and outgoing
packets. To detect whether the tunnel is idle, we checked whether either
the last incoming or last outgoing packet was more than 20s ago.
For one, having two timestamps here is unnecessarily complex. We can
simply combine them into a single timestamp and always update it as
`last_activity`.
Two, we have recently started to take into account not only packets but
also other changes to the tunnel, such as an upsert of the connection or
adding a new candidate. What we failed to do, though, was update these
timestamps, because their variable names were related to packets and not
to activity in general.
The problem with not updating these timestamps however is that we will
very quickly move out of "connected" back to "idle" because the old
timestamps are still more than 20s ago. Hence, the previous fixes of
moving out of idle on new candidates and connection upsert were
ineffective.
By combining and renaming the timestamps, it is now much more obvious
that we need to update this timestamp in the respective handler
functions which then grants us another 20s of non-idling. This is
important for e.g. connection upserts to ensure the Gateway runs into an
ICE timeout within a short amount of time, should there be something
wrong with the connection that the Client just upserted.
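The combined timestamp makes the idle check trivial (sketch with
illustrative names):

```rust
use std::time::{Duration, Instant};

const IDLE_TIMEOUT: Duration = Duration::from_secs(20);

/// Sketch: packets, connection upserts and new candidates all bump the
/// same timestamp, so each grants another 20s before we go back to idle.
struct Connection {
    last_activity: Instant,
}

impl Connection {
    fn on_activity(&mut self, now: Instant) {
        self.last_activity = now;
    }

    fn is_idle(&self, now: Instant) -> bool {
        now.duration_since(self.last_activity) > IDLE_TIMEOUT
    }
}
```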
As profiling shows, even if the log target isn't enabled, simply
checking whether or not it is enabled is a significant performance hit.
By guarding these behind `debug_assertions`, I was able to almost
achieve 3.75 Gbits/s locally (when rebased onto #9998). Obviously, this
doesn't quite translate into real-world improvements but it is
nonetheless a welcome improvement.
```
Connecting to host 172.20.0.110, port 5201
[ 5] local 100.93.174.92 port 34678 connected to 172.20.0.110 port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 401 MBytes 3.37 Gbits/sec 14 644 KBytes
[ 5] 1.00-2.00 sec 448 MBytes 3.76 Gbits/sec 3 976 KBytes
[ 5] 2.00-3.00 sec 453 MBytes 3.80 Gbits/sec 43 979 KBytes
[ 5] 3.00-4.00 sec 449 MBytes 3.77 Gbits/sec 21 911 KBytes
[ 5] 4.00-5.00 sec 452 MBytes 3.79 Gbits/sec 4 1.15 MBytes
[ 5] 5.00-6.00 sec 451 MBytes 3.78 Gbits/sec 81 1.01 MBytes
[ 5] 6.00-7.00 sec 445 MBytes 3.73 Gbits/sec 39 705 KBytes
[ 5] 7.00-8.00 sec 436 MBytes 3.66 Gbits/sec 3 1016 KBytes
[ 5] 8.00-9.00 sec 460 MBytes 3.85 Gbits/sec 1 956 KBytes
[ 5] 9.00-10.00 sec 453 MBytes 3.80 Gbits/sec 0 1.19 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 4.34 GBytes 3.73 Gbits/sec 209 sender
[ 5] 0.00-10.00 sec 4.34 GBytes 3.73 Gbits/sec receiver
```
I didn't want to remove the `wire` logs entirely because they are quite
useful for debugging. However, they are also exactly this: A debugging
tool. In a production build, we are very unlikely to turn these on which
makes `debug_assertions` a good tool for keeping these around without
interfering with performance.
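Sketch of the pattern (target name and surrounding function are
illustrative):

```rust
fn send_packet(
    socket: &std::net::UdpSocket,
    dst: std::net::SocketAddr,
    payload: &[u8],
) -> std::io::Result<()> {
    // Compiled out entirely in release builds, so we don't even pay for
    // tracing's "is this target enabled?" check on the hot path.
    #[cfg(debug_assertions)]
    tracing::trace!(target: "wire", ?dst, len = payload.len(), "Sending packet");

    socket.send_to(payload, dst)?;

    Ok(())
}
```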
Fixes an edge case where a WiFi interface could go offline, then come
back online with the same connectivity, preventing the path update
handler from resetting connlib state.
This would cause an issue especially if the WiFi was disabled for more
than 30 seconds / 2 minutes.
Bumps
[@types/node](https://github.com/DefinitelyTyped/DefinitelyTyped/tree/HEAD/types/node)
from 22.15.30 to 24.0.15.
<details>
<summary>Commits</summary>
<ul>
<li>See full diff in <a
href="https://github.com/DefinitelyTyped/DefinitelyTyped/commits/HEAD/types/node">compare
view</a></li>
</ul>
</details>
Our `ThreadedUdpSocket` uses a background thread for the actual socket
operations. It merely represents a handle to send and receive from these
sockets, not the socket itself. Dropping the handle will shut down the
background thread, but that is an asynchronous operation.
In order to be sure that we can rebind the same port, we need to wait
for the background thread to stop.
We thus add a `Drop` implementation for the `ThreadedUdpSocket` that
waits for its background thread to disappear before it continues.
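Sketch of the `Drop` implementation (fields simplified):

```rust
use std::thread::JoinHandle;

struct ThreadedUdpSocket {
    thread: Option<JoinHandle<()>>,
    // ... channels for exchanging datagrams with the thread
}

impl Drop for ThreadedUdpSocket {
    fn drop(&mut self) {
        // Signalling shutdown is elided here; joining guarantees that the
        // OS socket is closed before `drop` returns, so the same port can
        // immediately be rebound.
        if let Some(thread) = self.thread.take() {
            let _ = thread.join();
        }
    }
}
```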
Resolves: #9992
The `flows` table tracks authorizations we've made for a resource and
persists them, so that we can determine which authorizations are still
valid across deploys or hiccups in the control plane connections.
Before, when the "in-use" authorization for a resource was deleted, we
would have flapped the resource in the client, and sent `reject_access`
to the gateway. However, that would cause issues in the following edge
case:
- Client is currently connected to Resource A through Policy A
- Client websocket goes down
- Policy B is created for Resource A (for another actor group), and
Policy A is deleted by admin
- Client reconnects
- Client sees that its resource list is the same
- Gateway has since received `reject_access` because no new flows were
created for this client-resource combination
To prevent this from happening, we now try to "reauthorize" the flow
whenever the last cached flow is removed for a particular
client-resource pair. This avoids needing to toggle the resource on the
client since we won't have sent `reject_access` to the gateway.
Bumps [mixpanel-browser](https://github.com/mixpanel/mixpanel-js) and
[@types/mixpanel-browser](https://github.com/DefinitelyTyped/DefinitelyTyped/tree/HEAD/types/mixpanel-browser).
These dependencies needed to be updated together.
Updates `mixpanel-browser` from 2.65.0 to 2.67.0
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/mixpanel/mixpanel-js/releases">mixpanel-browser's
releases</a>.</em></p>
<blockquote>
<h2>Fixes and minor updates</h2>
<ul>
<li><code>get_api_host()</code> is now used consistently across the SDK
to ensure that per-endpoint API host configs are respected
everywhere</li>
<li>A fix is included for the ordering of (asynchronous) operations when
calling <code>mixpanel.reset()</code> while a session recording is
active</li>
<li>Default Feature Flag context now includes <code>device_id</code>
alongside <code>distinct_id</code></li>
<li><code>$experiment_started</code> events now include several
API-latency-tracking properties</li>
</ul>
<h2>Fine-grained API host configuration and session recording fixes</h2>
<p>A new <code>api_hosts</code> configuration option enables different
endpoints (events, profiles, groups, session recordings) to be sent to
different hosts, for selective proxying, e.g.:</p>
<pre lang="js"><code>mixpanel.init('<TOKEN>', {
api_hosts: {
// proxy only session-recording requests, and leave the rest on the
default host api-js.mixpanel.com
'record': 'https://my-proxy.com',
},
});
</code></pre>
<p>This release also fixes a race condition when calling
<code>mixpanel.reset()</code> while a session recording is active, and
adds an initial TypeScript <code>types.d.ts</code> file.</p>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/mixpanel/mixpanel-js/blob/master/CHANGELOG.md">mixpanel-browser's
changelog</a>.</em></p>
<blockquote>
<p><strong>2.67.0</strong> (17 Jul 2025)</p>
<ul>
<li>Use <code>get_api_host()</code> consistently across the SDK</li>
<li>Include <code>device_id</code> in default Feature Flag context</li>
<li>Track latency props in <code>$experiment_started</code> event</li>
<li>Fix async behavior in <code>mixpanel.reset()</code> when a session
recording is active</li>
<li>Fix recorder integration test race conditions</li>
</ul>
<p><strong>2.66.0</strong> (8 Jul 2025)</p>
<ul>
<li>Add <code>api_host</code> configuration option to support different
hosts/proxies for different endpoints (thanks <a
href="https://github.com/chrisknu"><code>@chrisknu</code></a>)</li>
<li>Add types.d.ts from existing public repo</li>
<li>Fix race condition when calling <code>mixpanel.reset()</code> while
a session recording is active</li>
</ul>
<p><strong>2.65.0</strong> (20 May 2025)</p>
<ul>
<li><code>mixpanel.people.track_charge()</code> (deprecated) no longer
sets profile property</li>
<li>Adds page height and width tracking to autocapture click
tracking</li>
<li>Session recording now stops when mixpanel.reset() is called</li>
<li>Support for adding arbitrary query string params to tracking
requests (thanks <a
href="https://github.com/dylan-asos"><code>@dylan-asos</code></a>)</li>
<li>Feature flagging API revisions</li>
<li>Whale Browser detection</li>
</ul>
<p><strong>2.64.0</strong> (15 Apr 2025)</p>
<ul>
<li>Add <code>record_heatmap_data</code> init option for Session
Recording to ensure click events are captured for Heat Maps</li>
<li>Initial support for feature flagging</li>
</ul>
<p><strong>2.63.0</strong> (1 Apr 2025)</p>
<ul>
<li>Update rrweb to latest alpha version</li>
<li>Refactor SDK build process to rely mainly on Rollup</li>
</ul>
<p><strong>2.62.0</strong> (26 Mar 2025)</p>
<ul>
<li>Replace UUID generator with UUIDv4 (using native API when
available)</li>
<li>Consistently use native JSON serialization when available</li>
<li>Fix for session recording idle timeout race condition</li>
</ul>
<p><strong>2.61.2</strong> (14 Mar 2025)</p>
<ul>
<li>Revert 10ms throttle on enqueueing events to improve tracking
reliability on page unload</li>
</ul>
<p><strong>2.61.1</strong> (11 Mar 2025)</p>
<ul>
<li>Session recording stops if initial DOM snapshot fails</li>
<li>Errors triggered by rrweb's record function are now caught</li>
<li>Fix for issue causing opt-out check error messages in
<code>debug</code> mode</li>
</ul>
<p><strong>2.61.0</strong> (6 Mar 2025)</p>
<ul>
<li>Session recordings now continue across page loads within the same
tab, using IndexedDB for persistence</li>
</ul>
<p><strong>2.60.0</strong> (31 Jan 2025)</p>
<ul>
<li>Expanded Autocapture configs</li>
<li>Prevent duplicate values in persistence when using people.union
(thanks <a
href="https://github.com/chrisdeely"><code>@chrisdeely</code></a>)</li>
</ul>
<p><strong>2.59.0</strong> (21 Jan 2025)</p>
<ul>
<li>Initial Autocapture support</li>
</ul>
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="ec87a0ab39"><code>ec87a0a</code></a>
2.67.0</li>
<li><a
href="952dea91ab"><code>952dea9</code></a>
changelog for 2.67.0</li>
<li><a
href="0ea33fd603"><code>0ea33fd</code></a>
Merge branch '2.67.0-rc'</li>
<li><a
href="e51b679cdd"><code>e51b679</code></a>
Update conventions, ignore is no longer used</li>
<li><a
href="742fabb992"><code>742fabb</code></a>
Create dependabot yml from template, set ignore rule from example
directory (...</li>
<li><a
href="6cdecd6ad0"><code>6cdecd6</code></a>
Push to 2.67</li>
<li><a
href="2033e9eb48"><code>2033e9e</code></a>
Dist files</li>
<li><a
href="b1c6a3f796"><code>b1c6a3f</code></a>
send in variant fetch start/complete as date strings</li>
<li><a
href="ba23ab0404"><code>ba23ab0</code></a>
Merge pull request <a
href="https://redirect.github.com/mixpanel/mixpanel-js/issues/281">#281</a>
from mixpanel/jg-final-flush</li>
<li><a
href="591fa5050f"><code>591fa50</code></a>
test fix</li>
<li>Additional commits viewable in <a
href="https://github.com/mixpanel/mixpanel-js/compare/v2.65.0...v2.67.0">compare
view</a></li>
</ul>
</details>
<br />
Updates `@types/mixpanel-browser` from 2.60.0 to 2.66.0
<details>
<summary>Commits</summary>
<ul>
<li>See full diff in <a
href="https://github.com/DefinitelyTyped/DefinitelyTyped/commits/HEAD/types/mixpanel-browser">compare
view</a></li>
</ul>
</details>