When a Gateway or Client is running in an environment without IPv4 or
IPv6 connectivity, our initial probes for sending packets to the relays
will fail with "network unreachable". That isn't a big concern and
happens a lot in the wild. There is no need to report these failures as
telemetry events.
Resolves: #7514.
For persistent applications like the IPC service, it is possible that
telemetry gets initialised with different parameters depending on what
the user logs in with. Currently, only the first initialisation is
persisted and all subsequent ones are ignored, leading to events that
may be tagged with the wrong user / environment.
To fix this, we only skip the init if we are still in the same
environment. Otherwise, we close the previous session and initialise a
new one.
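A minimal sketch of that guard, with `Env` and `Telemetry` as
illustrative stand-ins for the actual types:
```rust
#[derive(PartialEq)]
struct Env {
    api_url: String,
    account_slug: Option<String>,
}

struct Telemetry {
    current: Option<Env>,
}

impl Telemetry {
    fn start(&mut self, new: Env) {
        // Skip re-initialisation only if nothing changed.
        if self.current.as_ref() == Some(&new) {
            return;
        }

        // Otherwise, close the previous session before starting a new one.
        if self.current.take().is_some() {
            self.stop();
        }

        self.current = Some(new);
        // ... initialise the telemetry backend here ...
    }

    fn stop(&mut self) {
        // ... flush and close the previous session ...
    }
}
```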
Fixes: #7525.
In order to achieve concurrency within `connlib`, we needed to create a
way for IP packets to own the piece of memory they are sitting in. This
allows us to concurrently read IP packets and then batch-process them
(as opposed to having a dedicated buffer and referencing it). At the moment,
those IP packets are defined on the stack. With a size of ~1300 bytes
that isn't very large but still causes _some_ amount of copying.
We can avoid this copying by relying on a buffer pool:
1. When reading a new IP packet, we request a new buffer from the pool.
2. When the IP packet gets dropped, the buffer gets returned to the
pool.
This allows us to reuse an allocation for a packet once it has finished
processing, resulting in less CPU time spent on copying memory around.
This causes us to make more _individual_ heap allocations in the
beginning: each packet being processed by `connlib` is allocated on the
heap somewhere. At some point during the lifetime of the tunnel, this
settles into a steady state where we have allocated enough slots to
cover new packets whilst also reusing memory from packets that have
already finished processing.
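A minimal sketch of the pool idea (the names are illustrative, not
`connlib`'s actual API): a packet owns its buffer plus a handle to the
pool, and `Drop` returns the memory:
```rust
use std::sync::{Arc, Mutex};

struct BufferPool {
    free: Mutex<Vec<Vec<u8>>>,
}

impl BufferPool {
    fn get(pool: &Arc<BufferPool>, size: usize) -> PooledBuffer {
        // Reuse a previously returned buffer if one is available,
        // otherwise make a fresh allocation.
        let buf = pool
            .free
            .lock()
            .unwrap()
            .pop()
            .unwrap_or_else(|| vec![0; size]);

        PooledBuffer { buf, pool: Arc::clone(pool) }
    }
}

// The packet owns its memory; dropping it returns the buffer to the pool.
struct PooledBuffer {
    buf: Vec<u8>,
    pool: Arc<BufferPool>,
}

impl Drop for PooledBuffer {
    fn drop(&mut self) {
        let buf = std::mem::take(&mut self.buf);
        self.pool.free.lock().unwrap().push(buf);
    }
}
```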
The actual `IpPacket` data type is now just a pointer. As a result, the
channels to and from the TUN thread (where we were holding multiple of
these packets) are now significantly smaller, leading to roughly the
same memory usage overall.
In my local testing on Linux, the client still only uses about ~15MB of
RAM even with multiple concurrent speedtests running.
Similar to #7497, when we receive a `ConnectResult`, we can simply
silently bail out of the function without changing our state, instead
of printing a loud warning.
Normally, there should always be exactly one pending flow per resource.
It appears, though, that we sometimes request a flow for a resource but,
by the time it is authorised, have already cleared its local state.
Regardless, this isn't a concerning error and not worth logging on WARN
(which happens one layer up).
Windows appears to randomly fail to update the tray menu. There is
nothing we can do about that. Hence, we downgrade these errors to debug
and make the functions infallible, reducing the complexity for the
caller.
There is nothing we can do if the user doesn't have any DNS servers
defined. The default log level is INFO so a user reading the logs will
still come across this message in case they are trying to debug what is
happening.
Long term, problems like these would probably warrant some kind of
notification channel from `connlib` to the GUI where we can display
messages to the user.
There are several reasons why we can disconnect from a relay at runtime:
- STUN is blocked
- We have invalid credentials
- The TURN server does not conform to the protocol
The first two are very much possible in production and there is nothing
we can do about them. When relays reboot, their credentials change and
if the Internet connection of a user / gateway gets cut, we may
disconnect from the relay because the messages get lost.
The last one should never happen if we are connected to our own relays.
Firezone can be self-hosted so ultimately, we don't have control over
what we are talking to. That error, however, is more of a safeguard
that makes `connlib` disconnect from the server as soon as it detects
that the server is misbehaving.
None of these reasons are worth reporting to Sentry as a problem because
they aren't really fixable as such. It is more important that users see
them in the logs if they decide to dig in, which they still can at the
INFO level.
The communication between the GUI client, the IPC service and `connlib`
is asynchronous. As such, the state machines may get out of sync.
Receiving a `TunnelReady` despite not being in the right state for it is
no cause for concern and can be handled gracefully.
In most cases, the caller of this function already handles a failure
gracefully by logging it. From Sentry alerts, we can see that if this
fails, there isn't much we can do about it and most likely, the next
refresh will work again (this has only happened a single time).
Logging this on `debug` is good enough: we can still see it when
reproducing an issue, and if something really bad happens, it will show
up in the breadcrumbs of another Sentry event.
When a client disconnects, we clean up the connection on the gateway.
There might still be packets arriving from resources that we then cannot
route. This isn't worth returning an error.
We are already handling one case where we are trying to remove a route
that doesn't exist. `ESRCH` is another variant of this error that
manifests as "No such process". According to the Internet, this just
means the route doesn't exist so we can bail out early here.
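A sketch of that early return, assuming a `libc`-style error-code check:
```rust
// ESRCH ("No such process") here means the route does not exist,
// which is exactly the state we wanted to reach anyway.
fn handle_route_deletion_error(e: std::io::Error) -> std::io::Result<()> {
    if e.raw_os_error() == Some(libc::ESRCH) {
        return Ok(());
    }

    Err(e)
}
```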
Currently, the Gateway logs all errors that make the event loop exit on
ERROR level. This creates Sentry alerts for things like "Unauthorized"
errors or "404 Not Found".
That isn't useful to us. To mitigate this, we polish the code a bit to
only log an ERROR when we actually fail to set something up during
startup (like the TUN device). In all other cases, we now log a more
user-friendly message on INFO but still exit with the appropriate exit
code (0 on CTRL+C, 1 on any other error).
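A sketch of the described exit behaviour; `run_event_loop` here is an
illustrative stand-in for the Gateway's actual event loop:
```rust
use std::process::ExitCode;

// Stand-in: returns `Ok(())` on CTRL+C and `Err(_)` on any other error.
fn run_event_loop() -> Result<(), String> {
    Ok(())
}

fn main() -> ExitCode {
    match run_event_loop() {
        // CTRL+C is a normal shutdown: exit code 0.
        Ok(()) => ExitCode::SUCCESS,
        // Anything else: a user-friendly INFO message and exit code 1,
        // without an ERROR that would trigger a Sentry alert.
        Err(e) => {
            tracing::info!("Gateway exited: {e}");
            ExitCode::FAILURE
        }
    }
}
```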
Attempting to refresh an allocation is the only idempotent way in TURN
to test whether one has an active allocation. As such, logging this on
WARN is too aggressive.
Resolves: #7481.
At present, `connlib` will always drop all IP packets until a connection
is established and the DNS resource NAT is created. This causes an
unnecessary delay until the connection is working because we need to
wait for retransmission timers of the host's network stack to resend
those packets.
With the new idempotent control protocol, it is now much easier to
buffer these packets and send them to the gateway once the connection is
established.
The buffer sizes are chosen somewhat conservatively to ensure we don't
consume a lot of memory. The hypothesis here is that every protocol -
even if the transport layer is unreliable like UDP - will start with a
handshake involving only one or at most a few packets and waiting for a
reply before sending more. Thus, as long as we can set up a connection
quicker than the re-transmit timer in the host's network stack,
buffering those packets should result in no packet loss. Typically,
setting up a new connection takes at most 500ms which should be fast
enough to not trigger any re-transmits.
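A sketch of such a bounded buffer; the capacity and the names are
assumptions, not `connlib`'s actual values:
```rust
use std::collections::VecDeque;

const MAX_BUFFERED_PACKETS: usize = 128;

struct PendingConnection {
    buffered: VecDeque<Vec<u8>>,
}

impl PendingConnection {
    fn buffer(&mut self, packet: Vec<u8>) {
        // Drop the oldest packet instead of growing without bound;
        // the host's network stack will re-transmit it if it mattered.
        if self.buffered.len() == MAX_BUFFERED_PACKETS {
            self.buffered.pop_front();
        }
        self.buffered.push_back(packet);
    }

    fn flush(&mut self, send: impl Fn(Vec<u8>)) {
        // Once the connection is established, replay everything in order.
        for packet in self.buffered.drain(..) {
            send(packet);
        }
    }
}
```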
Resolves: #3246.
On the one hand, learning about the edge cases in which our software
fails is useful, so having telemetry active for self-hosted users is
beneficial. On the other hand, we have neither control over nor a
contact for those self-hosted users, and whatever they are doing might
spam our Sentry account with errors that we can't do anything about.
To mitigate this, we disable telemetry for self-hosted users with the
next release.
Once we have more resources, we can consider enabling this again.
As per the RFC, the IPv6 traffic class should be translated 1-to-1 to
the IPv4 DSCP value. However, not every traffic class value is a valid
DSCP: the traffic class is a full octet, whereas the DSCP field is only
six bits wide (maximum 63). In particular, when attempting to reach
GitHub over IPv6, we receive an IPv6 packet with a traffic class value
of 72, which is out of range for the IPv4 DSCP value and results in the
following error on the Gateway:
```
Failed to translate packet: NAT64 failed: Error '72' is too big to be a 'IPv4 DSCP (Differentiated Services Code Point)' (maximum allowed value is '63')
```
The bigger issue is that this causes the ICMP packets returned to the
client to be dropped, which means that the `ssh` process spawned by
`git` never learns that the IPv6 address assigned by Firezone is not
actually routable.
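The traffic class octet shares its layout with the IPv4 TOS octet: the
upper six bits are the DSCP, the lower two the ECN field. A sketch of a
translation that respects that split (not necessarily the exact fix
that shipped):
```rust
// Splitting the 8-bit traffic class into its DSCP and ECN parts.
// A traffic class of 72 (0b0100_1000) yields DSCP 18 and ECN 0,
// instead of an out-of-range DSCP of 72.
fn split_traffic_class(tc: u8) -> (u8, u8) {
    let dscp = tc >> 2; // 6-bit DSCP, always <= 63
    let ecn = tc & 0b11; // 2-bit ECN
    (dscp, ecn)
}
```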
Related: #7476.
In order for Sentry to parse our releases as semver, they need to be in
the form of `package@version` [0]. Without this, the feature of "Mark
this issue as resolved in the _next_ version" doesn't work properly
because Sentry compares versions by when it first saw them instead of
parsing the semver string itself. We test versions prior to releasing
them, meaning Sentry learns about a 1.4.0 version before it is actually
released. This causes false-positive "regressions" even though the
issues are fixed in a later (as per semver) release.
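With the `sentry` Rust crate, this boils down to a prefixed release
string (the package name here is illustrative):
```rust
fn init_telemetry() -> sentry::ClientInitGuard {
    sentry::init(sentry::ClientOptions {
        // The `package@` prefix is what makes Sentry parse this as semver.
        release: Some("gateway@1.4.0".into()),
        ..Default::default()
    })
}
```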
This creates some redundancy with the different DSNs that we are already
using. I think it would make sense to consider merging the two projects
we have for the GUI client for example. That is really just one project
that happens to run as two binaries.
For all other projects, I think the separation still makes sense because
we e.g. may add Sentry to the "host" applications of Android and
macOS/iOS as well. For those, we would reuse the DSN and thus funnel the
issues into the same Sentry project.
As per Sentry's docs, releases are organisation-wide and therefore need
a package identifier to be grouped correctly.
[0]:
https://docs.sentry.io/platforms/javascript/configuration/releases/#bind-the-version
From within the FFI code, we have no control over the lifecycle of the
host application and `connect` may be called multiple times from within
the same process. Therefore, we cannot rely on the global logger state
to **not** be set when `connect` gets called.
To fix this, we cache the handles for the file logger and a
reload-handle for the log filter in a `static` variable. This allows us
to apply the new log-filter of a repeated `connect` call to the existing
logger, even if `connect` is called multiple times from the same
process.
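A minimal sketch of that caching, assuming `tracing-subscriber`'s reload
machinery (the exact handle type in the real code may differ):
```rust
use std::sync::OnceLock;

use tracing_subscriber::{reload, EnvFilter, Registry};

// Cached across repeated `connect` calls within the same process.
static FILTER_HANDLE: OnceLock<reload::Handle<EnvFilter, Registry>> = OnceLock::new();

fn apply_log_filter(directives: &str) -> Result<(), Box<dyn std::error::Error>> {
    let handle = FILTER_HANDLE.get().ok_or("logger not initialised")?;

    // Swap the filter on the already-installed logger instead of
    // attempting to install a second global logger.
    handle.reload(EnvFilter::try_new(directives)?)?;

    Ok(())
}
```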
## Context
At present, we only have a single thread that reads and writes to the
TUN device on all platforms. On Linux, it is possible to open the file
descriptor of a TUN device multiple times by setting the
`IFF_MULTI_QUEUE` option using `ioctl`. Using multi-queue, we can then
spawn multiple threads that concurrently read and write to the TUN
device. This is critical for achieving a better throughput.
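For illustration, opening one such queue with the `libc` crate looks
roughly like this (error handling trimmed; each call yields an
independent fd for the same device):
```rust
use std::{fs::OpenOptions, os::fd::AsRawFd};

fn open_queue(name: &str) -> std::io::Result<std::fs::File> {
    let file = OpenOptions::new()
        .read(true)
        .write(true)
        .open("/dev/net/tun")?;

    let mut ifr: libc::ifreq = unsafe { std::mem::zeroed() };
    for (dst, src) in ifr.ifr_name.iter_mut().zip(name.bytes()) {
        *dst = src as libc::c_char;
    }
    ifr.ifr_ifru.ifru_flags =
        (libc::IFF_TUN | libc::IFF_NO_PI | libc::IFF_MULTI_QUEUE) as libc::c_short;

    // SAFETY: `ifr` is a valid `ifreq` and `TUNSETIFF` expects exactly that.
    if unsafe { libc::ioctl(file.as_raw_fd(), libc::TUNSETIFF, &ifr) } < 0 {
        return Err(std::io::Error::last_os_error());
    }

    Ok(file)
}
```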
## Solution
`IFF_MULTI_QUEUE` is Linux-only and therefore applies only to the
headless client, the GUI client on Linux and the Gateway (it may also be
possible on Android; I haven't tried). As such, we first need to change
our internal abstractions a bit to move the creation of the TUN thread
to the `Tun` abstraction itself. For this, we change the interface of
`Tun` to the following:
- `poll_recv_many`: An API inspired by tokio's `mpsc::Receiver`, where
multiple items in a channel can be batch-received.
- `poll_send_ready`: Mimics the API of `Sink` to check whether more
items can be written.
- `send`: Mimics the API of `Sink` to actually send an item.
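In code, the new interface looks roughly like this (`IpPacket` stands in
for the real packet type):
```rust
use std::task::{Context, Poll};

struct IpPacket; // placeholder for the real packet type

trait Tun: Send {
    /// Batch-receive up to `max` packets, in the style of tokio's
    /// `mpsc::Receiver::poll_recv_many`.
    fn poll_recv_many(
        &mut self,
        cx: &mut Context<'_>,
        buf: &mut Vec<IpPacket>,
        max: usize,
    ) -> Poll<usize>;

    /// `Sink`-style readiness check: can we accept another packet?
    fn poll_send_ready(&mut self, cx: &mut Context<'_>) -> Poll<std::io::Result<()>>;

    /// `Sink`-style send; to be called once `poll_send_ready` returned
    /// `Poll::Ready(Ok(()))`.
    fn send(&mut self, packet: IpPacket) -> std::io::Result<()>;
}
```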
With these APIs in place, we can implement various (performance)
improvements for the different platforms.
- On Linux, this allows us to spawn multiple threads to read and write
from the TUN device and send all packets into the same channel. The `Io`
component of `connlib` then uses `poll_recv_many` to read batches of up
to 100 packets at once. This ties in well with #7210 because we can then
use GSO to send the encrypted packets in single syscalls to the OS.
- On Windows, we already have a dedicated recv thread because `WinTun`'s
most convenient API uses blocking IO. As such, we can now also tie into
that by batch-receiving from this channel.
- In addition to using multiple threads, this API now also uses correct
readiness checks on Linux, Darwin and Android to uphold backpressure in
case we cannot write to the TUN device.
## Configuration
Local testing has shown that 2 threads give the best performance for a
local `iperf3` run. I suspect this is because there is only so much
traffic that a single application (i.e. `iperf3`) can generate. With
more than 2 threads, the throughput actually drops drastically because
`connlib`'s main thread is too busy with lock-contention and triggering
`Waker`s for the TUN threads (which mostly idle around if there are 4+
of them). I've made it configurable on the Gateway though so we can
experiment with this during concurrent speedtests etc.
In addition, switching `connlib` to a single-threaded tokio runtime
further increased the throughput, presumably due to less task / context
switching.
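A sketch of that runtime setup:
```rust
fn run_connlib() -> std::io::Result<()> {
    // A current-thread runtime: all tasks run on one thread, avoiding
    // cross-thread task and context switches.
    let runtime = tokio::runtime::Builder::new_current_thread()
        .enable_all()
        .build()?;

    runtime.block_on(async {
        // ... drive connlib's event loop here ...
    });

    Ok(())
}
```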
## Results
Local testing with `iperf3` shows some very promising results. We now
achieve a throughput of 2+ Gbit/s.
```
Connecting to host 172.20.0.110, port 5201
Reverse mode, remote host 172.20.0.110 is sending
[ 5] local 100.80.159.34 port 57040 connected to 172.20.0.110 port 5201
[ ID] Interval Transfer Bitrate
[ 5] 0.00-1.00 sec 274 MBytes 2.30 Gbits/sec
[ 5] 1.00-2.00 sec 279 MBytes 2.34 Gbits/sec
[ 5] 2.00-3.00 sec 216 MBytes 1.82 Gbits/sec
[ 5] 3.00-4.00 sec 224 MBytes 1.88 Gbits/sec
[ 5] 4.00-5.00 sec 234 MBytes 1.96 Gbits/sec
[ 5] 5.00-6.00 sec 238 MBytes 2.00 Gbits/sec
[ 5] 6.00-7.00 sec 229 MBytes 1.92 Gbits/sec
[ 5] 7.00-8.00 sec 222 MBytes 1.86 Gbits/sec
[ 5] 8.00-9.00 sec 223 MBytes 1.87 Gbits/sec
[ 5] 9.00-10.00 sec 217 MBytes 1.82 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 2.30 GBytes 1.98 Gbits/sec 22247 sender
[ 5] 0.00-10.00 sec 2.30 GBytes 1.98 Gbits/sec receiver
iperf Done.
```
This is a pretty solid improvement over what is in `main`:
```
Connecting to host 172.20.0.110, port 5201
[ 5] local 100.65.159.3 port 56970 connected to 172.20.0.110 port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 90.4 MBytes 758 Mbits/sec 1800 106 KBytes
[ 5] 1.00-2.00 sec 93.4 MBytes 783 Mbits/sec 1550 51.6 KBytes
[ 5] 2.00-3.00 sec 92.6 MBytes 777 Mbits/sec 1350 76.8 KBytes
[ 5] 3.00-4.00 sec 92.9 MBytes 779 Mbits/sec 1800 56.4 KBytes
[ 5] 4.00-5.00 sec 93.4 MBytes 783 Mbits/sec 1650 69.6 KBytes
[ 5] 5.00-6.00 sec 90.6 MBytes 760 Mbits/sec 1500 73.2 KBytes
[ 5] 6.00-7.00 sec 87.6 MBytes 735 Mbits/sec 1400 76.8 KBytes
[ 5] 7.00-8.00 sec 92.6 MBytes 777 Mbits/sec 1600 82.7 KBytes
[ 5] 8.00-9.00 sec 91.1 MBytes 764 Mbits/sec 1500 70.8 KBytes
[ 5] 9.00-10.00 sec 92.0 MBytes 771 Mbits/sec 1550 85.1 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 917 MBytes 769 Mbits/sec 15700 sender
[ 5] 0.00-10.00 sec 916 MBytes 768 Mbits/sec receiver
iperf Done.
```
In order to release the new control protocol to users, we need to bump
the versions of the clients to 1.4.0. The portal has a version gate to
only select gateways with version >= 1.4.0 for clients >= 1.4.0. Thus,
bumping these versions can only happen once testing has completed and
the gateway has actually been released as 1.4.0.
Co-authored-by: Jamil Bou Kheir <jamilbk@users.noreply.github.com>
Building on top of the gateway PR (#6941), this PR transitions the
clients to the new control protocol. Clients are **not**
backwards-compatible with old gateways. As a result, a certain customer
environment MUST have at least one gateway with the above PR running in
order for clients to be able to establish connections.
With this transition, Clients send explicit events to Gateways whenever
they assign IPs to a DNS resource name. The actual assignment only
happens once and the IPs then remain stable for the duration of the
client session.
When the Gateway receives such an event, it will perform a DNS
resolution of the requested domain name and set up the NAT between the
assigned proxy IPs and the IPs the domain actually resolves to. In order
to support self-healing of any problems that happen during this process,
the client will send an "Assigned IPs" event every time it receives a
DNS query for a particular domain. This in turn will trigger another DNS
resolution on the Gateway. Effectively, this means that DNS queries for
DNS resources propagate to the Gateway, triggering a DNS resolution
there. In case the domain resolves to the same set of IPs, no state is
changed to ensure existing connections are not interrupted.
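An illustrative sketch of the Gateway-side idempotency check (the type
and method names are assumptions, not the actual protocol types):
```rust
use std::{collections::HashMap, net::IpAddr};

struct DnsResourceNat {
    resolved: HashMap<String, Vec<IpAddr>>,
}

impl DnsResourceNat {
    fn on_domain_resolved(&mut self, domain: String, mut ips: Vec<IpAddr>) {
        ips.sort();

        // Same set of IPs as before: leave the NAT untouched so that
        // existing connections are not interrupted.
        if self.resolved.get(&domain) == Some(&ips) {
            return;
        }

        // ... (re-)create the NAT entries for the new IPs here ...
        self.resolved.insert(domain, ips);
    }
}
```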
With this new functionality in place, we can delete the old logic for
detecting "expired" IPs. This is considered a bugfix as that logic isn't
currently working as intended: it has been observed multiple times that
the Gateway can get stuck in a loop, resolving the same domain over and
over again. The only theoretical "incompatibility" here is that
pre-1.4.0 clients won't be able to trigger DNS refreshes on a 1.4.2+
Gateway. However, by the time this PR merges, we expect all admins to
have already upgraded to a 1.4.0+ Gateway anyway, which already mandates
clients to be on 1.4.0+.
Resolves: #7391.
Resolves: #6828.
Bumps [serde](https://github.com/serde-rs/serde) from 1.0.210 to
1.0.215.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/serde-rs/serde/releases">serde's
releases</a>.</em></p>
<blockquote>
<h2>v1.0.215</h2>
<ul>
<li>Produce warning when multiple fields or variants have the same
deserialization name (<a
href="https://redirect.github.com/serde-rs/serde/issues/2855">#2855</a>,
<a
href="https://redirect.github.com/serde-rs/serde/issues/2856">#2856</a>,
<a
href="https://redirect.github.com/serde-rs/serde/issues/2857">#2857</a>)</li>
</ul>
<h2>v1.0.214</h2>
<ul>
<li>Implement IntoDeserializer for all Deserializers in serde::de::value
module (<a
href="https://redirect.github.com/serde-rs/serde/issues/2568">#2568</a>,
thanks <a
href="https://github.com/Mingun"><code>@Mingun</code></a>)</li>
</ul>
<h2>v1.0.213</h2>
<ul>
<li>Fix support for macro-generated <code>with</code> attributes inside
a newtype struct (<a
href="https://redirect.github.com/serde-rs/serde/issues/2847">#2847</a>)</li>
</ul>
<h2>v1.0.212</h2>
<ul>
<li>Fix hygiene of macro-generated local variable accesses in
serde(with) wrappers (<a
href="https://redirect.github.com/serde-rs/serde/issues/2845">#2845</a>)</li>
</ul>
<h2>v1.0.211</h2>
<ul>
<li>Improve error reporting about mismatched signature in
<code>with</code> and <code>default</code> attributes (<a
href="https://redirect.github.com/serde-rs/serde/issues/2558">#2558</a>,
thanks <a
href="https://github.com/Mingun"><code>@Mingun</code></a>)</li>
<li>Show variant aliases in error message when variant deserialization
fails (<a
href="https://redirect.github.com/serde-rs/serde/issues/2566">#2566</a>,
thanks <a
href="https://github.com/Mingun"><code>@Mingun</code></a>)</li>
<li>Improve binary size of untagged enum and internally tagged enum
deserialization by about 12% (<a
href="https://redirect.github.com/serde-rs/serde/issues/2821">#2821</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="8939af48fe"><code>8939af4</code></a>
Release 1.0.215</li>
<li><a
href="fa5d58cd00"><code>fa5d58c</code></a>
Use ui test syntax that does not interfere with rustfmt</li>
<li><a
href="1a3cf4b3c1"><code>1a3cf4b</code></a>
Update PR 2562 ui tests</li>
<li><a
href="7d96352e96"><code>7d96352</code></a>
Merge pull request <a
href="https://redirect.github.com/serde-rs/serde/issues/2857">#2857</a>
from dtolnay/collide</li>
<li><a
href="111ecc5d8c"><code>111ecc5</code></a>
Update ui tests for warning on colliding aliases</li>
<li><a
href="edd6fe954b"><code>edd6fe9</code></a>
Revert "Add checks for conflicts for aliases"</li>
<li><a
href="a20e9249c5"><code>a20e924</code></a>
Revert "pacify clippy"</li>
<li><a
href="b1353a99cd"><code>b1353a9</code></a>
Merge pull request <a
href="https://redirect.github.com/serde-rs/serde/issues/2856">#2856</a>
from dtolnay/dename</li>
<li><a
href="c59e876bb3"><code>c59e876</code></a>
Produce a separate warning for every colliding name</li>
<li><a
href="7f1e697c0d"><code>7f1e697</code></a>
Merge pull request <a
href="https://redirect.github.com/serde-rs/serde/issues/2855">#2855</a>
from dtolnay/namespan</li>
<li>Additional commits viewable in <a
href="https://github.com/serde-rs/serde/compare/v1.0.210...v1.0.215">compare
view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Fixes a similar issue as #7443, where we were deleting the
`IPHONEOS_DEPLOYMENT_TARGET` variable in our Rust build script, which
caused lots of warnings about building for a different OS version than
the one being linked against.
At present, the GUI client shares the current log-directives with the
IPC service via the file system. Supposedly, this has been done to allow
the IPC service to start back up with the same log filter as before.
This behaviour appears to be buggy, though, as we are receiving a fair
number of error reports where this file is not writable.
Instead of relying on the file system to communicate, we send the
current log-directives to the IPC service as soon as we start up. The
IPC service then uses the file system as a cache for that log string,
re-applying it on the next startup. This way, no two programs need to
read / write the same file. The IPC service runs with higher privileges,
so this should resolve the permission errors we are seeing in Sentry.
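A sketch of that flow; the message type and cache path are illustrative,
not the actual IPC protocol:
```rust
use serde::{Deserialize, Serialize};

// Sent from the GUI client to the IPC service right after start-up.
#[derive(Serialize, Deserialize)]
enum ClientToService {
    SetLogDirectives(String),
    // ... other messages ...
}

// Runs inside the privileged IPC service, so only one program ever
// writes this file. The path is illustrative.
fn apply_and_cache(directives: &str) -> std::io::Result<()> {
    std::fs::write("/var/lib/firezone/log-filter", directives)?;

    // ... apply `directives` to the running log filter here ...

    Ok(())
}
```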
Bumps [@tauri-apps/api](https://github.com/tauri-apps/tauri) from 2.0.3
to 2.1.1.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/tauri-apps/tauri/releases"><code>@tauri-apps/api</code>'s
releases</a>.</em></p>
<blockquote>
<h2><code>@tauri-apps/api</code> v2.1.1</h2>
<h2>[2.1.1]</h2>
<h3>Bug Fixes</h3>
<ul>
<li><a
href="7f81f05236"><code>5e9435487</code></a>
(<a
href="https://redirect.github.com/tauri-apps/tauri/pull/11639">#11645</a>
by <a href="https://github.com/dgerhardt">dgerhardt</a>) Fix regression
in <code>toLogical</code> and <code>toPhysical</code> for position types
in <code>dpi</code> module returning incorrect <code>y</code>
value.</li>
<li><a
href="e8a50f6d76"><code>e8a50f6d7</code></a>
(<a
href="https://redirect.github.com/tauri-apps/tauri/pull/11645">#11645</a>)
Fix integer values of <code>BaseDirectory.Home</code> and
<code>BaseDirectory.Font</code> regression which broke path APIs in
JS.</li>
</ul>
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="ef2592b5a8"><code>ef2592b</code></a>
Apply Version Updates From Current Changes (<a
href="https://redirect.github.com/tauri-apps/tauri/issues/11646">#11646</a>)</li>
<li><a
href="7f81f05236"><code>7f81f05</code></a>
chore: rename change file</li>
<li><a
href="e8a50f6d76"><code>e8a50f6</code></a>
fix(core): hard code <code>BaseDirectory</code> integer values to avoid
regressions when...</li>
<li><a
href="5e94354875"><code>5e94354</code></a>
fix(api/dpi): fix toLogical and toPhysical for positions (<a
href="https://redirect.github.com/tauri-apps/tauri/issues/11639">#11639</a>)</li>
<li><a
href="0fcef3f941"><code>0fcef3f</code></a>
docs: document vanilla JS import alternative (<a
href="https://redirect.github.com/tauri-apps/tauri/issues/11632">#11632</a>)</li>
<li><a
href="86f22f0ec9"><code>86f22f0</code></a>
apply version updates (<a
href="https://redirect.github.com/tauri-apps/tauri/issues/11440">#11440</a>)</li>
<li><a
href="3f6f07a1b8"><code>3f6f07a</code></a>
chore(deps): update <code>wry</code> to <code>0.47</code> and
<code>tao</code> to <code>0.30.6</code> (<a
href="https://redirect.github.com/tauri-apps/tauri/issues/11627">#11627</a>)</li>
<li><a
href="60e86d5f6e"><code>60e86d5</code></a>
fix(cli): <code>android dev</code> not working on Windows without
<code>--host</code> (<a
href="https://redirect.github.com/tauri-apps/tauri/issues/11624">#11624</a>)</li>
<li><a
href="b28435860c"><code>b284358</code></a>
chore(deps) Update Rust crate thiserror to v2 (dev) (<a
href="https://redirect.github.com/tauri-apps/tauri/issues/11604">#11604</a>)</li>
<li><a
href="229d7f8e22"><code>229d7f8</code></a>
fix(core): fix child webviews on macOS and Windows treated as full
webview wi...</li>
<li>Additional commits viewable in <a
href="https://github.com/tauri-apps/tauri/compare/@tauri-apps/api-v2.0.3...@tauri-apps/api-v2.1.1">compare
view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>