Bumps [zip](https://github.com/zip-rs/zip2) from 2.1.3 to 2.1.5.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/zip-rs/zip2/releases">zip's
releases</a>.</em></p>
<blockquote>
<h2>v2.1.5</h2>
<h3>🚜 Refactor</h3>
<ul>
<li>change invalid_state() return type to io::Result&lt;T&gt;</li>
</ul>
<h2>v2.1.4</h2>
<h3>🐛 Bug Fixes</h3>
<ul>
<li>fix(<a
href="https://redirect.github.com/zip-rs/zip2/pull/215">#215</a>):
Upgrade to deflate64 0.1.9</li>
<li>Panic when reading a file truncated in the middle of an XZ block
header</li>
<li>Some archives with over u16::MAX files were handled incorrectly or
slowly (<a
href="https://redirect.github.com/zip-rs/zip2/pull/189">#189</a>)</li>
<li>Check number of files when deciding whether a CDE is the real
one</li>
<li>Could still select a fake CDE over a real one in some cases</li>
<li>May have to consider multiple CDEs before filtering for
validity</li>
<li>We now keep searching for a real CDE header after reading an invalid
one from the file comment</li>
<li>Always search for data start when opening an archive for append, and
reject the header if data appears to start after central directory</li>
<li><code>deep_copy_file</code> no longer allows overwriting an existing
file, to match the behavior of <code>shallow_copy_file</code></li>
<li>File start position was wrong when extra data was present</li>
<li>Abort file if central extra data is too large</li>
<li>Overflow panic when central directory extra data is too large</li>
<li>ZIP64 header was being written twice when copying a file</li>
<li>ZIP64 header was being written to central header twice</li>
<li>Start position was incorrect when file had no extra data</li>
<li>Allow all reserved headers we can create</li>
<li>Fix a bug where alignment padding interacts with other extra-data
fields</li>
<li>Fix bugs involving alignment padding and Unicode extra fields</li>
<li>Incorrect header when adding AES-encrypted files</li>
<li>Parse the extra field and reject it if invalid</li>
<li>Incorrect behavior following a rare combination of
<code>merge_archive</code>, <code>abort_file</code> and
<code>deep_copy_file</code>. As well, we now return an error when a file
is being copied to itself.</li>
<li>path_to_string now properly handles the case of an empty path</li>
<li>Implement <code>Debug</code> for <code>ZipWriter</code> even when
it's not implemented for the inner writer's type</li>
<li>Fix an issue where the central directory could be incorrectly
detected</li>
<li><code>finish_into_readable()</code> would corrupt the archive if the
central directory had moved</li>
</ul>
<h3>🚜 Refactor</h3>
<ul>
<li>Verify with debug assertions that no FixedSizeBlock expects a
multi-byte alignment (<a
href="https://redirect.github.com/zip-rs/zip2/pull/198">#198</a>)</li>
<li>Use new do_or_abort_file method</li>
</ul>
<h3>⚡ Performance</h3>
<ul>
<li>Speed up CRC when encrypting small files</li>
<li>Limit the number of extra fields</li>
<li>Refactor extra-data validation</li>
<li>Store extra data in plain vectors until after validation</li>
<li>Only build one IndexMap after choosing among the possible valid
headers</li>
<li>Simplify validation of empty extra-data fields</li>
<li>Validate automatic extra-data fields only once, even if several are
present</li>
<li>Remove redundant <code>validate_extra_data()</code> call</li>
<li>Skip searching for the ZIP32 header if a valid ZIP64 header is
present (<a
href="https://redirect.github.com/zip-rs/zip2/pull/189">#189</a>)</li>
</ul>
<h3>⚙️ Miscellaneous Tasks</h3>
<ul>
<li>Fix a bug introduced by c934c824</li>
<li>Fix a failing unit test</li>
</ul>
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/zip-rs/zip2/blob/master/CHANGELOG.md">zip's
changelog</a>.</em></p>
<blockquote>
<h2><a
href="https://github.com/zip-rs/zip2/compare/v2.1.4...v2.1.5">2.1.5</a>
- 2024-07-20</h2>
<h3>🚜 Refactor</h3>
<ul>
<li>change invalid_state() return type to io::Result&lt;T&gt;</li>
</ul>
<h2><a
href="https://github.com/zip-rs/zip2/compare/v2.1.3...v2.1.4">2.1.4</a>
- 2024-07-18</h2>
<h3>🐛 Bug Fixes</h3>
<ul>
<li>fix(<a
href="https://redirect.github.com/zip-rs/zip2/pull/215">#215</a>):
Upgrade to deflate64 0.1.9</li>
<li>Panic when reading a file truncated in the middle of an XZ block
header</li>
<li>Some archives with over u16::MAX files were handled incorrectly or
slowly (<a
href="https://redirect.github.com/zip-rs/zip2/pull/189">#189</a>)</li>
<li>Check number of files when deciding whether a CDE is the real
one</li>
<li>Could still select a fake CDE over a real one in some cases</li>
<li>May have to consider multiple CDEs before filtering for
validity</li>
<li>We now keep searching for a real CDE header after reading an invalid
one from the file comment</li>
<li>Always search for data start when opening an archive for append, and
reject the header if data appears to start after central directory</li>
<li><code>deep_copy_file</code> no longer allows overwriting an existing
file, to match the behavior of <code>shallow_copy_file</code></li>
<li>File start position was wrong when extra data was present</li>
<li>Abort file if central extra data is too large</li>
<li>Overflow panic when central directory extra data is too large</li>
<li>ZIP64 header was being written twice when copying a file</li>
<li>ZIP64 header was being written to central header twice</li>
<li>Start position was incorrect when file had no extra data</li>
<li>Allow all reserved headers we can create</li>
<li>Fix a bug where alignment padding interacts with other extra-data
fields</li>
<li>Fix bugs involving alignment padding and Unicode extra fields</li>
<li>Incorrect header when adding AES-encrypted files</li>
<li>Parse the extra field and reject it if invalid</li>
<li>Incorrect behavior following a rare combination of
<code>merge_archive</code>, <code>abort_file</code> and
<code>deep_copy_file</code>. As well, we now return an error when a file
is being copied to itself.</li>
<li>path_to_string now properly handles the case of an empty path</li>
<li>Implement <code>Debug</code> for <code>ZipWriter</code> even when
it's not implemented for the inner writer's type</li>
<li>Fix an issue where the central directory could be incorrectly
detected</li>
<li><code>finish_into_readable()</code> would corrupt the archive if the
central directory had moved</li>
</ul>
<h3>🚜 Refactor</h3>
<ul>
<li>Verify with debug assertions that no FixedSizeBlock expects a
multi-byte alignment (<a
href="https://redirect.github.com/zip-rs/zip2/pull/198">#198</a>)</li>
<li>Use new do_or_abort_file method</li>
</ul>
<h3>⚡ Performance</h3>
<ul>
<li>Speed up CRC when encrypting small files</li>
<li>Limit the number of extra fields</li>
<li>Refactor extra-data validation</li>
<li>Store extra data in plain vectors until after validation</li>
<li>Only build one IndexMap after choosing among the possible valid
headers</li>
<li>Simplify validation of empty extra-data fields</li>
<li>Validate automatic extra-data fields only once, even if several are
present</li>
<li>Remove redundant <code>validate_extra_data()</code> call</li>
<li>Skip searching for the ZIP32 header if a valid ZIP64 header is
present (<a
href="https://redirect.github.com/zip-rs/zip2/pull/189">#189</a>)</li>
</ul>
<h3>⚙️ Miscellaneous Tasks</h3>
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="8fb107ad5e"><code>8fb107a</code></a>
chore: release (<a
href="https://redirect.github.com/zip-rs/zip2/issues/222">#222</a>)</li>
<li><a
href="a7c1230dfa"><code>a7c1230</code></a>
publicly export and document the zip64 threshold constants (<a
href="https://redirect.github.com/zip-rs/zip2/issues/79">#79</a>)</li>
<li><a
href="a60bd79826"><code>a60bd79</code></a>
Merge pull request <a
href="https://redirect.github.com/zip-rs/zip2/issues/210">#210</a> from
a1phyr/multiple_refactors</li>
<li><a
href="7471cf526f"><code>7471cf5</code></a>
refactor: change invalid_state() return type to io::Result<T></li>
<li><a
href="9caa3b678f"><code>9caa3b6</code></a>
Merge pull request <a
href="https://redirect.github.com/zip-rs/zip2/issues/194">#194</a> from
zip-rs/release-plz-2024-06-15T04-17-17Z</li>
<li><a
href="8b11361b9e"><code>8b11361</code></a>
chore: release</li>
<li><a
href="55c2c64249"><code>55c2c64</code></a>
ci(fuzz): Set max length closer to current corpus entries' length</li>
<li><a
href="193bbe125b"><code>193bbe1</code></a>
fix(<a
href="https://redirect.github.com/zip-rs/zip2/issues/215">#215</a>):
Upgrade to deflate64 0.1.9</li>
<li><a
href="4e971d07ab"><code>4e971d0</code></a>
Commit unfinished corpus</li>
<li><a
href="c14986806a"><code>c149868</code></a>
Fix divergence from origin/master</li>
<li>Additional commits viewable in <a
href="https://github.com/zip-rs/zip2/compare/v2.1.3...v2.1.5">compare
view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
In `connlib`, traffic is sent through sockets in one of three ways:
1. Direct p2p traffic between clients and gateways: For these, we always
explicitly set the source IP (and thus interface).
2. UDP traffic to the relays: For these, we let the OS pick an
appropriate source interface.
3. WebSocket traffic over TCP to the portal: For this too, we let the OS
pick the source interface.
For (2) and (3), it is possible to run into routing loops, depending on
the routes that we have configured on the TUN device.
On Linux, we can prevent routing loops by marking a socket [0] and
repeating the mark when we add routes [1]. Packets sent via a marked
socket won't be routed by a rule that contains this mark. On Android, we
can do something similar by "protecting" a socket via a syscall on the
Java side [2].
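For illustration, marking a UDP socket on Linux might look like the sketch below, using the `socket2` crate (the mark value is a placeholder, not Firezone's actual constant; `set_mark` requires socket2's `all` feature):

```rust
use socket2::{Domain, Protocol, Socket, Type};

// Placeholder mark value; the real constant lives in the code linked at [0].
const FWMARK: u32 = 0xfd;

#[cfg(target_os = "linux")]
fn make_marked_udp_socket() -> std::io::Result<Socket> {
    let socket = Socket::new(Domain::IPV4, Type::DGRAM, Some(Protocol::UDP))?;
    // Routing rules that match this mark can then exclude the socket's
    // packets from the routes pointing at the TUN device, breaking the loop.
    socket.set_mark(FWMARK)?;
    Ok(socket)
}
```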
On Windows, routing works slightly differently. There, the source
interface is determined based on a computed metric [3] [4]. To prevent
routing loops on Windows, we thus need to find the "next best" interface
after our TUN interface. We can achieve this with a combination of
several syscalls:
1. List all interfaces on the machine
2. Ask Windows for the best route on each interface, except our TUN
interface.
3. Sort by Windows' routing metric and pick the lowest one (lower is
better).
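Sketching that selection logic in Rust, with `list_interfaces` and `best_route_metric` as hypothetical stand-ins for the actual Win32 calls (`GetAdaptersAddresses`, `GetBestRoute2`):

```rust
use std::net::IpAddr;

// Hypothetical wrappers around the Win32 API; the real code would go
// through GetAdaptersAddresses and GetBestRoute2.
struct Interface {
    index: u32,
    is_tun: bool,
}
fn list_interfaces() -> Vec<Interface> {
    unimplemented!()
}
fn best_route_metric(ifindex: u32, dst: IpAddr) -> Option<u32> {
    unimplemented!()
}

/// Picks the interface with the lowest route metric towards `dst`,
/// skipping our own TUN interface to avoid a routing loop.
fn best_non_tun_interface(dst: IpAddr) -> Option<u32> {
    list_interfaces()
        .into_iter()
        .filter(|i| !i.is_tun) // 2. skip our TUN interface
        .filter_map(|i| best_route_metric(i.index, dst).map(|m| (m, i.index)))
        .min_by_key(|(metric, _)| *metric) // 3. lower is better
        .map(|(_, index)| index)
}
```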
Thanks to the `SocketFactory` abstraction we introduced previously,
integrating this into `connlib` isn't too difficult:
1. For TCP sockets, we simply resolve the best route after creating the
socket and then bind it to that local interface. That way, all packets
will always go via that interface, regardless of which routes are
present on our TUN interface.
2. UDP is connection-less, so we need to decide per packet which
interface to use. "Pick the best interface for me" is modelled in
`connlib` via the `DatagramOut::src` field being `None`.
- To ensure those packets don't cause a routing loop, we introduce a
"source IP resolver" for our `UdpSocket`. This function gets called
every time we need to send a packet without a source IP.
- For improved performance, we cache these results. The Windows client
uses this source IP resolver to apply the strategy devised above and
find a suitable source IP.
- In case the source IP resolution fails, we don't send the packet. This
is important: otherwise, the kernel might choose our TUN interface again
and trigger a routing loop.
The last remark to make here is that this also works for connection
roaming. The TCP socket gets thrown away when we reconnect to the
portal. Thus, the new socket will pick the new best interface as it is
re-created. The UDP sockets also get thrown away as part of roaming.
That clears the above cache which is what we want: Upon roaming, the
best interface for a given destination IP will likely have changed.
[0]:
59014a9622/rust/headless-client/src/linux.rs (L19-L29)
[1]:
59014a9622/rust/bin-shared/src/tun_device_manager/linux.rs (L204-L224)
[2]:
59014a9622/rust/connlib/clients/android/src/lib.rs (L535-L549)
[3]:
https://learn.microsoft.com/en-us/previous-versions/technet-magazine/cc137807(v=msdn.10)?redirectedfrom=MSDN
[4]:
https://learn.microsoft.com/en-us/windows-server/networking/technologies/network-subsystem/net-sub-interface-metric
Fixes: #5955.
---------
Signed-off-by: Thomas Eizinger <thomas@eizinger.io>
Co-authored-by: Thomas Eizinger <thomas@eizinger.io>
`connlib`'s event loop performs work in a very particular order:
1. Local buffers like IP, UDP and DNS packets are emptied.
2. Time-sensitive tasks, if any, are performed.
3. New UDP packets are processed.
4. New IP packets (from the TUN device) are processed.
This priority ensures we don't accept more work (i.e. new packets) until
we have finished processing existing work. As a result, we can keep
local buffers small and processing latencies low.
I am not completely confident about the cause of #6067, but if the
busy-loop originates from a bad timer, then the above priority means we
never get to the part where we read new UDP or IP packets, and
components such as `PhoenixChannel` - which operate outside of
`connlib`'s event loop - don't get any CPU time.
A naive fix for this problem is to just de-prioritise the polling of the
timer within `Io::poll`. I say naive because without additional changes,
this could delay the processing of time-sensitive tasks on a very busy
client / gateway where packets are constantly arriving and thus we
never[^1] reach the part where the timer gets polled.
To fix this, we make two distinct changes:
1. We pro-actively break from `connlib`'s event loop every 5000
iterations. This ensures that even on a very busy system, other
components like the `PhoenixChannel` get a chance to do _some_ work once
in a while.
2. In case we force-yield from the event loop, we call `handle_timeout`
and immediately schedule a new wake-up. This ensures time also advances
at regular intervals and that we don't get wrongly suspended by the
runtime.
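A sketch of the resulting loop shape (illustrative only, not `connlib`'s actual `Io::poll`; `try_make_progress` stands in for steps 1-4 above):

```rust
use std::task::{Context, Poll};
use std::time::Instant;

const MAX_ITERATIONS: usize = 5_000;

fn poll_event_loop(
    cx: &mut Context<'_>,
    mut try_make_progress: impl FnMut(&mut Context<'_>) -> bool,
    mut handle_timeout: impl FnMut(Instant),
) -> Poll<()> {
    for _ in 0..MAX_ITERATIONS {
        if try_make_progress(cx) {
            continue; // keep draining existing work before accepting more
        }
        return Poll::Pending; // idle: park until a waker fires
    }

    // Force-yield after 5000 busy iterations: advance time explicitly and
    // schedule an immediate wake-up so the runtime doesn't suspend us,
    // giving components like `PhoenixChannel` a chance to do some work.
    handle_timeout(Instant::now());
    cx.waker().wake_by_ref();
    Poll::Pending
}
```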
These changes don't prevent any timer-loops by themselves. With a
timer-loop, we still busy-loop for 5000 iterations and thus
unnecessarily burn through some CPU cycles. The important bit, however,
is that we stay operational and can accept packets and portal messages. Any
of them might change the state such that the timer value changes, thus
allowing `connlib` to self-heal from this loop.
Fixes: #6067.
[^1]: This is an assumption based on the possible control flow. In
practice, I believe that reading from the sockets or the TUN device is a
much slower operation than processing the packets. Thus, we should
eventually hit the timer path too.
Why:
* Before the REST API is released to all Firezone users, a closed beta
program will be run. Rather than blurring out the API Clients page for
users that are not part of the closed beta program, a 'beta' page will
be shown that allows users to request access to the closed beta.
Once the REST API is released to all accounts, all of this can be
removed.
Closes: #5920
### Screenshot
<img width="1445" alt="Screenshot 2024-07-24 at 6 55 36 PM"
src="https://github.com/user-attachments/assets/a09591bc-190c-4bd4-9716-9a74a0f09e0a">
We don't want the timer to fire multiple times at the same `Instant`
unless it has been specifically set to that `Instant` again. Thus, we
clear the timer after it has fired.
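A sketch of that behaviour, assuming the timer is stored as an `Option<Instant>`:

```rust
use std::time::Instant;

struct Timer {
    deadline: Option<Instant>,
}

impl Timer {
    /// Fires at most once per `reset`: `take()` clears the deadline, so the
    /// timer won't fire again at the same `Instant` unless it is re-armed.
    fn poll_fired(&mut self, now: Instant) -> Option<Instant> {
        match self.deadline {
            Some(deadline) if now >= deadline => self.deadline.take(),
            _ => None,
        }
    }

    fn reset(&mut self, deadline: Instant) {
        self.deadline = Some(deadline);
    }
}
```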
I don't think this fixed #6067, but it can't hurt.
Connection roaming within `connlib` has changed a fair bit since we
introduced the `reconnect` function. The new implementation is basically
a hard-reset of all state within `connlib`. Renaming this function
across all layers makes this more obvious.
Resolves: #6038.
When a user copy-pastes an address containing leading or trailing
whitespace into the `address` field, it's not apparent why the address
is invalid. This is common when copy-pasting DNS names from cloud
consoles that have poor UIs, such as Azure.
Fixes #6059
The compose service I defined is called `otel` not `otlp`. With this fix
in place, the relay successfully connects to the OTLP exporter.
It is worth noting that the connection to the OTLP exporter itself
is not critical for relay operation. Even if it fails, it won't affect
the actual data plane. I do think it makes sense to still have a working
OTLP exporter in the compose definition, as it makes it easier to test
whether the ingestion of metrics and traces works as expected.
This messes with the build cache because the locally run rust-analyzer
doesn't recognise those variables and thus keeps poisoning the cache. If
other Nix users want `mold`, they should set it up in their user
configuration.
Windows may delete the default route during roaming. To prevent this
from causing problems, we make `set_routes` add all routes regardless of
the previously stored ones. The known routes are only used to compute
which routes are to be removed.
For Linux we do the same to make it consistent across platforms.
This also gives us the chance to not clear the cache when IPs are set:
since all routes are now always added, they will always be re-added when
roaming.
Overall, this more closely aligns Linux and Windows with how Firezone
works on Apple and Android, where we always remove all routes and set
new ones. Removing routes happens very rarely (only when CIDR resources
are deactivated), so not removing everything and instead re-adding all
routes is still deemed to be worth it.
With the new implementation, setting routes is guaranteed to always take
effect while remaining idempotent.
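A sketch of the resulting `set_routes` shape (illustrative types; the real code goes through the platform's routing syscalls):

```rust
use std::collections::BTreeSet;
use std::net::IpAddr;

// Stand-in for the real route type (destination + prefix length).
type Route = (IpAddr, u8);

struct TunDeviceManager {
    known_routes: BTreeSet<Route>,
}

impl TunDeviceManager {
    fn set_routes(&mut self, new_routes: BTreeSet<Route>) {
        // Known routes are only used to compute what must be *removed* ...
        for route in self.known_routes.difference(&new_routes) {
            remove_route(route);
        }
        // ... while every new route is (re-)added unconditionally, making the
        // call idempotent and robust against Windows deleting routes on roam.
        for route in &new_routes {
            add_route(route);
        }
        self.known_routes = new_routes;
    }
}

fn add_route(_route: &Route) { /* platform syscall */ }
fn remove_route(_route: &Route) { /* platform syscall */ }
```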
---------
Signed-off-by: Gabi <gabrielalejandro7@gmail.com>
Co-authored-by: Thomas Eizinger <thomas@eizinger.io>
~~Relays are still failing to boot~~. By setting OTLP_ENDPOINT for both
relays in CI we will more closely mimic staging/prod env.
Edit: They just started working randomly. It had failed with a core dump
error after #6034 was merged, which is a bit concerning.
The dependency update in #6003 introduced a regression: Connecting to
the OTLP exporter was hanging forever and thus the relay failed to start
up.
The hang seems to be related to _dropping_ the `meter_provider`. Looking
at the changelog update, this change was actually called out:
https://github.com/open-telemetry/opentelemetry-rust/blob/main/opentelemetry-otlp/CHANGELOG.md#v0170.
By setting these providers globally, the relay starts up just fine.
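For illustration, the global registration looks roughly like this (a sketch assuming a recent `opentelemetry-sdk` API; builder names have changed between releases, and the exporter wiring is elided):

```rust
use opentelemetry::global;
use opentelemetry_sdk::metrics::SdkMeterProvider;

fn init_metrics() -> SdkMeterProvider {
    // The important part: the provider is registered globally and kept
    // alive by the caller instead of being dropped, which is what hung.
    let provider = SdkMeterProvider::builder().build();
    global::set_meter_provider(provider.clone());
    provider
}
```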
To ensure this doesn't regress again, we add an OTEL collector to our
`docker-compose.yml` and configure the `relay-1` to connect to it.
Note that for GUI Clients, listening is still done by the GUI process,
not the IPC service.
Yak shave towards #5846. This allows for faster dev cycles since I won't
have to compile all the GUI stuff.
Some changes in here were extracted from other draft PRs.
Changes:
- Remove `thiserror` that was never matched on
- Don't return the DNS resolvers from the notifier directly; just send a
notification and allow the caller to check the resolvers itself if
needed (see the sketch below)
- Rename `DnsListener` to `DnsNotifier`
- Rename `Worker` to `NetworkNotifier`
- Remove `unwrap_or_default` when getting resolvers. I don't know why
it's there; if there's a good reason, it should be handled inside
the function, not in the caller
```[tasklist]
### Tasks
- [x] Rename `*Listener` to `*Notifier`
- [x] (not needed) ~~Support `/etc/resolv.conf` DNS control method too?~~
```
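A minimal sketch of that notify-then-query pattern, using `tokio::sync::Notify` (names here are illustrative, not the actual types):

```rust
use std::net::IpAddr;
use std::sync::{Arc, Mutex};
use tokio::sync::Notify;

/// Illustrative stand-in for the renamed `DnsNotifier`.
struct DnsNotifier {
    changed: Arc<Notify>,
    resolvers: Arc<Mutex<Vec<IpAddr>>>,
}

impl DnsNotifier {
    /// Completes when the system DNS configuration changed. No data is
    /// handed out; the caller decides whether it needs the resolvers at all.
    async fn notified(&self) {
        self.changed.notified().await;
    }

    /// Queried by the caller only when it actually needs the current set.
    fn current_resolvers(&self) -> Vec<IpAddr> {
        self.resolvers.lock().unwrap().clone()
    }
}
```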
As part of debugging full-route tunneling on Windows, we discovered that
we need to always explicitly choose the interface through which we want
to send packets; otherwise, Windows may cause a routing loop by routing
our packets back into the TUN device.
We already have a `SocketFactory` abstraction in `connlib` that is used
by each platform to customise the setup of each socket to prevent
routing loops.
So far, this abstraction directly returns tokio sockets which don't
allow us to intercept the actual sending of packets. For some of our
traffic, i.e. the UDP packets exchanged with relays, we don't specify a
source address. To make full-route work on Windows, we need to intercept
these packets and explicitly set the source address.
To achieve that, we introduce dedicated `TcpSocket` and `UdpSocket`
structs within `socket-factory`. With this in place, we will be able to
add Windows-conditional code that looks up and sets the source address of
outgoing UDP packets. For TCP sockets, the lookup will happen prior to
connecting to the address, and the result will be used to bind to the
correct interface.
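For the TCP side, a sketch of that flow, with `resolve_source_for` as a hypothetical stand-in for the route lookup:

```rust
use std::io;
use std::net::{IpAddr, SocketAddr};
use tokio::net::{TcpSocket, TcpStream};

// Hypothetical: on Windows, this would look up the best non-TUN route.
fn resolve_source_for(_dst: SocketAddr) -> Option<IpAddr> {
    None
}

async fn connect(dst: SocketAddr) -> io::Result<TcpStream> {
    let socket = TcpSocket::new_v4()?;
    // Binding to the resolved source pins every packet of this connection
    // to that interface, regardless of the routes on the TUN device.
    if let Some(src) = resolve_source_for(dst) {
        socket.bind(SocketAddr::new(src, 0))?;
    }
    socket.connect(dst).await
}
```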
Related: #2667.
Related: #5955.
It appears that the configuration via env variables doesn't work as
expected. This PR changes bencher's config to use command-line arguments.
With that, the `--branch-start-point` actually takes effect and copies
over the thresholds configured on bencher for the `main` branch.
With the thresholds in place, we can configure bencher to only alert us
if a threshold is exceeded and otherwise be quiet and not post a
comment.
In staging and production, setting up the logger for the relay is
fairly involved. To make debugging easier, we always log these initial
steps at `TRACE` level until the real logger is initialised.
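A sketch of such a bootstrap logger, assuming `tracing-subscriber` (the real setup may differ):

```rust
use tracing::subscriber::DefaultGuard;

/// Installs a plain TRACE-level logger for the current thread; dropping
/// the guard uninstalls it, so the real logger can take over once built.
fn bootstrap_logger() -> DefaultGuard {
    let subscriber = tracing_subscriber::fmt()
        .with_max_level(tracing::Level::TRACE)
        .finish();
    tracing::subscriber::set_default(subscriber)
}
```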
Bumps
[@types/node](https://github.com/DefinitelyTyped/DefinitelyTyped/tree/HEAD/types/node)
from 20.14.9 to 20.14.12.
<details>
<summary>Commits</summary>
<ul>
<li>See full diff in <a
href="https://github.com/DefinitelyTyped/DefinitelyTyped/commits/HEAD/types/node">compare
view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Fixes a UX issue somewhat introduced by
https://github.com/firezone/firezone/pull/5870 where we changed behavior
to make the redirect consistent with other CRUD operations.
The behavior we had prior to
https://github.com/firezone/firezone/pull/5870 was to redirect to
Resource show, but feedback from a customer (which makes sense) is that
you almost _always_ create a Policy after creating a Resource, so this
PR streamlines the hot path flow there.
This has come up for a couple of users in Discord as well, so taking
them directly to policies/new hopefully makes it clear that the user
needs to create a Policy after creating a Resource.
This papercut occurred while a customer was demoing Firezone to another
potential customer.
Fixes #5929
cc @jameswinegar
Bumps the com-android group in /kotlin/android with 1 update:
com.android.application.
Updates `com.android.application` from 8.5.0 to 8.5.1
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore <dependency name> major version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's major version (unless you unignore this specific
dependency's major version or upgrade to it yourself)
- `@dependabot ignore <dependency name> minor version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's minor version (unless you unignore this specific
dependency's minor version or upgrade to it yourself)
- `@dependabot ignore <dependency name>` will close this group update PR
and stop Dependabot creating any more for the specific dependency
(unless you unignore this specific dependency or upgrade to it yourself)
- `@dependabot unignore <dependency name>` will remove all of the ignore
conditions of the specified dependency
- `@dependabot unignore <dependency name> <ignore condition>` will
remove the ignore condition of the specified dependency and ignore
conditions
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Bumps androidx.test.espresso:espresso-contrib from 3.5.1 to 3.6.1.
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Closes #5953
In all my testing on Windows I've never seen these work. I tried them a
couple days ago on Linux and I haven't seen them work there either. No
clue why. Tauri bug? Windows bug?
This explanation of the processes is no longer accurate after the IPC
service split.
---------
Signed-off-by: Reactor Scram <ReactorScram@users.noreply.github.com>
Bumps [@tauri-apps/api](https://github.com/tauri-apps/tauri) from 1.5.6
to 1.6.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/tauri-apps/tauri/releases"><code>@tauri-apps/api</code>'s
releases</a>.</em></p>
<blockquote>
<h2><code>@tauri-apps/api</code> v1.6.0</h2>
<pre><code>yarn audit v1.22.22
info No lockfile found.
0 vulnerabilities found - Packages audited: 146
Done in 2.09s.
</code></pre>
<h2>[1.6.0]</h2>
<h3>Enhancements</h3>
<ul>
<li><a
href="44e3335da8"><code>44e3335da</code></a>
(<a
href="https://redirect.github.com/tauri-apps/tauri/pull/9796">#9796</a>)
Enhance the speed of The JS <code>Command.execute</code> API from
<code>shell</code> module.</li>
</ul>
<h3>Bug Fixes</h3>
<ul>
<li><a
href="44e3335da8"><code>44e3335da</code></a>
(<a
href="https://redirect.github.com/tauri-apps/tauri/pull/9796">#9796</a>)
Fix The JS <code>Command.execute</code> API from <code>shell</code>
module including extra new lines.</li>
</ul>
<pre><code>yarn run v1.22.22
$ yarn build && cd ./dist && yarn publish --access public --loglevel silly
$ rollup -c --configPlugin typescript
./src/app.ts, ./src/cli.ts, ./src/clipboard.ts, ./src/dialog.ts,
./src/event.ts, ./src/fs.ts, ./src/globalShortcut.ts, ./src/http.ts,
./src/index.ts, ./src/mocks.ts, ./src/notification.ts, ./src/os.ts,
./src/path.ts, ./src/process.ts, ./src/shell.ts, ./src/tauri.ts,
./src/updater.ts, ./src/window.ts → ./dist, ./dist...
created ./dist, ./dist in 1.4s
src/index.ts → ../../core/tauri/scripts/bundle.global.js...
created ../../core/tauri/scripts/bundle.global.js in 1.6s
[1/4] Bumping version...
info Current version: 1.6.0
[2/4] Logging in...
[3/4] Publishing...
success Published.
[4/4] Revoking token...
info Not revoking login token, specified via config file.
Done in 7.85s.
</code></pre>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="cf331cdc3e"><code>cf331cd</code></a>
fix(core): lint</li>
<li><a
href="574076541a"><code>5740765</code></a>
fix(ci): downgrade crates for MSRV check</li>
<li><a
href="89f3048f52"><code>89f3048</code></a>
apply version updates (<a
href="https://redirect.github.com/tauri-apps/tauri/issues/9871">#9871</a>)</li>
<li><a
href="08f57efefd"><code>08f57ef</code></a>
fix(cli): parse <code>--profile=\<profile></code> syntax (<a
href="https://redirect.github.com/tauri-apps/tauri/issues/10136">#10136</a>)</li>
<li><a
href="63da834ce4"><code>63da834</code></a>
ci: Fix msrv check (<a
href="https://redirect.github.com/tauri-apps/tauri/issues/10118">#10118</a>)</li>
<li><a
href="c2d3afa4fb"><code>c2d3afa</code></a>
prevent uncomment collision in 1.x invoke_key templating (fix <a
href="https://redirect.github.com/tauri-apps/tauri/issues/10084">#10084</a>)
(<a
href="https://redirect.github.com/tauri-apps/tauri/issues/10087">#10087</a>)</li>
<li><a
href="924387092e"><code>9243870</code></a>
feat: add dmg settings, cherry picked from <a
href="https://redirect.github.com/tauri-apps/tauri/issues/7964">#7964</a>
(<a
href="https://redirect.github.com/tauri-apps/tauri/issues/8334">#8334</a>)</li>
<li><a
href="d2786bf699"><code>d2786bf</code></a>
chore(template): template format error (<a
href="https://redirect.github.com/tauri-apps/tauri/issues/10018">#10018</a>)</li>
<li><a
href="674accad75"><code>674acca</code></a>
fix: missing depends for rpm package (<a
href="https://redirect.github.com/tauri-apps/tauri/issues/10015">#10015</a>)</li>
<li><a
href="09152d83e1"><code>09152d8</code></a>
ci(msrv-list): Downgrade os_pipe (<a
href="https://redirect.github.com/tauri-apps/tauri/issues/10014">#10014</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/tauri-apps/tauri/compare/@tauri-apps/api-v1.5.6...@tauri-apps/api-v1.6">compare
view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
(External contribution)
Hi, first, thanks to @bmanifold for his awesome work! I've not yet
tested the API, but here is a first PR fixing various small mistakes in
the generated OpenAPI spec:
- Schema names cannot contain spaces
- Add missing path parameters in the spec
- Remove duplicated endpoint for creating an identity (not sure about
that, I'll let you check)

If you want to validate the generated spec you can paste it here:
https://editor.swagger.io/ (or at the bottom of your Swagger UI)
Please review commit by commit
Co-authored-by: Jamil <jamilbk@users.noreply.github.com>
Co-authored-by: Antoine Labarussias <antoinelabarussias@gmail.com>