Compare commits

...

166 Commits

Author SHA1 Message Date
Zoey
0da1cc184f fix #2847 2026-03-02 21:41:30 +01:00
Zoey
19c819fc95 fix #2843 by moving the advanced_config to the top of custom locations, but this could cause other issues 2026-03-02 21:41:30 +01:00
Zoey
8521cf19cc invert default of NGINX_TRUST_SECPR1 to true / add AUTH_REQUEST_ANUBIS_USE_CUSTOM_IMAGES env 2026-03-02 21:41:30 +01:00
Zoey
daed77142f fix some things that were made async a few commits ago 2026-03-02 21:41:30 +01:00
Zoey
537ca98f8f improve php-fpm settings
Signed-off-by: Zoey <zoey@z0ey.de>
2026-03-02 18:42:15 +01:00
renovate[bot]
f1e95f7ba6 dep updates
Signed-off-by: Zoey <zoey@z0ey.de>
2026-03-02 18:42:15 +01:00
Zoey
842b7d9a72 do not build images for PRs
Signed-off-by: Zoey <zoey@z0ey.de>
2026-03-01 11:25:23 +01:00
Zoey
ca8f602466 merge upstream 2026-02-27 23:12:46 +01:00
Zoey
57605b455c Merge remote-tracking branch 'upstream/develop' into develop 2026-02-27 23:02:50 +01:00
Zoey
bd06d48f0b make more async 2026-02-27 22:57:26 +01:00
jc21
c1d09eaceb Merge pull request #5353 from NginxProxyManager/dependabot/npm_and_yarn/docs/rollup-4.59.0
Bump rollup from 4.24.0 to 4.59.0 in /docs
2026-02-27 10:34:54 +10:00
jc21
9c509f30de Merge pull request #5355 from NginxProxyManager/dependabot/npm_and_yarn/frontend/rollup-4.59.0
Bump rollup from 4.57.1 to 4.59.0 in /frontend
2026-02-27 10:19:55 +10:00
dependabot[bot]
c85b11ee33 Bump rollup from 4.57.1 to 4.59.0 in /frontend
Bumps [rollup](https://github.com/rollup/rollup) from 4.57.1 to 4.59.0.
- [Release notes](https://github.com/rollup/rollup/releases)
- [Changelog](https://github.com/rollup/rollup/blob/master/CHANGELOG.md)
- [Commits](https://github.com/rollup/rollup/compare/v4.57.1...v4.59.0)

---
updated-dependencies:
- dependency-name: rollup
  dependency-version: 4.59.0
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-02-26 22:11:06 +00:00
jc21
cd5ef390b9 Merge pull request #5350 from NginxProxyManager/dependabot/npm_and_yarn/backend/basic-ftp-5.2.0
Bump basic-ftp from 5.1.0 to 5.2.0 in /backend
2026-02-27 08:09:57 +10:00
dependabot[bot]
d49cab1c0e Bump rollup from 4.24.0 to 4.59.0 in /docs
Bumps [rollup](https://github.com/rollup/rollup) from 4.24.0 to 4.59.0.
- [Release notes](https://github.com/rollup/rollup/releases)
- [Changelog](https://github.com/rollup/rollup/blob/master/CHANGELOG.md)
- [Commits](https://github.com/rollup/rollup/compare/v4.24.0...v4.59.0)

---
updated-dependencies:
- dependency-name: rollup
  dependency-version: 4.59.0
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-02-26 11:05:43 +00:00
renovate[bot]
d2b446192f dep updates
Signed-off-by: Zoey <zoey@z0ey.de>
2026-02-26 09:17:55 +01:00
dependabot[bot]
33b1a993ec Bump basic-ftp from 5.1.0 to 5.2.0 in /backend
Bumps [basic-ftp](https://github.com/patrickjuchli/basic-ftp) from 5.1.0 to 5.2.0.
- [Release notes](https://github.com/patrickjuchli/basic-ftp/releases)
- [Changelog](https://github.com/patrickjuchli/basic-ftp/blob/master/CHANGELOG.md)
- [Commits](https://github.com/patrickjuchli/basic-ftp/compare/v5.1.0...v5.2.0)

---
updated-dependencies:
- dependency-name: basic-ftp
  dependency-version: 5.2.0
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-02-25 23:29:28 +00:00
Jamie Curnow
67d40e186f Attempt to fix #5335 by allowing resolver generation to be opted out with an env var 2026-02-26 08:32:02 +10:00
jc21
52be66c43e Merge pull request #5346 from siimaarmaa/develop
Added Estonian language support.
2026-02-25 08:38:17 +10:00
jc21
ec46cabcd4 Merge pull request #5334 from bill-mahoney/fix/atomic-ipv6-config-write
Fix silent nginx config corruption in 50-ipv6.sh
2026-02-25 08:35:11 +10:00
jc21
a7a9cc3acb Merge pull request #5337 from NginxProxyManager/dependabot/npm_and_yarn/test/prod-patch-updates-2e2637830d
Bump eslint from 10.0.0 to 10.0.1 in /test in the prod-patch-updates group
2026-02-25 08:31:03 +10:00
jc21
020b3ebb33 Merge pull request #5338 from NginxProxyManager/dependabot/npm_and_yarn/backend/dev-patch-updates-d8f01b0b39
Bump nodemon from 3.1.13 to 3.1.14 in /backend in the dev-patch-updates group
2026-02-25 08:30:51 +10:00
jc21
c1c4baf389 Merge pull request #5339 from NginxProxyManager/dependabot/npm_and_yarn/test/prod-minor-updates-22dc529b9a
Bump eslint-plugin-cypress from 6.0.0 to 6.1.0 in /test in the prod-minor-updates group
2026-02-25 08:30:40 +10:00
jc21
672b5d6dd9 Merge pull request #5341 from NginxProxyManager/dependabot/npm_and_yarn/backend/prod-patch-updates-07cfa309fd
Bump mysql2 from 3.17.3 to 3.17.5 in /backend in the prod-patch-updates group
2026-02-25 08:30:00 +10:00
jc21
cd230b5878 Merge pull request #5342 from NginxProxyManager/dependabot/npm_and_yarn/frontend/dev-minor-updates-a1de0f9639
Bump happy-dom from 20.6.3 to 20.7.0 in /frontend in the dev-minor-updates group
2026-02-25 08:29:49 +10:00
jc21
a8f35062af Merge pull request #5343 from NginxProxyManager/dependabot/npm_and_yarn/frontend/prod-patch-updates-bfb85ae48b
Bump country-flag-icons from 1.6.13 to 1.6.14 in /frontend in the prod-patch-updates group
2026-02-25 08:29:06 +10:00
Jamie Curnow
da5955412d Command to regenerate nginx configs 2026-02-25 08:13:38 +10:00
siimaarmaa
adb27fe67d Added Estonian language support. First Estonian language update is HelpDocs. By Siim Aarmaa 2026-02-23 20:49:06 +02:00
dependabot[bot]
d874af8692 Bump country-flag-icons in /frontend in the prod-patch-updates group
Bumps the prod-patch-updates group in /frontend with 1 update: [country-flag-icons](https://gitlab.com/catamphetamine/country-flag-icons).


Updates `country-flag-icons` from 1.6.13 to 1.6.14
- [Changelog](https://gitlab.com/catamphetamine/country-flag-icons/blob/master/CHANGELOG.md)
- [Commits](https://gitlab.com/catamphetamine/country-flag-icons/compare/v1.6.13...v1.6.14)

---
updated-dependencies:
- dependency-name: country-flag-icons
  dependency-version: 1.6.14
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: prod-patch-updates
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-02-23 13:51:21 +00:00
dependabot[bot]
0844dade98 Bump happy-dom in /frontend in the dev-minor-updates group
Bumps the dev-minor-updates group in /frontend with 1 update: [happy-dom](https://github.com/capricorn86/happy-dom).


Updates `happy-dom` from 20.6.3 to 20.7.0
- [Release notes](https://github.com/capricorn86/happy-dom/releases)
- [Commits](https://github.com/capricorn86/happy-dom/compare/v20.6.3...v20.7.0)

---
updated-dependencies:
- dependency-name: happy-dom
  dependency-version: 20.7.0
  dependency-type: direct:development
  update-type: version-update:semver-minor
  dependency-group: dev-minor-updates
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-02-23 13:50:56 +00:00
dependabot[bot]
71d59516e8 Bump mysql2 in /backend in the prod-patch-updates group
Bumps the prod-patch-updates group in /backend with 1 update: [mysql2](https://github.com/sidorares/node-mysql2).


Updates `mysql2` from 3.17.3 to 3.17.5
- [Release notes](https://github.com/sidorares/node-mysql2/releases)
- [Changelog](https://github.com/sidorares/node-mysql2/blob/master/Changelog.md)
- [Commits](https://github.com/sidorares/node-mysql2/compare/v3.17.3...v3.17.5)

---
updated-dependencies:
- dependency-name: mysql2
  dependency-version: 3.17.5
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: prod-patch-updates
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-02-23 13:50:35 +00:00
dependabot[bot]
06e220e184 Bump nodemon in /backend in the dev-patch-updates group
Bumps the dev-patch-updates group in /backend with 1 update: [nodemon](https://github.com/remy/nodemon).


Updates `nodemon` from 3.1.13 to 3.1.14
- [Release notes](https://github.com/remy/nodemon/releases)
- [Commits](https://github.com/remy/nodemon/compare/v3.1.13...v3.1.14)

---
updated-dependencies:
- dependency-name: nodemon
  dependency-version: 3.1.14
  dependency-type: direct:development
  update-type: version-update:semver-patch
  dependency-group: dev-patch-updates
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-02-23 13:50:13 +00:00
dependabot[bot]
dc53647e76 Bump eslint-plugin-cypress in /test in the prod-minor-updates group
Bumps the prod-minor-updates group in /test with 1 update: [eslint-plugin-cypress](https://github.com/cypress-io/eslint-plugin-cypress).


Updates `eslint-plugin-cypress` from 6.0.0 to 6.1.0
- [Release notes](https://github.com/cypress-io/eslint-plugin-cypress/releases)
- [Commits](https://github.com/cypress-io/eslint-plugin-cypress/compare/v6.0.0...v6.1.0)

---
updated-dependencies:
- dependency-name: eslint-plugin-cypress
  dependency-version: 6.1.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: prod-minor-updates
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-02-23 13:50:13 +00:00
dependabot[bot]
4c04e89483 Bump eslint in /test in the prod-patch-updates group
Bumps the prod-patch-updates group in /test with 1 update: [eslint](https://github.com/eslint/eslint).


Updates `eslint` from 10.0.0 to 10.0.1
- [Release notes](https://github.com/eslint/eslint/releases)
- [Commits](https://github.com/eslint/eslint/compare/v10.0.0...v10.0.1)

---
updated-dependencies:
- dependency-name: eslint
  dependency-version: 10.0.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: prod-patch-updates
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-02-23 13:50:02 +00:00
Zoey
8c3e0f2809 use node:crypto instead of openssl/use dayjs instead of moment 2026-02-22 18:45:28 +01:00
Zoey
074a01546a add docs route 2026-02-22 18:04:58 +01:00
renovate[bot]
246d31c2fd dep updates 2026-02-22 18:04:58 +01:00
William Mahoney
7241869a9e Fix silent config corruption in 50-ipv6.sh on NFS volumes
Replace unsafe `echo "$(sed ...)" > $FILE` with atomic temp-file write.

The current pattern reads a file with sed inside a command substitution,
then writes the result back via echo redirection. If sed reads an empty
or momentarily unreadable file (e.g., NFS transient issue during
container recreation by Watchtower or similar tools), it produces no
output. The echo then writes exactly 1 byte (a newline) to the config
file, silently destroying its contents.

The fix writes sed output to a temp file first, checks it's non-empty
with `[ -s ]`, then atomically replaces the original via `mv`. If sed
produces empty output, the original file is preserved and a warning is
logged to stderr.
2026-02-20 21:24:40 -07:00
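The unsafe pattern and the atomic replacement described in the commit message above can be sketched in shell. This is an illustrative sketch only; `demo.conf` and the sed expression stand in for the real nginx config and substitution in 50-ipv6.sh:

```shell
#!/bin/sh
# Illustrative demo file standing in for the nginx config.
FILE=demo.conf
printf 'listen 80;\nlisten [::]:80;\n' > "$FILE"

# Unsafe pattern (what the fix removes): if sed reads an empty or
# momentarily unreadable file, the command substitution yields "",
# and echo truncates the config to a single newline:
#   echo "$(sed 's/pattern/replacement/' "$FILE")" > "$FILE"

# Atomic pattern: write sed output to a temp file in the same
# directory, verify it is non-empty with [ -s ], then replace the
# original with mv (atomic within one filesystem). On empty output,
# the original file is preserved and a warning goes to stderr.
TMP="$FILE.tmp.$$"
if sed 's/listen 80;/listen 8080;/' "$FILE" > "$TMP" && [ -s "$TMP" ]; then
    mv "$TMP" "$FILE"
else
    echo "warning: empty sed output, keeping $FILE unchanged" >&2
    rm -f "$TMP"
fi
```

Keeping the temp file in the same directory as the target (rather than `mktemp` in /tmp) matters: `mv` is only an atomic rename when source and destination are on the same filesystem.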
Zoey
0333ab08f3 Merge remote-tracking branch 'upstream/develop' into develop 2026-02-20 17:47:45 +01:00
Brian Norman
dda1c5ebe0 Adding additional instructions to help those using open-appsec cloud and crowdsec as it took me hours to work out why crowdsec was not seeing the events (#2790)
Signed-off-by: Brian Norman <703352+gingemonster@users.noreply.github.com>
Signed-off-by: Zoey <zoey@z0ey.de>
Co-authored-by: Zoey <zoey@z0ey.de>
2026-02-20 17:46:52 +01:00
Zoey
951062a6b9 switch to aws-lc/add patches for zlib-ng and brotli cert compression 2026-02-20 17:41:02 +01:00
jc21
94f6191a21 Merge pull request #5332 from NginxProxyManager/update-deps
Update deps
2026-02-20 11:54:46 +10:00
Jamie Curnow
cac52dd0ff Update linked deps 2026-02-20 11:20:59 +10:00
dependabot[bot]
906f177960 Bump tar from 7.5.7 to 7.5.9 in /test
Bumps [tar](https://github.com/isaacs/node-tar) from 7.5.7 to 7.5.9.
- [Release notes](https://github.com/isaacs/node-tar/releases)
- [Changelog](https://github.com/isaacs/node-tar/blob/main/CHANGELOG.md)
- [Commits](https://github.com/isaacs/node-tar/compare/v7.5.7...v7.5.9)

---
updated-dependencies:
- dependency-name: tar
  dependency-version: 7.5.9
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-02-20 11:20:59 +10:00
dependabot[bot]
f52afced5d Bump systeminformation from 5.30.6 to 5.31.1 in /test
Bumps [systeminformation](https://github.com/sebhildebrandt/systeminformation) from 5.30.6 to 5.31.1.
- [Release notes](https://github.com/sebhildebrandt/systeminformation/releases)
- [Changelog](https://github.com/sebhildebrandt/systeminformation/blob/master/CHANGELOG.md)
- [Commits](https://github.com/sebhildebrandt/systeminformation/compare/v5.30.6...v5.31.1)

---
updated-dependencies:
- dependency-name: systeminformation
  dependency-version: 5.31.1
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-02-20 11:20:59 +10:00
Jamie Curnow
e8224ff0af Update all dependencies 2026-02-20 11:02:56 +10:00
jc21
a4fa83d0ce Merge pull request #5326 from NginxProxyManager/dependabot/npm_and_yarn/test/tar-7.5.9
Bump tar from 7.5.7 to 7.5.9 in /test
2026-02-20 10:53:03 +10:00
jc21
770716ebf8 Merge pull request #5327 from NginxProxyManager/dependabot/npm_and_yarn/test/systeminformation-5.31.1
Bump systeminformation from 5.30.6 to 5.31.1 in /test
2026-02-20 10:52:51 +10:00
Zoey
17c2a68ff0 fix sec-fetch for oidc 2026-02-19 22:27:02 +01:00
Zoey
d144f54a6c fix missing default / location if custom location / is disabled 2026-02-19 22:09:54 +01:00
renovate[bot]
27fe362854 dep updates 2026-02-19 21:41:30 +01:00
Zoey
507a71cf9b improve error logging of ratelimited requests 2026-02-19 21:23:57 +01:00
Zoey
1b713e3a88 patch openappsec attachment to use zlib-ng 2026-02-19 21:04:10 +01:00
Zoey
c0c4f748b2 many security improvements: rate limits, limit upload size, fix: disabling totp and recreating backup codes now requires a valid code, dep updates 2026-02-19 19:11:52 +01:00
Zoey
ae13514410 fix ga-IE langname in selection/update and pin dep
Signed-off-by: Zoey <zoey@z0ey.de>
2026-02-19 18:33:40 +01:00
Zoey
814827be4e merge upstream 2026-02-18 23:39:34 +01:00
Zoey
30ad65c5a6 Merge remote-tracking branch 'upstream/develop' into develop 2026-02-18 23:38:20 +01:00
dependabot[bot]
f1067d3308 Bump systeminformation from 5.30.6 to 5.31.1 in /test
Bumps [systeminformation](https://github.com/sebhildebrandt/systeminformation) from 5.30.6 to 5.31.1.
- [Release notes](https://github.com/sebhildebrandt/systeminformation/releases)
- [Changelog](https://github.com/sebhildebrandt/systeminformation/blob/master/CHANGELOG.md)
- [Commits](https://github.com/sebhildebrandt/systeminformation/compare/v5.30.6...v5.31.1)

---
updated-dependencies:
- dependency-name: systeminformation
  dependency-version: 5.31.1
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-02-18 22:37:39 +00:00
dependabot[bot]
85c1a935ea Bump tar from 7.5.7 to 7.5.9 in /test
Bumps [tar](https://github.com/isaacs/node-tar) from 7.5.7 to 7.5.9.
- [Release notes](https://github.com/isaacs/node-tar/releases)
- [Changelog](https://github.com/isaacs/node-tar/blob/main/CHANGELOG.md)
- [Commits](https://github.com/isaacs/node-tar/compare/v7.5.7...v7.5.9)

---
updated-dependencies:
- dependency-name: tar
  dependency-version: 7.5.9
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-02-18 22:37:36 +00:00
Jamie Curnow
51ef7f3b86 Docs update, use package version instead of latest, refer to better mariadb image 2026-02-19 08:36:41 +10:00
Zoey
8484c69af8 merge upstream 2026-02-18 23:36:21 +01:00
Zoey
316cdf3479 Merge remote-tracking branch 'upstream/develop' into develop 2026-02-18 23:29:08 +01:00
Zoey
dffa4a9888 use zlib-ng instead of zlib/use quickjs-ng for njs/fix #2781/dep updates 2026-02-18 23:26:22 +01:00
jc21
846b94f7e8 Merge pull request #5324 from biodland/develop
chore: added Norwegian translation
2026-02-19 07:51:23 +10:00
Zoey
ace499a546 Merge remote-tracking branch 'upstream/develop' into develop 2026-02-18 15:57:54 +01:00
Birger
19e24c7e7e Rename langNO import to langNo for consistency 2026-02-18 12:24:20 +01:00
Birger
c1bc471dac chore: added Norwegian translation, added missing references. 2026-02-18 12:23:08 +01:00
Birger
608dc0b6bf chore: added Norwegian translation 2026-02-18 11:31:42 +01:00
Jamie Curnow
0dbf268f37 Fix #5284 for older sqlite3 configurations 2026-02-18 08:32:17 +10:00
Zoey
f9a49092ba merge upstream 2026-02-17 07:57:18 +01:00
Zoey
23c49447ab Merge remote-tracking branch 'upstream/develop' into develop 2026-02-17 07:41:00 +01:00
Zoey
dde694b57d force https for the npmplus and goaccess ui
Signed-off-by: Zoey <zoey@z0ey.de>
2026-02-17 07:29:54 +01:00
Zoey
9c9f82dc26 dep and doc updates
Signed-off-by: Zoey <zoey@z0ey.de>
2026-02-17 07:29:54 +01:00
jc21
c7437ddf8f Merge branch 'master' into develop 2026-02-17 15:02:29 +10:00
jc21
627f43c729 Merge pull request #5314 from NginxProxyManager/dependabot/npm_and_yarn/frontend/dev-minor-updates-d71d2fefd7
Bump the dev-minor-updates group in /frontend with 2 updates
2026-02-17 14:57:17 +10:00
jc21
fc4c5aac86 Merge pull request #5315 from NginxProxyManager/dependabot/npm_and_yarn/frontend/prod-patch-updates-95db6732c0
Bump the prod-patch-updates group in /frontend with 2 updates
2026-02-17 14:57:03 +10:00
jc21
aff390f35d Merge pull request #5317 from Tech-no-1/fix-custom-certificates
Fix uploading of custom certificates
2026-02-17 14:50:07 +10:00
Jamie Curnow
5f5a3870e4 Drop support for armv7 builds, bump version, update docs 2026-02-17 12:55:56 +10:00
Tech-no-1
40f363bd4f Fix uploading of custom certificates 2026-02-17 02:58:18 +01:00
dependabot[bot]
678fdd22c6 Bump the dev-minor-updates group in /frontend with 2 updates
Bumps the dev-minor-updates group in /frontend with 2 updates: [@biomejs/biome](https://github.com/biomejs/biome/tree/HEAD/packages/@biomejs/biome) and [happy-dom](https://github.com/capricorn86/happy-dom).


Updates `@biomejs/biome` from 2.3.14 to 2.4.0
- [Release notes](https://github.com/biomejs/biome/releases)
- [Changelog](https://github.com/biomejs/biome/blob/main/packages/@biomejs/biome/CHANGELOG.md)
- [Commits](https://github.com/biomejs/biome/commits/@biomejs/biome@2.4.0/packages/@biomejs/biome)

Updates `happy-dom` from 20.5.3 to 20.6.1
- [Release notes](https://github.com/capricorn86/happy-dom/releases)
- [Commits](https://github.com/capricorn86/happy-dom/compare/v20.5.3...v20.6.1)

---
updated-dependencies:
- dependency-name: "@biomejs/biome"
  dependency-version: 2.4.0
  dependency-type: direct:development
  update-type: version-update:semver-minor
  dependency-group: dev-minor-updates
- dependency-name: happy-dom
  dependency-version: 20.6.1
  dependency-type: direct:development
  update-type: version-update:semver-minor
  dependency-group: dev-minor-updates
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-02-17 01:43:56 +00:00
dependabot[bot]
6c3cc83d66 Bump the prod-patch-updates group in /frontend with 2 updates
Bumps the prod-patch-updates group in /frontend with 2 updates: [@tanstack/react-query](https://github.com/TanStack/query/tree/HEAD/packages/react-query) and [country-flag-icons](https://gitlab.com/catamphetamine/country-flag-icons).


Updates `@tanstack/react-query` from 5.90.20 to 5.90.21
- [Release notes](https://github.com/TanStack/query/releases)
- [Changelog](https://github.com/TanStack/query/blob/main/packages/react-query/CHANGELOG.md)
- [Commits](https://github.com/TanStack/query/commits/@tanstack/react-query@5.90.21/packages/react-query)

Updates `country-flag-icons` from 1.6.12 to 1.6.13
- [Changelog](https://gitlab.com/catamphetamine/country-flag-icons/blob/master/CHANGELOG.md)
- [Commits](https://gitlab.com/catamphetamine/country-flag-icons/compare/v1.6.12...v1.6.13)

---
updated-dependencies:
- dependency-name: "@tanstack/react-query"
  dependency-version: 5.90.21
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: prod-patch-updates
- dependency-name: country-flag-icons
  dependency-version: 1.6.13
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: prod-patch-updates
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-02-17 01:43:37 +00:00
jc21
5916fd5bee Merge pull request #5313 from NginxProxyManager/dependabot/npm_and_yarn/backend/prod-minor-updates-4d12c0f7cc
Bump the prod-minor-updates group in /backend with 3 updates
2026-02-17 11:42:02 +10:00
jc21
f105673904 Merge pull request #5312 from NginxProxyManager/dependabot/npm_and_yarn/frontend/dev-patch-updates-38f6d3601d
Bump the dev-patch-updates group in /frontend with 3 updates
2026-02-17 11:41:44 +10:00
jc21
a37d0b88d6 Merge pull request #5308 from YTKme/ytkme/fix-sqlite-internal-error
Fix SQLite Internal Error
2026-02-17 11:41:31 +10:00
Jamie Curnow
43bc2a743e Add note to docs about retiring armv7 after June 2026 2026-02-17 11:38:17 +10:00
jc21
269545256a Merge pull request #5283 from broker-consulting/feat/add-czech-translation
Add Czech translation and related locale files
2026-02-17 11:06:40 +10:00
jc21
e5df45e9ef Merge pull request #5279 from dodog/develop
Update Slovak translation
2026-02-17 11:05:51 +10:00
dependabot[bot]
5601dd14fc Bump the prod-minor-updates group in /backend with 3 updates
Bumps the prod-minor-updates group in /backend with 3 updates: [ajv](https://github.com/ajv-validator/ajv), [mysql2](https://github.com/sidorares/node-mysql2) and [otplib](https://github.com/yeojz/otplib/tree/HEAD/packages/otplib).


Updates `ajv` from 8.17.1 to 8.18.0
- [Release notes](https://github.com/ajv-validator/ajv/releases)
- [Commits](https://github.com/ajv-validator/ajv/compare/v8.17.1...v8.18.0)

Updates `mysql2` from 3.16.3 to 3.17.1
- [Release notes](https://github.com/sidorares/node-mysql2/releases)
- [Changelog](https://github.com/sidorares/node-mysql2/blob/master/Changelog.md)
- [Commits](https://github.com/sidorares/node-mysql2/compare/v3.16.3...v3.17.1)

Updates `otplib` from 13.2.1 to 13.3.0
- [Release notes](https://github.com/yeojz/otplib/releases)
- [Changelog](https://github.com/yeojz/otplib/blob/main/release.config.json)
- [Commits](https://github.com/yeojz/otplib/commits/v13.3.0/packages/otplib)

---
updated-dependencies:
- dependency-name: ajv
  dependency-version: 8.18.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: prod-minor-updates
- dependency-name: mysql2
  dependency-version: 3.17.1
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: prod-minor-updates
- dependency-name: otplib
  dependency-version: 13.3.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: prod-minor-updates
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-02-16 14:01:58 +00:00
dependabot[bot]
3e5655cfcd Bump the dev-patch-updates group in /frontend with 3 updates
Bumps the dev-patch-updates group in /frontend with 3 updates: [@types/react](https://github.com/DefinitelyTyped/DefinitelyTyped/tree/HEAD/types/react), [@vitejs/plugin-react](https://github.com/vitejs/vite-plugin-react/tree/HEAD/packages/plugin-react) and [vite-tsconfig-paths](https://github.com/aleclarson/vite-tsconfig-paths).


Updates `@types/react` from 19.2.13 to 19.2.14
- [Release notes](https://github.com/DefinitelyTyped/DefinitelyTyped/releases)
- [Commits](https://github.com/DefinitelyTyped/DefinitelyTyped/commits/HEAD/types/react)

Updates `@vitejs/plugin-react` from 5.1.3 to 5.1.4
- [Release notes](https://github.com/vitejs/vite-plugin-react/releases)
- [Changelog](https://github.com/vitejs/vite-plugin-react/blob/main/packages/plugin-react/CHANGELOG.md)
- [Commits](https://github.com/vitejs/vite-plugin-react/commits/plugin-react@5.1.4/packages/plugin-react)

Updates `vite-tsconfig-paths` from 6.1.0 to 6.1.1
- [Release notes](https://github.com/aleclarson/vite-tsconfig-paths/releases)
- [Commits](https://github.com/aleclarson/vite-tsconfig-paths/compare/v6.1.0...v6.1.1)

---
updated-dependencies:
- dependency-name: "@types/react"
  dependency-version: 19.2.14
  dependency-type: direct:development
  update-type: version-update:semver-patch
  dependency-group: dev-patch-updates
- dependency-name: "@vitejs/plugin-react"
  dependency-version: 5.1.4
  dependency-type: direct:development
  update-type: version-update:semver-patch
  dependency-group: dev-patch-updates
- dependency-name: vite-tsconfig-paths
  dependency-version: 6.1.1
  dependency-type: direct:development
  update-type: version-update:semver-patch
  dependency-group: dev-patch-updates
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-02-16 14:01:52 +00:00
Zoey
1e723b2f88 merge upstream 2026-02-16 09:35:32 +01:00
Zoey
27510c888d Merge remote-tracking branch 'upstream/develop' into develop 2026-02-16 09:35:25 +01:00
Zoey
7499388f49 re-add AES128-GCM-SHA256 cipher(suites) 2026-02-16 09:32:58 +01:00
Zoey
ed70405773 dep/doc updates
Signed-off-by: Zoey <zoey@z0ey.de>
2026-02-16 09:32:58 +01:00
jc21
a90af83270 Merge pull request #5309 from NginxProxyManager/dependabot/npm_and_yarn/test/prod-patch-updates-d4d031af8e
Bump @quobix/vacuum from 0.23.5 to 0.23.8 in /test in the prod-patch-updates group across 1 directory
2026-02-16 11:57:14 +10:00
jc21
619a8e5acc Merge pull request #5310 from NginxProxyManager/dependabot/npm_and_yarn/test/prod-minor-updates-aef0194d28
Bump eslint-plugin-cypress from 5.2.1 to 5.3.0 in /test in the prod-minor-updates group across 1 directory
2026-02-16 11:57:05 +10:00
jc21
6dcdefb57e Merge pull request #5294 from NginxProxyManager/dependabot/npm_and_yarn/frontend/dev-patch-updates-cb582034f5
Bump the dev-patch-updates group in /frontend with 2 updates
2026-02-16 10:30:46 +10:00
jc21
787616010b Merge pull request #5289 from kiaxseventh/develop
Added ArvanCloud DNS plugin support via certbot-dns-arvan package
2026-02-16 10:30:34 +10:00
dependabot[bot]
5891c291d2 Bump eslint-plugin-cypress
Bumps the prod-minor-updates group with 1 update in the /test directory: [eslint-plugin-cypress](https://github.com/cypress-io/eslint-plugin-cypress).


Updates `eslint-plugin-cypress` from 5.2.1 to 5.3.0
- [Release notes](https://github.com/cypress-io/eslint-plugin-cypress/releases)
- [Commits](https://github.com/cypress-io/eslint-plugin-cypress/compare/v5.2.1...v5.3.0)

---
updated-dependencies:
- dependency-name: eslint-plugin-cypress
  dependency-version: 5.3.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: prod-minor-updates
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-02-16 00:29:59 +00:00
dependabot[bot]
41a2a41e67 Bump @quobix/vacuum
Bumps the prod-patch-updates group with 1 update in the /test directory: [@quobix/vacuum](https://github.com/daveshanley/vacuum).


Updates `@quobix/vacuum` from 0.23.5 to 0.23.8
- [Release notes](https://github.com/daveshanley/vacuum/releases)
- [Commits](https://github.com/daveshanley/vacuum/compare/v0.23.5...v0.23.8)

---
updated-dependencies:
- dependency-name: "@quobix/vacuum"
  dependency-version: 0.23.8
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: prod-patch-updates
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-02-16 00:29:36 +00:00
jc21
379099d7ed Merge pull request #5292 from NginxProxyManager/dependabot/npm_and_yarn/test/prod-minor-updates-69046ce3b8
Bump cypress from 15.9.0 to 15.10.0 in /test in the prod-minor-updates group
2026-02-16 10:29:02 +10:00
jc21
dbeab93c02 Merge pull request #5293 from NginxProxyManager/dependabot/npm_and_yarn/test/eslint-10.0.0
Bump eslint from 9.39.2 to 10.0.0 in /test
2026-02-16 10:28:52 +10:00
jc21
010cb562a0 Merge pull request #5295 from NginxProxyManager/dependabot/npm_and_yarn/frontend/dev-minor-updates-24fcbaaf54
Bump vite-tsconfig-paths from 6.0.5 to 6.1.0 in /frontend in the dev-minor-updates group
2026-02-16 10:28:39 +10:00
jc21
7ff2fc1900 Merge pull request #5299 from NginxProxyManager/dependabot/npm_and_yarn/test/axios-1.13.5
Bump axios from 1.13.4 to 1.13.5 in /test
2026-02-16 10:28:26 +10:00
jc21
1c189a1888 Merge pull request #5300 from NginxProxyManager/dependabot/npm_and_yarn/test/jsonpath-1.2.1
Bump jsonpath from 1.1.1 to 1.2.1 in /test
2026-02-16 10:28:16 +10:00
jc21
f3c46487f6 Merge pull request #5303 from 7heMech/fix-2fa-logout
Add guardrail to fix disabling 2fa
2026-02-16 10:27:58 +10:00
jc21
fcca481d1b Merge pull request #5305 from NginxProxyManager/dependabot/npm_and_yarn/backend/qs-6.14.2
Bump qs from 6.14.1 to 6.14.2 in /backend
2026-02-16 10:27:28 +10:00
jc21
c59c237000 Merge pull request #5306 from NginxProxyManager/dependabot/npm_and_yarn/test/qs-6.14.2
Bump qs from 6.14.1 to 6.14.2 in /test
2026-02-16 10:27:17 +10:00
Zoey
94a0e4a42f fix default of npmplus_x_frame_options in custom locations again 2026-02-15 10:09:23 +01:00
Zoey
8cd52e7f65 fix default of npmplus_upstream_compression/npmplus_x_frame_options in custom locations 2026-02-15 09:57:40 +01:00
Zoey
04b3c36f8f small ui improvements 2026-02-15 09:44:18 +01:00
Zoey
8af895cc67 fix custom locations being always marked as off 2026-02-15 09:35:43 +01:00
Yan Kuang
a62b6de9f2 Update SQLite client configuration from sqlite3 to better-sqlite3 2026-02-14 23:53:43 -08:00
Zoey
c2c33709d6 readd NGINX_WORKER_CONNECTIONS env/small fixes 2026-02-15 08:21:57 +01:00
Zoey
a2ba84ea6f prepare next release 2026-02-14 22:46:43 +01:00
Zoey
ea935ab578 split fancyindex/upstream compression button and add button to disable crowdsec appsec 2026-02-14 21:42:10 +01:00
Zoey
1025a4fcf3 add button to disable custom locations 2026-02-14 21:42:10 +01:00
Zoey
bdfc5a6086 remove NGINX_LOAD_GEOIP_MODULE (NOT geoip2) 2026-02-14 21:42:10 +01:00
Zoey
a03b9e008d update docs for new selections and csp
Signed-off-by: Zoey <zoey@z0ey.de>
2026-02-14 21:42:10 +01:00
Zoey
51a2f0549e hide http2 button in frontend and add http3 button 2026-02-14 21:42:10 +01:00
Zoey
559b5d2ab8 rename http3 column in backend 2026-02-14 21:42:10 +01:00
Zoey
50f898f805 invert SKIP_IP_RANGES by renaming it to TRUST_CLOUDFLARE 2026-02-14 17:51:15 +01:00
Zoey
3cdfb6d08d validate AUTH_REQUEST_ envs/fix proxying to sub paths
Signed-off-by: Zoey <zoey@z0ey.de>
2026-02-14 17:51:15 +01:00
Zoey
a0f8078dae add authentik-send-basic-auth 2026-02-14 17:51:15 +01:00
Zoey
10db251d49 use execFileSync in vite config
Signed-off-by: Zoey <zoey@z0ey.de>
2026-02-14 17:51:15 +01:00
Zoey
ac6d62aa4d fix csp 2026-02-14 17:51:15 +01:00
Zoey
d43a4f8fc2 only send X-Original-URL/X-Original-Method if needed 2026-02-14 17:51:15 +01:00
renovate[bot]
b9191f296f dep updates 2026-02-14 17:51:15 +01:00
dependabot[bot]
d92cc953e1 Bump qs from 6.14.1 to 6.14.2 in /test
Bumps [qs](https://github.com/ljharb/qs) from 6.14.1 to 6.14.2.
- [Changelog](https://github.com/ljharb/qs/blob/main/CHANGELOG.md)
- [Commits](https://github.com/ljharb/qs/compare/v6.14.1...v6.14.2)

---
updated-dependencies:
- dependency-name: qs
  dependency-version: 6.14.2
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-02-14 12:44:07 +00:00
dependabot[bot]
1b6412688b Bump qs from 6.14.1 to 6.14.2 in /backend
Bumps [qs](https://github.com/ljharb/qs) from 6.14.1 to 6.14.2.
- [Changelog](https://github.com/ljharb/qs/blob/main/CHANGELOG.md)
- [Commits](https://github.com/ljharb/qs/compare/v6.14.1...v6.14.2)

---
updated-dependencies:
- dependency-name: qs
  dependency-version: 6.14.2
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-02-14 12:44:05 +00:00
7heMech
1d14f72ba5 Add guardrail for disable 2fa 2026-02-14 06:28:59 +00:00
dependabot[bot]
099243aff7 Bump jsonpath from 1.1.1 to 1.2.1 in /test
Bumps [jsonpath](https://github.com/dchester/jsonpath) from 1.1.1 to 1.2.1.
- [Commits](https://github.com/dchester/jsonpath/commits/1.2.1)

---
updated-dependencies:
- dependency-name: jsonpath
  dependency-version: 1.2.1
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-02-12 15:31:26 +00:00
Zoey
27863855f9 fix: slow page loading with basic auth and only saved hashed passwords in the db 2026-02-12 00:44:45 +01:00
dependabot[bot]
5fe12f69ba Bump axios from 1.13.4 to 1.13.5 in /test
Bumps [axios](https://github.com/axios/axios) from 1.13.4 to 1.13.5.
- [Release notes](https://github.com/axios/axios/releases)
- [Changelog](https://github.com/axios/axios/blob/v1.x/CHANGELOG.md)
- [Commits](https://github.com/axios/axios/compare/v1.13.4...v1.13.5)

---
updated-dependencies:
- dependency-name: axios
  dependency-version: 1.13.5
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-02-11 20:22:09 +00:00
Zoey
c536f98483 fix #2740 2026-02-11 19:09:04 +01:00
Zoey
6d7f4a8b74 merge upstream 2026-02-11 19:09:04 +01:00
Zoey
f6dc10bf54 Merge remote-tracking branch 'upstream/develop' into develop 2026-02-11 18:55:21 +01:00
Zoey
644e3de10e add CSP drafts to NPMplus UI and goaccess 2026-02-11 18:04:38 +01:00
renovate[bot]
2630a628d4 dep updates 2026-02-11 18:04:38 +01:00
jc21
011191f645 Merge pull request #5260 from jerry-yuan/develop
Add trust_forwarded_proto option for SSL redirect handling in r…
2026-02-11 14:54:00 +10:00
Zoey
1178bfbc88 add some untested templates for auth providers 2026-02-10 23:06:36 +01:00
Zoey
d0554a2a5b Merge remote-tracking branch 'upstream/develop' into develop 2026-02-10 20:07:23 +01:00
Zoey
39ae2e6c51 fix: unsetting the acme profile does not reset it for existing certs, which will cause issues when switching to a different CA which does not support this profile 2026-02-10 20:06:21 +01:00
Zoey
4ce99b36ee add x_frame_options to the webui (and auth_request but it does nothing currently) 2026-02-10 20:06:21 +01:00
Zoey
0a41246be9 breaking: change proxy host api/split proxy buffering button 2026-02-10 20:06:21 +01:00
Zoey
312c3f1183 keep upstreams Referrer-Policy if sent 2026-02-10 20:06:21 +01:00
Zoey
f9d89e21a8 fix #2704 2026-02-10 20:06:21 +01:00
renovate[bot]
7309882798 dep updates 2026-02-10 20:06:20 +01:00
jerry-yuan
eeab425ea4 fix: unknown "trust_forwarded_proto" variable error when run with already created old virtual hosts 2026-02-10 10:53:17 +00:00
Jamie Curnow
13fbc53591 Fix bug when adding invalid custom certs 2026-02-10 14:54:33 +10:00
dependabot[bot]
3f2aec7b86 Bump vite-tsconfig-paths in /frontend in the dev-minor-updates group
Bumps the dev-minor-updates group in /frontend with 1 update: [vite-tsconfig-paths](https://github.com/aleclarson/vite-tsconfig-paths).


Updates `vite-tsconfig-paths` from 6.0.5 to 6.1.0
- [Release notes](https://github.com/aleclarson/vite-tsconfig-paths/releases)
- [Commits](https://github.com/aleclarson/vite-tsconfig-paths/compare/v6.0.5...v6.1.0)

---
updated-dependencies:
- dependency-name: vite-tsconfig-paths
  dependency-version: 6.1.0
  dependency-type: direct:development
  update-type: version-update:semver-minor
  dependency-group: dev-minor-updates
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-02-09 14:29:32 +00:00
dependabot[bot]
09a3d65aa1 Bump the dev-patch-updates group in /frontend with 2 updates
Bumps the dev-patch-updates group in /frontend with 2 updates: [@types/react](https://github.com/DefinitelyTyped/DefinitelyTyped/tree/HEAD/types/react) and [happy-dom](https://github.com/capricorn86/happy-dom).


Updates `@types/react` from 19.2.10 to 19.2.13
- [Release notes](https://github.com/DefinitelyTyped/DefinitelyTyped/releases)
- [Commits](https://github.com/DefinitelyTyped/DefinitelyTyped/commits/HEAD/types/react)

Updates `happy-dom` from 20.5.0 to 20.5.3
- [Release notes](https://github.com/capricorn86/happy-dom/releases)
- [Commits](https://github.com/capricorn86/happy-dom/compare/v20.5.0...v20.5.3)

---
updated-dependencies:
- dependency-name: "@types/react"
  dependency-version: 19.2.13
  dependency-type: direct:development
  update-type: version-update:semver-patch
  dependency-group: dev-patch-updates
- dependency-name: happy-dom
  dependency-version: 20.5.3
  dependency-type: direct:development
  update-type: version-update:semver-patch
  dependency-group: dev-patch-updates
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-02-09 14:29:16 +00:00
dependabot[bot]
c910cf9512 Bump eslint from 9.39.2 to 10.0.0 in /test
Bumps [eslint](https://github.com/eslint/eslint) from 9.39.2 to 10.0.0.
- [Release notes](https://github.com/eslint/eslint/releases)
- [Commits](https://github.com/eslint/eslint/compare/v9.39.2...v10.0.0)

---
updated-dependencies:
- dependency-name: eslint
  dependency-version: 10.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-02-09 14:29:11 +00:00
dependabot[bot]
304c51aae8 Bump cypress in /test in the prod-minor-updates group
Bumps the prod-minor-updates group in /test with 1 update: [cypress](https://github.com/cypress-io/cypress).


Updates `cypress` from 15.9.0 to 15.10.0
- [Release notes](https://github.com/cypress-io/cypress/releases)
- [Changelog](https://github.com/cypress-io/cypress/blob/develop/CHANGELOG.md)
- [Commits](https://github.com/cypress-io/cypress/compare/v15.9.0...v15.10.0)

---
updated-dependencies:
- dependency-name: cypress
  dependency-version: 15.10.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: prod-minor-updates
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-02-09 14:28:59 +00:00
kiaxseventh
b552eb90ed Add ArvanCloud DNS support 2026-02-09 13:02:18 +03:30
Zoey
c4e28331d3 fix #2652 2026-02-07 12:10:06 +01:00
Zoey
4b5be67742 improve readability of marked text
Signed-off-by: Zoey <zoey@z0ey.de>
2026-02-07 12:10:06 +01:00
Tomáš Novák
b78ef9bcd3 Add Czech translation and related locale files 2026-02-06 17:02:47 +01:00
Zoey
bb4286614d fix #2702 2026-02-06 07:14:54 +01:00
Jozef Gaal
7c67fafedf Update Slovak translation
Updated Slovak translations for 2FA and other features
2026-02-06 01:23:59 +01:00
jc21
47b367d61e Merge pull request #5276 from NginxProxyManager/develop
v2.13.7
2026-02-06 07:11:49 +10:00
Jerry8块
b7402d47a0 Merge branch 'NginxProxyManager:develop' into develop 2026-02-03 15:10:13 +08:00
jerry-yuan
21f63e3db3 fix: delete advanced options from redir_host/dead_host/streams 2026-02-01 10:38:09 +00:00
Jerry
232b5b759a fix: make variable name meaningful 2026-02-01 00:16:17 +08:00
jerry-yuan
054742539f fix: Supplement Swagger documentation 2026-01-31 14:17:05 +00:00
jerry-yuan
2b6a617599 fix: reformat migration scripts 2026-01-31 13:28:53 +00:00
jerry-yuan
187d21a0d5 feat: add trust_forwarded_proto option for SSL redirect handling in reverse proxy scenarios
When Nginx is behind another proxy server (like CloudFlare or AWS ALB), the force-SSL
feature can cause redirect loops because Nginx sees the connection as plain HTTP
while SSL is already handled upstream. This adds a new boolean option to trust
the X-Forwarded-Proto header from upstream proxies.

Changes:
- Add `trust_forwarded_proto` column to proxy_host table (migration)
- Update model and API schema to support the new boolean field
- Modify force-ssl Nginx template to check X-Forwarded-Proto/X-Forwarded-Scheme
- Add map directives in nginx.conf to validate and sanitize forwarded headers
- Add advanced option toggle in frontend UI with i18n support (EN/ZH)
- Set proxy headers from validated map variables instead of $scheme

This allows administrators to control SSL redirect behavior when Nginx is deployed
behind a TLS-terminating proxy.
2026-01-31 13:11:47 +00:00
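The map-based header validation described in the commit message above can be sketched roughly as follows. This is an illustrative example of the technique, not the exact directives shipped in the PR; the variable name `$validated_forwarded_proto` is assumed for illustration:

```nginx
# Sketch of the validate-and-sanitize approach described above.
# Only well-formed X-Forwarded-Proto values are accepted; anything
# else falls back to the scheme of the direct connection.
map $http_x_forwarded_proto $validated_forwarded_proto {
    default  $scheme;   # header absent or malformed: use the real scheme
    "https"  "https";
    "http"   "http";
}

server {
    listen 80;
    # force-ssl check against the *validated* scheme, so requests whose
    # TLS was already terminated upstream do not enter a redirect loop
    if ($validated_forwarded_proto = "http") {
        return 301 https://$host$request_uri;
    }
}
```

With `trust_forwarded_proto` disabled, the template would presumably keep checking `$scheme` directly, preserving the previous behavior for hosts not behind a TLS-terminating proxy.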
222 changed files with 6538 additions and 2109 deletions

View File

@@ -13,7 +13,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v6
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: Read version
id: version
run: echo "version=$(cat caddy/Dockerfile | grep "^COPY --from=caddy:.*$" | head -1 | sed "s|COPY --from=caddy:\([0-9.]\+\).*|\1|g")" >> $GITHUB_OUTPUT

View File

@@ -13,28 +13,28 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v6
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
uses: docker/setup-qemu-action@c7c53464625b32c7a7e944ae62b3e17d2b600130 # v3
with:
platforms: all
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
uses: docker/setup-buildx-action@8d2750c68a42422c14e847fe6c8ac0403b4cbd6f # v3
with:
driver-opts: env.BUILDKIT_STEP_LOG_MAX_SIZE=-1
- name: Login to DockerHub
uses: docker/login-action@v3
uses: docker/login-action@c94ce9fb468520275223c153574b00df6fe4bcc9 # v3
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
- name: Login to GitHub Container Registry
uses: docker/login-action@v3
uses: docker/login-action@c94ce9fb468520275223c153574b00df6fe4bcc9 # v3
with:
registry: ghcr.io
username: zoeyvid
password: ${{ github.token }}
- name: Build
uses: docker/build-push-action@v6
uses: docker/build-push-action@10e90e3645eae34f1e60eeb005ba3a3d33f178e8 # v6
with:
context: caddy
platforms: linux/amd64,linux/arm64

View File

@@ -11,7 +11,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v6
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: update nginx version
id: update
run: |
@@ -26,7 +26,7 @@ jobs:
sed -i "s|ARG NGINX_VER=.*|ARG NGINX_VER=$NGINX_VER|" Dockerfile
echo "version=$NGINX_VER" >> $GITHUB_OUTPUT
- name: Create Pull Request
uses: peter-evans/create-pull-request@v8
uses: peter-evans/create-pull-request@c0f553fe549906ede9cf27b5156039d195d2ece0 # v8
with:
signoff: true
delete-branch: true
@@ -39,24 +39,22 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v6
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: update dynamic_tls_records version
id: update
run: |
git clone https://github.com/nginx-modules/ngx_http_tls_dyn_size ngx_http_tls_dyn_size
git clone --depth 1 https://github.com/nginx-modules/ngx_http_tls_dyn_size ngx_http_tls_dyn_size
DTR_VER="$(
ls ngx_http_tls_dyn_size/nginx__dynamic_tls_records_*.patch \
| sed "s|ngx_http_tls_dyn_size/nginx__dynamic_tls_records_\([0-9.]\+\)+.patch|\1|g" \
| sort -V \
| grep -v rc \
| tail -1 \
| sed "s|\^{}||g"
| tail -1
)"
rm -r ngx_http_tls_dyn_size
sed -i "s|ARG DTR_VER=.*|ARG DTR_VER=$DTR_VER|" Dockerfile
echo "version=$DTR_VER" >> $GITHUB_OUTPUT
- name: Create Pull Request
uses: peter-evans/create-pull-request@v8
uses: peter-evans/create-pull-request@c0f553fe549906ede9cf27b5156039d195d2ece0 # v8
with:
signoff: true
delete-branch: true
@@ -68,22 +66,21 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v6
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: update resolver_conf_parsing version
id: update
run: |
git clone https://github.com/openresty/openresty openresty
git clone --depth 1 https://github.com/openresty/openresty openresty
RCP_VER="$(
ls openresty/patches/nginx \
| sort -V \
| tail -1 \
| sed "s|\^{}||g"
| tail -1
)"
rm -r openresty
sed -i "s|ARG RCP_VER=.*|ARG RCP_VER=$RCP_VER|" Dockerfile
echo "version=$RCP_VER" >> $GITHUB_OUTPUT
- name: Create Pull Request
uses: peter-evans/create-pull-request@v8
uses: peter-evans/create-pull-request@c0f553fe549906ede9cf27b5156039d195d2ece0 # v8
with:
signoff: true
delete-branch: true
@@ -91,12 +88,39 @@ jobs:
branch: update-resolver_conf_parsing-version
title: update resolver_conf_parsing version to ${{ steps.update.outputs.version }}
body: update resolver_conf_parsing version to ${{ steps.update.outputs.version }}
zlib-ng-patch-update:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: update zlib-ng-patch version
id: update
run: |
git clone --depth 1 https://github.com/zlib-ng/patches zlib-ng-patches
ZNP_VER="$(
ls zlib-ng-patches/nginx/*-zlib-ng.patch \
| sed "s|zlib-ng-patches/nginx/\([0-9.]\+\)-zlib-ng.patch|\1|g" \
| sort -V \
| tail -1
)"
rm -r zlib-ng-patches
sed -i "s|ARG ZNP_VER=.*|ARG ZNP_VER=$ZNP_VER|" Dockerfile
echo "version=$ZNP_VER" >> $GITHUB_OUTPUT
- name: Create Pull Request
uses: peter-evans/create-pull-request@c0f553fe549906ede9cf27b5156039d195d2ece0 # v8
with:
signoff: true
delete-branch: true
commit-message: update zlib-ng-patch version to ${{ steps.update.outputs.version }}
branch: update-zlib-ng-patch-version
title: update zlib-ng-patch version to ${{ steps.update.outputs.version }}
body: update zlib-ng-patch version to ${{ steps.update.outputs.version }}
ngx_brotli-update:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v6
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: update ngx_brotli version
id: update
run: |
@@ -111,7 +135,7 @@ jobs:
sed -i "s|ARG NB_VER=.*|ARG NB_VER=$NB_VER|" Dockerfile
echo "version=$NB_VER" >> $GITHUB_OUTPUT
- name: Create Pull Request
uses: peter-evans/create-pull-request@v8
uses: peter-evans/create-pull-request@c0f553fe549906ede9cf27b5156039d195d2ece0 # v8
if: ${{ steps.update.outputs.version != '' }}
with:
signoff: true
@@ -124,7 +148,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v6
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: update ngx_unbrotli version
id: update
run: |
@@ -139,7 +163,7 @@ jobs:
sed -i "s|ARG NUB_VER=.*|ARG NUB_VER=$NUB_VER|" Dockerfile
echo "version=$NUB_VER" >> $GITHUB_OUTPUT
- name: Create Pull Request
uses: peter-evans/create-pull-request@v8
uses: peter-evans/create-pull-request@c0f553fe549906ede9cf27b5156039d195d2ece0 # v8
if: ${{ steps.update.outputs.version != '' }}
with:
signoff: true
@@ -152,7 +176,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v6
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: update zstd-nginx-module version
id: update
run: |
@@ -167,7 +191,7 @@ jobs:
sed -i "s|ARG ZNM_VER=.*|ARG ZNM_VER=$ZNM_VER|" Dockerfile
echo "version=$ZNM_VER" >> $GITHUB_OUTPUT
- name: Create Pull Request
uses: peter-evans/create-pull-request@v8
uses: peter-evans/create-pull-request@c0f553fe549906ede9cf27b5156039d195d2ece0 # v8
if: ${{ steps.update.outputs.version != '0.1.1' }}
with:
signoff: true
@@ -180,7 +204,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v6
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: update ngx_http_unzstd_filter_module version
id: update
run: |
@@ -195,7 +219,7 @@ jobs:
sed -i "s|ARG NHUZFM_VER=.*|ARG NHUZFM_VER=$NHUZFM_VER|" Dockerfile
echo "version=$NHUZFM_VER" >> $GITHUB_OUTPUT
- name: Create Pull Request
uses: peter-evans/create-pull-request@v8
uses: peter-evans/create-pull-request@c0f553fe549906ede9cf27b5156039d195d2ece0 # v8
if: ${{ steps.update.outputs.version != '' }}
with:
signoff: true
@@ -208,7 +232,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v6
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: update ngx-fancyindex version
id: update
run: |
@@ -223,7 +247,7 @@ jobs:
sed -i "s|ARG NF_VER=.*|ARG NF_VER=$NF_VER|" Dockerfile
echo "version=$NF_VER" >> $GITHUB_OUTPUT
- name: Create Pull Request
uses: peter-evans/create-pull-request@v8
uses: peter-evans/create-pull-request@c0f553fe549906ede9cf27b5156039d195d2ece0 # v8
if: ${{ steps.update.outputs.version != 'v0.5.2' }}
with:
signoff: true
@@ -236,7 +260,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v6
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: update headers-more-nginx-module version
id: update
run: |
@@ -251,7 +275,7 @@ jobs:
sed -i "s|ARG HMNM_VER=.*|ARG HMNM_VER=$HMNM_VER|" Dockerfile
echo "version=$HMNM_VER" >> $GITHUB_OUTPUT
- name: Create Pull Request
uses: peter-evans/create-pull-request@v8
uses: peter-evans/create-pull-request@c0f553fe549906ede9cf27b5156039d195d2ece0 # v8
with:
signoff: true
delete-branch: true
@@ -263,7 +287,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v6
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: update ngx_devel_kit version
id: update
run: |
@@ -278,7 +302,7 @@ jobs:
sed -i "s|ARG NDK_VER=.*|ARG NDK_VER=$NDK_VER|" Dockerfile
echo "version=$NDK_VER" >> $GITHUB_OUTPUT
- name: Create Pull Request
uses: peter-evans/create-pull-request@v8
uses: peter-evans/create-pull-request@c0f553fe549906ede9cf27b5156039d195d2ece0 # v8
with:
signoff: true
delete-branch: true
@@ -290,7 +314,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v6
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: update lua-nginx-module version
id: update
run: |
@@ -305,7 +329,7 @@ jobs:
sed -i "s|ARG LNM_VER=.*|ARG LNM_VER=$LNM_VER|" Dockerfile
echo "version=$LNM_VER" >> $GITHUB_OUTPUT
- name: Create Pull Request
uses: peter-evans/create-pull-request@v8
uses: peter-evans/create-pull-request@c0f553fe549906ede9cf27b5156039d195d2ece0 # v8
with:
signoff: true
delete-branch: true
@@ -319,7 +343,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v6
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: update njs version
id: update
run: |
@@ -334,7 +358,7 @@ jobs:
sed -i "s|ARG NJS_VER=.*|ARG NJS_VER=$NJS_VER|" Dockerfile
echo "version=$NJS_VER" >> $GITHUB_OUTPUT
- name: Create Pull Request
uses: peter-evans/create-pull-request@v8
uses: peter-evans/create-pull-request@c0f553fe549906ede9cf27b5156039d195d2ece0 # v8
with:
signoff: true
delete-branch: true
@@ -346,7 +370,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v6
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: update nginx-auth-ldap version
id: update
run: |
@@ -361,7 +385,7 @@ jobs:
sed -i "s|ARG NAL_VER=.*|ARG NAL_VER=$NAL_VER|" Dockerfile
echo "version=$NAL_VER" >> $GITHUB_OUTPUT
- name: Create Pull Request
uses: peter-evans/create-pull-request@v8
uses: peter-evans/create-pull-request@c0f553fe549906ede9cf27b5156039d195d2ece0 # v8
if: ${{ steps.update.outputs.version != 'v0.1' }}
with:
signoff: true
@@ -374,7 +398,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v6
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: update vts version
id: update
run: |
@@ -389,7 +413,7 @@ jobs:
sed -i "s|ARG VTS_VER=.*|ARG VTS_VER=$VTS_VER|" Dockerfile
echo "version=$VTS_VER" >> $GITHUB_OUTPUT
- name: Create Pull Request
uses: peter-evans/create-pull-request@v8
uses: peter-evans/create-pull-request@c0f553fe549906ede9cf27b5156039d195d2ece0 # v8
with:
signoff: true
delete-branch: true
@@ -401,7 +425,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v6
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: update nginx-ntlm-module version
id: update
run: |
@@ -416,7 +440,7 @@ jobs:
sed -i "s|ARG NNTLM_VER=.*|ARG NNTLM_VER=$NNTLM_VER|" Dockerfile
echo "version=$NNTLM_VER" >> $GITHUB_OUTPUT
- name: Create Pull Request
uses: peter-evans/create-pull-request@v8
uses: peter-evans/create-pull-request@c0f553fe549906ede9cf27b5156039d195d2ece0 # v8
if: ${{ steps.update.outputs.version != 'v1.19.3-beta.1' }}
with:
signoff: true
@@ -429,7 +453,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v6
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: update ngx_http_geoip2_module version
id: update
run: |
@@ -444,7 +468,7 @@ jobs:
sed -i "s|ARG NHG2M_VER=.*|ARG NHG2M_VER=$NHG2M_VER|" Dockerfile
echo "version=$NHG2M_VER" >> $GITHUB_OUTPUT
- name: Create Pull Request
uses: peter-evans/create-pull-request@v8
uses: peter-evans/create-pull-request@c0f553fe549906ede9cf27b5156039d195d2ece0 # v8
with:
signoff: true
delete-branch: true
@@ -457,7 +481,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v6
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: update lua-resty-core version
id: update
run: |
@@ -472,7 +496,7 @@ jobs:
sed -i "s|ARG LRC_VER=.*|ARG LRC_VER=$LRC_VER|" Dockerfile
echo "version=$LRC_VER" >> $GITHUB_OUTPUT
- name: Create Pull Request
uses: peter-evans/create-pull-request@v8
uses: peter-evans/create-pull-request@c0f553fe549906ede9cf27b5156039d195d2ece0 # v8
with:
signoff: true
delete-branch: true
@@ -484,7 +508,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v6
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: update lua-resty-lrucache version
id: update
run: |
@@ -499,7 +523,7 @@ jobs:
sed -i "s|ARG LRL_VER=.*|ARG LRL_VER=$LRL_VER|" Dockerfile
echo "version=$LRL_VER" >> $GITHUB_OUTPUT
- name: Create Pull Request
uses: peter-evans/create-pull-request@v8
uses: peter-evans/create-pull-request@c0f553fe549906ede9cf27b5156039d195d2ece0 # v8
with:
signoff: true
delete-branch: true
@@ -512,7 +536,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v6
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: update lua-cs-bouncer version
id: update
run: |
@@ -529,7 +553,7 @@ jobs:
wget https://raw.githubusercontent.com/crowdsecurity/cs-nginx-bouncer/refs/heads/main/nginx/crowdsec_nginx.conf -O rootfs/usr/local/nginx/conf/conf.d/crowdsec.conf.original
wget https://raw.githubusercontent.com/crowdsecurity/lua-cs-bouncer/refs/tags/"$LCSB_VER"/config_example.conf -O rootfs/etc/crowdsec.conf.original
- name: Create Pull Request
uses: peter-evans/create-pull-request@v8
uses: peter-evans/create-pull-request@c0f553fe549906ede9cf27b5156039d195d2ece0 # v8
with:
signoff: true
delete-branch: true

View File

@@ -12,13 +12,13 @@ jobs:
if: ${{ github.repository_owner == 'ZoeyVid' }}
steps:
- name: Checkout
uses: actions/checkout@v6
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
uses: docker/setup-buildx-action@8d2750c68a42422c14e847fe6c8ac0403b4cbd6f # v3
with:
driver-opts: env.BUILDKIT_STEP_LOG_MAX_SIZE=-1
- name: Login to GitHub Container Registry
uses: docker/login-action@v3
uses: docker/login-action@c94ce9fb468520275223c153574b00df6fe4bcc9 # v3
with:
registry: ghcr.io
username: zoeyvid
@@ -28,7 +28,7 @@ jobs:
sed -i "s|\"0.0.0\"|\"${{ inputs.tag }}-$(git rev-parse --short HEAD)-$(cat .version)\"|g" frontend/package.json
sed -i "s|\"0.0.0\"|\"${{ inputs.tag }}-$(git rev-parse --short HEAD)-$(cat .version)\"|g" backend/package.json
- name: Build
uses: docker/build-push-action@v6
uses: docker/build-push-action@10e90e3645eae34f1e60eeb005ba3a3d33f178e8 # v6
with:
context: .
file: ./Dockerfile
@@ -42,13 +42,13 @@ jobs:
if: ${{ github.repository_owner == 'ZoeyVid' }}
steps:
- name: Checkout
uses: actions/checkout@v6
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
uses: docker/setup-buildx-action@8d2750c68a42422c14e847fe6c8ac0403b4cbd6f # v3
with:
driver-opts: env.BUILDKIT_STEP_LOG_MAX_SIZE=-1
- name: Login to GitHub Container Registry
uses: docker/login-action@v3
uses: docker/login-action@c94ce9fb468520275223c153574b00df6fe4bcc9 # v3
with:
registry: ghcr.io
username: zoeyvid
@@ -58,7 +58,7 @@ jobs:
sed -i "s|\"0.0.0\"|\"${{ inputs.tag }}-$(git rev-parse --short HEAD)-$(cat .version)\"|g" frontend/package.json
sed -i "s|\"0.0.0\"|\"${{ inputs.tag }}-$(git rev-parse --short HEAD)-$(cat .version)\"|g" backend/package.json
- name: Build
uses: docker/build-push-action@v6
uses: docker/build-push-action@10e90e3645eae34f1e60eeb005ba3a3d33f178e8 # v6
with:
context: .
file: ./Dockerfile
@@ -73,12 +73,12 @@ jobs:
if: ${{ github.repository_owner == 'ZoeyVid' }}
steps:
- name: Login to DockerHub
uses: docker/login-action@v3
uses: docker/login-action@c94ce9fb468520275223c153574b00df6fe4bcc9 # v3
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
- name: Login to GitHub Container Registry
uses: docker/login-action@v3
uses: docker/login-action@c94ce9fb468520275223c153574b00df6fe4bcc9 # v3
with:
registry: ghcr.io
username: zoeyvid

View File

@@ -12,13 +12,13 @@ jobs:
if: ${{ github.repository_owner == 'ZoeyVid' }}
steps:
- name: Checkout
uses: actions/checkout@v6
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
uses: docker/setup-buildx-action@8d2750c68a42422c14e847fe6c8ac0403b4cbd6f # v3
with:
driver-opts: env.BUILDKIT_STEP_LOG_MAX_SIZE=-1
- name: Login to GitHub Container Registry
uses: docker/login-action@v3
uses: docker/login-action@c94ce9fb468520275223c153574b00df6fe4bcc9 # v3
with:
registry: ghcr.io
username: zoeyvid
@@ -28,7 +28,7 @@ jobs:
sed -i "s|\"0.0.0\"|\"${{ inputs.tag }}-$(git rev-parse --short HEAD)-$(cat .version)\"|g" frontend/package.json
sed -i "s|\"0.0.0\"|\"${{ inputs.tag }}-$(git rev-parse --short HEAD)-$(cat .version)\"|g" backend/package.json
- name: Build
uses: docker/build-push-action@v6
uses: docker/build-push-action@10e90e3645eae34f1e60eeb005ba3a3d33f178e8 # v6
with:
context: .
file: ./Dockerfile
@@ -42,13 +42,13 @@ jobs:
if: ${{ github.repository_owner == 'ZoeyVid' }}
steps:
- name: Checkout
uses: actions/checkout@v6
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
uses: docker/setup-buildx-action@8d2750c68a42422c14e847fe6c8ac0403b4cbd6f # v3
with:
driver-opts: env.BUILDKIT_STEP_LOG_MAX_SIZE=-1
- name: Login to GitHub Container Registry
uses: docker/login-action@v3
uses: docker/login-action@c94ce9fb468520275223c153574b00df6fe4bcc9 # v3
with:
registry: ghcr.io
username: zoeyvid
@@ -58,7 +58,7 @@ jobs:
sed -i "s|\"0.0.0\"|\"${{ inputs.tag }}-$(git rev-parse --short HEAD)-$(cat .version)\"|g" frontend/package.json
sed -i "s|\"0.0.0\"|\"${{ inputs.tag }}-$(git rev-parse --short HEAD)-$(cat .version)\"|g" backend/package.json
- name: Build
uses: docker/build-push-action@v6
uses: docker/build-push-action@10e90e3645eae34f1e60eeb005ba3a3d33f178e8 # v6
with:
context: .
file: ./Dockerfile
@@ -73,12 +73,12 @@ jobs:
if: ${{ github.repository_owner == 'ZoeyVid' }}
steps:
- name: Login to DockerHub
uses: docker/login-action@v3
uses: docker/login-action@c94ce9fb468520275223c153574b00df6fe4bcc9 # v3
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
- name: Login to GitHub Container Registry
uses: docker/login-action@v3
uses: docker/login-action@c94ce9fb468520275223c153574b00df6fe4bcc9 # v3
with:
registry: ghcr.io
username: zoeyvid

View File

@@ -3,7 +3,6 @@ on:
push:
branches:
- develop
pull_request:
workflow_dispatch:
jobs:
build-x86_64:
@@ -11,13 +10,13 @@ jobs:
if: ${{ github.repository_owner == 'ZoeyVid' }}
steps:
- name: Checkout
uses: actions/checkout@v6
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
uses: docker/setup-buildx-action@8d2750c68a42422c14e847fe6c8ac0403b4cbd6f # v3
with:
driver-opts: env.BUILDKIT_STEP_LOG_MAX_SIZE=-1
- name: Login to GitHub Container Registry
uses: docker/login-action@v3
uses: docker/login-action@c94ce9fb468520275223c153574b00df6fe4bcc9 # v3
with:
registry: ghcr.io
username: zoeyvid
@@ -27,8 +26,7 @@ jobs:
sed -i "s|\"0.0.0\"|\"$(git rev-parse --short HEAD)\"|g" frontend/package.json
sed -i "s|\"0.0.0\"|\"$(git rev-parse --short HEAD)\"|g" backend/package.json
- name: Build
uses: docker/build-push-action@v6
if: ${{ github.event_name != 'pull_request' }}
uses: docker/build-push-action@10e90e3645eae34f1e60eeb005ba3a3d33f178e8 # v6
with:
context: .
file: ./Dockerfile
@@ -36,33 +34,19 @@ jobs:
tags: ghcr.io/zoeyvid/npmplus:develop-x86_64
build-args: |
FLAGS=-march=x86-64-v2 -mtune=generic -fcf-protection=full
- name: Set PR-Number (PR)
if: ${{ github.event_name == 'pull_request' }}
id: pr
run: echo "pr=$(echo pr-develop | sed "s|refs/pull/:||g" | sed "s|/merge||g")" >> $GITHUB_OUTPUT
- name: Build (PR)
uses: docker/build-push-action@v6
if: ${{ github.event_name == 'pull_request' }}
with:
context: .
file: ./Dockerfile
push: true
tags: ghcr.io/zoeyvid/npmplus:${{ steps.pr.outputs.pr }}-x86_64
build-args: |
FLAGS=-march=x86-64-v2 -mtune=generic -fcf-protection=full
build-aarch64:
runs-on: ubuntu-24.04-arm
if: ${{ github.repository_owner == 'ZoeyVid' }}
steps:
- name: Checkout
uses: actions/checkout@v6
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
uses: docker/setup-buildx-action@8d2750c68a42422c14e847fe6c8ac0403b4cbd6f # v3
with:
driver-opts: env.BUILDKIT_STEP_LOG_MAX_SIZE=-1
- name: Login to GitHub Container Registry
uses: docker/login-action@v3
uses: docker/login-action@c94ce9fb468520275223c153574b00df6fe4bcc9 # v3
with:
registry: ghcr.io
username: zoeyvid
@@ -72,8 +56,7 @@ jobs:
sed -i "s|\"0.0.0\"|\"$(git rev-parse --short HEAD)\"|g" frontend/package.json
sed -i "s|\"0.0.0\"|\"$(git rev-parse --short HEAD)\"|g" backend/package.json
- name: Build
uses: docker/build-push-action@v6
if: ${{ github.event_name != 'pull_request' }}
uses: docker/build-push-action@10e90e3645eae34f1e60eeb005ba3a3d33f178e8 # v6
with:
context: .
file: ./Dockerfile
@@ -81,20 +64,6 @@ jobs:
tags: ghcr.io/zoeyvid/npmplus:develop-aarch64
build-args: |
FLAGS=-mbranch-protection=standard
- name: Set PR-Number (PR)
if: ${{ github.event_name == 'pull_request' }}
id: pr
run: echo "pr=$(echo pr-develop | sed "s|refs/pull/:||g" | sed "s|/merge||g")" >> $GITHUB_OUTPUT
- name: Build (PR)
uses: docker/build-push-action@v6
if: ${{ github.event_name == 'pull_request' }}
with:
context: .
file: ./Dockerfile
push: true
tags: ghcr.io/zoeyvid/npmplus:${{ steps.pr.outputs.pr }}-aarch64
build-args: |
FLAGS=-mbranch-protection=standard
merge:
runs-on: ubuntu-latest
@@ -102,33 +71,17 @@ jobs:
if: ${{ github.repository_owner == 'ZoeyVid' }}
steps:
- name: Login to DockerHub
if: ${{ github.event_name != 'pull_request' }}
uses: docker/login-action@v3
uses: docker/login-action@c94ce9fb468520275223c153574b00df6fe4bcc9 # v3
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
- name: Login to GitHub Container Registry
uses: docker/login-action@v3
uses: docker/login-action@c94ce9fb468520275223c153574b00df6fe4bcc9 # v3
with:
registry: ghcr.io
username: zoeyvid
password: ${{ github.token }}
- name: create multiarch
if: ${{ github.event_name != 'pull_request' }}
run: |
docker buildx imagetools create --tag zoeyvid/npmplus:develop ghcr.io/zoeyvid/npmplus:develop-x86_64 ghcr.io/zoeyvid/npmplus:develop-aarch64
docker buildx imagetools create --tag ghcr.io/zoeyvid/npmplus:develop ghcr.io/zoeyvid/npmplus:develop-x86_64 ghcr.io/zoeyvid/npmplus:develop-aarch64
- name: Set PR-Number (PR)
if: ${{ github.event_name == 'pull_request' }}
id: pr
run: echo "pr=$(echo pr-develop | sed "s|refs/pull/:||g" | sed "s|/merge||g")" >> $GITHUB_OUTPUT
- name: create multiarch (PR)
if: ${{ github.event_name == 'pull_request' }}
run: docker buildx imagetools create --tag ghcr.io/zoeyvid/npmplus:${{ steps.pr.outputs.pr }} ghcr.io/zoeyvid/npmplus:${{ steps.pr.outputs.pr }}-x86_64 ghcr.io/zoeyvid/npmplus:${{ steps.pr.outputs.pr }}-aarch64
- name: add comment (PR)
uses: mshick/add-pr-comment@v2
if: ${{ github.event_name == 'pull_request' }}
with:
message: "The Docker Image can now be found here: `ghcr.io/zoeyvid/npmplus:${{ steps.pr.outputs.pr }}`"
repo-token: ${{ github.token }}
refresh-message-position: true

View File

@@ -11,7 +11,7 @@ jobs:
name: docker-lint
steps:
- name: Checkout
uses: actions/checkout@v6
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: Install hadolint
run: |
sudo wget https://github.com/hadolint/hadolint/releases/latest/download/hadolint-Linux-x86_64 -O /usr/bin/hadolint

View File

@@ -9,8 +9,8 @@ jobs:
test-json:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v6
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: json-syntax-check
uses: limitusus/json-syntax-check@v2
uses: limitusus/json-syntax-check@77d5756026b93886eaa3dc6ca1c4b17dd19dc703 # v2
with:
pattern: "\\.json"


@@ -10,11 +10,11 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v6
- uses: actions/setup-node@v6
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- uses: actions/setup-node@6044e13b5dc448c55e2357c09f80417699197238 # v6
with:
node-version: lts/*
- uses: pnpm/action-setup@v4
- uses: pnpm/action-setup@41ff72655975bd51cab0327fa583b6e92b6d3061 # v4
with:
version: latest
- name: install-sponge


@@ -10,7 +10,7 @@ jobs:
name: Check Shell
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v6
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: Run Shellcheck
uses: ludeeus/action-shellcheck@master
with:


@@ -11,9 +11,9 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Check out code.
uses: actions/checkout@v6
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: Check spelling
uses: codespell-project/actions-codespell@v2
uses: codespell-project/actions-codespell@406322ec52dd7b488e48c1c4b82e2a8b3a1bf630 # v2
with:
check_filenames: true
check_hidden: true


@@ -1 +1 @@
2.13.7
2.14.0


@@ -8,12 +8,13 @@ ARG LUAJIT_LIB=/usr/lib
ARG NGINX_VER=release-1.29.5
ARG DTR_VER=1.29.2
ARG RCP_VER=1.29.4
ARG ZNP_VER=1.26.3
ARG NB_VER=master
ARG NUB_VER=main
ARG ZNM_VER=master
ARG NHUZFM_VER=main
ARG NF_VER=master
ARG NF_VER=v0.6.0
ARG HMNM_VER=v0.39
ARG NDK_VER=v0.3.4
ARG LNM_VER=v0.10.29R2
@@ -26,20 +27,17 @@ ARG NHG2M_VER=3.4
ARG FLAGS
ARG CC=clang
ARG CFLAGS="$FLAGS -m64 -O3 -pipe -flto=thin -fstack-clash-protection -fstack-protector-strong -ftrivial-auto-var-init=zero -fno-delete-null-pointer-checks -fno-strict-overflow -fno-strict-aliasing -fno-plt -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=3 -Wformat=2 -Werror=format-security -Wno-sign-compare"
ARG CFLAGS="$FLAGS -m64 -O3 -pipe -flto=full -fstack-clash-protection -fstack-protector-strong -ftrivial-auto-var-init=zero -fno-delete-null-pointer-checks -fno-strict-overflow -fno-strict-aliasing -fno-plt -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=3 -Wformat=2 -Werror=format-security -Wno-sign-compare"
ARG CXX=clang++
ARG CXXFLAGS="$FLAGS -m64 -O3 -pipe -flto=thin -fstack-clash-protection -fstack-protector-strong -ftrivial-auto-var-init=zero -fno-delete-null-pointer-checks -fno-strict-overflow -fno-strict-aliasing -fno-plt -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=3 -D_GLIBCXX_ASSERTIONS -D_LIBCPP_ENABLE_THREAD_SAFETY_ANNOTATIONS=1 -D_LIBCPP_HARDENING_MODE=_LIBCPP_HARDENING_MODE_FAST -Wformat=2 -Werror=format-security -Wno-sign-compare"
ARG LDFLAGS="-fuse-ld=lld -m64 -Wl,-s -Wl,-O1 -Wl,-z,noexecstack -Wl,-z,relro -Wl,-z,now -Wl,--sort-common -Wl,--as-needed -Wl,-z,pack-relative-relocs"
ARG CXXFLAGS="$FLAGS -m64 -O3 -pipe -flto=full -fstack-clash-protection -fstack-protector-strong -ftrivial-auto-var-init=zero -fno-delete-null-pointer-checks -fno-strict-overflow -fno-strict-aliasing -fno-plt -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=3 -D_GLIBCXX_ASSERTIONS -D_LIBCPP_ENABLE_THREAD_SAFETY_ANNOTATIONS=1 -D_LIBCPP_HARDENING_MODE=_LIBCPP_HARDENING_MODE_FAST -Wformat=2 -Werror=format-security -Wno-sign-compare"
ARG LDFLAGS="-fuse-ld=lld -m64 -Wl,-s -Wl,-O2 -Wl,-z,noexecstack -Wl,-z,relro -Wl,-z,now -Wl,--sort-common -Wl,--as-needed -Wl,-z,pack-relative-relocs"
COPY nginx/nginx.patch /src/nginx.patch
COPY nginx/ngx_brotli.patch /src/ngx_brotli.patch
COPY nginx/ngx_unbrotli.patch /src/ngx_unbrotli.patch
COPY nginx/zstd-nginx-module.patch /src/zstd-nginx-module.patch
COPY nginx/attachment.patch /src/attachment.patch
WORKDIR /src
COPY patches/*.patch /src
RUN apk upgrade --no-cache -a && \
apk add --no-cache ca-certificates build-base clang lld cmake ninja git \
linux-headers libatomic_ops-dev openssl-dev pcre2-dev luajit-dev zlib-dev brotli-dev zstd-dev geoip-dev libxslt-dev openldap-dev libmaxminddb-dev
apk add --no-cache git make clang lld cmake ninja file \
linux-headers libatomic_ops-dev aws-lc aws-lc-dev pcre2-dev luajit-dev zlib-ng-dev brotli-dev zstd-dev libxslt-dev openldap-dev quickjs-ng-dev libmaxminddb-dev clang-dev
RUN git clone --depth 1 https://github.com/nginx/nginx --branch "$NGINX_VER" /src/nginx && \
cd /src/nginx && \
@@ -49,6 +47,8 @@ RUN git clone --depth 1 https://github.com/nginx/nginx --branch "$NGINX_VER" /sr
git apply /src/nginx/2.patch && \
wget -q https://patch-diff.githubusercontent.com/raw/nginx/nginx/pull/689.patch -O /src/nginx/3.patch && \
git apply /src/nginx/3.patch && \
wget -q https://raw.githubusercontent.com/zlib-ng/patches/refs/heads/master/nginx/"$ZNP_VER"-zlib-ng.patch -O /src/nginx/4.patch && \
git apply /src/nginx/4.patch && \
git apply /src/nginx.patch && \
\
git clone --depth 1 https://github.com/google/ngx_brotli --branch "$NB_VER" /src/ngx_brotli && \
@@ -65,10 +65,12 @@ RUN git clone --depth 1 https://github.com/nginx/nginx --branch "$NGINX_VER" /sr
git apply /src/zstd-nginx-module/1.patch && \
git apply /src/zstd-nginx-module/2.patch && \
git clone --depth 1 https://github.com/HanadaLee/ngx_http_unzstd_filter_module --branch "$NHUZFM_VER" /src/ngx_http_unzstd_filter_module && \
git clone --depth 1 https://github.com/Zoey2936/ngx-fancyindex --branch "$NF_VER" /src/ngx-fancyindex && \
git clone --depth 1 https://github.com/aperezdc/ngx-fancyindex --branch "$NF_VER" /src/ngx-fancyindex && \
git clone --depth 1 https://github.com/openresty/headers-more-nginx-module --branch "$HMNM_VER" /src/headers-more-nginx-module && \
git clone --depth 1 https://github.com/vision5/ngx_devel_kit --branch "$NDK_VER" /src/ngx_devel_kit && \
git clone --depth 1 https://github.com/openresty/lua-nginx-module --branch "$LNM_VER" /src/lua-nginx-module && \
cd /src/lua-nginx-module && \
git apply /src/lua-nginx-module.patch && \
\
git clone --depth 1 https://github.com/nginx/njs --branch "$NJS_VER" /src/njs && \
git clone --depth 1 https://github.com/kvspb/nginx-auth-ldap --branch "$NAL_VER" /src/nginx-auth-ldap && \
@@ -110,8 +112,6 @@ RUN cd /src/nginx && \
--add-module=/src/headers-more-nginx-module \
--add-module=/src/ngx_devel_kit \
--add-module=/src/lua-nginx-module \
--with-http_geoip_module=dynamic \
--with-stream_geoip_module=dynamic \
--add-dynamic-module=/src/njs/nginx \
--add-dynamic-module=/src/nginx-auth-ldap \
--add-dynamic-module=/src/nginx-module-vts \
@@ -190,7 +190,7 @@ COPY COPYING /COPYING
WORKDIR /app
RUN apk upgrade --no-cache -a && \
apk add --no-cache tzdata tini \
libssl3 libcrypto3 pcre2 luajit zlib brotli zstd lua5.1-cjson geoip libxml2 libldap libmaxminddb-libs \
aws-lc pcre2 luajit zlib-ng brotli zstd lua5.1-cjson libxml2 libldap quickjs-ng-libs libmaxminddb-libs \
curl coreutils findutils grep jq openssl shadow su-exec util-linux-misc \
bash bash-completion nano \
logrotate goaccess fcgi \

158
README.md

@@ -43,10 +43,11 @@ If you don't need the web GUI of NPMplus, you may also have a look at caddy: htt
- I test NPMplus with docker, but podman should also work (I advise against running the NPMplus container inside an LXC container; it will work, but it works better without one, so please install docker/podman on the host or in a KVM and run NPMplus there)
- MariaDB(/MySQL)/PostgreSQL may work as Databases for NPMplus (configuration like in upstream), but are unsupported, have no advantage over SQLite (at least with NPMplus) and are not recommended. Please note that you can't migrate from any of these to SQLite without making a fresh install and/or copying everything yourself.
- NPMplus uses https instead of http for the admin interface
- NPMplus won't trust cloudflare until you set the env SKIP_IP_RANGES to false, but please read [this](#notes-on-cloudflare) first before setting the env to true.
- route53 is not supported as dns-challenge provider and Amazon CloudFront IPs can't be automatically trusted in NPMplus, even if you set SKIP_IP_RANGES env to false.
- NPMplus won't trust cloudflare until you set the env TRUST_CLOUDFLARE to true, but please read [this](#notes-on-cloudflare) first before setting the env to true.
- route53 is not supported as dns-challenge provider and Amazon CloudFront IPs can't be automatically trusted in NPMplus, even if you set TRUST_CLOUDFLARE env to true.
- The following certbot dns plugins have been replaced, which means that certs using one of these providers will not renew and need to be recreated (not renewed): `certbot-dns-he`, `certbot-dns-dnspod`, `certbot-dns-online`, `certbot-dns-powerdns` and `certbot-dns-do` (`certbot-dns-do` was replaced in upstream with v2.12.4 and then merged into NPMplus)
- many forms have changed behavior, see [Comments on some buttons](#comments-on-some-buttons)
- There are many changes and improvements to the nginx config, so please don't follow guides on the internet about custom/advanced config; they are either redundant or should not be used at all with NPMplus
- Many forms have changed behavior, see [Comments on some buttons](#comments-on-some-buttons)
## Quick Setup
1. Install Docker and Docker Compose (podman or docker rootless may also work)
@@ -72,8 +73,9 @@ docker compose up -d
8. You should now remove the `/etc/letsencrypt` mount, since it was moved to `/data` during migration, then redeploy the compose file
9. Since many forms have changed, please check if they are still correct for every host you have.
10. If you proxy NPM(plus) through NPM(plus) make sure to change the scheme from http to https
11. Maybe setup crowdsec (see below)
12. Please report all (migration) issues you may have
11. Because of added CSP rules, gravatar images will not load; to fix this, open the form to edit a user's name and save it without changes
12. Maybe setup crowdsec (see below)
13. Please report all (migration) issues you may have
# Crowdsec
<!--Note: Using Immich behind NPMplus with enabled appsec causes issues, see here: [#1241](https://github.com/ZoeyVid/NPMplus/discussions/1241) <br>-->
@@ -93,8 +95,12 @@ name: appsec
source: appsec
labels:
type: appsec
# if you use openappsec you can enable this
#---
# If you use open-appsec, uncomment the section below.
# If connecting to open-appsec cloud, you must edit the default 'log trigger'
# in the cloud dashboard: check "Log to > gateway / agent" and click 'enforce'.
# Otherwise, no intrusion events will be logged to the local agent
# for CrowdSec to process.
#source: file
#filenames:
# - /opt/openappsec/logs/cp-nano-http-transaction-handler.log*
@@ -116,7 +122,7 @@ labels:
2. Make other settings (like TLS)
3. Create a custom location `/`, set the scheme to `path`, put in the path, then press the gear button and fill this in (edit the last line):
```
location ~* \.php(?:$|/) {
location ~* [^/]\.php(?:$|/) {
fastcgi_split_path_info ^(.*\.php)(/.*)$;
try_files $fastcgi_script_name =404;
fastcgi_pass ...; # set this to the address of your php-fpm (socket/tcp): https://nginx.org/en/docs/http/ngx_http_fastcgi_module.html#fastcgi_pass
@@ -129,131 +135,41 @@ location ~* \.php(?:$|/) {
## Comments on some buttons
- Forward Hostname / IP / Path: if the scheme is set to path you can just put a path in here and nginx works as a file server; otherwise you need to input an ip/domain, and you can also append a path to the ip/domain like `127.0.0.1/path` to proxy to a subpath.
- For custom locations with a set path, the path of the location will be stripped. So a request `GET /cdf/abc` to a custom location `/cdf` which proxies to `127.0.0.1/abc` will proxy to `127.0.0.1/abc/abc`, a custom location `/cdf/` which proxies to `127.0.0.1/` will proxy to `127.0.0.1/abc` and a custom location `/cdf` which proxies to `127.0.0.1` will proxy to `127.0.0.1/cdf/abc`
- For custom locations with a set path, DNS will only be refreshed on nginx reloads and the path of the location will be stripped. So a request `GET /cdf/abc` to a custom location `/cdf` which proxies to `127.0.0.1/abc` will proxy to `127.0.0.1/abc/abc`, a custom location `/cdf/` which proxies to `127.0.0.1/` will proxy to `127.0.0.1/abc`, and a custom location `/cdf` which proxies to `127.0.0.1` will proxy to `127.0.0.1/cdf/abc`
- If the scheme is set to `path`, a path ending with a `/` will be searched relative to the custom location (it uses nginx alias) and a path ending without a `/` will be searched relative to the main `/` location (it uses nginx root)
- Forward Port (optional): port of upstream or php version if scheme is `path`
- Enable fancyindex/compression by upstream:
- for scheme set to `path` this will enabled fancyindex, which shows a index of all files in the folder if there is no index file, only enable this if you know what you are doing and you need the index
- for scheme set to http(s)/grpc(s) this will allow the backend to compress files, I recommend you to keep this disabled, there may be cases where this is needed since otherwise the upstream missbehaves for some reason (like collabora in nextcloud all-in-one)
- Disable Request/Response Buffering: Most time you want keep buffering enabled, you may want to disable this if you for example want to stream videos and have a fast and stable connection to the upstream server
- Send noindex header and block some user agents: This does what is says, it appends a header to all responses which says that the site should not be indexed while blocking requests of crawlers based on the user agent sent with the request
- Wbesockets: this button was removed, websockets are now always enabled
- Disable Crowdsec Appsec: this will disable crowdsec appsec only for one host/one location; it only does something if appsec is configured
- Disable Response Buffering: Most of the time you want to keep buffering enabled; you may want to disable this if you, for example, want to stream videos and have a fast and stable connection to the upstream server. This affects the connection from the upstream server to NPMplus
- Disable Request Buffering: Most of the time you want to keep buffering enabled (request buffering will always be enabled if crowdsec appsec is enabled); you may want to disable this if you, for example, want to upload huge files and have a fast and stable connection to the upstream server. This affects the connection from NPMplus to the upstream server
- Enable compression by upstream: this will allow the backend to compress files; I recommend keeping this disabled, but there may be cases where it is needed because the upstream otherwise misbehaves (like collabora in nextcloud all-in-one)
- Enable fancyindex: this will enable fancyindex, which shows an index of all files in the folder if there is no index file; only enable this if you know what you are doing and you need the index
- Websockets: this button was removed, websockets are now always enabled
- Reuse Key: this will make the new cert always keep its key unless you force renew it; I recommend keeping this disabled (i.e. not keeping the key); a reason to keep the key would be TLSA/pubkey pinning
- TLS to upstream (for Streams): This can be used if your stream target already uses tls but you want to override it with a NPMplus cert
- TLS to upstream (for Streams): This can be used if your stream target already uses tls but you want to override it with a NPMplus cert; do not enable this if you don't set a new cert, since that would downgrade the connection to unencrypted
- X-Frame-Options: will control the X-Frame-Options header, none will remove the header, SAMEORIGIN/DENY will set it to these values and upstream will keep what upstream sends
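
The custom-location path handling described above follows standard nginx `proxy_pass` semantics; a minimal sketch (hypothetical prefix and address):

```
# proxy_pass with a URI part: the matched location prefix is replaced by that URI
location /cdf/ {
    proxy_pass http://127.0.0.1/;   # GET /cdf/abc is forwarded as /abc
}
# proxy_pass without a URI part: the original request path is kept as-is
location /cdf {
    proxy_pass http://127.0.0.1;    # GET /cdf/abc is forwarded as /cdf/abc
}
```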
## Examples of implementing some services using auth_request
These examples need to be defined for each host (whitelist); if you want to configure them globally with exemptions (blacklist), please create a discussion, and I can try to help you with that.
### Anubis config (supported)
1. deploy an anubis container (see the compose.yaml for an example and information)
### Anubis
1. Deploy an anubis container (see the compose.yaml for an example and information)
2. In the mounted anubis bot policy file the "status_codes" should be set to 401 and 403, like this:
```yaml
status_codes:
CHALLENGE: 401
DENY: 403
```
3. Put this in the advacned tab or create a custom location / (or the location you want to use), set your proxy settings, then press the gear button and paste the following in the new text field:
```
auth_request /.within.website/x/cmd/anubis/api/check;
error_page 401 403 =200 /.within.website/?redir=$request_uri;
```
4. Create a location with the path `/.within.website`, this should proxy to your anubis, example: `http://127.0.0.1:8923`, then press the gear button and paste the following in the new text field
```
proxy_redirect ~^[^/]+/.*$ /;
proxy_method GET;
proxy_pass_request_body off;
proxy_set_header Content-Length "";
```
5. You can override the images used by default by creating a custom location `/.within.website/x/cmd/anubis/static/img` which acts as a file server and serves the files `happy.webp`, `pensive.webp` and `reject.webp`
3. Set the AUTH_REQUEST_ANUBIS_UPSTREAM env in the NPMplus compose.yaml and select anubis in the Auth Request selection, no custom/advanced config/locations needed
4. You can override the "allow", "checking" and "blocked" images used by default by setting the `AUTH_REQUEST_ANUBIS_USE_CUSTOM_IMAGES` env to true and putting your custom images as happy.webp, pensive.webp and reject.webp into /opt/npmplus/anubis
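
For reference, the env-based selection replaces the previous manual setup, which wired anubis in via nginx `auth_request`; the generated config follows the same pattern (simplified sketch, not the exact generated output):

```
auth_request /.within.website/x/cmd/anubis/api/check;
error_page 401 403 =200 /.within.website/?redir=$request_uri;
```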
### Tinyauth config example (some support)
1. Put this in the advacned tab or create a custom location / (or the location you want to use), set your proxy settings, then press the gear button and paste the following in the new text field
```
auth_request /tinyauth;
error_page 401 = @tinyauth_login;
```
2. Create a custom location with the path `/tinyauth`, this should proxy to your tinyauth, example: `http://<ip>:<port>/api/auth/nginx`, then press the gear button and paste the following in the new text field
```
internal;
proxy_method GET;
proxy_pass_request_body off;
proxy_set_header Content-Length "";
```
3. Create a custom location `@tinyauth_login`, set the scheme to `empty`, then press the gear button and paste the following in the new text field, you need to replace `tinyauth.example.org` with the domain of your tinyauth.
```
internal;
return 302 http://tinyauth.example.org/login?redirect_uri=$scheme://$host$is_request_port$request_port$request_uri;
```
### Tinyauth
1. Set the AUTH_REQUEST_TINYAUTH_UPSTREAM and AUTH_REQUEST_TINYAUTH_DOMAIN env in the NPMplus compose.yaml and select tinyauth in the Auth Request selection, no custom/advanced config/locations needed
### Authelia config example (limited support)
1. Create a custom location / (or the location you want to use), set your proxy settings, then press the gear button and paste the following in the new text field or paste it in the advanced tab (but then the headers won't work):
```
auth_request /internal/authelia/authz;
auth_request_set $redirection_url $upstream_http_location;
error_page 401 =302 $redirection_url;
### Authelia (modern)
1. Set the AUTH_REQUEST_AUTHELIA_UPSTREAM env in the NPMplus compose.yaml and select authelia (modern) in the Auth Request selection, no custom/advanced config/locations needed
auth_request_set $user $upstream_http_remote_user;
auth_request_set $groups $upstream_http_remote_groups;
auth_request_set $name $upstream_http_remote_name;
auth_request_set $email $upstream_http_remote_email;
proxy_set_header Remote-User $user;
proxy_set_header Remote-Groups $groups;
proxy_set_header Remote-Email $email;
proxy_set_header Remote-Name $name;
```
2. Create a location with the path `/internal/authelia/authz`, this should proxy to your authelia, example `http://127.0.0.1:9091/api/authz/auth-request`, then press the gear button and paste the following in the new text field
```
internal;
proxy_method GET;
proxy_pass_request_body off;
proxy_set_header Content-Length "";
```
### Authentik config example (very limited support)
1. create a custom location / (or the location you want to use), set your proxy settings, then press the gear button and paste the following in the new text field or paste it in the advanced tab (but then the headers won't work), you may need to adjust the last lines:
```
auth_request /outpost.goauthentik.io/auth/nginx;
error_page 401 = @goauthentik_proxy_signin;
auth_request_set $auth_cookie $upstream_http_set_cookie;
add_header Set-Cookie $auth_cookie;
auth_request_set $authentik_username $upstream_http_x_authentik_username;
auth_request_set $authentik_groups $upstream_http_x_authentik_groups;
auth_request_set $authentik_entitlements $upstream_http_x_authentik_entitlements;
auth_request_set $authentik_email $upstream_http_x_authentik_email;
auth_request_set $authentik_name $upstream_http_x_authentik_name;
auth_request_set $authentik_uid $upstream_http_x_authentik_uid;
proxy_set_header X-authentik-username $authentik_username;
proxy_set_header X-authentik-groups $authentik_groups;
proxy_set_header X-authentik-entitlements $authentik_entitlements;
proxy_set_header X-authentik-email $authentik_email;
proxy_set_header X-authentik-name $authentik_name;
proxy_set_header X-authentik-uid $authentik_uid;
# This section should be uncommented when the "Send HTTP Basic authentication" option is enabled in the proxy provider
#auth_request_set $authentik_auth $upstream_http_authorization;
#proxy_set_header Authorization $authentik_auth;
```
2. Create a location with the path `/outpost.goauthentik.io`, this should proxy to your authentik, examples: `https://127.0.0.1:9443/outpost.goauthentik.io` for embedded outpost (or `https://127.0.0.1:9443` for manual outpost deployments), then press the gear button and paste the following in the new text field
```
auth_request_set $auth_cookie $upstream_http_set_cookie;
add_header Set-Cookie $auth_cookie;
proxy_method GET;
proxy_pass_request_body off;
proxy_set_header Content-Length "";
```
3. Create a custom location `@goauthentik_proxy_signin`, set the scheme to `empty`, then press the gear button and paste the following in the new text field, you may need to adjust the last lines:
```
internal;
add_header Set-Cookie $auth_cookie;
return 302 /outpost.goauthentik.io/start?rd=$request_uri;
## For domain level, use the below error_page to redirect to your authentik server with the full redirect path
#return 302 https://authentik.company/outpost.goauthentik.io/start?rd=$scheme://$host$is_request_port$request_port$request_uri;
```
### Authentik
1. Set the AUTH_REQUEST_AUTHENTIK_UPSTREAM env (and optionally the AUTH_REQUEST_AUTHENTIK_DOMAIN env if you use the "domain level" variant in authentik; do not set this env if you use the "single application" variant) in the NPMplus compose.yaml and select authentik/authentik-send-basic-auth in the Auth Request selection, no custom/advanced config/locations needed
## Load Balancing
1. Open and edit this file: `/opt/npmplus/custom_nginx/http_top.conf` (or `/opt/npmplus/custom_nginx/stream_top.conf` for streams); if you changed /opt/npmplus to a different path, make sure to adjust the path to fit
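
For example, an `upstream` block placed in `http_top.conf` (hypothetical name and addresses) can then be used as the forward host of a proxy host:

```
upstream backend_pool {
    least_conn;                       # pick the server with the fewest active connections
    server 192.168.1.10:8080;
    server 192.168.1.11:8080 backup;  # only used when the primary servers are down
}
```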
@@ -296,7 +212,7 @@ geoip2 /data/goaccess/geoip/GeoLite2-Country.mmdb {
#}
# uncomment if you block/don't allow IPs with unknown country codes
#geo $geo_is_private_ip {
#geo $is_private_ip {
# default no;
# 127.0.0.0/8 yes;
# 10.0.0.0/8 yes;
@@ -311,7 +227,7 @@ geoip2 /data/goaccess/geoip/GeoLite2-Country.mmdb {
4a. to set it per location: create a custom location / (or the location you want to use), set your proxy settings, then press the gear button and paste the following in the new text field, you may want to adjust the last lines (do not use the advanced tab with this example as it may break cert renewals):
```yaml
# uncomment if you block/don't allow IPs with unknown country codes
#if ($geo_is_private_ip = yes) {
#if ($is_private_ip = yes) {
# set $geoip2_country_rule yes;
#}
if ($geoip2_country_rule = no) {
@@ -321,7 +237,7 @@ if ($geoip2_country_rule = no) {
4b. to set it for an entire host: put this in the advanced tab:
```yaml
# uncomment if you block/don't allow IPs with unknown country codes
#if ($geo_is_private_ip = yes) {
#if ($is_private_ip = yes) {
# set $geoip2_country_rule yes;
#}
if ($request_uri ~* "^/\.well-known/acme-challenge/") {
@@ -334,7 +250,7 @@ if ($geoip2_country_rule = no) {
4c. to set it for all http hosts of them same type: put this in the `custom_nginx/server_proxy.conf` / `custom_nginx/server_redirect.conf` / `custom_nginx/server_dead.conf` file(s):
```yaml
# uncomment if you block/don't allow IPs with unknown country codes
#if ($geo_is_private_ip = yes) {
#if ($is_private_ip = yes) {
# set $geoip2_country_rule yes;
#}
if ($request_uri ~* "^/\.well-known/acme-challenge/") {
@@ -347,7 +263,7 @@ if ($geoip2_country_rule = no) {
4d. to set it for all http hosts: put this in the `custom_nginx/server_http.conf` file:
```yaml
# uncomment if you block/don't allow IPs with unknown country codes
#if ($geo_is_private_ip = yes) {
#if ($is_private_ip = yes) {
# set $geoip2_country_rule yes;
#}
if ($request_uri ~* "^/\.well-known/acme-challenge/") {


@@ -8,7 +8,11 @@ import mainRoutes from "./routes/main.js";
* App
*/
const app = express();
app.use(fileUpload());
app.use(
fileUpload({
limits: { fileSize: 1024 * 1024 },
}),
);
app.use(cookieParser());
app.use(express.json());
app.use(express.urlencoded({ extended: true }));
@@ -21,6 +25,25 @@ app.disable("x-powered-by");
app.enable("trust proxy", ["loopback", "linklocal", "uniquelocal"]);
app.enable("strict routing");
app.use((req, res, next) => {
if (["same-origin", undefined, "none"].includes(req.get("sec-fetch-site"))) {
return next();
}
if (
req.method === "GET" &&
req.path === "/api/oidc/callback" &&
req.get("sec-fetch-mode") === "navigate" &&
req.get("sec-fetch-dest") === "document"
) {
return next();
}
res.status(403).json({
error: { message: "Rejected Sec-Fetch-Site Value." },
});
});
// pretty print JSON when not live
app.set("json spaces", 2);
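
The Sec-Fetch-Site gate added above can be distilled into a pure predicate; a minimal sketch (hypothetical helper, not part of the actual middleware):

```javascript
// Allow requests whose Sec-Fetch-Site header is same-origin, none, or absent,
// plus the top-level navigation to the OIDC callback; reject everything else.
function isAllowed(req) {
	const site = req.headers["sec-fetch-site"];
	if (["same-origin", undefined, "none"].includes(site)) {
		return true;
	}
	return (
		req.method === "GET" &&
		req.path === "/api/oidc/callback" &&
		req.headers["sec-fetch-mode"] === "navigate" &&
		req.headers["sec-fetch-dest"] === "document"
	);
}

console.log(isAllowed({ method: "GET", path: "/", headers: {} })); // true: header absent (e.g. curl)
console.log(isAllowed({ method: "POST", path: "/api/tokens", headers: { "sec-fetch-site": "cross-site" } })); // false
```

In the real middleware the reject branch answers with a 403 JSON body instead of returning false.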


@@ -1,5 +1,5 @@
{
"$schema": "https://biomejs.dev/schemas/2.3.14/schema.json",
"$schema": "https://biomejs.dev/schemas/2.4.5/schema.json",
"vcs": {
"enabled": true,
"clientKind": "git",


@@ -17,6 +17,12 @@
"credentials": "dns_aliyun_access_key = 12345678\ndns_aliyun_access_key_secret = 1234567890abcdef1234567890abcdef",
"full_plugin_name": "dns-aliyun"
},
"arvan": {
"name": "ArvanCloud",
"package_name": "certbot-dns-arvan",
"credentials": "dns_arvan_key = Apikey xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
"full_plugin_name": "dns-arvan"
},
"azure": {
"name": "Azure",
"package_name": "certbot-dns-azure",


@@ -9,22 +9,18 @@ import { migrateUp } from "./migrate.js";
import { getCompiledSchema } from "./schema/index.js";
import setup from "./setup.js";
const IP_RANGES_FETCH_ENABLED = process.env.SKIP_IP_RANGES === "false";
async function appStart() {
return migrateUp()
.then(setup)
.then(getCompiledSchema)
.then(() => {
if (!IP_RANGES_FETCH_ENABLED) {
logger.info("IP Ranges fetch is disabled by environment variable");
if (process.env.TRUST_CLOUDFLARE === "false") {
logger.info("Cloudflares IPs are NOT trusted");
return;
}
logger.info("IP Ranges fetch is enabled");
logger.info("Cloudflares IPs are trusted");
internalIpRanges.initTimer();
return internalIpRanges.fetch().catch((err) => {
logger.error("IP Ranges fetch failed, continuing anyway:", err.message);
});
return internalIpRanges.fetch();
})
.then(() => {
internalCertificate.initTimer();


@@ -116,7 +116,7 @@ const internal2fa = {
throw new errs.ValidationError("No pending 2FA setup found");
}
codeTrim = code.trim();
const codeTrim = code.trim();
const result = await verify({ token: codeTrim, secret });
if (!result.valid) {
@@ -162,14 +162,18 @@ const internal2fa = {
throw new errs.ValidationError("2FA is not enabled");
}
codeTrim = code.trim();
const codeTrim = code.trim();
if (codeTrim.length !== 6 && codeTrim.length !== 8) {
throw new errs.ValidationError("Invalid verification code");
}
// Try TOTP code first, if it's 6 chars. it will throw errors if it's not 6 chars
// and the backup codes are 8 chars.
if (codeTrim.length === 6) {
const result = await verify({
token: codeTrim,
secret,
secret: auth.meta.totp_secret,
// These guardrails lower the minimum length requirement for secrets.
// In v12 of otplib the default minimum length is 10 and in v13 it is 16.
// Since there are 2fa secrets in the wild generated with v12 we need to allow shorter secrets
@@ -239,7 +243,7 @@ const internal2fa = {
return false;
}
tokenTrim = token.trim();
const tokenTrim = token.trim();
// Try TOTP code first, if it's 6 chars. it will throw errors if it's not 6 chars
// and the backup codes are 8 chars.
@@ -305,7 +309,11 @@ const internal2fa = {
throw new errs.ValidationError("No 2FA secret found");
}
tokenTrim = token.trim();
const tokenTrim = token.trim();
if (tokenTrim.length !== 6 && tokenTrim.length !== 8) {
throw new errs.ValidationError("Invalid verification code");
}
// Try TOTP code first, if it's 6 chars. it will throw errors if it's not 6 chars
// and the backup codes are 8 chars.
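
The comment above describes a simple length-based dispatch: a trimmed 6-character code is tried as a TOTP token, an 8-character code as a backup code. A sketch of that rule as a standalone function (hypothetical name, not the actual backend code):

```javascript
// Classify a verification code by its trimmed length, mirroring the checks in internal2fa.
function classifyCode(code) {
	const trimmed = code.trim();
	if (trimmed.length === 6) {
		return "totp"; // try TOTP verification first
	}
	if (trimmed.length === 8) {
		return "backup"; // fall back to backup-code comparison
	}
	throw new Error("Invalid verification code");
}
```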


@@ -42,7 +42,7 @@ const internalAccessList = {
accessListAuthModel.query().insert({
access_list_id: row.id,
username: item.username,
password: item.password,
password: bcrypt.hashSync(item.password, 6),
}),
);
return true;
@@ -129,7 +129,7 @@ const internalAccessList = {
accessListAuthModel.query().insert({
access_list_id: data.id,
username: item.username,
password: item.password,
password: bcrypt.hashSync(item.password, 6),
}),
);
} else {
@@ -432,7 +432,7 @@ const internalAccessList = {
logger.info(`Adding: ${item.username}`);
try {
fs.appendFileSync(htpasswdFile, `${item.username}:${await bcrypt.hash(item.password, 13)}\n`, {
fs.appendFileSync(htpasswdFile, `${item.username}:${item.password}\n`, {
encoding: "utf8",
});
} catch (err) {


@@ -1,10 +1,11 @@
import { createPrivateKey, X509Certificate } from "node:crypto";
import { mkdir, readFile, rm, writeFile } from "node:fs/promises";
import fs from "node:fs";
import path from "node:path";
import { domainToASCII } from "node:url";
import archiver from "archiver";
import dayjs from "dayjs";
import _ from "lodash";
import moment from "moment";
import tempWrite from "temp-write";
import dnsPlugins from "../certbot/dns-plugins.json" with { type: "json" };
import { installPlugin } from "../lib/certbot.js";
import error from "../lib/error.js";
@@ -20,7 +21,7 @@ const omissions = () => {
};
const internalCertificate = {
allowedSslFiles: ["certificate", "certificate_key", "intermediate_certificate"],
allowedSslFiles: ["certificate", "certificate_key"],
intervalTimeout: 1000 * 60 * 60 * Number.parseInt(process.env.CRT, 10),
interval: null,
intervalProcessing: false,
@@ -85,7 +86,7 @@ const internalCertificate = {
.where("id", certificate.id)
.andWhere("provider", "letsencrypt")
.patch({
expires_on: moment(certInfo.dates.to, "X").format("YYYY-MM-DD HH:mm:ss"),
expires_on: dayjs.unix(certInfo.dates.to).format("YYYY-MM-DD HH:mm:ss"),
});
} catch (err) {
// Don't want to stop the train here, just log the error
@@ -137,7 +138,7 @@ const internalCertificate = {
const savedRow = await certificateModel
.query()
.patchAndFetchById(certificate.id, {
expires_on: moment(certInfo.dates.to, "X").format("YYYY-MM-DD HH:mm:ss"),
expires_on: dayjs.unix(certInfo.dates.to).format("YYYY-MM-DD HH:mm:ss"),
})
.then(utils.omitRow(omissions()));
@@ -361,8 +362,8 @@ const internalCertificate = {
// Revoke the cert
await internalCertificate.revokeCertbot(row);
} else {
fs.rmSync(`/data/tls/custom/npm-${row.id}`, { force: true, recursive: true });
fs.rmSync(`/data/tls/custom/npm-${row.id}.der`, { force: true });
await rm(`/data/tls/custom/npm-${row.id}`, { force: true, recursive: true });
await rm(`/data/tls/custom/npm-${row.id}.der`, { force: true });
}
return true;
},
@@ -430,48 +431,17 @@ const internalCertificate = {
* @returns {Promise}
*/
writeCustomCert: async (certificate) => {
logger.info("Writing Custom Certificate:", certificate);
if (certificate.provider === "letsencrypt") {
throw new Error("Refusing to write certbot certs here");
}
logger.info("Writing Custom Certificate:", certificate.id);
const dir = `/data/tls/custom/npm-${certificate.id}`;
return new Promise((resolve, reject) => {
if (certificate.provider === "letsencrypt") {
reject(new Error("Refusing to write certbot certs here"));
return;
}
let certData = certificate.meta.certificate;
if (typeof certificate.meta.intermediate_certificate !== "undefined") {
certData = `${certData}\n${certificate.meta.intermediate_certificate}`;
}
try {
if (!fs.existsSync(dir)) {
fs.mkdirSync(dir);
}
} catch (err) {
reject(err);
return;
}
fs.writeFile(`${dir}/fullchain.pem`, certData, (err) => {
if (err) {
reject(err);
} else {
resolve();
}
});
}).then(() => {
return new Promise((resolve, reject) => {
fs.writeFile(`${dir}/privkey.pem`, certificate.meta.certificate_key, (err) => {
if (err) {
reject(err);
} else {
resolve();
}
});
});
});
await mkdir(dir, { recursive: true });
await writeFile(`${dir}/fullchain.pem`, certificate.meta.certificate);
await writeFile(`${dir}/privkey.pem`, certificate.meta.certificate_key);
},
/**
@@ -496,40 +466,22 @@ const internalCertificate = {
* @param {Object} data.files
* @returns {Promise}
*/
validate: (data) => {
// Put file contents into an object
const files = {};
_.map(data.files, (file, name) => {
if (internalCertificate.allowedSslFiles.indexOf(name) !== -1) {
files[name] = file.data.toString();
validate: async (data) => {
const finalData = {};
for (const [name, file] of Object.entries(data.files)) {
if (internalCertificate.allowedSslFiles.includes(name)) {
const content = file.data.toString();
let res;
if (name === "certificate_key") {
res = await internalCertificate.checkPrivateKey(content);
} else {
res = await internalCertificate.getCertificateInfo(content, true);
}
finalData[name] = res;
}
});
}
// For each file, create a temp file and write the contents to it
// Then test it depending on the file type
const promises = [];
_.map(files, (content, type) => {
promises.push(
new Promise((resolve) => {
if (type === "certificate_key") {
resolve(internalCertificate.checkPrivateKey(content));
} else {
// this should handle `certificate` and intermediate certificate
resolve(internalCertificate.getCertificateInfo(content, true));
}
}).then((res) => {
return { [type]: res };
}),
);
});
return Promise.all(promises).then((files) => {
let data = {};
_.each(files, (file) => {
data = _.assign({}, data, file);
});
return data;
});
return finalData;
},
/**
@@ -546,26 +498,27 @@ const internalCertificate = {
}
const validations = await internalCertificate.validate(data);
if (typeof validations.certificate === "undefined") {
throw new error.ValidationError("Certificate file was not provided");
if (typeof validations.certificate === "undefined" || typeof validations.certificate_key === "undefined") {
throw new error.ValidationError("Certificate and Certificate Key files were not provided");
}
const certs = {};
_.map(data.files, (file, name) => {
if (internalCertificate.allowedSslFiles.indexOf(name) !== -1) {
row.meta[name] = file.data.toString();
certs[name] = file.data.toString();
}
});
const certificate = await internalCertificate.update(access, {
id: data.id,
expires_on: moment(validations.certificate.dates.to, "X").format("YYYY-MM-DD HH:mm:ss"),
domain_names: [validations.certificate.cn],
expires_on: dayjs.unix(validations.certificate.dates.to).format("YYYY-MM-DD HH:mm:ss"),
domain_names: validations.certificate.cn,
meta: _.clone(row.meta), // Prevent the update method from changing this value that we'll use later
});
certificate.meta = row.meta;
certificate.meta = _.assign({}, row.meta, certs);
await internalCertificate.writeCustomCert(certificate);
return _.pick(row.meta, internalCertificate.allowedSslFiles);
return _.omit(certificate.meta, internalCertificate.allowedSslFiles);
},
/**
@@ -575,24 +528,10 @@ const internalCertificate = {
* @param {String} privateKey This is the entire key contents as a string
*/
checkPrivateKey: async (privateKey) => {
const filepath = await tempWrite(privateKey, "/tmp");
const failTimeout = setTimeout(() => {
throw new error.ValidationError(
"Result Validation Error: Validation timed out. This could be due to the key being passphrase-protected.",
);
}, 10000);
try {
const result = await utils.execFile("openssl", ["pkey", "-in", filepath, "-check", "-noout"]);
clearTimeout(failTimeout);
if (!result.toLowerCase().includes("key is valid")) {
throw new error.ValidationError(`Result Validation Error: ${result}`);
}
fs.unlinkSync(filepath);
createPrivateKey(privateKey);
return true;
} catch (err) {
clearTimeout(failTimeout);
fs.unlinkSync(filepath);
throw new error.ValidationError(`Certificate Key is not valid (${err.message})`, err);
}
},
@@ -605,77 +544,38 @@ const internalCertificate = {
* @param {Boolean} [throwExpired] Throw when the certificate is out of date
*/
getCertificateInfo: async (certificate, throwExpired) => {
try {
const filepath = await tempWrite(certificate, "/tmp");
const certData = await internalCertificate.getCertificateInfoFromFile(filepath, throwExpired);
fs.unlinkSync(filepath);
return certData;
} catch (err) {
fs.unlinkSync(filepath);
throw err;
}
},
/**
* Uses the openssl command to both validate and get info out of the certificate.
* It will save the file to disk first, then run commands on it, then delete the file.
*
* @param {String} certificateFile The file location on disk
* @param {Boolean} [throw_expired] Throw when the certificate is out of date
*/
getCertificateInfoFromFile: async (certificateFile, throw_expired) => {
const certData = {};
try {
const result = await utils.execFile("openssl", ["x509", "-in", certificateFile, "-subject", "-noout"]);
// Examples:
// subject=CN = *.jc21.com
// subject=CN = something.example.com
const regex = /(?:subject=)?[^=]+=\s*(\S+)/gim;
const match = regex.exec(result);
if (match && typeof match[1] !== "undefined") {
certData.cn = match[1];
}
const cert = new X509Certificate(certificate);
const result2 = await utils.execFile("openssl", ["x509", "-in", certificateFile, "-issuer", "-noout"]);
// Examples:
// issuer=C = US, O = Let's Encrypt, CN = Let's Encrypt Authority X3
// issuer=C = US, O = Let's Encrypt, CN = E5
// issuer=O = NginxProxyManager, CN = NginxProxyManager Intermediate CA
const regex2 = /^(?:issuer=)?(.*)$/gim;
const match2 = regex2.exec(result2);
if (match2 && typeof match2[1] !== "undefined") {
certData.issuer = match2[1];
}
const result3 = await utils.execFile("openssl", ["x509", "-in", certificateFile, "-dates", "-noout"]);
// notBefore=Jul 14 04:04:29 2018 GMT
// notAfter=Oct 12 04:04:29 2018 GMT
let validFrom = null;
let validTo = null;
const lines = result3.split("\n");
lines.map((str) => {
const regex = /^(\S+)=(.*)$/gim;
const match = regex.exec(str.trim());
if (match && typeof match[2] !== "undefined") {
const date = Number.parseInt(moment(match[2], "MMM DD HH:mm:ss YYYY z").format("X"), 10);
if (match[1].toLowerCase() === "notbefore") {
validFrom = date;
} else if (match[1].toLowerCase() === "notafter") {
validTo = date;
}
if (cert.subjectAltName) {
certData.cn = cert.subjectAltName.split(", ").map((entry) => {
const firstColonIdx = entry.indexOf(":");
return firstColonIdx === -1 ? entry.trim() : entry.substring(firstColonIdx + 1).trim();
});
} else {
const cnMatch = /\bCN=([^\n]+)/i.exec(cert.subject);
if (cnMatch?.[1]) {
certData.cn = [cnMatch[1].trim()];
} else {
certData.cn = [];
}
return true;
});
if (!validFrom || !validTo) {
throw new error.ValidationError(`Could not determine dates from certificate: ${result}`);
}
if (throw_expired && validTo < Number.parseInt(moment().format("X"), 10)) {
if (cert.issuer) {
certData.issuer = cert.issuer.replace(/\n/g, ", ");
}
const validFrom = Math.floor(new Date(cert.validFrom).getTime() / 1000);
const validTo = Math.floor(new Date(cert.validTo).getTime() / 1000);
if (Number.isNaN(validFrom) || Number.isNaN(validTo)) {
throw new error.ValidationError("Could not determine dates from certificate");
}
const now = Math.floor(Date.now() / 1000);
if (throwExpired && validTo < now) {
throw new error.ValidationError("Certificate has expired");
}
@@ -690,6 +590,18 @@ const internalCertificate = {
}
},
/**
* Reads the certificate file from disk and passes its contents to getCertificateInfo.
*
* @param {String} certificateFile The file location on disk
* @param {Boolean} [throwExpired] Throw when the certificate is out of date
*/
getCertificateInfoFromFile: async (certificateFile, throwExpired) => {
const certContent = await readFile(certificateFile);
return internalCertificate.getCertificateInfo(certContent, throwExpired);
},
/**
* Cleans the tls keys from the meta object and sets them to "true"
* @param {String} email the email address to use for registration
@@ -756,7 +668,7 @@ const internalCertificate = {
);
const credentialsLocation = `/tmp/certbot-credentials/credentials-${certificate.id}`;
fs.writeFileSync(credentialsLocation, certificate.meta.dns_provider_credentials, { mode: 0o600 });
await writeFile(credentialsLocation, certificate.meta.dns_provider_credentials, { mode: 0o600 });
try {
const result = await utils.execFile("certbot", [
@@ -782,8 +694,7 @@ const internalCertificate = {
logger.info(result);
return result;
} catch (err) {
// Don't fail if file does not exist, so no need for action in the callback
fs.unlink(credentialsLocation, () => {});
await rm(credentialsLocation, { force: true });
throw err;
}
},
@@ -809,7 +720,7 @@ const internalCertificate = {
);
const updatedCertificate = await certificateModel.query().patchAndFetchById(certificate.id, {
expires_on: moment(certInfo.dates.to, "X").format("YYYY-MM-DD HH:mm:ss"),
expires_on: dayjs.unix(certInfo.dates.to).format("YYYY-MM-DD HH:mm:ss"),
});
// Add to audit log
@@ -933,7 +844,7 @@ const internalCertificate = {
"unspecified",
"--delete-after-revoke",
]);
fs.rmSync(`/data/tls/certbot/live/npm-${certificate.id}.der`, { force: true });
await rm(`/data/tls/certbot/live/npm-${certificate.id}.der`, { force: true });
logger.info(result);
return result;
} catch (err) {
@@ -957,7 +868,7 @@ const internalCertificate = {
const testChallengeDir = "/data/tls/certbot/acme-challenge/.well-known/acme-challenge";
const testChallengeFile = `${testChallengeDir}/test-challenge`;
fs.mkdirSync(testChallengeDir, { recursive: true });
fs.writeFileSync(testChallengeFile, "Success", { encoding: "utf8" });
await writeFile(testChallengeFile, "Success", { encoding: "utf8" });
const results = [];
@@ -969,7 +880,7 @@ const internalCertificate = {
}
// Remove the test challenge file
fs.unlinkSync(testChallengeFile);
await rm(testChallengeFile, { force: true });
return results;
},
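The `X509Certificate`-based hunk above derives `cn` from `subjectAltName` by stripping each entry's type prefix up to the first colon. That string handling can be exercised on its own; the sample SAN string below is illustrative, and the helper name is hypothetical:

```javascript
// Sketch of the subjectAltName splitting used in getCertificateInfo:
// "DNS:example.com, DNS:*.example.com" -> ["example.com", "*.example.com"]
const parseSubjectAltName = (san) =>
	san.split(", ").map((entry) => {
		const firstColonIdx = entry.indexOf(":");
		// keep the whole entry if it has no type prefix
		return firstColonIdx === -1 ? entry.trim() : entry.substring(firstColonIdx + 1).trim();
	});

console.log(parseSubjectAltName("DNS:example.com, DNS:*.example.com"));
```

Note that `indexOf(":")` (rather than `split(":")`) keeps values that themselves contain colons intact, e.g. `IP Address:10.0.0.1` style entries lose only the leading type label.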

View File

@@ -1,4 +1,4 @@
import fs from "node:fs";
import { readFile, writeFile } from "node:fs/promises";
import { dirname } from "node:path";
import { fileURLToPath } from "node:url";
import utils from "../lib/utils.js";
@@ -82,20 +82,21 @@ const internalIpRanges = {
generateConfig: async (ip_ranges) => {
try {
const renderEngine = utils.getRenderEngine();
const template = fs.readFileSync(`${__dirname}/../templates/ip_ranges.conf`, { encoding: "utf8" });
const template = await readFile(`${__dirname}/../templates/ip_ranges.conf`, { encoding: "utf8" });
const newConfig = await renderEngine.parseAndRender(template, { ip_ranges: ip_ranges });
const filePath = "/usr/local/nginx/conf/conf.d/ip_ranges.conf";
if (fs.existsSync("/usr/local/nginx/conf/conf.d/ip_ranges.conf")) {
const oldConfig = fs.readFileSync("/usr/local/nginx/conf/conf.d/ip_ranges.conf", {
try {
const oldConfig = await readFile(filePath, {
encoding: "utf8",
});
if (oldConfig === newConfig) {
logger.info("Not updating Cloudflared IPs");
return false;
}
}
} catch {}
fs.writeFileSync("/usr/local/nginx/conf/conf.d/ip_ranges.conf", newConfig, { encoding: "utf8" });
await writeFile(filePath, newConfig, { encoding: "utf8" });
logger.info("Updated Cloudflared IPs");
return true;
} catch (err) {

View File

@@ -1,4 +1,4 @@
import fs from "node:fs";
import { readFile, rename, rm, writeFile } from "node:fs/promises";
import { dirname } from "node:path";
import { domainToASCII, fileURLToPath } from "node:url";
import _ from "lodash";
@@ -24,111 +24,81 @@ const internalNginx = {
* @param {Object} host
* @returns {Promise}
*/
configure: (model, host_type, host) => {
configure: async (model, host_type, host) => {
let combined_meta = {};
return internalNginx
.test()
.then(() => {
return internalNginx.deleteConfig(host_type, host);
})
.then(() => {
return internalNginx.reload();
})
.then(() => {
return internalNginx.generateConfig(host_type, host);
})
.then(() => {
// Test nginx again and update meta with result
return internalNginx
.test()
.then(() => {
// nginx is ok
combined_meta = _.assign({}, host.meta, {
nginx_online: true,
nginx_err: null,
});
await internalNginx.deleteConfig(host_type, host);
await internalNginx.generateConfig(host_type, host);
return model.query().where("id", host.id).patch({
meta: combined_meta,
});
})
.catch((err) => {
logger.error(err.message);
// config is bad, update meta and rename config
combined_meta = _.assign({}, host.meta, {
nginx_online: false,
nginx_err: err.message,
});
return model
.query()
.where("id", host.id)
.patch({
meta: combined_meta,
})
.then(() => {
internalNginx.renameConfigAsError(host_type, host);
});
});
})
.then(() => {
return internalNginx.reload();
})
.then(() => {
return combined_meta;
try {
await internalNginx.test();
combined_meta = _.assign({}, host.meta, {
nginx_online: true,
nginx_err: null,
});
await model.query().where("id", host.id).patch({
meta: combined_meta,
});
} catch (err) {
logger.error(err.message);
// config is bad, update meta and rename config
combined_meta = _.assign({}, host.meta, {
nginx_online: false,
nginx_err: err.message,
});
await model.query().where("id", host.id).patch({
meta: combined_meta,
});
await internalNginx.renameConfigAsError(host_type, host);
}
await internalNginx.reload();
return combined_meta;
},
/**
* @returns {Promise}
*/
test: () => {
test: async () => {
return utils.execFile("nginx", ["-tq"]);
},
/**
* @returns {Promise}
*/
reload: () => {
const promises = [];
reload: async () => {
if (process.env.ACME_OCSP_STAPLING === "true") {
promises.push(
utils
.execFile("certbot-ocsp-fetcher.sh", [
"-c",
"/data/tls/certbot/live",
"-o",
"/data/tls/certbot/live",
"--no-reload-webserver",
"--quiet",
])
.catch(() => {}),
);
try {
await utils.execFile("certbot-ocsp-fetcher.sh", [
"-c",
"/data/tls/certbot/live",
"-o",
"/data/tls/certbot/live",
"--no-reload-webserver",
"--quiet",
]);
} catch {}
}
if (process.env.CUSTOM_OCSP_STAPLING === "true") {
promises.push(
utils
.execFile("certbot-ocsp-fetcher.sh", [
"-c",
"/data/tls/custom",
"-o",
"/data/tls/custom",
"--no-reload-webserver",
"--quiet",
])
.catch(() => {}),
);
try {
await utils.execFile("certbot-ocsp-fetcher.sh", [
"-c",
"/data/tls/custom",
"-o",
"/data/tls/custom",
"--no-reload-webserver",
"--quiet",
]);
} catch {}
}
return Promise.all(promises).finally(() => {
return internalNginx.test().then(() => {
return utils.execFile("nginx", ["-s", "reload"]);
});
});
await internalNginx.test();
return utils.execFile("nginx", ["-s", "reload"]);
},
/**
@@ -148,41 +118,40 @@ const internalNginx = {
* @param {Object} host
* @returns {Promise}
*/
renderLocations: (host) => {
return new Promise((resolve, reject) => {
let template;
renderLocations: async (host) => {
let template;
try {
template = fs.readFileSync(`${__dirname}/../templates/_proxy_host_custom_location.conf`, {
encoding: "utf8",
});
} catch (err) {
reject(new errs.ConfigurationError(err.message));
return;
try {
template = await readFile(`${__dirname}/../templates/_proxy_host_custom_location.conf`, {
encoding: "utf8",
});
} catch (err) {
throw new errs.ConfigurationError(err.message);
}
const renderEngine = utils.getRenderEngine();
let renderedLocations = "";
for (const location of host.locations) {
if (location.npmplus_enabled === false) {
continue;
}
const renderEngine = utils.getRenderEngine();
let renderedLocations = "";
if (
location.forward_host.indexOf("/") > -1 &&
!location.forward_host.startsWith("/") &&
!location.forward_host.startsWith("unix")
) {
const split = location.forward_host.split("/");
const locationRendering = async () => {
for (let i = 0; i < host.locations.length; i++) {
if (
host.locations[i].forward_host.indexOf("/") > -1 &&
!host.locations[i].forward_host.startsWith("/") &&
!host.locations[i].forward_host.startsWith("unix")
) {
const split = host.locations[i].forward_host.split("/");
location.forward_host = split.shift();
location.forward_path = `/${split.join("/")}`;
}
host.locations[i].forward_host = split.shift();
host.locations[i].forward_path = `/${split.join("/")}`;
}
renderedLocations += await renderEngine.parseAndRender(template, location);
}
renderedLocations += await renderEngine.parseAndRender(template, host.locations[i]);
}
};
locationRendering().then(() => resolve(renderedLocations));
});
return renderedLocations;
},
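Both `renderLocations` and `generateConfig` split a `forward_host` such as `backend.local/api/v1` into a host and a path, while leaving absolute paths and unix sockets untouched. A self-contained sketch of just that rule (the helper name is hypothetical):

```javascript
// Sketch of the forward_host splitting used in renderLocations/generateConfig.
const splitForwardHost = (forward_host) => {
	if (
		forward_host.indexOf("/") > -1 &&
		!forward_host.startsWith("/") && // absolute paths stay as-is
		!forward_host.startsWith("unix") // unix sockets stay as-is
	) {
		const split = forward_host.split("/");
		return { forward_host: split.shift(), forward_path: `/${split.join("/")}` };
	}
	return { forward_host };
};

console.log(splitForwardHost("backend.local/api/v1"));
// host "backend.local", path "/api/v1"
```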
/**
@@ -190,117 +159,97 @@ const internalNginx = {
* @param {Object} host
* @returns {Promise}
*/
generateConfig: (host_type, host_row) => {
generateConfig: async (host_type, host_row) => {
// Prevent modifying the original object:
const host = JSON.parse(JSON.stringify(host_row));
const nice_host_type = internalNginx.getFileFriendlyHostType(host_type);
const renderEngine = utils.getRenderEngine();
return new Promise((resolve, reject) => {
let template = null;
const filename = internalNginx.getConfigName(nice_host_type, host.id);
let template = null;
const filename = internalNginx.getConfigName(nice_host_type, host.id);
try {
template = fs.readFileSync(`${__dirname}/../templates/${nice_host_type}.conf`, { encoding: "utf8" });
} catch (err) {
reject(new errs.ConfigurationError(err.message));
return;
}
let locationsPromise;
let origLocations;
// Manipulate the data a bit before sending it to the template
if (nice_host_type !== "default") {
host.use_default_location = true;
if (typeof host.advanced_config !== "undefined" && host.advanced_config) {
host.use_default_location = !internalNginx.advancedConfigHasDefaultLocation(host.advanced_config);
}
}
// For redirection hosts, if the scheme is not http or https, set it to $scheme
if (
nice_host_type === "redirection_host" &&
["http", "https"].indexOf(host.forward_scheme.toLowerCase()) === -1
) {
host.forward_scheme = "$scheme";
}
if (host.locations) {
//logger.info ('host.locations = ' + JSON.stringify(host.locations, null, 2));
origLocations = [].concat(host.locations);
locationsPromise = internalNginx.renderLocations(host).then((renderedLocations) => {
host.locations = renderedLocations;
});
// Allow someone who is using / custom location path to use it, and skip the default / location
_.map(host.locations, (location) => {
if (location.path === "/" && location.location_type !== "= ") {
host.use_default_location = false;
}
});
} else {
locationsPromise = Promise.resolve();
}
if (
host.forward_host &&
host.forward_host.indexOf("/") > -1 &&
!host.forward_host.startsWith("/") &&
!host.forward_host.startsWith("unix")
) {
const split = host.forward_host.split("/");
host.forward_host = split.shift();
host.forward_path = `/${split.join("/")}`;
}
if (host.domain_names) {
host.server_names = host.domain_names.map((domain_name) => domainToASCII(domain_name) || domain_name);
}
host.env = process.env;
locationsPromise.then(() => {
renderEngine
.parseAndRender(template, host)
.then((config_text) => {
fs.writeFileSync(filename, config_text, { encoding: "utf8" });
debug(logger, "Wrote config:", filename);
// Restore locations array
host.locations = origLocations;
resolve(true);
})
.catch((err) => {
debug(logger, `Could not write ${filename}:`, err.message);
reject(new errs.ConfigurationError(err.message));
})
.then(() => {
if (process.env.DISABLE_NGINX_BEAUTIFIER === "false") {
utils.execFile("nginxbeautifier", ["-s", "4", filename]).catch(() => {});
}
});
});
});
},
/**
* A simple wrapper around unlinkSync that writes to the logger
*
* @param {String} filename
*/
deleteFile: (filename) => {
if (!fs.existsSync(filename)) {
return;
}
try {
debug(logger, `Deleting file: ${filename}`);
fs.unlinkSync(filename);
template = await readFile(`${__dirname}/../templates/${nice_host_type}.conf`, { encoding: "utf8" });
} catch (err) {
debug(logger, "Could not delete file:", JSON.stringify(err, null, 2));
throw new errs.ConfigurationError(err.message);
}
let origLocations;
// Manipulate the data a bit before sending it to the template
if (nice_host_type !== "default") {
host.use_default_location = true;
if (typeof host.advanced_config !== "undefined" && host.advanced_config) {
host.use_default_location = !internalNginx.advancedConfigHasDefaultLocation(host.advanced_config);
}
}
// For redirection hosts, if the scheme is not http or https, set it to $scheme
if (
nice_host_type === "redirection_host" &&
["http", "https"].indexOf(host.forward_scheme.toLowerCase()) === -1
) {
host.forward_scheme = "$scheme";
}
if (host.locations) {
_.map(host.locations, (location) => {
if (location.path === "/" && location.location_type !== "= " && location.npmplus_enabled !== false) {
host.use_default_location = false;
}
if (location.npmplus_auth_request === "anubis") {
host.create_anubis_locations = true;
}
if (location.npmplus_auth_request === "tinyauth") {
host.create_tinyauth_locations = true;
}
if (location.npmplus_auth_request === "authelia") {
host.create_authelia_locations = true;
}
if (
location.npmplus_auth_request === "authentik" ||
location.npmplus_auth_request === "authentik-send-basic-auth"
) {
host.create_authentik_locations = true;
}
});
host.locations = await internalNginx.renderLocations(host);
}
if (
host.forward_host &&
host.forward_host.indexOf("/") > -1 &&
!host.forward_host.startsWith("/") &&
!host.forward_host.startsWith("unix")
) {
const split = host.forward_host.split("/");
host.forward_host = split.shift();
host.forward_path = `/${split.join("/")}`;
}
if (host.domain_names) {
host.server_names = host.domain_names.map((domain_name) => domainToASCII(domain_name) || domain_name);
}
host.env = process.env;
try {
const config_text = await renderEngine.parseAndRender(template, host);
await writeFile(filename, config_text, { encoding: "utf8" });
debug(logger, "Wrote config:", filename);
if (process.env.DISABLE_NGINX_BEAUTIFIER === "false") {
await utils.execFile("nginxbeautifier", ["-s", "4", filename]).catch(() => {});
}
return true;
} catch (err) {
debug(logger, `Could not write ${filename}:`, err.message);
throw new errs.ConfigurationError(err.message);
}
},
@@ -318,17 +267,22 @@ const internalNginx = {
* @param {Object} [host]
* @returns {Promise}
*/
deleteConfig: (host_type, host) => {
deleteConfig: async (host_type, host) => {
const config_file = internalNginx.getConfigName(
internalNginx.getFileFriendlyHostType(host_type),
typeof host === "undefined" ? 0 : host.id,
);
return new Promise((resolve /*, reject*/) => {
internalNginx.deleteFile(config_file);
internalNginx.deleteFile(`${config_file}.err`);
resolve();
});
const filesToDelete = [config_file, `${config_file}.err`];
for (const filename of filesToDelete) {
try {
debug(logger, `Deleting file: ${filename}`);
await rm(filename, { force: true });
} catch (err) {
debug(logger, "Could not delete file:", JSON.stringify(err, null, 2));
}
}
},
/**
@@ -336,17 +290,15 @@ const internalNginx = {
* @param {Object} [host]
* @returns {Promise}
*/
renameConfigAsError: (host_type, host) => {
renameConfigAsError: async (host_type, host) => {
const config_file = internalNginx.getConfigName(
internalNginx.getFileFriendlyHostType(host_type),
typeof host === "undefined" ? 0 : host.id,
);
return new Promise((resolve /*, reject */) => {
fs.rename(config_file, `${config_file}.err`, () => {
resolve();
});
});
try {
await rename(config_file, `${config_file}.err`);
} catch {}
},
/**

View File

@@ -74,7 +74,7 @@ export default {
};
}
// Create a moment of the expiry expression
// Create a dayjs of the expiry expression
const expiry = parseDatePeriod(data.expiry);
if (expiry === null) {
throw new errs.AuthError(`Invalid expiry time: ${data.expiry}`);
@@ -117,7 +117,7 @@ export default {
throw new errs.AuthError(ERROR_MESSAGE_INVALID_AUTH);
}
// Create a moment of the expiry expression
// Create a dayjs of the expiry expression
const expiry = parseDatePeriod(data.expiry);
if (expiry === null) {
throw new errs.AuthError(`Invalid expiry time: ${data.expiry}`);
@@ -152,7 +152,7 @@ export default {
thisData.expiry = thisData.expiry || "1d";
if (access?.token.getUserId(0)) {
// Create a moment of the expiry expression
// Create a dayjs of the expiry expression
const expiry = parseDatePeriod(thisData.expiry);
if (expiry === null) {
throw new errs.AuthError(`Invalid expiry time: ${thisData.expiry}`);

View File

@@ -3,40 +3,13 @@ import crypto from "node:crypto";
import { global as logger } from "../logger.js";
const keysFile = "/data/npmplus/keys.json";
const sqliteEngine = "better-sqlite3";
const mysqlEngine = "mysql2";
const postgresEngine = "pg";
const sqliteClientName = "better-sqlite3";
let instance = null;
// 1. Load from config file first (not recommended anymore)
// 2. Use config env variables next
const configure = () => {
const filename = "/data/npmplus/default.json";
if (fs.existsSync(filename)) {
let configData;
try {
// Load this json synchronously
const rawData = fs.readFileSync(filename);
configData = JSON.parse(rawData);
} catch (_) {
// do nothing
}
if (configData?.database) {
logger.info(`Using configuration from file: ${filename}`);
// Migrate those who have "mysql" engine to "mysql2"
if (configData.database.engine === "mysql") {
configData.database.engine = mysqlEngine;
}
instance = configData;
instance.keys = getKeys();
return;
}
}
const toBool = (v) => /^(1|true|yes|on)$/i.test((v || "").trim());
const envMysqlHost = process.env.DB_MYSQL_HOST || null;
@@ -98,7 +71,7 @@ const configure = () => {
database: {
engine: "knex-native",
knex: {
client: sqliteClientName,
client: sqliteEngine,
connection: {
filename: envSqliteFile,
},
@@ -195,7 +168,7 @@ const configGet = (key) => {
*/
const isSqlite = () => {
instance === null && configure();
return instance.database.knex && instance.database.knex.client === sqliteClientName;
return instance.database.knex && instance.database.knex.client === sqliteEngine;
};
/**

View File

@@ -20,7 +20,7 @@ export default () => {
next();
} catch {
res.clearCookie("token", { path: "/api" });
return res.status(401).json({
return res.status(403).json({
error: {
message: "Invalid or expired token",
},

View File

@@ -1,9 +1,9 @@
import moment from "moment";
import dayjs from "dayjs";
import { ref } from "objection";
import { isPostgres } from "./config.js";
/**
* Takes an expression such as 30d and returns a moment object of that date in future
* Takes an expression such as 30d and returns a dayjs object of that date in future
*
* Key Shorthand
* ==================
@@ -23,7 +23,7 @@ import { isPostgres } from "./config.js";
const parseDatePeriod = (expression) => {
const matches = expression.match(/^([0-9]+)(y|Q|M|w|d|h|m|s|ms)$/m);
if (matches) {
return moment().add(matches[1], matches[2]);
return dayjs().add(matches[1], matches[2]);
}
return null;
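The shorthand matching in `parseDatePeriod` is independent of the moment-to-dayjs swap; only the date library changes, not the regex. A self-contained sketch of just the regex split, returning a plain object instead of a date (the helper name is hypothetical):

```javascript
// Sketch of parseDatePeriod's expression parsing: "30d" -> amount + unit.
// The alternation backtracks so "5ms" resolves to unit "ms", not "m".
const parsePeriodParts = (expression) => {
	const matches = expression.match(/^([0-9]+)(y|Q|M|w|d|h|m|s|ms)$/m);
	if (matches) {
		return { amount: Number(matches[1]), unit: matches[2] };
	}
	return null; // anything else is rejected, as in the original
};

console.log(parsePeriodParts("30d"));
```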

View File

@@ -1,4 +1,5 @@
import { execFile as nodeExecFile } from "node:child_process";
import { promisify } from "node:util";
import { dirname } from "node:path";
import { fileURLToPath } from "node:url";
import { Liquid } from "liquidjs";
@@ -11,6 +12,8 @@ import errs from "./error.js";
const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);
const nodeExecFilePromises = promisify(nodeExecFile);
const writeHash = () => {
const envVars = fs.readdirSync(`${__dirname}/../templates`).flatMap((file) => {
const content = fs.readFileSync(`${__dirname}/../templates/${file}`, "utf8");
@@ -31,18 +34,18 @@ const writeHash = () => {
* @param {Array} args
* @returns {Promise}
*/
const execFile = (cmd, args) => {
const execFile = async (cmd, args) => {
debug(logger, `CMD: ${cmd} ${args ? args.join(" ") : ""}`);
return new Promise((resolve, reject) => {
nodeExecFile(cmd, args, (err, stdout, stderr) => {
if (err && typeof err === "object") {
reject(new errs.CommandError((stdout + stderr).trim(), 1, err));
} else {
resolve((stdout + stderr).trim());
}
});
});
try {
const { stdout, stderr } = await nodeExecFilePromises(cmd, args);
return `${stdout || ""}${stderr || ""}`.trim();
} catch (err) {
if (err && typeof err === "object") {
throw new errs.CommandError(`${err.stdout || ""}${err.stderr || ""}`.trim(), 1, err);
}
throw err;
}
};
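The promisified `execFile` above returns stdout and stderr combined into one trimmed string. A standalone sketch of that pattern using only node builtins (the `CommandError` wrapping from the diff is left out here):

```javascript
import { execFile as nodeExecFile } from "node:child_process";
import { promisify } from "node:util";

const execFileAsync = promisify(nodeExecFile);

// Mirrors the diff's pattern: run the command, then return stdout and
// stderr merged and trimmed, so callers see one result string.
const execFile = async (cmd, args) => {
	const { stdout, stderr } = await execFileAsync(cmd, args);
	return `${stdout || ""}${stderr || ""}`.trim();
};

// Usage: the current node binary doubles as a portable test command.
console.log(await execFile(process.execPath, ["--version"]));
```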
/**

View File

@@ -0,0 +1,23 @@
/**
* Migrate
*
* @see https://knexjs.org/guide/migrations.html#migration-api
*
* @param {Object} knex
* @returns {Promise}
*/
const up = (_knex) => {
return Promise.resolve(true);
};
/**
* Undo Migrate
*
* @param {Object} knex
* @returns {Promise}
*/
const down = (_knex) => {
return Promise.resolve(true);
};
export { up, down };

View File

@@ -5,7 +5,7 @@ const migrateName = "initial-schema";
/**
* Migrate
*
* @see http://knexjs.org/#Schema
* @see https://knexjs.org/guide/migrations.html#migration-api
*
* @param {Object} knex
* @returns {Promise}

View File

@@ -5,7 +5,7 @@ const migrateName = "websockets";
/**
* Migrate
*
* @see http://knexjs.org/#Schema
* @see https://knexjs.org/guide/migrations.html#migration-api
*
* @param {Object} knex
* @returns {Promise}

View File

@@ -5,7 +5,7 @@ const migrateName = "forward_host";
/**
* Migrate
*
* @see http://knexjs.org/#Schema
* @see https://knexjs.org/guide/migrations.html#migration-api
*
* @param {Object} knex
* @returns {Promise}

View File

@@ -5,7 +5,7 @@ const migrateName = "http2_support";
/**
* Migrate
*
* @see http://knexjs.org/#Schema
* @see https://knexjs.org/guide/migrations.html#migration-api
*
* @param {Object} knex
* @returns {Promise}

View File

@@ -5,7 +5,7 @@ const migrateName = "forward_scheme";
/**
* Migrate
*
* @see http://knexjs.org/#Schema
* @see https://knexjs.org/guide/migrations.html#migration-api
*
* @param {Object} knex
* @returns {Promise}

View File

@@ -5,7 +5,7 @@ const migrateName = "disabled";
/**
* Migrate
*
* @see http://knexjs.org/#Schema
* @see https://knexjs.org/guide/migrations.html#migration-api
*
* @param {Object} knex
* @returns {Promise}

View File

@@ -6,7 +6,7 @@ const migrateName = "custom_locations";
* Migrate
* Extends proxy_host table with locations field
*
* @see http://knexjs.org/#Schema
* @see https://knexjs.org/guide/migrations.html#migration-api
*
* @param {Object} knex
* @returns {Promise}

View File

@@ -5,7 +5,7 @@ const migrateName = "hsts";
/**
* Migrate
*
* @see http://knexjs.org/#Schema
* @see https://knexjs.org/guide/migrations.html#migration-api
*
* @param {Object} knex
* @returns {Promise}

View File

@@ -5,7 +5,7 @@ const migrateName = "settings";
/**
* Migrate
*
* @see http://knexjs.org/#Schema
* @see https://knexjs.org/guide/migrations.html#migration-api
*
* @param {Object} knex
* @returns {Promise}

View File

@@ -5,7 +5,7 @@ const migrateName = "access_list_client";
/**
* Migrate
*
* @see http://knexjs.org/#Schema
* @see https://knexjs.org/guide/migrations.html#migration-api
*
* @param {Object} knex
* @returns {Promise}

View File

@@ -5,7 +5,7 @@ const migrateName = "access_list_client_fix";
/**
* Migrate
*
* @see http://knexjs.org/#Schema
* @see https://knexjs.org/guide/migrations.html#migration-api
*
* @param {Object} knex
* @returns {Promise}

View File

@@ -5,7 +5,7 @@ const migrateName = "pass_auth";
/**
* Migrate
*
* @see http://knexjs.org/#Schema
* @see https://knexjs.org/guide/migrations.html#migration-api
*
* @param {Object} knex
* @returns {Promise}

View File

@@ -5,7 +5,7 @@ const migrateName = "redirection_scheme";
/**
* Migrate
*
* @see http://knexjs.org/#Schema
* @see https://knexjs.org/guide/migrations.html#migration-api
*
* @param {Object} knex
* @returns {Promise}

View File

@@ -5,7 +5,7 @@ const migrateName = "redirection_status_code";
/**
* Migrate
*
* @see http://knexjs.org/#Schema
* @see https://knexjs.org/guide/migrations.html#migration-api
*
* @param {Object} knex
* @returns {Promise}

@@ -5,7 +5,7 @@ const migrateName = "stream_domain";
/**
* Migrate
*
* @see http://knexjs.org/#Schema
* @see https://knexjs.org/guide/migrations.html#migration-api
*
* @param {Object} knex
* @returns {Promise}

@@ -26,7 +26,7 @@ async function regenerateDefaultHost(knex) {
/**
* Migrate
*
* @see http://knexjs.org/#Schema
* @see https://knexjs.org/guide/migrations.html#migration-api
*
* @param {Object} knex
* @returns {Promise}

@@ -5,7 +5,7 @@ const migrateName = "stream_ssl";
/**
* Migrate
*
* @see http://knexjs.org/#Schema
* @see https://knexjs.org/guide/migrations.html#migration-api
*
* @param {Object} knex
* @returns {Promise}

@@ -5,7 +5,7 @@ const migrateName = "change_incoming_port_to_string";
/**
* Migrate
*
* @see http://knexjs.org/#Schema
* @see https://knexjs.org/guide/migrations.html#migration-api
*
* @param {Object} knex
* @returns {Promise}

@@ -26,7 +26,7 @@ async function regenerateDefaultHost(knex) {
/**
* Migrate
*
* @see http://knexjs.org/#Schema
* @see https://knexjs.org/guide/migrations.html#migration-api
*
* @param {Object} knex
* @returns {Promise}

@@ -5,7 +5,7 @@ const migrateName = "change_forwarding_port_to_string";
/**
* Migrate
*
* @see http://knexjs.org/#Schema
* @see https://knexjs.org/guide/migrations.html#migration-api
*
* @param {Object} knex
* @returns {Promise}

@@ -5,7 +5,7 @@ const migrateName = "allow_empty_forwarding_port";
/**
* Migrate
*
* @see http://knexjs.org/#Schema
* @see https://knexjs.org/guide/migrations.html#migration-api
*
* @param {Object} knex
* @returns {Promise}

@@ -5,7 +5,7 @@ const migrateName = "allow_empty_stream_forwarding_port";
/**
* Migrate
*
* @see http://knexjs.org/#Schema
* @see https://knexjs.org/guide/migrations.html#migration-api
*
* @param {Object} knex
* @returns {Promise}

@@ -5,7 +5,7 @@ const migrateName = "stream_proxy_protocol_forwarding";
/**
* Migrate
*
* @see http://knexjs.org/#Schema
* @see https://knexjs.org/guide/migrations.html#migration-api
*
* @param {Object} knex
* @returns {Promise}

@@ -5,7 +5,7 @@ const migrateName = "redirect_auto_scheme";
/**
* Migrate
*
* @see http://knexjs.org/#Schema
* @see https://knexjs.org/guide/migrations.html#migration-api
*
* @param {Object} knex
* @returns {Promise}

@@ -5,7 +5,7 @@ const migrateName = "stream_proxy_ssl";
/**
* Migrate
*
* @see http://knexjs.org/#Schema
* @see https://knexjs.org/guide/migrations.html#migration-api
*
* @param {Object} knex
* @returns {Promise}

@@ -5,7 +5,7 @@ const migrateName = "stream_rename_pp_and_tls";
/**
* Migrate
*
* @see http://knexjs.org/#Schema
* @see https://knexjs.org/guide/migrations.html#migration-api
*
* @param {Object} knex
* @returns {Promise}

@@ -0,0 +1,43 @@
import { migrate as logger } from "../logger.js";
const migrateName = "trust_forwarded_proto";
/**
* Migrate
*
* @see https://knexjs.org/guide/migrations.html#migration-api
*
* @param {Object} knex
* @returns {Promise}
*/
const up = (knex) => {
logger.info(`[${migrateName}] Migrating Up...`);
return knex.schema
.alterTable("proxy_host", (table) => {
table.tinyint("trust_forwarded_proto").notNullable().defaultTo(0);
})
.then(() => {
logger.info(`[${migrateName}] proxy_host Table altered`);
});
};
/**
* Undo Migrate
*
* @param {Object} knex
* @returns {Promise}
*/
const down = (knex) => {
logger.info(`[${migrateName}] Migrating Down...`);
return knex.schema
.alterTable("proxy_host", (table) => {
table.dropColumn("trust_forwarded_proto");
})
.then(() => {
logger.info(`[${migrateName}] proxy_host Table altered`);
});
};
export { up, down };

@@ -5,7 +5,7 @@ const migrateName = "reset_button_values";
/**
* Migrate
*
* @see http://knexjs.org/#Schema
* @see https://knexjs.org/guide/migrations.html#migration-api
*
* @param {Object} knex
* @returns {Promise}

@@ -0,0 +1,23 @@
/**
* Migrate
*
* @see https://knexjs.org/guide/migrations.html#migration-api
*
* @param {Object} knex
* @returns {Promise}
*/
const up = (_knex) => {
return Promise.resolve(true);
};
/**
* Undo Migrate
*
* @param {Object} knex
* @returns {Promise}
*/
const down = (_knex) => {
return Promise.resolve(true);
};
export { up, down };

@@ -0,0 +1,57 @@
import { migrate as logger } from "../logger.js";
const migrateName = "new_proxy_buttons";
/**
* Migrate
*
* @see https://knexjs.org/guide/migrations.html#migration-api
*
* @param {Object} knex
* @returns {Promise}
*/
const up = async (knex) => {
const hasRequestBuffering = await knex.schema.hasColumn("proxy_host", "npmplus_proxy_request_buffering");
const hasResponseBuffering = await knex.schema.hasColumn("proxy_host", "npmplus_proxy_response_buffering");
const hasFancyindexUpstreamCompression = await knex.schema.hasColumn(
"proxy_host",
"npmplus_fancyindex_upstream_compression",
);
const hasNoindex = await knex.schema.hasColumn("proxy_host", "npmplus_noindex");
if (hasRequestBuffering && hasResponseBuffering && hasFancyindexUpstreamCompression && hasNoindex) {
return;
}
logger.info(`[${migrateName}] Migrating Up...`);
await knex.schema.table("proxy_host", (proxy_host) => {
if (!hasRequestBuffering) {
proxy_host.integer("npmplus_proxy_request_buffering").notNull().unsigned().defaultTo(0);
}
if (!hasResponseBuffering) {
proxy_host.integer("npmplus_proxy_response_buffering").notNull().unsigned().defaultTo(0);
}
if (!hasFancyindexUpstreamCompression) {
proxy_host.integer("npmplus_fancyindex_upstream_compression").notNull().unsigned().defaultTo(0);
}
if (!hasNoindex) {
proxy_host.integer("npmplus_noindex").notNull().unsigned().defaultTo(0);
}
});
logger.info(`[${migrateName}] proxy_host Table altered`);
};
/**
* Undo Migrate
*
* @param {Object} knex
* @returns {Promise}
*/
const down = (_knex) => {
logger.warn(`[${migrateName}] You can't migrate down this one.`);
return Promise.resolve(true);
};
export { up, down };
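The `up` migration above guards every `addColumn` with a `hasColumn` check, so re-running it against an already-migrated database is a no-op. A minimal sketch of that idempotency pattern, with a plain object standing in for knex's schema builder (the helper names here are illustrative, not part of the NPMplus codebase):

```javascript
// Toy stand-in for knex's schema API: a table is just a Set of column names.
const makeSchema = (columns = []) => {
    const table = new Set(columns);
    return {
        hasColumn: (name) => table.has(name),
        addColumn: (name) => table.add(name),
        columns: () => [...table].sort(),
    };
};

// Idempotent "up": only add the columns that are still missing,
// mirroring the hasColumn guards in the migration above.
const up = (schema) => {
    for (const col of ["npmplus_proxy_request_buffering", "npmplus_proxy_response_buffering"]) {
        if (!schema.hasColumn(col)) schema.addColumn(col);
    }
    return schema;
};

const schema = makeSchema(["id", "npmplus_proxy_request_buffering"]);
up(schema);
up(schema); // safe to run a second time; nothing is re-added
```

The same shape explains why `down` can be a warning-only no-op: once the guards make `up` converge, there is no single prior state to restore.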

@@ -0,0 +1,37 @@
import { migrate as logger } from "../logger.js";
const migrateName = "new_proxy_selections";
/**
* Migrate
*
* @see https://knexjs.org/guide/migrations.html#migration-api
*
* @param {Object} knex
* @returns {Promise}
*/
const up = (knex) => {
logger.info(`[${migrateName}] Migrating Up...`);
return knex.schema
.table("proxy_host", (proxy_host) => {
proxy_host.string("npmplus_x_frame_options").notNull().defaultTo("SAMEORIGIN");
proxy_host.string("npmplus_auth_request").notNull().defaultTo("none");
})
.then(() => {
logger.info(`[${migrateName}] proxy_host Table altered`);
});
};
/**
* Undo Migrate
*
* @param {Object} knex
* @returns {Promise}
*/
const down = (_knex) => {
logger.warn(`[${migrateName}] You can't migrate down this one.`);
return Promise.resolve(true);
};
export { up, down };

@@ -0,0 +1,57 @@
import { migrate as logger } from "../logger.js";
const migrateName = "npmplus_http3_support";
/**
* Migrate
*
* @see https://knexjs.org/guide/migrations.html#migration-api
*
* @param {Object} knex
* @returns {Promise}
*/
const up = async (knex) => {
const proxyHostHasHttp3Column = await knex.schema.hasColumn("proxy_host", "npmplus_http3_support");
const redirectHostHasHttp3Column = await knex.schema.hasColumn("redirection_host", "npmplus_http3_support");
const deadHostHasHttp3Column = await knex.schema.hasColumn("dead_host", "npmplus_http3_support");
if (proxyHostHasHttp3Column && redirectHostHasHttp3Column && deadHostHasHttp3Column) {
return;
}
logger.info(`[${migrateName}] Migrating Up...`);
if (!proxyHostHasHttp3Column) {
await knex.schema.table("proxy_host", (proxy_host) => {
proxy_host.integer("npmplus_http3_support").notNull().unsigned().defaultTo(0);
});
logger.info(`[${migrateName}] proxy_host Table altered`);
}
if (!redirectHostHasHttp3Column) {
await knex.schema.table("redirection_host", (redirection_host) => {
redirection_host.integer("npmplus_http3_support").notNull().unsigned().defaultTo(0);
});
logger.info(`[${migrateName}] redirection_host Table altered`);
}
if (!deadHostHasHttp3Column) {
await knex.schema.table("dead_host", (dead_host) => {
dead_host.integer("npmplus_http3_support").notNull().unsigned().defaultTo(0);
});
logger.info(`[${migrateName}] dead_host Table altered`);
}
};
/**
* Undo Migrate
*
* @param {Object} knex
* @returns {Promise}
*/
const down = (_knex) => {
logger.warn(`[${migrateName}] You can't migrate down this one.`);
return Promise.resolve(true);
};
export { up, down };

@@ -0,0 +1,23 @@
/**
* Migrate
*
* @see https://knexjs.org/guide/migrations.html#migration-api
*
* @param {Object} knex
* @returns {Promise}
*/
const up = (_knex) => {
return Promise.resolve(true);
};
/**
* Undo Migrate
*
* @param {Object} knex
* @returns {Promise}
*/
const down = (_knex) => {
return Promise.resolve(true);
};
export { up, down };

@@ -0,0 +1,57 @@
import { migrate as logger } from "../logger.js";
const migrateName = "new_and_split_proxy_buttons";
/**
* Migrate
*
* @see https://knexjs.org/guide/migrations.html#migration-api
*
* @param {Object} knex
* @returns {Promise}
*/
const up = async (knex) => {
const hasCrowdsecAppsec = await knex.schema.hasColumn("proxy_host", "npmplus_crowdsec_appsec");
const hasUpstreamCompression = await knex.schema.hasColumn("proxy_host", "npmplus_upstream_compression");
const hasFancyindex = await knex.schema.hasColumn("proxy_host", "npmplus_fancyindex");
const hasFancyindexUpstreamCompression = await knex.schema.hasColumn(
"proxy_host",
"npmplus_fancyindex_upstream_compression",
);
if (hasCrowdsecAppsec && hasUpstreamCompression && hasFancyindex && !hasFancyindexUpstreamCompression) {
return;
}
logger.info(`[${migrateName}] Migrating Up...`);
await knex.schema.table("proxy_host", (proxy_host) => {
if (!hasCrowdsecAppsec) {
proxy_host.integer("npmplus_crowdsec_appsec").notNull().unsigned().defaultTo(0);
}
if (!hasUpstreamCompression) {
proxy_host.integer("npmplus_upstream_compression").notNull().unsigned().defaultTo(0);
}
if (!hasFancyindex) {
proxy_host.integer("npmplus_fancyindex").notNull().unsigned().defaultTo(0);
}
if (hasFancyindexUpstreamCompression) {
proxy_host.dropColumn("npmplus_fancyindex_upstream_compression");
}
});
logger.info(`[${migrateName}] proxy_host Table altered`);
};
/**
* Undo Migrate
*
* @param {Object} knex
* @returns {Promise}
*/
const down = (_knex) => {
logger.warn(`[${migrateName}] You can't migrate down this one.`);
return Promise.resolve(true);
};
export { up, down };

@@ -0,0 +1,23 @@
/**
* Migrate
*
* @see https://knexjs.org/guide/migrations.html#migration-api
*
* @param {Object} knex
* @returns {Promise}
*/
const up = (_knex) => {
return Promise.resolve(true);
};
/**
* Undo Migrate
*
* @param {Object} knex
* @returns {Promise}
*/
const down = (_knex) => {
return Promise.resolve(true);
};
export { up, down };

@@ -10,7 +10,15 @@ import User from "./user.js";
Model.knex(db());
const boolFields = ["is_deleted", "ssl_forced", "http2_support", "enabled", "hsts_enabled", "hsts_subdomains"];
const boolFields = [
"is_deleted",
"ssl_forced",
"http2_support",
"npmplus_http3_support",
"enabled",
"hsts_enabled",
"hsts_subdomains",
];
class DeadHost extends Model {
$beforeInsert() {

@@ -18,9 +18,17 @@ const boolFields = [
"block_exploits",
"allow_websocket_upgrade",
"http2_support",
"npmplus_http3_support",
"enabled",
"hsts_enabled",
"hsts_subdomains",
"trust_forwarded_proto",
"npmplus_noindex",
"npmplus_crowdsec_appsec",
"npmplus_proxy_request_buffering",
"npmplus_proxy_response_buffering",
"npmplus_upstream_compression",
"npmplus_fancyindex",
];
class ProxyHost extends Model {
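The `boolFields` lists in the model hunks above name tinyint columns that the models expose as booleans. A hedged sketch of that coercion as standalone functions (the helper names are invented for illustration; Objection models typically do this inside their own parse/format hooks):

```javascript
// Columns stored as 0/1 integers in the database but exposed as booleans.
const boolFields = ["is_deleted", "http2_support", "npmplus_http3_support", "enabled"];

// Convert 0/1 database values to real booleans on the way out...
const fromDb = (row) => {
    const out = { ...row };
    for (const f of boolFields) {
        if (f in out) out[f] = Boolean(out[f]);
    }
    return out;
};

// ...and back to 0/1 on the way in.
const toDb = (obj) => {
    const out = { ...obj };
    for (const f of boolFields) {
        if (f in out) out[f] = out[f] ? 1 : 0;
    }
    return out;
};
```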

@@ -19,6 +19,7 @@ const boolFields = [
"hsts_enabled",
"hsts_subdomains",
"http2_support",
"npmplus_http3_support",
];
class RedirectionHost extends Model {

@@ -7,29 +7,30 @@
"main": "index.js",
"type": "module",
"dependencies": {
"@apidevtools/json-schema-ref-parser": "15.2.2",
"ajv": "8.17.1",
"@apidevtools/json-schema-ref-parser": "15.3.1",
"ajv": "8.18.0",
"archiver": "7.0.1",
"bcryptjs": "3.0.3",
"better-sqlite3": "12.6.2",
"cookie-parser": "1.4.7",
"dayjs": "1.11.19",
"express": "5.2.1",
"express-fileupload": "1.5.2",
"express-rate-limit": "8.2.1",
"jsonwebtoken": "9.0.3",
"knex": "3.1.0",
"liquidjs": "10.24.0",
"lodash": "4.17.23",
"moment": "2.30.1",
"mysql2": "3.16.3",
"mysql2": "3.18.2",
"objection": "3.1.5",
"otplib": "13.2.1",
"openid-client": "6.8.1",
"pg": "8.18.0",
"openid-client": "6.8.2",
"otplib": "13.3.0",
"pg": "8.19.0",
"signale": "1.4.0",
"temp-write": "6.0.1"
"swagger-ui-express": "5.0.1"
},
"devDependencies": {
"@apidevtools/swagger-parser": "12.1.0",
"@biomejs/biome": "2.3.14"
"@biomejs/biome": "2.4.5"
}
}

@@ -2,7 +2,7 @@
// based on: https://github.com/jlesage/docker-nginx-proxy-manager/blob/796734a3f9a87e0b1561b47fd418f82216359634/rootfs/opt/nginx-proxy-manager/bin/reset-password
import fs from "node:fs";
import { existsSync } from "node:fs";
import bcrypt from "bcryptjs";
import Database from "better-sqlite3";
@@ -13,7 +13,7 @@ Reset password of a NPMplus user.
Arguments:
USER_EMAIL Email address of the user to reset the password.
PASSWORD Optional new password of the user. If not set, password is set to 'changeme'.`);
PASSWORD New password of the user.`);
process.exit(1);
}
@@ -21,57 +21,35 @@ const args = process.argv.slice(2);
const USER_EMAIL = args[0];
const PASSWORD = args[1];
if (!USER_EMAIL && !PASSWORD) {
console.error("ERROR: User email address must be set.");
console.error("ERROR: Password must be set.");
if (!USER_EMAIL || !PASSWORD) {
if (!USER_EMAIL) console.error("ERROR: User email address must be set.");
if (!PASSWORD) console.error("ERROR: Password must be set.");
usage();
}
if (!USER_EMAIL) {
console.error("ERROR: User email address must be set.");
usage();
}
if (!PASSWORD) {
console.error("ERROR: Password must be set.");
usage();
}
if (fs.existsSync("/data/npmplus/database.sqlite")) {
bcrypt.hash(PASSWORD, 13, (err, PASSWORD_HASH) => {
if (err) {
console.error(err);
process.exit(1);
}
const db = new Database("/data/npmplus/database.sqlite");
try {
const stmt = db.prepare(`
UPDATE auth
SET secret = ?
WHERE EXISTS (
SELECT *
FROM user
WHERE user.id = auth.user_id AND user.email = ?
)`);
const result = stmt.run(PASSWORD_HASH, USER_EMAIL);
if (result.changes > 0) {
console.log(`Password for user ${USER_EMAIL} has been reset.`);
} else {
console.log(`No user found with email ${USER_EMAIL}.`);
}
} catch (error) {
console.error(error);
process.exit(1);
} finally {
db.close();
}
process.exit(0);
});
} else {
if (!existsSync("/data/npmplus/database.sqlite")) {
console.error("ERROR: Cannot connect to the sqlite database.");
process.exit(1);
}
let db;
try {
db = new Database("/data/npmplus/database.sqlite");
const PASSWORD_HASH = bcrypt.hashSync(PASSWORD, 13);
const stmt = db.prepare(
"UPDATE auth SET secret = ? WHERE EXISTS (SELECT * FROM user WHERE user.id = auth.user_id AND user.email = ?)",
);
const result = stmt.run(PASSWORD_HASH, USER_EMAIL);
if (result.changes > 0) {
console.log(`Password for user ${USER_EMAIL} has been reset.`);
} else {
console.log(`No user found with email ${USER_EMAIL}.`);
}
} catch (error) {
console.error(error);
process.exitCode = 1;
} finally {
if (db) db.close();
}

backend/pnpm-lock.yaml (generated, 553 lines changed): diff suppressed because it is too large.

backend/routes/docs.js (new file, 36 lines):

@@ -0,0 +1,36 @@
import express from "express";
import swaggerUi from "swagger-ui-express";
import { debug, express as logger } from "../logger.js";
import PACKAGE from "../package.json" with { type: "json" };
import { getCompiledSchema } from "../schema/index.js";
const router = express.Router({
caseSensitive: true,
strict: true,
mergeParams: true,
});
router.use("/", swaggerUi.serve);
router
.route("/")
.options((_, res) => {
res.sendStatus(204);
})
/**
* GET / (Now serves the Swagger UI interface)
*/
.get(async (req, res, next) => {
try {
const swaggerJSON = await getCompiledSchema();
swaggerJSON.info.version = PACKAGE.version;
swaggerJSON.servers[0].url = `${req.protocol}://${req.get("host")}/api`;
res.status(200).send(swaggerUi.generateHTML(swaggerJSON));
} catch (err) {
debug(logger, `${req.method.toUpperCase()} ${req.originalUrl}: ${err}`);
next(err);
}
});
export default router;

@@ -10,6 +10,7 @@ import proxyHostsRoutes from "./nginx/proxy_hosts.js";
import redirectionHostsRoutes from "./nginx/redirection_hosts.js";
import streamsRoutes from "./nginx/streams.js";
import reportsRoutes from "./reports.js";
import docsRoutes from "./docs.js";
import schemaRoutes from "./schema.js";
import settingsRoutes from "./settings.js";
import tokensRoutes from "./tokens.js";
@@ -44,6 +45,7 @@ router.get(["/api", "/api/"], async (_, res /*, next*/) => {
});
});
router.use("/api/docs", docsRoutes);
router.use("/api/schema", schemaRoutes);
router.use("/api/tokens", tokensRoutes);
if (isOIDCenabled) router.use("/api/oidc", oidcRoutes);

@@ -155,8 +155,8 @@ router
* Validate certificates
*/
.post(async (req, res, next) => {
if (!req.files) {
res.status(400).send({ error: "No files were uploaded" });
if (!req.files || Object.keys(req.files).length !== 2 || !req.files.certificate || !req.files.certificate_key) {
res.status(400).send({ error: "certificate and certificate_key were not uploaded" });
return;
}
@@ -254,8 +254,8 @@ router
* Upload certificates
*/
.post(async (req, res, next) => {
if (!req.files) {
res.status(400).send({ error: "No files were uploaded" });
if (!req.files || Object.keys(req.files).length !== 2 || !req.files.certificate || !req.files.certificate_key) {
res.status(400).send({ error: "certificate and certificate_key were not uploaded" });
return;
}
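The tightened check above rejects an upload unless exactly the `certificate` and `certificate_key` fields are present — no missing files, no extras. The same predicate extracted as a pure function (a sketch; `files` stands in for express-fileupload's `req.files` object):

```javascript
// Accept only uploads containing exactly certificate + certificate_key.
const isValidCertUpload = (files) =>
    Boolean(files) &&
    Object.keys(files).length === 2 &&
    Boolean(files.certificate) &&
    Boolean(files.certificate_key);
```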

@@ -1,5 +1,6 @@
import * as client from "openid-client";
import express from "express";
import { rateLimit } from "express-rate-limit";
import errs from "../lib/error.js";
import internalToken from "../internal/token.js";
import { oidc as logger } from "../logger.js";
@@ -10,6 +11,18 @@ const router = express.Router({
mergeParams: true,
});
const limiter = rateLimit({
windowMs: 10 * 60 * 1000,
limit: 10,
standardHeaders: "draft-8",
legacyHeaders: false,
ipv6Subnet: 64,
skipSuccessfulRequests: true,
validate: { trustProxy: false },
});
router.use(limiter);
router
.route("/")
.options((_, res) => {
@@ -39,30 +52,30 @@ router
code_challenge: await client.calculatePKCECodeChallenge(code_verifier),
};
res.cookie("npmplus_oidc_no_redirect", "true", { secure: true, sameSite: "lax" });
res.cookie("npmplus_oidc_no_redirect", "true", { secure: true, sameSite: "Strict" });
res.cookie("npmplus_oidc_code_verifier", code_verifier, {
httpOnly: true,
secure: true,
sameSite: "lax",
sameSite: "Lax",
path: "/api/oidc",
});
res.cookie("npmplus_oidc_state", parameters.state, {
httpOnly: true,
secure: true,
sameSite: "lax",
sameSite: "Lax",
path: "/api/oidc",
});
res.cookie("npmplus_oidc_nonce", parameters.nonce, {
httpOnly: true,
secure: true,
sameSite: "lax",
sameSite: "Lax",
path: "/api/oidc",
});
res.redirect(await client.buildAuthorizationUrl(config, parameters).toString());
} catch (err) {
logger.error(`Callback error: ${err.message}`);
res.cookie("npmplus_oidc_no_redirect", "true", { secure: true, sameSite: "lax" });
res.cookie("npmplus_oidc_no_redirect", "true", { secure: true, sameSite: "Strict" });
res.clearCookie("npmplus_oidc_state", { path: "/api/oidc" });
res.clearCookie("npmplus_oidc_nonce", { path: "/api/oidc" });
res.clearCookie("npmplus_oidc_code_verifier", { path: "/api/oidc" });
@@ -115,7 +128,7 @@ router
res.cookie("token", data.token, {
httpOnly: true,
secure: true,
sameSite: "lax",
sameSite: "Strict",
path: "/api",
expires: new Date(data.expires),
});
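The `rateLimit` options added above (`windowMs: 10 * 60 * 1000`, `limit: 10`, `skipSuccessfulRequests: true`) cap a client at 10 counted requests per 10-minute window. A toy fixed-window counter showing the same window/limit semantics (purely illustrative — not how express-rate-limit is implemented internally):

```javascript
// key -> { count, windowStart }; returns true while the request is allowed.
const makeLimiter = ({ windowMs, limit }) => {
    const hits = new Map();
    return (key, now = Date.now()) => {
        const entry = hits.get(key);
        if (!entry || now - entry.windowStart >= windowMs) {
            // First hit, or the previous window has expired: start a new window.
            hits.set(key, { count: 1, windowStart: now });
            return true;
        }
        entry.count += 1;
        return entry.count <= limit;
    };
};

const allow = makeLimiter({ windowMs: 10 * 60 * 1000, limit: 10 });
```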

@@ -18,22 +18,11 @@ router
/**
* GET /schema
*/
.get(async (req, res) => {
.get(async (req, res, next) => {
try {
const swaggerJSON = await getCompiledSchema();
let proto = req.protocol;
if (typeof req.headers["x-forwarded-proto"] !== "undefined" && req.headers["x-forwarded-proto"]) {
proto = req.headers["x-forwarded-proto"];
}
let origin = `${proto}://${req.hostname}`;
if (typeof req.headers.origin !== "undefined" && req.headers.origin) {
origin = req.headers.origin;
}
swaggerJSON.info.version = PACKAGE.version;
swaggerJSON.servers[0].url = `${origin}/api`;
swaggerJSON.servers[0].url = `${req.protocol}://${req.get("host")}/api`;
res.status(200).send(swaggerJSON);
} catch (err) {
debug(logger, `${req.method.toUpperCase()} ${req.originalUrl}: ${err}`);
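The change above drops the hand-rolled `x-forwarded-proto`/`origin` inspection in favour of Express's own `req.protocol` and `req.get("host")`, which already respect the app's trust-proxy setting. The resulting URL shape, as a small pure function for illustration:

```javascript
// Build the advertised API base URL from the request's protocol and Host header,
// instead of trusting arbitrary forwarded/origin headers directly.
const apiBaseUrl = (protocol, host) => `${protocol}://${host}/api`;
```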

@@ -1,4 +1,5 @@
import express from "express";
import { rateLimit } from "express-rate-limit";
import internalToken from "../internal/token.js";
import errs from "../lib/error.js";
import jwtdecode from "../lib/express/jwt-decode.js";
@@ -12,6 +13,19 @@ const router = express.Router({
mergeParams: true,
});
const limiter = rateLimit({
windowMs: 5 * 60 * 1000,
limit: 10,
message: { error: { message: "Too many requests, please try again later." } },
standardHeaders: "draft-8",
legacyHeaders: false,
ipv6Subnet: 64,
skipSuccessfulRequests: true,
validate: { trustProxy: false },
});
router.use(limiter);
router
.route("/")
.options((_, res) => {
@@ -26,6 +40,12 @@ router
* for services like Job board and Worker.
*/
.get(jwtdecode(), async (req, res, next) => {
if (!req.cookies?.token) {
res.clearCookie("token", { path: "/api" });
res.cookie("npmplus_oidc_no_redirect", "true", { secure: true, sameSite: "Strict" });
return res.status(401).send({ expires: new Date(0).toISOString() });
}
try {
const data = await internalToken.getFreshToken(res.locals.access, {
expiry: typeof req.query.expiry !== "undefined" ? req.query.expiry : null,
@@ -35,7 +55,7 @@ router
res.cookie("token", data.token, {
httpOnly: true,
secure: true,
sameSite: "lax",
sameSite: "Strict",
path: "/api",
expires: new Date(data.expires),
});
@@ -66,7 +86,7 @@ router
res.cookie("token", result.token, {
httpOnly: true,
secure: true,
sameSite: "lax",
sameSite: "Strict",
path: "/api",
expires: new Date(result.expires),
});
@@ -87,8 +107,8 @@ router
.delete(async (req, res, next) => {
try {
res.clearCookie("token", { path: "/api" });
res.cookie("npmplus_oidc_no_redirect", "true", { secure: true, sameSite: "lax" });
res.status(200).send(true);
res.cookie("npmplus_oidc_no_redirect", "true", { secure: true, sameSite: "Strict" });
res.status(200).send({ expires: new Date(0).toISOString() });
} catch (err) {
debug(logger, `${req.method.toUpperCase()} ${req.originalUrl}: ${err}`);
next(err);

@@ -297,7 +297,7 @@ router
res.cookie("token", result.token, {
httpOnly: true,
secure: true,
sameSite: "lax",
sameSite: "Strict",
path: "/api",
expires: new Date(result.expires),
});
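Several hunks above switch the session cookies from `SameSite=Lax` to `SameSite=Strict`. What `res.cookie(...)` ultimately emits is a `Set-Cookie` header; a minimal serializer sketch of the attributes used here (illustrative only — Express delegates this to the `cookie` package):

```javascript
// Serialize a cookie with the subset of attributes used in these routes.
const serializeCookie = (name, value, { httpOnly, secure, sameSite, path, expires } = {}) => {
    const parts = [`${name}=${encodeURIComponent(value)}`];
    if (path) parts.push(`Path=${path}`);
    if (expires) parts.push(`Expires=${expires.toUTCString()}`);
    if (httpOnly) parts.push("HttpOnly");
    if (secure) parts.push("Secure");
    if (sameSite) parts.push(`SameSite=${sameSite}`);
    return parts.join("; ");
};
```

With `SameSite=Strict` the browser omits the token cookie from all cross-site requests, including top-level navigations, which is the stricter posture these routes opt into.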

@@ -121,16 +121,56 @@
"type": "boolean",
"example": true
},
"npmplus_http3_support": {
"description": "HTTP3 Protocol Support",
"type": "boolean",
"example": true
},
"block_exploits": {
"description": "Should we block common exploits",
"description": "This is always disabled. Your value will be ignored",
"type": "boolean",
"example": false
},
"caching_enabled": {
"description": "Should we cache assets",
"description": "This is always disabled. Your value will be ignored",
"type": "boolean",
"example": false
},
"allow_websocket_upgrade": {
"description": "This is always enabled. Your value will be ignored",
"type": "boolean",
"example": true
},
"npmplus_noindex": {
"description": "Send noindex header and block some user agents",
"type": "boolean",
"example": false
},
"npmplus_crowdsec_appsec": {
"description": "Disable Crowdsec Appsec",
"type": "boolean",
"example": false
},
"npmplus_proxy_request_buffering": {
"description": "Disable Request Buffering",
"type": "boolean",
"example": false
},
"npmplus_proxy_response_buffering": {
"description": "Disable Response Buffering",
"type": "boolean",
"example": false
},
"npmplus_upstream_compression": {
"description": "Enable compression by upstream",
"type": "boolean",
"example": false
},
"npmplus_fancyindex": {
"description": "Enable fancyindex",
"type": "boolean",
"example": false
},
"email": {
"description": "Email address",
"type": "string",
@@ -220,10 +260,6 @@
"certificate_key": {
"type": "string",
"example": "-----BEGIN CERTIFICATE-----\nMIID...-----END CERTIFICATE-----"
},
"intermediate_certificate": {
"type": "string",
"example": "-----BEGIN CERTIFICATE-----\nMIID...-----END CERTIFICATE-----"
}
}
},

@@ -12,6 +12,7 @@
"hsts_enabled",
"hsts_subdomains",
"http2_support",
"npmplus_http3_support",
"advanced_config",
"enabled",
"meta"
@@ -48,6 +49,9 @@
"http2_support": {
"$ref": "../common.json#/properties/http2_support"
},
"npmplus_http3_support": {
"$ref": "../common.json#/properties/npmplus_http3_support"
},
"advanced_config": {
"type": "string",
"example": ""

@@ -18,11 +18,21 @@
"meta",
"allow_websocket_upgrade",
"http2_support",
"npmplus_http3_support",
"forward_scheme",
"enabled",
"locations",
"hsts_enabled",
"hsts_subdomains"
"hsts_subdomains",
"trust_forwarded_proto",
"npmplus_noindex",
"npmplus_crowdsec_appsec",
"npmplus_proxy_request_buffering",
"npmplus_proxy_response_buffering",
"npmplus_upstream_compression",
"npmplus_fancyindex",
"npmplus_x_frame_options",
"npmplus_auth_request"
],
"properties": {
"id": {
@@ -68,6 +78,49 @@
"block_exploits": {
"$ref": "../common.json#/properties/block_exploits"
},
"allow_websocket_upgrade": {
"$ref": "../common.json#/properties/allow_websocket_upgrade"
},
"npmplus_noindex": {
"$ref": "../common.json#/properties/npmplus_noindex"
},
"npmplus_crowdsec_appsec": {
"$ref": "../common.json#/properties/npmplus_crowdsec_appsec"
},
"npmplus_proxy_request_buffering": {
"$ref": "../common.json#/properties/npmplus_proxy_request_buffering"
},
"npmplus_proxy_response_buffering": {
"$ref": "../common.json#/properties/npmplus_proxy_response_buffering"
},
"npmplus_upstream_compression": {
"$ref": "../common.json#/properties/npmplus_upstream_compression"
},
"npmplus_fancyindex": {
"$ref": "../common.json#/properties/npmplus_fancyindex"
},
"npmplus_x_frame_options": {
"type": "string",
"enum": [
"DENY",
"SAMEORIGIN",
"upstream",
"none"
],
"example": "DENY"
},
"npmplus_auth_request": {
"type": "string",
"enum": [
"none",
"anubis",
"tinyauth",
"authelia",
"authentik",
"authentik-send-basic-auth"
],
"example": "none"
},
"advanced_config": {
"type": "string",
"example": ""
@@ -79,14 +132,12 @@
"nginx_err": null
}
},
"allow_websocket_upgrade": {
"description": "Allow Websocket Upgrade for all paths",
"type": "boolean",
"example": true
},
"http2_support": {
"$ref": "../common.json#/properties/http2_support"
},
"npmplus_http3_support": {
"$ref": "../common.json#/properties/npmplus_http3_support"
},
"forward_scheme": {
"type": "string",
"enum": [
@@ -121,6 +172,9 @@
"null"
]
},
"npmplus_enabled": {
"$ref": "../common.json#/properties/enabled"
},
"path": {
"type": "string",
"minLength": 1
@@ -154,6 +208,35 @@
"allow_websocket_upgrade": {
"$ref": "#/properties/allow_websocket_upgrade"
},
"npmplus_fancyindex_upstream_compression": {
"description": "This just exists so that old custom locations don't break",
"type": "boolean",
"example": false
},
"npmplus_noindex": {
"$ref": "#/properties/npmplus_noindex"
},
"npmplus_crowdsec_appsec": {
"$ref": "#/properties/npmplus_crowdsec_appsec"
},
"npmplus_proxy_request_buffering": {
"$ref": "#/properties/npmplus_proxy_request_buffering"
},
"npmplus_proxy_response_buffering": {
"$ref": "#/properties/npmplus_proxy_response_buffering"
},
"npmplus_upstream_compression": {
"$ref": "#/properties/npmplus_upstream_compression"
},
"npmplus_fancyindex": {
"$ref": "#/properties/npmplus_fancyindex"
},
"npmplus_x_frame_options": {
"$ref": "#/properties/npmplus_x_frame_options"
},
"npmplus_auth_request": {
"$ref": "#/properties/npmplus_auth_request"
},
"advanced_config": {
"type": "string"
}
@@ -174,6 +257,11 @@
"hsts_subdomains": {
"$ref": "../common.json#/properties/hsts_subdomains"
},
"trust_forwarded_proto":{
"type": "boolean",
"description": "Trust the forwarded headers",
"example": false
},
"certificate": {
"oneOf": [
{

@@ -16,6 +16,7 @@
"hsts_enabled",
"hsts_subdomains",
"http2_support",
"npmplus_http3_support",
"block_exploits",
"advanced_config",
"enabled",
@@ -81,6 +82,9 @@
"http2_support": {
"$ref": "../common.json#/properties/http2_support"
},
"npmplus_http3_support": {
"$ref": "../common.json#/properties/npmplus_http3_support"
},
"block_exploits": {
"$ref": "../common.json#/properties/block_exploits"
},

@@ -1,8 +1,7 @@
{
"bearerAuth": {
"type": "http",
"scheme": "bearer",
"bearerFormat": "JWT",
"description": "JWT Bearer Token authentication"
"cookieAuth": {
"type": "apiKey",
"in": "cookie",
"name": "token"
}
}

@@ -2,8 +2,7 @@
"type": "object",
"description": "Token object",
"required": [
"expires",
"token"
"expires"
],
"additionalProperties": false,
"properties": {
@@ -11,11 +10,6 @@
"description": "Token Expiry ISO Time String",
"example": "2025-02-04T20:40:46.340Z",
"type": "string"
},
"token": {
"description": "JWT Token",
"example": "eyJhbGciOiJSUzUxMiIsInR5cCI6IkpXVCJ9.ey...xaHKYr3Kk6MvkUjcC4",
"type": "string"
}
}
}

@@ -6,7 +6,7 @@
],
"security": [
{
"bearerAuth": [
"cookieAuth": [
"admin"
]
}

@@ -6,7 +6,7 @@
],
"security": [
{
"bearerAuth": [
"cookieAuth": [
"admin"
]
}

@@ -6,7 +6,7 @@
],
"security": [
{
"bearerAuth": [
"cookieAuth": [
"access_lists.view"
]
}

@@ -6,7 +6,7 @@
],
"security": [
{
"bearerAuth": [
"cookieAuth": [
"access_lists.manage"
]
}

@@ -6,7 +6,7 @@
],
"security": [
{
"bearerAuth": [
"cookieAuth": [
"access_lists.view"
]
}

@@ -6,7 +6,7 @@
],
"security": [
{
"bearerAuth": [
"cookieAuth": [
"access_lists.manage"
]
}

@@ -6,7 +6,7 @@
],
"security": [
{
"bearerAuth": [
"cookieAuth": [
"access_lists.manage"
]
}

@@ -6,7 +6,7 @@
],
"security": [
{
"bearerAuth": [
"cookieAuth": [
"certificates.manage"
]
}

@@ -6,7 +6,7 @@
],
"security": [
{
"bearerAuth": [
"cookieAuth": [
"certificates.manage"
]
}

@@ -6,7 +6,7 @@
],
"security": [
{
"bearerAuth": [
"cookieAuth": [
"certificates.view"
]
}

@@ -6,7 +6,7 @@
],
"security": [
{
"bearerAuth": [
"cookieAuth": [
"certificates.manage"
]
}

@@ -6,7 +6,7 @@
],
"security": [
{
"bearerAuth": [
"cookieAuth": [
"certificates.manage"
]
}
@@ -57,11 +57,6 @@
"type": "string",
"minLength": 1,
"example": "-----BEGIN CERTIFICATE-----\nMIID...-----END CERTIFICATE-----"
},
"intermediate_certificate": {
"type": "string",
"minLength": 1,
"example": "-----BEGIN CERTIFICATE-----\nMIID...-----END CERTIFICATE-----"
}
}
}

@@ -6,7 +6,7 @@
],
"security": [
{
"bearerAuth": [
"cookieAuth": [
"certificates.view"
]
}

@@ -6,7 +6,7 @@
],
"security": [
{
"bearerAuth": [
"cookieAuth": [
"certificates.view"
]
}

@@ -6,7 +6,7 @@
],
"security": [
{
"bearerAuth": [
"cookieAuth": [
"certificates.manage"
]
}

@@ -6,7 +6,7 @@
],
"security": [
{
"bearerAuth": [
"cookieAuth": [
"certificates.view"
]
}

@@ -6,7 +6,7 @@
],
"security": [
{
"bearerAuth": [
"cookieAuth": [
"certificates.manage"
]
}

Some files were not shown because too many files have changed in this diff.