Mirror of https://github.com/optim-enterprises-bv/kubernetes.git (synced 2025-11-01 10:48:15 +00:00)
LICENSES/vendor/github.com/go-task/slim-sprig/LICENSE (new file, generated, vendored, 23 lines)
@@ -0,0 +1,23 @@
+= vendor/github.com/go-task/slim-sprig licensed under: =
+
+Copyright (C) 2013-2020 Masterminds
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in
+all copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+THE SOFTWARE.
+
+= vendor/github.com/go-task/slim-sprig/LICENSE.txt 4ed8d725bea5f035fcea1ab05a767f78
LICENSES/vendor/github.com/google/pprof/LICENSE (new file, generated, vendored, 206 lines)
@@ -0,0 +1,206 @@
+= vendor/github.com/google/pprof licensed under: =
+
+
+                                 Apache License
+                           Version 2.0, January 2004
+                        http://www.apache.org/licenses/
+
+   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+   1. Definitions.
+
+      "License" shall mean the terms and conditions for use, reproduction,
+      and distribution as defined by Sections 1 through 9 of this document.
+
+      "Licensor" shall mean the copyright owner or entity authorized by
+      the copyright owner that is granting the License.
+
+      "Legal Entity" shall mean the union of the acting entity and all
+      other entities that control, are controlled by, or are under common
+      control with that entity. For the purposes of this definition,
+      "control" means (i) the power, direct or indirect, to cause the
+      direction or management of such entity, whether by contract or
+      otherwise, or (ii) ownership of fifty percent (50%) or more of the
+      outstanding shares, or (iii) beneficial ownership of such entity.
+
+      "You" (or "Your") shall mean an individual or Legal Entity
+      exercising permissions granted by this License.
+
+      "Source" form shall mean the preferred form for making modifications,
+      including but not limited to software source code, documentation
+      source, and configuration files.
+
+      "Object" form shall mean any form resulting from mechanical
+      transformation or translation of a Source form, including but
+      not limited to compiled object code, generated documentation,
+      and conversions to other media types.
+
+      "Work" shall mean the work of authorship, whether in Source or
+      Object form, made available under the License, as indicated by a
+      copyright notice that is included in or attached to the work
+      (an example is provided in the Appendix below).
+
+      "Derivative Works" shall mean any work, whether in Source or Object
+      form, that is based on (or derived from) the Work and for which the
+      editorial revisions, annotations, elaborations, or other modifications
+      represent, as a whole, an original work of authorship. For the purposes
+      of this License, Derivative Works shall not include works that remain
+      separable from, or merely link (or bind by name) to the interfaces of,
+      the Work and Derivative Works thereof.
+
+      "Contribution" shall mean any work of authorship, including
+      the original version of the Work and any modifications or additions
+      to that Work or Derivative Works thereof, that is intentionally
+      submitted to Licensor for inclusion in the Work by the copyright owner
+      or by an individual or Legal Entity authorized to submit on behalf of
+      the copyright owner. For the purposes of this definition, "submitted"
+      means any form of electronic, verbal, or written communication sent
+      to the Licensor or its representatives, including but not limited to
+      communication on electronic mailing lists, source code control systems,
+      and issue tracking systems that are managed by, or on behalf of, the
+      Licensor for the purpose of discussing and improving the Work, but
+      excluding communication that is conspicuously marked or otherwise
+      designated in writing by the copyright owner as "Not a Contribution."
+
+      "Contributor" shall mean Licensor and any individual or Legal Entity
+      on behalf of whom a Contribution has been received by Licensor and
+      subsequently incorporated within the Work.
+
+   2. Grant of Copyright License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      copyright license to reproduce, prepare Derivative Works of,
+      publicly display, publicly perform, sublicense, and distribute the
+      Work and such Derivative Works in Source or Object form.
+
+   3. Grant of Patent License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      (except as stated in this section) patent license to make, have made,
+      use, offer to sell, sell, import, and otherwise transfer the Work,
+      where such license applies only to those patent claims licensable
+      by such Contributor that are necessarily infringed by their
+      Contribution(s) alone or by combination of their Contribution(s)
+      with the Work to which such Contribution(s) was submitted. If You
+      institute patent litigation against any entity (including a
+      cross-claim or counterclaim in a lawsuit) alleging that the Work
+      or a Contribution incorporated within the Work constitutes direct
+      or contributory patent infringement, then any patent licenses
+      granted to You under this License for that Work shall terminate
+      as of the date such litigation is filed.
+
+   4. Redistribution. You may reproduce and distribute copies of the
+      Work or Derivative Works thereof in any medium, with or without
+      modifications, and in Source or Object form, provided that You
+      meet the following conditions:
+
+      (a) You must give any other recipients of the Work or
+          Derivative Works a copy of this License; and
+
+      (b) You must cause any modified files to carry prominent notices
+          stating that You changed the files; and
+
+      (c) You must retain, in the Source form of any Derivative Works
+          that You distribute, all copyright, patent, trademark, and
+          attribution notices from the Source form of the Work,
+          excluding those notices that do not pertain to any part of
+          the Derivative Works; and
+
+      (d) If the Work includes a "NOTICE" text file as part of its
+          distribution, then any Derivative Works that You distribute must
+          include a readable copy of the attribution notices contained
+          within such NOTICE file, excluding those notices that do not
+          pertain to any part of the Derivative Works, in at least one
+          of the following places: within a NOTICE text file distributed
+          as part of the Derivative Works; within the Source form or
+          documentation, if provided along with the Derivative Works; or,
+          within a display generated by the Derivative Works, if and
+          wherever such third-party notices normally appear. The contents
+          of the NOTICE file are for informational purposes only and
+          do not modify the License. You may add Your own attribution
+          notices within Derivative Works that You distribute, alongside
+          or as an addendum to the NOTICE text from the Work, provided
+          that such additional attribution notices cannot be construed
+          as modifying the License.
+
+      You may add Your own copyright statement to Your modifications and
+      may provide additional or different license terms and conditions
+      for use, reproduction, or distribution of Your modifications, or
+      for any such Derivative Works as a whole, provided Your use,
+      reproduction, and distribution of the Work otherwise complies with
+      the conditions stated in this License.
+
+   5. Submission of Contributions. Unless You explicitly state otherwise,
+      any Contribution intentionally submitted for inclusion in the Work
+      by You to the Licensor shall be under the terms and conditions of
+      this License, without any additional terms or conditions.
+      Notwithstanding the above, nothing herein shall supersede or modify
+      the terms of any separate license agreement you may have executed
+      with Licensor regarding such Contributions.
+
+   6. Trademarks. This License does not grant permission to use the trade
+      names, trademarks, service marks, or product names of the Licensor,
+      except as required for reasonable and customary use in describing the
+      origin of the Work and reproducing the content of the NOTICE file.
+
+   7. Disclaimer of Warranty. Unless required by applicable law or
+      agreed to in writing, Licensor provides the Work (and each
+      Contributor provides its Contributions) on an "AS IS" BASIS,
+      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+      implied, including, without limitation, any warranties or conditions
+      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+      PARTICULAR PURPOSE. You are solely responsible for determining the
+      appropriateness of using or redistributing the Work and assume any
+      risks associated with Your exercise of permissions under this License.
+
+   8. Limitation of Liability. In no event and under no legal theory,
+      whether in tort (including negligence), contract, or otherwise,
+      unless required by applicable law (such as deliberate and grossly
+      negligent acts) or agreed to in writing, shall any Contributor be
+      liable to You for damages, including any direct, indirect, special,
+      incidental, or consequential damages of any character arising as a
+      result of this License or out of the use or inability to use the
+      Work (including but not limited to damages for loss of goodwill,
+      work stoppage, computer failure or malfunction, or any and all
+      other commercial damages or losses), even if such Contributor
+      has been advised of the possibility of such damages.
+
+   9. Accepting Warranty or Additional Liability. While redistributing
+      the Work or Derivative Works thereof, You may choose to offer,
+      and charge a fee for, acceptance of support, warranty, indemnity,
+      or other liability obligations and/or rights consistent with this
+      License. However, in accepting such obligations, You may act only
+      on Your own behalf and on Your sole responsibility, not on behalf
+      of any other Contributor, and only if You agree to indemnify,
+      defend, and hold each Contributor harmless for any liability
+      incurred by, or claims asserted against, such Contributor by reason
+      of your accepting any such warranty or additional liability.
+
+   END OF TERMS AND CONDITIONS
+
+   APPENDIX: How to apply the Apache License to your work.
+
+      To apply the Apache License to your work, attach the following
+      boilerplate notice, with the fields enclosed by brackets "[]"
+      replaced with your own identifying information. (Don't include
+      the brackets!) The text should be enclosed in the appropriate
+      comment syntax for the file format. We also recommend that a
+      file or class name and description of purpose be included on the
+      same "printed page" as the copyright notice for easier
+      identification within third-party archives.
+
+   Copyright [yyyy] [name of copyright owner]
+
+   Licensed under the Apache License, Version 2.0 (the "License");
+   you may not use this file except in compliance with the License.
+   You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+= vendor/github.com/google/pprof/LICENSE 3b83ef96387f14655fc854ddc3c6bd57
@@ -106,7 +106,7 @@ ginkgo:
 	echo "$$GINKGO_HELP_INFO"
 else
 ginkgo:
-	hack/make-rules/build.sh github.com/onsi/ginkgo/ginkgo
+	hack/make-rules/build.sh github.com/onsi/ginkgo/v2/ginkgo
 endif
 
 define VERIFY_HELP_INFO
@@ -22,7 +22,7 @@ package tools
 
 import (
 	// build script dependencies
-	_ "github.com/onsi/ginkgo/v2"
+	_ "github.com/onsi/ginkgo/v2/ginkgo"
 	_ "k8s.io/code-generator/cmd/go-to-protobuf"
 	_ "k8s.io/code-generator/cmd/go-to-protobuf/protoc-gen-gogo"
 	_ "k8s.io/gengo/examples/deepcopy-gen/generators"
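The recurring change across these hunks is Go's semantic import versioning: from major version 2 onward, a module's major version becomes part of its import path, which is why every reference to the ginkgo CLI moves from `github.com/onsi/ginkgo/ginkgo` to `github.com/onsi/ginkgo/v2/ginkgo`. A minimal sketch of that path rule (the helper name is ours, not part of this diff or the Kubernetes build):

```go
package main

import "fmt"

// majorImportPath applies Go's semantic import versioning rule:
// modules at major version 2 or higher carry a /vN segment in their
// import path, while v0/v1 modules keep the bare module path.
// (Illustrative helper only.)
func majorImportPath(module string, major int) string {
	if major < 2 {
		return module
	}
	return fmt.Sprintf("%s/v%d", module, major)
}

func main() {
	// The ginkgo v1 -> v2 upgrade renames the tool's package path:
	fmt.Println(majorImportPath("github.com/onsi/ginkgo", 1) + "/ginkgo")
	fmt.Println(majorImportPath("github.com/onsi/ginkgo", 2) + "/ginkgo")
}
```

This is also why the `tools.go` blank import above changes: the CLI package itself, not just the library root, must be imported so `go mod` keeps it as a build dependency.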
go.mod (2 lines changed)
@@ -173,10 +173,12 @@ require (
 	github.com/go-openapi/jsonreference v0.19.5 // indirect
 	github.com/go-openapi/swag v0.19.14 // indirect
 	github.com/go-ozzo/ozzo-validation v3.5.0+incompatible // indirect
+	github.com/go-task/slim-sprig v0.0.0-20210107165309-348f09dbbbc0 // indirect
 	github.com/gofrs/uuid v4.0.0+incompatible // indirect
 	github.com/golang-jwt/jwt/v4 v4.2.0 // indirect
 	github.com/google/btree v1.0.1 // indirect
 	github.com/google/cel-go v0.11.2 // indirect
+	github.com/google/pprof v0.0.0-20210407192527-94a9f03dee38 // indirect
 	github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510 // indirect
 	github.com/googleapis/gax-go/v2 v2.1.1 // indirect
 	github.com/gophercloud/gophercloud v0.1.0 // indirect
go.sum (2 lines changed)
@@ -191,6 +191,7 @@ github.com/go-openapi/swag v0.19.14/go.mod h1:QYRuS/SOXUCsnplDa677K7+DxSOj6IPNl/
 github.com/go-ozzo/ozzo-validation v3.5.0+incompatible h1:sUy/in/P6askYr16XJgTKq/0SZhiWsdg4WZGaLsGQkM=
 github.com/go-ozzo/ozzo-validation v3.5.0+incompatible/go.mod h1:gsEKFIVnabGBt6mXmxK0MoFy+cZoTJY6mu5Ll3LVLBU=
 github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
+github.com/go-task/slim-sprig v0.0.0-20210107165309-348f09dbbbc0 h1:p104kn46Q8WdvHunIJ9dAyjPVtrBPhSr3KT2yUst43I=
 github.com/go-task/slim-sprig v0.0.0-20210107165309-348f09dbbbc0/go.mod h1:fyg7847qk6SyHyPtNmDHnmrv/HOrqktSC+C9fM+CJOE=
 github.com/godbus/dbus/v5 v5.0.6 h1:mkgN1ofwASrYnJ5W6U/BxG15eXXXjirgZc7CLqkcaro=
 github.com/godbus/dbus/v5 v5.0.6/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
@@ -226,6 +227,7 @@ github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/
 github.com/google/gofuzz v1.1.0 h1:Hsa8mG0dQ46ij8Sl2AYJDUv1oA9/d6Vk+3LG99Oe02g=
 github.com/google/gofuzz v1.1.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
 github.com/google/martian/v3 v3.2.1/go.mod h1:oBOf6HBosgwRXnUGWUB05QECsc6uvmMiJ3+6W4l/CUk=
+github.com/google/pprof v0.0.0-20210407192527-94a9f03dee38 h1:yAJXTCF9TqKcTiHJAE8dj7HMvPfh66eeA2JYW7eFpSE=
 github.com/google/pprof v0.0.0-20210407192527-94a9f03dee38/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
 github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI=
 github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510 h1:El6M4kTTCOh6aBiKaUGG7oYTSPP8MxqL4YI3kZKwcP4=
@@ -42,7 +42,7 @@ IMAGE="${REGISTRY}/conformance-amd64:${VERSION}"
 
 kube::build::verify_prereqs
 kube::build::build_image
-kube::build::run_build_command make WHAT="github.com/onsi/ginkgo/ginkgo test/e2e/e2e.test cmd/kubectl test/conformance/image/go-runner"
+kube::build::run_build_command make WHAT="github.com/onsi/ginkgo/v2/ginkgo test/e2e/e2e.test cmd/kubectl test/conformance/image/go-runner"
 kube::build::copy_output
 
 make -C "${KUBE_ROOT}/test/conformance/image" build
@@ -111,7 +111,7 @@ readonly KUBE_SERVER_IMAGE_BINARIES=("${KUBE_SERVER_IMAGE_TARGETS[@]##*/}")
 kube::golang::conformance_image_targets() {
   # NOTE: this contains cmd targets for kube::release::build_conformance_image
   local targets=(
-    github.com/onsi/ginkgo/ginkgo
+    github.com/onsi/ginkgo/v2/ginkgo
     test/e2e/e2e.test
     test/conformance/image/go-runner
     cmd/kubectl
@@ -274,7 +274,7 @@ kube::golang::test_targets() {
     cmd/genyaml
     cmd/genswaggertypedocs
     cmd/linkcheck
-    github.com/onsi/ginkgo/ginkgo
+    github.com/onsi/ginkgo/v2/ginkgo
     test/e2e/e2e.test
     test/conformance/image/go-runner
   )
@@ -301,7 +301,7 @@ readonly KUBE_TEST_PORTABLE=(
 kube::golang::server_test_targets() {
   local targets=(
     cmd/kubemark
-    github.com/onsi/ginkgo/ginkgo
+    github.com/onsi/ginkgo/v2/ginkgo
   )
 
   if [[ "${OSTYPE:-}" == "linux"* ]]; then
@@ -26,7 +26,7 @@ cd "${KUBE_ROOT}"
 # NOTE: we do *not* use `make WHAT=...` because we do *not* want to be running
 # make generated_files when diffing things (see: hack/verify-conformance-yaml.sh)
 # other update/verify already handle the generated files
-hack/make-rules/build.sh github.com/onsi/ginkgo/ginkgo test/e2e/e2e.test
+hack/make-rules/build.sh github.com/onsi/ginkgo/v2/ginkgo test/e2e/e2e.test
 
 # dump spec
 ./_output/bin/ginkgo --dryRun=true --focus='[Conformance]' ./_output/bin/e2e.test -- --spec-dump "${KUBE_ROOT}/_output/specsummaries.json" > /dev/null
@@ -7,7 +7,7 @@
 
 ```console
 # First, build the binaries by running make from the root directory
-$ make WHAT="test/e2e/e2e.test vendor/github.com/onsi/ginkgo/ginkgo cmd/kubectl test/conformance/image/go-runner"
+$ make WHAT="test/e2e/e2e.test github.com/onsi/ginkgo/v2/ginkgo cmd/kubectl test/conformance/image/go-runner"
 
 # Build for linux/amd64 (default)
 # export REGISTRY=$HOST/$ORG to switch from registry.k8s.io
vendor/github.com/go-task/slim-sprig/.editorconfig (new file, generated, vendored, 14 lines)
@@ -0,0 +1,14 @@
+# editorconfig.org
+
+root = true
+
+[*]
+insert_final_newline = true
+charset = utf-8
+trim_trailing_whitespace = true
+indent_style = tab
+indent_size = 8
+
+[*.{md,yml,yaml,json}]
+indent_style = space
+indent_size = 2
vendor/github.com/go-task/slim-sprig/.gitattributes (new file, generated, vendored, 1 line)
@@ -0,0 +1 @@
+* text=auto
vendor/github.com/go-task/slim-sprig/.gitignore (new file, generated, vendored, 2 lines)
@@ -0,0 +1,2 @@
+vendor/
+/.glide
vendor/github.com/go-task/slim-sprig/CHANGELOG.md (new file, generated, vendored, 364 lines)
@@ -0,0 +1,364 @@
+# Changelog
+
+## Release 3.2.0 (2020-12-14)
+
+### Added
+
+- #211: Added randInt function (thanks @kochurovro)
+- #223: Added fromJson and mustFromJson functions (thanks @mholt)
+- #242: Added a bcrypt function (thanks @robbiet480)
+- #253: Added randBytes function (thanks @MikaelSmith)
+- #254: Added dig function for dicts (thanks @nyarly)
+- #257: Added regexQuoteMeta for quoting regex metadata (thanks @rheaton)
+- #261: Added filepath functions osBase, osDir, osExt, osClean, osIsAbs (thanks @zugl)
+- #268: Added and and all functions for testing conditions (thanks @phuslu)
+- #181: Added float64 arithmetic addf, add1f, subf, divf, mulf, maxf, and minf
+  (thanks @andrewmostello)
+- #265: Added chunk function to split array into smaller arrays (thanks @karelbilek)
+- #270: Extend certificate functions to handle non-RSA keys + add support for
+  ed25519 keys (thanks @misberner)
+
+### Changed
+
+- Removed testing and support for Go 1.12. ed25519 support requires Go 1.13 or newer
+- Using semver 3.1.1 and mergo 0.3.11
+
+### Fixed
+
+- #249: Fix htmlDateInZone example (thanks @spawnia)
+
+NOTE: The dependency github.com/imdario/mergo reverted the breaking change in
+0.3.9 via 0.3.10 release.
+
+## Release 3.1.0 (2020-04-16)
+
+NOTE: The dependency github.com/imdario/mergo made a behavior change in 0.3.9
+that impacts sprig functionality. Do not use sprig with a version newer than 0.3.8.
+
+### Added
+
+- #225: Added support for generating htpasswd hash (thanks @rustycl0ck)
+- #224: Added duration filter (thanks @frebib)
+- #205: Added `seq` function (thanks @thadc23)
+
+### Changed
+
+- #203: Unlambda functions with correct signature (thanks @muesli)
+- #236: Updated the license formatting for GitHub display purposes
+- #238: Updated package dependency versions. Note, mergo not updated to 0.3.9
+  as it causes a breaking change for sprig. That issue is tracked at
+  https://github.com/imdario/mergo/issues/139
+
+### Fixed
+
+- #229: Fix `seq` example in docs (thanks @kalmant)
+
+## Release 3.0.2 (2019-12-13)
+
+### Fixed
+
+- #220: Updating to semver v3.0.3 to fix issue with <= ranges
+- #218: fix typo elyptical->elliptic in ecdsa key description (thanks @laverya)
+
+## Release 3.0.1 (2019-12-08)
+
+### Fixed
+
+- #212: Updated semver fixing broken constraint checking with ^0.0
+
+## Release 3.0.0 (2019-10-02)
+
+### Added
+
+- #187: Added durationRound function (thanks @yjp20)
+- #189: Added numerous template functions that return errors rather than panic (thanks @nrvnrvn)
+- #193: Added toRawJson support (thanks @Dean-Coakley)
+- #197: Added get support to dicts (thanks @Dean-Coakley)
+
+### Changed
+
+- #186: Moving dependency management to Go modules
+- #186: Updated semver to v3. This has changes in the way ^ is handled
+- #194: Updated documentation on merging and how it copies. Added example using deepCopy
+- #196: trunc now supports negative values (thanks @Dean-Coakley)
+
+## Release 2.22.0 (2019-10-02)
+
+### Added
+
+- #173: Added getHostByName function to resolve dns names to ips (thanks @fcgravalos)
+- #195: Added deepCopy function for use with dicts
+
+### Changed
+
+- Updated merge and mergeOverwrite documentation to explain copying and how to
+  use deepCopy with it
+
+## Release 2.21.0 (2019-09-18)
+
+### Added
+
+- #122: Added encryptAES/decryptAES functions (thanks @n0madic)
+- #128: Added toDecimal support (thanks @Dean-Coakley)
+- #169: Added list contcat (thanks @astorath)
+- #174: Added deepEqual function (thanks @bonifaido)
+- #170: Added url parse and join functions (thanks @astorath)
+
+### Changed
+
+- #171: Updated glide config for Google UUID to v1 and to add ranges to semver and testify
+
+### Fixed
+
+- #172: Fix semver wildcard example (thanks @piepmatz)
+- #175: Fix dateInZone doc example (thanks @s3than)
+
+## Release 2.20.0 (2019-06-18)
+
+### Added
+
+- #164: Adding function to get unix epoch for a time (@mattfarina)
+- #166: Adding tests for date_in_zone (@mattfarina)
+
+### Changed
+
+- #144: Fix function comments based on best practices from Effective Go (@CodeLingoTeam)
+- #150: Handles pointer type for time.Time in "htmlDate" (@mapreal19)
+- #161, #157, #160, #153, #158, #156, #155, #159, #152 documentation updates (@badeadan)
+
+### Fixed
+
+## Release 2.19.0 (2019-03-02)
+
+IMPORTANT: This release reverts a change from 2.18.0
+
+In the previous release (2.18), we prematurely merged a partial change to the crypto functions that led to creating two sets of crypto functions (I blame @technosophos -- since that's me). This release rolls back that change, and does what was originally intended: It alters the existing crypto functions to use secure random.
+
+We debated whether this classifies as a change worthy of major revision, but given the proximity to the last release, we have decided that treating 2.18 as a faulty release is the correct course of action. We apologize for any inconvenience.
+
+### Changed
+
+- Fix substr panic 35fb796 (Alexey igrychev)
+- Remove extra period 1eb7729 (Matthew Lorimor)
+- Make random string functions use crypto by default 6ceff26 (Matthew Lorimor)
+- README edits/fixes/suggestions 08fe136 (Lauri Apple)
+
+
+## Release 2.18.0 (2019-02-12)
+
+### Added
+
+- Added mergeOverwrite function
+- cryptographic functions that use secure random (see fe1de12)
+
+### Changed
+
+- Improve documentation of regexMatch function, resolves #139 90b89ce (Jan Tagscherer)
+- Handle has for nil list 9c10885 (Daniel Cohen)
+- Document behaviour of mergeOverwrite fe0dbe9 (Lukas Rieder)
+- doc: adds missing documentation. 4b871e6 (Fernandez Ludovic)
+- Replace outdated goutils imports 01893d2 (Matthew Lorimor)
+- Surface crypto secure random strings from goutils fe1de12 (Matthew Lorimor)
+- Handle untyped nil values as paramters to string functions 2b2ec8f (Morten Torkildsen)
+
+### Fixed
+
+- Fix dict merge issue and provide mergeOverwrite .dst .src1 to overwrite from src -> dst 4c59c12 (Lukas Rieder)
+- Fix substr var names and comments d581f80 (Dean Coakley)
+- Fix substr documentation 2737203 (Dean Coakley)
+
+## Release 2.17.1 (2019-01-03)
+
+### Fixed
+
+The 2.17.0 release did not have a version pinned for xstrings, which caused compilation failures when xstrings < 1.2 was used. This adds the correct version string to glide.yaml.
+
+## Release 2.17.0 (2019-01-03)
+
+### Added
+
+- adds alder32sum function and test 6908fc2 (marshallford)
+- Added kebabcase function ca331a1 (Ilyes512)
+
+### Changed
+
+- Update goutils to 1.1.0 4e1125d (Matt Butcher)
+
+### Fixed
+
+- Fix 'has' documentation e3f2a85 (dean-coakley)
+- docs(dict): fix typo in pick example dc424f9 (Dustin Specker)
+- fixes spelling errors... not sure how that happened 4cf188a (marshallford)
+
+## Release 2.16.0 (2018-08-13)
+
+### Added
+
+- add splitn function fccb0b0 (Helgi Þorbjörnsson)
+- Add slice func df28ca7 (gongdo)
+- Generate serial number a3bdffd (Cody Coons)
+- Extract values of dict with values function df39312 (Lawrence Jones)
+
+### Changed
+
+- Modify panic message for list.slice ae38335 (gongdo)
+- Minor improvement in code quality - Removed an unreachable piece of code at defaults.go#L26:6 - Resolve formatting issues. 5834241 (Abhishek Kashyap)
|
||||||
|
- Remove duplicated documentation 1d97af1 (Matthew Fisher)
|
||||||
|
- Test on go 1.11 49df809 (Helgi Þormar Þorbjörnsson)
|
||||||
|
|
||||||
|
### Fixed
|
||||||
|
|
||||||
|
- Fix file permissions c5f40b5 (gongdo)
|
||||||
|
- Fix example for buildCustomCert 7779e0d (Tin Lam)
|
||||||
|
|
||||||
|
## Release 2.15.0 (2018-04-02)
|
||||||
|
|
||||||
|
### Added
|
||||||
|
|
||||||
|
- #68 and #69: Add json helpers to docs (thanks @arunvelsriram)
|
||||||
|
- #66: Add ternary function (thanks @binoculars)
|
||||||
|
- #67: Allow keys function to take multiple dicts (thanks @binoculars)
|
||||||
|
- #89: Added sha1sum to crypto function (thanks @benkeil)
|
||||||
|
- #81: Allow customizing Root CA that used by genSignedCert (thanks @chenzhiwei)
|
||||||
|
- #92: Add travis testing for go 1.10
|
||||||
|
- #93: Adding appveyor config for windows testing
|
||||||
|
|
||||||
|
### Changed
|
||||||
|
|
||||||
|
- #90: Updating to more recent dependencies
|
||||||
|
- #73: replace satori/go.uuid with google/uuid (thanks @petterw)
|
||||||
|
|
||||||
|
### Fixed
|
||||||
|
|
||||||
|
- #76: Fixed documentation typos (thanks @Thiht)
|
||||||
|
- Fixed rounding issue on the `ago` function. Note, the removes support for Go 1.8 and older
|
||||||
|
|
||||||
|
## Release 2.14.1 (2017-12-01)
|
||||||
|
|
||||||
|
### Fixed
|
||||||
|
|
||||||
|
- #60: Fix typo in function name documentation (thanks @neil-ca-moore)
|
||||||
|
- #61: Removing line with {{ due to blocking github pages genertion
|
||||||
|
- #64: Update the list functions to handle int, string, and other slices for compatibility
|
||||||
|
|
||||||
|
## Release 2.14.0 (2017-10-06)
|
||||||
|
|
||||||
|
This new version of Sprig adds a set of functions for generating and working with SSL certificates.
|
||||||
|
|
||||||
|
- `genCA` generates an SSL Certificate Authority
|
||||||
|
- `genSelfSignedCert` generates an SSL self-signed certificate
|
||||||
|
- `genSignedCert` generates an SSL certificate and key based on a given CA
|
||||||
|
|
||||||
|
## Release 2.13.0 (2017-09-18)
|
||||||
|
|
||||||
|
This release adds new functions, including:
|
||||||
|
|
||||||
|
- `regexMatch`, `regexFindAll`, `regexFind`, `regexReplaceAll`, `regexReplaceAllLiteral`, and `regexSplit` to work with regular expressions
|
||||||
|
- `floor`, `ceil`, and `round` math functions
|
||||||
|
- `toDate` converts a string to a date
|
||||||
|
- `nindent` is just like `indent` but also prepends a new line
|
||||||
|
- `ago` returns the time from `time.Now`

### Added

- #40: Added basic regex functionality (thanks @alanquillin)
- #41: Added ceil, floor and round functions (thanks @alanquillin)
- #48: Added toDate function (thanks @andreynering)
- #50: Added nindent function (thanks @binoculars)
- #46: Added ago function (thanks @slayer)

### Changed

- #51: Updated godocs to include new string functions (thanks @curtisallen)
- #49: Added ability to merge multiple dicts (thanks @binoculars)

## Release 2.12.0 (2017-05-17)

- `snakecase`, `camelcase`, and `shuffle` are three new string functions
- `fail` allows you to bail out of a template render when conditions are not met

## Release 2.11.0 (2017-05-02)

- Added `toJson` and `toPrettyJson`
- Added `merge`
- Refactored documentation

## Release 2.10.0 (2017-03-15)

- Added `semver` and `semverCompare` for Semantic Versions
- `list` replaces `tuple`
- Fixed issue with `join`
- Added `first`, `last`, `initial`, `rest`, `prepend`, `append`, `toString`, `toStrings`, `sortAlpha`, `reverse`, `coalesce`, `pluck`, `pick`, `compact`, `keys`, `omit`, `uniq`, `has`, `without`

## Release 2.9.0 (2017-02-23)

- Added `splitList` to split a list
- Added crypto functions of `genPrivateKey` and `derivePassword`

## Release 2.8.0 (2016-12-21)

- Added access to several path functions (`base`, `dir`, `clean`, `ext`, and `abs`)
- Added functions for _mutating_ dictionaries (`set`, `unset`, `hasKey`)

## Release 2.7.0 (2016-12-01)

- Added `sha256sum` to generate a hash of an input
- Added functions to convert a numeric or string to `int`, `int64`, `float64`

## Release 2.6.0 (2016-10-03)

- Added a `uuidv4` template function for generating UUIDs inside of a template.

## Release 2.5.0 (2016-08-19)

- New `trimSuffix`, `trimPrefix`, `hasSuffix`, and `hasPrefix` functions
- New aliases have been added for a few functions that didn't follow the naming conventions (`trimAll` and `abbrevBoth`)
- `trimall` and `abbrevboth` (notice the case) are deprecated and will be removed in 3.0.0

## Release 2.4.0 (2016-08-16)

- Adds two functions: `until` and `untilStep`

## Release 2.3.0 (2016-06-21)

- cat: Concatenate strings with whitespace separators.
- replace: Replace parts of a string: `replace " " "-" "Me First"` renders "Me-First"
- plural: Format plurals: `len "foo" | plural "one foo" "many foos"` renders "many foos"
- indent: Indent blocks of text in a way that is sensitive to "\n" characters.

## Release 2.2.0 (2016-04-21)

- Added a `genPrivateKey` function (Thanks @bacongobbler)

## Release 2.1.0 (2016-03-30)

- `default` now prints the default value when it does not receive a value down the pipeline. It is much safer now to do `{{.Foo | default "bar"}}`.
- Added accessors for "hermetic" functions. These return only functions that, when given the same input, produce the same output.

## Release 2.0.0 (2016-03-29)

Because we switched from `int` to `int64` as the return value for all integer math functions, the library's major version number has been incremented.

- `min` complements `max` (formerly `biggest`)
- `empty` indicates that a value is the empty value for its type
- `tuple` creates a tuple inside of a template: `{{$t := tuple "a" "b" "c"}}`
- `dict` creates a dictionary inside of a template `{{$d := dict "key1" "val1" "key2" "val2"}}`
- Date formatters have been added for HTML dates (as used in `date` input fields)
- Integer math functions can convert from a number of types, including `string` (via `strconv.ParseInt`).
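The `dict` helper described in the list above can be sketched with a plain `text/template` FuncMap. The `dict` function below is a stand-in written for this example, not Sprig's own implementation:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// dict builds a map from alternating key/value arguments — the pattern
// the changelog entry above describes.
func dict(v ...interface{}) map[string]interface{} {
	d := map[string]interface{}{}
	for i := 0; i+1 < len(v); i += 2 {
		d[fmt.Sprintf("%v", v[i])] = v[i+1]
	}
	return d
}

// render parses and executes a template with the dict helper attached.
func render(src string) string {
	tpl := template.Must(template.New("t").Funcs(template.FuncMap{"dict": dict}).Parse(src))
	var buf bytes.Buffer
	if err := tpl.Execute(&buf, nil); err != nil {
		panic(err)
	}
	return buf.String()
}

func main() {
	fmt.Println(render(`{{$d := dict "key1" "val1" "key2" "val2"}}{{$d.key2}}`))
}
```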

## Release 1.2.0 (2016-02-01)

- Added quote and squote
- Added b32enc and b32dec
- add now takes varargs
- biggest now takes varargs

## Release 1.1.0 (2015-12-29)

- Added #4: Added contains function. strings.Contains, but with the arguments
  switched to simplify common pipelines. (thanks krancour)
- Added Travis-CI testing support

## Release 1.0.0 (2015-12-23)

- Initial release
19
vendor/github.com/go-task/slim-sprig/LICENSE.txt
generated
vendored
Normal file
@@ -0,0 +1,19 @@

Copyright (C) 2013-2020 Masterminds

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
73
vendor/github.com/go-task/slim-sprig/README.md
generated
vendored
Normal file
@@ -0,0 +1,73 @@

# Slim-Sprig: Template functions for Go templates [](https://godoc.org/github.com/go-task/slim-sprig) [](https://goreportcard.com/report/github.com/go-task/slim-sprig)

Slim-Sprig is a fork of [Sprig](https://github.com/Masterminds/sprig), but with
all functions that depend on external (non standard library) or crypto packages
removed.
The reason for this is to make this library more lightweight. Most of these
functions (especially the crypto ones) are not needed in most apps, but cost a lot
in terms of binary size and compilation time.

## Usage

**Template developers**: Please use Slim-Sprig's [function documentation](https://go-task.github.io/slim-sprig/) for
detailed instructions and code snippets for the >100 template functions available.

**Go developers**: If you'd like to include Slim-Sprig as a library in your program,
our API documentation is available [at GoDoc.org](http://godoc.org/github.com/go-task/slim-sprig).

For standard usage, read on.

### Load the Slim-Sprig library

To load the Slim-Sprig `FuncMap`:

```go
import (
	"html/template"

	"github.com/go-task/slim-sprig"
)

// This example illustrates that the FuncMap *must* be set before the
// templates themselves are loaded.
tpl := template.Must(
	template.New("base").Funcs(sprig.FuncMap()).ParseGlob("*.html")
)
```

### Calling the functions inside of templates

By convention, all functions are lowercase. This seems to follow the Go
idiom for template functions (as opposed to template methods, which are
TitleCase). For example, this:

```
{{ "hello!" | upper | repeat 5 }}
```

produces this:

```
HELLO!HELLO!HELLO!HELLO!HELLO!
```

## Principles Driving Our Function Selection

We followed these principles to decide which functions to add and how to implement them:

- Use template functions to build layout. The following
  types of operations are within the domain of template functions:
  - Formatting
  - Layout
  - Simple type conversions
  - Utilities that assist in handling common formatting and layout needs (e.g. arithmetic)
- Template functions should not return errors unless there is no way to print
  a sensible value. For example, converting a string to an integer should not
  produce an error if conversion fails. Instead, it should display a default
  value.
- Simple math is necessary for grid layouts, pagers, and so on. Complex math
  (anything other than arithmetic) should be done outside of templates.
- Template functions only deal with the data passed into them. They never retrieve
  data from a source.
- Finally, do not override core Go template functions.
12
vendor/github.com/go-task/slim-sprig/Taskfile.yml
generated
vendored
Normal file
@@ -0,0 +1,12 @@

# https://taskfile.dev

version: '2'

tasks:
  default:
    cmds:
      - task: test

  test:
    cmds:
      - go test -v .
24
vendor/github.com/go-task/slim-sprig/crypto.go
generated
vendored
Normal file
@@ -0,0 +1,24 @@

package sprig

import (
	"crypto/sha1"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"hash/adler32"
)

func sha256sum(input string) string {
	hash := sha256.Sum256([]byte(input))
	return hex.EncodeToString(hash[:])
}

func sha1sum(input string) string {
	hash := sha1.Sum([]byte(input))
	return hex.EncodeToString(hash[:])
}

func adler32sum(input string) string {
	hash := adler32.Checksum([]byte(input))
	return fmt.Sprintf("%d", hash)
}
152
vendor/github.com/go-task/slim-sprig/date.go
generated
vendored
Normal file
@@ -0,0 +1,152 @@

package sprig

import (
	"strconv"
	"time"
)

// Given a format and a date, format the date string.
//
// Date can be a `time.Time` or an `int, int32, int64`.
// In the latter case, it is treated as seconds since UNIX
// epoch.
func date(fmt string, date interface{}) string {
	return dateInZone(fmt, date, "Local")
}

func htmlDate(date interface{}) string {
	return dateInZone("2006-01-02", date, "Local")
}

func htmlDateInZone(date interface{}, zone string) string {
	return dateInZone("2006-01-02", date, zone)
}

func dateInZone(fmt string, date interface{}, zone string) string {
	var t time.Time
	switch date := date.(type) {
	default:
		t = time.Now()
	case time.Time:
		t = date
	case *time.Time:
		t = *date
	case int64:
		t = time.Unix(date, 0)
	case int:
		t = time.Unix(int64(date), 0)
	case int32:
		t = time.Unix(int64(date), 0)
	}

	loc, err := time.LoadLocation(zone)
	if err != nil {
		loc, _ = time.LoadLocation("UTC")
	}

	return t.In(loc).Format(fmt)
}

func dateModify(fmt string, date time.Time) time.Time {
	d, err := time.ParseDuration(fmt)
	if err != nil {
		return date
	}
	return date.Add(d)
}

func mustDateModify(fmt string, date time.Time) (time.Time, error) {
	d, err := time.ParseDuration(fmt)
	if err != nil {
		return time.Time{}, err
	}
	return date.Add(d), nil
}

func dateAgo(date interface{}) string {
	var t time.Time

	switch date := date.(type) {
	default:
		t = time.Now()
	case time.Time:
		t = date
	case int64:
		t = time.Unix(date, 0)
	case int:
		t = time.Unix(int64(date), 0)
	}
	// Drop resolution to seconds
	duration := time.Since(t).Round(time.Second)
	return duration.String()
}

func duration(sec interface{}) string {
	var n int64
	switch value := sec.(type) {
	default:
		n = 0
	case string:
		n, _ = strconv.ParseInt(value, 10, 64)
	case int64:
		n = value
	}
	return (time.Duration(n) * time.Second).String()
}

func durationRound(duration interface{}) string {
	var d time.Duration
	switch duration := duration.(type) {
	default:
		d = 0
	case string:
		d, _ = time.ParseDuration(duration)
	case int64:
		d = time.Duration(duration)
	case time.Time:
		d = time.Since(duration)
	}

	u := uint64(d)
	neg := d < 0
	if neg {
		u = -u
	}

	var (
		year   = uint64(time.Hour) * 24 * 365
		month  = uint64(time.Hour) * 24 * 30
		day    = uint64(time.Hour) * 24
		hour   = uint64(time.Hour)
		minute = uint64(time.Minute)
		second = uint64(time.Second)
	)
	switch {
	case u > year:
		return strconv.FormatUint(u/year, 10) + "y"
	case u > month:
		return strconv.FormatUint(u/month, 10) + "mo"
	case u > day:
		return strconv.FormatUint(u/day, 10) + "d"
	case u > hour:
		return strconv.FormatUint(u/hour, 10) + "h"
	case u > minute:
		return strconv.FormatUint(u/minute, 10) + "m"
	case u > second:
		return strconv.FormatUint(u/second, 10) + "s"
	}
	return "0s"
}

func toDate(fmt, str string) time.Time {
	t, _ := time.ParseInLocation(fmt, str, time.Local)
	return t
}

func mustToDate(fmt, str string) (time.Time, error) {
	return time.ParseInLocation(fmt, str, time.Local)
}

func unixEpoch(date time.Time) string {
	return strconv.FormatInt(date.Unix(), 10)
}
163
vendor/github.com/go-task/slim-sprig/defaults.go
generated
vendored
Normal file
@@ -0,0 +1,163 @@

package sprig

import (
	"bytes"
	"encoding/json"
	"math/rand"
	"reflect"
	"strings"
	"time"
)

func init() {
	rand.Seed(time.Now().UnixNano())
}

// dfault checks whether `given` is set, and returns default if not set.
//
// This returns `d` if `given` appears not to be set, and `given` otherwise.
//
// For numeric types 0 is unset.
// For strings, maps, arrays, and slices, len() = 0 is considered unset.
// For bool, false is unset.
// Structs are never considered unset.
//
// For everything else, including pointers, a nil value is unset.
func dfault(d interface{}, given ...interface{}) interface{} {

	if empty(given) || empty(given[0]) {
		return d
	}
	return given[0]
}

// empty returns true if the given value has the zero value for its type.
func empty(given interface{}) bool {
	g := reflect.ValueOf(given)
	if !g.IsValid() {
		return true
	}

	// Basically adapted from text/template.isTrue
	switch g.Kind() {
	default:
		return g.IsNil()
	case reflect.Array, reflect.Slice, reflect.Map, reflect.String:
		return g.Len() == 0
	case reflect.Bool:
		return !g.Bool()
	case reflect.Complex64, reflect.Complex128:
		return g.Complex() == 0
	case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
		return g.Int() == 0
	case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
		return g.Uint() == 0
	case reflect.Float32, reflect.Float64:
		return g.Float() == 0
	case reflect.Struct:
		return false
	}
}

// coalesce returns the first non-empty value.
func coalesce(v ...interface{}) interface{} {
	for _, val := range v {
		if !empty(val) {
			return val
		}
	}
	return nil
}

// all returns true if empty(x) is false for all values x in the list.
// If the list is empty, return true.
func all(v ...interface{}) bool {
	for _, val := range v {
		if empty(val) {
			return false
		}
	}
	return true
}

// any returns true if empty(x) is false for any x in the list.
// If the list is empty, return false.
func any(v ...interface{}) bool {
	for _, val := range v {
		if !empty(val) {
			return true
		}
	}
	return false
}

// fromJson decodes JSON into a structured value, ignoring errors.
func fromJson(v string) interface{} {
	output, _ := mustFromJson(v)
	return output
}

// mustFromJson decodes JSON into a structured value, returning errors.
func mustFromJson(v string) (interface{}, error) {
	var output interface{}
	err := json.Unmarshal([]byte(v), &output)
	return output, err
}

// toJson encodes an item into a JSON string
func toJson(v interface{}) string {
	output, _ := json.Marshal(v)
	return string(output)
}

func mustToJson(v interface{}) (string, error) {
	output, err := json.Marshal(v)
	if err != nil {
		return "", err
	}
	return string(output), nil
}

// toPrettyJson encodes an item into a pretty (indented) JSON string
func toPrettyJson(v interface{}) string {
	output, _ := json.MarshalIndent(v, "", "  ")
	return string(output)
}

func mustToPrettyJson(v interface{}) (string, error) {
	output, err := json.MarshalIndent(v, "", "  ")
	if err != nil {
		return "", err
	}
	return string(output), nil
}

// toRawJson encodes an item into a JSON string with no escaping of HTML characters.
func toRawJson(v interface{}) string {
	output, err := mustToRawJson(v)
	if err != nil {
		panic(err)
	}
	return string(output)
}

// mustToRawJson encodes an item into a JSON string with no escaping of HTML characters.
func mustToRawJson(v interface{}) (string, error) {
	buf := new(bytes.Buffer)
	enc := json.NewEncoder(buf)
	enc.SetEscapeHTML(false)
	err := enc.Encode(&v)
	if err != nil {
		return "", err
	}
	return strings.TrimSuffix(buf.String(), "\n"), nil
}

// ternary returns the first value if the last value is true, otherwise returns the second value.
func ternary(vt interface{}, vf interface{}, v bool) interface{} {
	if v {
		return vt
	}

	return vf
}
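The reflect-based `empty` check above drives `dfault`, `coalesce`, `all`, and `any`. A trimmed standalone sketch covering only the kinds exercised below:

```go
package main

import (
	"fmt"
	"reflect"
)

// empty reports whether the value is the zero value for its type,
// mirroring the reflect-based check above (reduced to a few kinds).
func empty(given interface{}) bool {
	g := reflect.ValueOf(given)
	if !g.IsValid() {
		return true
	}
	switch g.Kind() {
	case reflect.Array, reflect.Slice, reflect.Map, reflect.String:
		return g.Len() == 0
	case reflect.Bool:
		return !g.Bool()
	case reflect.Int, reflect.Int64:
		return g.Int() == 0
	case reflect.Float32, reflect.Float64:
		return g.Float() == 0
	case reflect.Struct:
		return false
	default:
		return g.IsNil()
	}
}

func main() {
	fmt.Println(empty(""), empty(0), empty("x"), empty(nil))
}
```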
118
vendor/github.com/go-task/slim-sprig/dict.go
generated
vendored
Normal file
@@ -0,0 +1,118 @@

package sprig

func get(d map[string]interface{}, key string) interface{} {
	if val, ok := d[key]; ok {
		return val
	}
	return ""
}

func set(d map[string]interface{}, key string, value interface{}) map[string]interface{} {
	d[key] = value
	return d
}

func unset(d map[string]interface{}, key string) map[string]interface{} {
	delete(d, key)
	return d
}

func hasKey(d map[string]interface{}, key string) bool {
	_, ok := d[key]
	return ok
}

func pluck(key string, d ...map[string]interface{}) []interface{} {
	res := []interface{}{}
	for _, dict := range d {
		if val, ok := dict[key]; ok {
			res = append(res, val)
		}
	}
	return res
}

func keys(dicts ...map[string]interface{}) []string {
	k := []string{}
	for _, dict := range dicts {
		for key := range dict {
			k = append(k, key)
		}
	}
	return k
}

func pick(dict map[string]interface{}, keys ...string) map[string]interface{} {
	res := map[string]interface{}{}
	for _, k := range keys {
		if v, ok := dict[k]; ok {
			res[k] = v
		}
	}
	return res
}

func omit(dict map[string]interface{}, keys ...string) map[string]interface{} {
	res := map[string]interface{}{}

	omit := make(map[string]bool, len(keys))
	for _, k := range keys {
		omit[k] = true
	}

	for k, v := range dict {
		if _, ok := omit[k]; !ok {
			res[k] = v
		}
	}
	return res
}

func dict(v ...interface{}) map[string]interface{} {
	dict := map[string]interface{}{}
	lenv := len(v)
	for i := 0; i < lenv; i += 2 {
		key := strval(v[i])
		if i+1 >= lenv {
			dict[key] = ""
			continue
		}
		dict[key] = v[i+1]
	}
	return dict
}

func values(dict map[string]interface{}) []interface{} {
	values := []interface{}{}
	for _, value := range dict {
		values = append(values, value)
	}

	return values
}

func dig(ps ...interface{}) (interface{}, error) {
	if len(ps) < 3 {
		panic("dig needs at least three arguments")
	}
	dict := ps[len(ps)-1].(map[string]interface{})
	def := ps[len(ps)-2]
	ks := make([]string, len(ps)-2)
	for i := 0; i < len(ks); i++ {
		ks[i] = ps[i].(string)
	}

	return digFromDict(dict, def, ks)
}

func digFromDict(dict map[string]interface{}, d interface{}, ks []string) (interface{}, error) {
	k, ns := ks[0], ks[1:len(ks)]
	step, has := dict[k]
	if !has {
		return d, nil
	}
	if len(ns) == 0 {
		return step, nil
	}
	return digFromDict(step.(map[string]interface{}), d, ns)
}
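The recursive `digFromDict` walk above returns the default when any key along the path is missing. A standalone sketch of the same idea with a simplified signature (this example's own, not the package API):

```go
package main

import "fmt"

// dig walks nested map[string]interface{} values by key, returning a
// default when any key along the path is missing — a sketch of the
// digFromDict helper above.
func dig(dict map[string]interface{}, def interface{}, keys ...string) interface{} {
	step, ok := dict[keys[0]]
	if !ok {
		return def
	}
	if len(keys) == 1 {
		return step
	}
	return dig(step.(map[string]interface{}), def, keys[1:]...)
}

func main() {
	m := map[string]interface{}{"a": map[string]interface{}{"b": 42}}
	fmt.Println(dig(m, "missing", "a", "b"))
	fmt.Println(dig(m, "missing", "a", "c"))
}
```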
19
vendor/github.com/go-task/slim-sprig/doc.go
generated
vendored
Normal file
@@ -0,0 +1,19 @@

/*
Package sprig provides template functions for Go.

This package contains a number of utility functions for working with data
inside of Go `html/template` and `text/template` files.

To add these functions, use the `template.Funcs()` method:

	t := template.New("foo").Funcs(sprig.FuncMap())

Note that you should add the function map before you parse any template files.

In several cases, Sprig reverses the order of arguments from the way they
appear in the standard library. This is to make it easier to pipe
arguments into functions.

See http://masterminds.github.io/sprig/ for more detailed documentation on each of the available functions.
*/
package sprig
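The doc comment above stresses attaching the function map before parsing. A self-contained sketch of that ordering, using a minimal stand-in FuncMap (real code would pass sprig.FuncMap()):

```go
package main

import (
	"bytes"
	"fmt"
	"strings"
	"text/template"
)

// render attaches a stand-in FuncMap before parsing, as the doc comment
// above requires; parsing first would fail on the unknown function.
func render(src string) string {
	funcs := template.FuncMap{"upper": strings.ToUpper}
	tpl := template.Must(template.New("base").Funcs(funcs).Parse(src))
	var buf bytes.Buffer
	if err := tpl.Execute(&buf, nil); err != nil {
		panic(err)
	}
	return buf.String()
}

func main() {
	fmt.Println(render(`{{ "hello" | upper }}`))
}
```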
|
||||||
317
vendor/github.com/go-task/slim-sprig/functions.go
generated
vendored
Normal file
317
vendor/github.com/go-task/slim-sprig/functions.go
generated
vendored
Normal file
@@ -0,0 +1,317 @@
package sprig

import (
	"errors"
	"html/template"
	"math/rand"
	"os"
	"path"
	"path/filepath"
	"reflect"
	"strconv"
	"strings"
	ttemplate "text/template"
	"time"
)

// FuncMap produces the function map.
//
// Use this to pass the functions into the template engine:
//
//	tpl := template.New("foo").Funcs(sprig.FuncMap()))
//
func FuncMap() template.FuncMap {
	return HtmlFuncMap()
}

// HermeticTxtFuncMap returns a 'text/template'.FuncMap with only repeatable functions.
func HermeticTxtFuncMap() ttemplate.FuncMap {
	r := TxtFuncMap()
	for _, name := range nonhermeticFunctions {
		delete(r, name)
	}
	return r
}

// HermeticHtmlFuncMap returns an 'html/template'.Funcmap with only repeatable functions.
func HermeticHtmlFuncMap() template.FuncMap {
	r := HtmlFuncMap()
	for _, name := range nonhermeticFunctions {
		delete(r, name)
	}
	return r
}

// TxtFuncMap returns a 'text/template'.FuncMap
func TxtFuncMap() ttemplate.FuncMap {
	return ttemplate.FuncMap(GenericFuncMap())
}

// HtmlFuncMap returns an 'html/template'.Funcmap
func HtmlFuncMap() template.FuncMap {
	return template.FuncMap(GenericFuncMap())
}

// GenericFuncMap returns a copy of the basic function map as a map[string]interface{}.
func GenericFuncMap() map[string]interface{} {
	gfm := make(map[string]interface{}, len(genericMap))
	for k, v := range genericMap {
		gfm[k] = v
	}
	return gfm
}

// These functions are not guaranteed to evaluate to the same result for given input, because they
// refer to the environment or global state.
var nonhermeticFunctions = []string{
	// Date functions
	"date",
	"date_in_zone",
	"date_modify",
	"now",
	"htmlDate",
	"htmlDateInZone",
	"dateInZone",
	"dateModify",

	// Strings
	"randAlphaNum",
	"randAlpha",
	"randAscii",
	"randNumeric",
	"randBytes",
	"uuidv4",

	// OS
	"env",
	"expandenv",

	// Network
	"getHostByName",
}

var genericMap = map[string]interface{}{
	"hello": func() string { return "Hello!" },

	// Date functions
	"ago":              dateAgo,
	"date":             date,
	"date_in_zone":     dateInZone,
	"date_modify":      dateModify,
	"dateInZone":       dateInZone,
	"dateModify":       dateModify,
	"duration":         duration,
	"durationRound":    durationRound,
	"htmlDate":         htmlDate,
	"htmlDateInZone":   htmlDateInZone,
	"must_date_modify": mustDateModify,
	"mustDateModify":   mustDateModify,
	"mustToDate":       mustToDate,
	"now":              time.Now,
	"toDate":           toDate,
	"unixEpoch":        unixEpoch,

	// Strings
	"trunc":  trunc,
	"trim":   strings.TrimSpace,
	"upper":  strings.ToUpper,
	"lower":  strings.ToLower,
	"title":  strings.Title,
	"substr": substring,
	// Switch order so that "foo" | repeat 5
	"repeat": func(count int, str string) string { return strings.Repeat(str, count) },
	// Deprecated: Use trimAll.
	"trimall": func(a, b string) string { return strings.Trim(b, a) },
	// Switch order so that "$foo" | trimall "$"
	"trimAll":    func(a, b string) string { return strings.Trim(b, a) },
	"trimSuffix": func(a, b string) string { return strings.TrimSuffix(b, a) },
	"trimPrefix": func(a, b string) string { return strings.TrimPrefix(b, a) },
	// Switch order so that "foobar" | contains "foo"
	"contains":   func(substr string, str string) bool { return strings.Contains(str, substr) },
	"hasPrefix":  func(substr string, str string) bool { return strings.HasPrefix(str, substr) },
	"hasSuffix":  func(substr string, str string) bool { return strings.HasSuffix(str, substr) },
	"quote":      quote,
	"squote":     squote,
	"cat":        cat,
	"indent":     indent,
	"nindent":    nindent,
	"replace":    replace,
	"plural":     plural,
	"sha1sum":    sha1sum,
	"sha256sum":  sha256sum,
	"adler32sum": adler32sum,
	"toString":   strval,

	// Wrap Atoi to stop errors.
	"atoi":      func(a string) int { i, _ := strconv.Atoi(a); return i },
	"int64":     toInt64,
	"int":       toInt,
	"float64":   toFloat64,
	"seq":       seq,
	"toDecimal": toDecimal,

	//"gt": func(a, b int) bool {return a > b},
	//"gte": func(a, b int) bool {return a >= b},
	//"lt": func(a, b int) bool {return a < b},
	//"lte": func(a, b int) bool {return a <= b},

	// split "/" foo/bar returns map[int]string{0: foo, 1: bar}
	"split":     split,
	"splitList": func(sep, orig string) []string { return strings.Split(orig, sep) },
	// splitn "/" foo/bar/fuu returns map[int]string{0: foo, 1: bar/fuu}
	"splitn":    splitn,
	"toStrings": strslice,

	"until":     until,
	"untilStep": untilStep,

	// VERY basic arithmetic.
	"add1": func(i interface{}) int64 { return toInt64(i) + 1 },
	"add": func(i ...interface{}) int64 {
		var a int64 = 0
		for _, b := range i {
			a += toInt64(b)
		}
		return a
	},
	"sub": func(a, b interface{}) int64 { return toInt64(a) - toInt64(b) },
	"div": func(a, b interface{}) int64 { return toInt64(a) / toInt64(b) },
	"mod": func(a, b interface{}) int64 { return toInt64(a) % toInt64(b) },
	"mul": func(a interface{}, v ...interface{}) int64 {
		val := toInt64(a)
		for _, b := range v {
			val = val * toInt64(b)
		}
		return val
	},
	"randInt": func(min, max int) int { return rand.Intn(max-min) + min },
	"biggest": max,
	"max":     max,
	"min":     min,
	"maxf":    maxf,
	"minf":    minf,
	"ceil":    ceil,
	"floor":   floor,
	"round":   round,

	// string slices. Note that we reverse the order b/c that's better
	// for template processing.
	"join":      join,
	"sortAlpha": sortAlpha,

	// Defaults
	"default":          dfault,
	"empty":            empty,
	"coalesce":         coalesce,
	"all":              all,
	"any":              any,
	"compact":          compact,
	"mustCompact":      mustCompact,
	"fromJson":         fromJson,
	"toJson":           toJson,
	"toPrettyJson":     toPrettyJson,
	"toRawJson":        toRawJson,
	"mustFromJson":     mustFromJson,
	"mustToJson":       mustToJson,
	"mustToPrettyJson": mustToPrettyJson,
	"mustToRawJson":    mustToRawJson,
	"ternary":          ternary,

	// Reflection
	"typeOf":     typeOf,
	"typeIs":     typeIs,
	"typeIsLike": typeIsLike,
	"kindOf":     kindOf,
	"kindIs":     kindIs,
	"deepEqual":  reflect.DeepEqual,

	// OS:
	"env":       os.Getenv,
	"expandenv": os.ExpandEnv,

	// Network:
	"getHostByName": getHostByName,

	// Paths:
	"base":  path.Base,
	"dir":   path.Dir,
	"clean": path.Clean,
	"ext":   path.Ext,
	"isAbs": path.IsAbs,

	// Filepaths:
	"osBase":  filepath.Base,
	"osClean": filepath.Clean,
	"osDir":   filepath.Dir,
	"osExt":   filepath.Ext,
	"osIsAbs": filepath.IsAbs,

	// Encoding:
	"b64enc": base64encode,
	"b64dec": base64decode,
	"b32enc": base32encode,
	"b32dec": base32decode,

	// Data Structures:
	"tuple":  list, // FIXME: with the addition of append/prepend these are no longer immutable.
	"list":   list,
	"dict":   dict,
	"get":    get,
	"set":    set,
	"unset":  unset,
	"hasKey": hasKey,
	"pluck":  pluck,
	"keys":   keys,
	"pick":   pick,
	"omit":   omit,
	"values": values,

	"append": push, "push": push,
	"mustAppend": mustPush, "mustPush": mustPush,
	"prepend":     prepend,
	"mustPrepend": mustPrepend,
	"first":       first,
	"mustFirst":   mustFirst,
	"rest":        rest,
	"mustRest":    mustRest,
	"last":        last,
	"mustLast":    mustLast,
	"initial":     initial,
	"mustInitial": mustInitial,
	"reverse":     reverse,
	"mustReverse": mustReverse,
	"uniq":        uniq,
	"mustUniq":    mustUniq,
	"without":     without,
	"mustWithout": mustWithout,
	"has":         has,
	"mustHas":     mustHas,
	"slice":       slice,
	"mustSlice":   mustSlice,
	"concat":      concat,
	"dig":         dig,
	"chunk":       chunk,
	"mustChunk":   mustChunk,

	// Flow Control:
	"fail": func(msg string) (string, error) { return "", errors.New(msg) },

	// Regex
	"regexMatch":                 regexMatch,
	"mustRegexMatch":             mustRegexMatch,
	"regexFindAll":               regexFindAll,
	"mustRegexFindAll":           mustRegexFindAll,
	"regexFind":                  regexFind,
	"mustRegexFind":              mustRegexFind,
	"regexReplaceAll":            regexReplaceAll,
	"mustRegexReplaceAll":        mustRegexReplaceAll,
	"regexReplaceAllLiteral":     regexReplaceAllLiteral,
	"mustRegexReplaceAllLiteral": mustRegexReplaceAllLiteral,
	"regexSplit":                 regexSplit,
	"mustRegexSplit":             mustRegexSplit,
	"regexQuoteMeta":             regexQuoteMeta,

	// URLs:
	"urlParse": urlParse,
	"urlJoin":  urlJoin,
}
464 vendor/github.com/go-task/slim-sprig/list.go generated vendored Normal file
@@ -0,0 +1,464 @@
package sprig

import (
	"fmt"
	"math"
	"reflect"
	"sort"
)

// Reflection is used in these functions so that slices and arrays of strings,
// ints, and other types not implementing []interface{} can be worked with.
// For example, this is useful if you need to work on the output of regexs.

func list(v ...interface{}) []interface{} {
	return v
}

func push(list interface{}, v interface{}) []interface{} {
	l, err := mustPush(list, v)
	if err != nil {
		panic(err)
	}

	return l
}

func mustPush(list interface{}, v interface{}) ([]interface{}, error) {
	tp := reflect.TypeOf(list).Kind()
	switch tp {
	case reflect.Slice, reflect.Array:
		l2 := reflect.ValueOf(list)

		l := l2.Len()
		nl := make([]interface{}, l)
		for i := 0; i < l; i++ {
			nl[i] = l2.Index(i).Interface()
		}

		return append(nl, v), nil

	default:
		return nil, fmt.Errorf("Cannot push on type %s", tp)
	}
}

func prepend(list interface{}, v interface{}) []interface{} {
	l, err := mustPrepend(list, v)
	if err != nil {
		panic(err)
	}

	return l
}

func mustPrepend(list interface{}, v interface{}) ([]interface{}, error) {
	//return append([]interface{}{v}, list...)

	tp := reflect.TypeOf(list).Kind()
	switch tp {
	case reflect.Slice, reflect.Array:
		l2 := reflect.ValueOf(list)

		l := l2.Len()
		nl := make([]interface{}, l)
		for i := 0; i < l; i++ {
			nl[i] = l2.Index(i).Interface()
		}

		return append([]interface{}{v}, nl...), nil

	default:
		return nil, fmt.Errorf("Cannot prepend on type %s", tp)
	}
}

func chunk(size int, list interface{}) [][]interface{} {
	l, err := mustChunk(size, list)
	if err != nil {
		panic(err)
	}

	return l
}

func mustChunk(size int, list interface{}) ([][]interface{}, error) {
	tp := reflect.TypeOf(list).Kind()
	switch tp {
	case reflect.Slice, reflect.Array:
		l2 := reflect.ValueOf(list)

		l := l2.Len()

		cs := int(math.Floor(float64(l-1)/float64(size)) + 1)
		nl := make([][]interface{}, cs)

		for i := 0; i < cs; i++ {
			clen := size
			if i == cs-1 {
				clen = int(math.Floor(math.Mod(float64(l), float64(size))))
				if clen == 0 {
					clen = size
				}
			}

			nl[i] = make([]interface{}, clen)

			for j := 0; j < clen; j++ {
				ix := i*size + j
				nl[i][j] = l2.Index(ix).Interface()
			}
		}

		return nl, nil

	default:
		return nil, fmt.Errorf("Cannot chunk type %s", tp)
	}
}

func last(list interface{}) interface{} {
	l, err := mustLast(list)
	if err != nil {
		panic(err)
	}

	return l
}

func mustLast(list interface{}) (interface{}, error) {
	tp := reflect.TypeOf(list).Kind()
	switch tp {
	case reflect.Slice, reflect.Array:
		l2 := reflect.ValueOf(list)

		l := l2.Len()
		if l == 0 {
			return nil, nil
		}

		return l2.Index(l - 1).Interface(), nil
	default:
		return nil, fmt.Errorf("Cannot find last on type %s", tp)
	}
}

func first(list interface{}) interface{} {
	l, err := mustFirst(list)
	if err != nil {
		panic(err)
	}

	return l
}

func mustFirst(list interface{}) (interface{}, error) {
	tp := reflect.TypeOf(list).Kind()
	switch tp {
	case reflect.Slice, reflect.Array:
		l2 := reflect.ValueOf(list)

		l := l2.Len()
		if l == 0 {
			return nil, nil
		}

		return l2.Index(0).Interface(), nil
	default:
		return nil, fmt.Errorf("Cannot find first on type %s", tp)
	}
}

func rest(list interface{}) []interface{} {
	l, err := mustRest(list)
	if err != nil {
		panic(err)
	}

	return l
}

func mustRest(list interface{}) ([]interface{}, error) {
	tp := reflect.TypeOf(list).Kind()
	switch tp {
	case reflect.Slice, reflect.Array:
		l2 := reflect.ValueOf(list)

		l := l2.Len()
		if l == 0 {
			return nil, nil
		}

		nl := make([]interface{}, l-1)
		for i := 1; i < l; i++ {
			nl[i-1] = l2.Index(i).Interface()
		}

		return nl, nil
	default:
		return nil, fmt.Errorf("Cannot find rest on type %s", tp)
	}
}

func initial(list interface{}) []interface{} {
	l, err := mustInitial(list)
	if err != nil {
		panic(err)
	}

	return l
}

func mustInitial(list interface{}) ([]interface{}, error) {
	tp := reflect.TypeOf(list).Kind()
	switch tp {
	case reflect.Slice, reflect.Array:
		l2 := reflect.ValueOf(list)

		l := l2.Len()
		if l == 0 {
			return nil, nil
		}

		nl := make([]interface{}, l-1)
		for i := 0; i < l-1; i++ {
			nl[i] = l2.Index(i).Interface()
		}

		return nl, nil
	default:
		return nil, fmt.Errorf("Cannot find initial on type %s", tp)
	}
}

func sortAlpha(list interface{}) []string {
	k := reflect.Indirect(reflect.ValueOf(list)).Kind()
	switch k {
	case reflect.Slice, reflect.Array:
		a := strslice(list)
		s := sort.StringSlice(a)
		s.Sort()
		return s
	}
	return []string{strval(list)}
}

func reverse(v interface{}) []interface{} {
	l, err := mustReverse(v)
	if err != nil {
		panic(err)
	}

	return l
}

func mustReverse(v interface{}) ([]interface{}, error) {
	tp := reflect.TypeOf(v).Kind()
	switch tp {
	case reflect.Slice, reflect.Array:
		l2 := reflect.ValueOf(v)

		l := l2.Len()
		// We do not sort in place because the incoming array should not be altered.
		nl := make([]interface{}, l)
		for i := 0; i < l; i++ {
			nl[l-i-1] = l2.Index(i).Interface()
		}

		return nl, nil
	default:
		return nil, fmt.Errorf("Cannot find reverse on type %s", tp)
	}
}

func compact(list interface{}) []interface{} {
	l, err := mustCompact(list)
	if err != nil {
		panic(err)
	}

	return l
}

func mustCompact(list interface{}) ([]interface{}, error) {
	tp := reflect.TypeOf(list).Kind()
	switch tp {
	case reflect.Slice, reflect.Array:
		l2 := reflect.ValueOf(list)

		l := l2.Len()
		nl := []interface{}{}
		var item interface{}
		for i := 0; i < l; i++ {
			item = l2.Index(i).Interface()
			if !empty(item) {
				nl = append(nl, item)
			}
		}

		return nl, nil
	default:
		return nil, fmt.Errorf("Cannot compact on type %s", tp)
	}
}

func uniq(list interface{}) []interface{} {
	l, err := mustUniq(list)
	if err != nil {
		panic(err)
	}

	return l
}

func mustUniq(list interface{}) ([]interface{}, error) {
	tp := reflect.TypeOf(list).Kind()
	switch tp {
	case reflect.Slice, reflect.Array:
		l2 := reflect.ValueOf(list)

		l := l2.Len()
		dest := []interface{}{}
		var item interface{}
		for i := 0; i < l; i++ {
			item = l2.Index(i).Interface()
			if !inList(dest, item) {
				dest = append(dest, item)
			}
		}

		return dest, nil
	default:
		return nil, fmt.Errorf("Cannot find uniq on type %s", tp)
	}
}

func inList(haystack []interface{}, needle interface{}) bool {
	for _, h := range haystack {
		if reflect.DeepEqual(needle, h) {
			return true
		}
	}
	return false
}

func without(list interface{}, omit ...interface{}) []interface{} {
	l, err := mustWithout(list, omit...)
	if err != nil {
		panic(err)
	}

	return l
}

func mustWithout(list interface{}, omit ...interface{}) ([]interface{}, error) {
	tp := reflect.TypeOf(list).Kind()
	switch tp {
	case reflect.Slice, reflect.Array:
		l2 := reflect.ValueOf(list)

		l := l2.Len()
		res := []interface{}{}
		var item interface{}
		for i := 0; i < l; i++ {
			item = l2.Index(i).Interface()
			if !inList(omit, item) {
				res = append(res, item)
			}
		}

		return res, nil
	default:
		return nil, fmt.Errorf("Cannot find without on type %s", tp)
	}
}

func has(needle interface{}, haystack interface{}) bool {
	l, err := mustHas(needle, haystack)
	if err != nil {
		panic(err)
	}

	return l
}

func mustHas(needle interface{}, haystack interface{}) (bool, error) {
	if haystack == nil {
		return false, nil
	}
	tp := reflect.TypeOf(haystack).Kind()
	switch tp {
	case reflect.Slice, reflect.Array:
		l2 := reflect.ValueOf(haystack)
		var item interface{}
		l := l2.Len()
		for i := 0; i < l; i++ {
			item = l2.Index(i).Interface()
			if reflect.DeepEqual(needle, item) {
				return true, nil
			}
		}

		return false, nil
	default:
		return false, fmt.Errorf("Cannot find has on type %s", tp)
	}
}

// $list := [1, 2, 3, 4, 5]
// slice $list     -> list[0:5] = list[:]
// slice $list 0 3 -> list[0:3] = list[:3]
// slice $list 3 5 -> list[3:5]
// slice $list 3   -> list[3:5] = list[3:]
func slice(list interface{}, indices ...interface{}) interface{} {
	l, err := mustSlice(list, indices...)
	if err != nil {
		panic(err)
	}

	return l
}

func mustSlice(list interface{}, indices ...interface{}) (interface{}, error) {
	tp := reflect.TypeOf(list).Kind()
	switch tp {
	case reflect.Slice, reflect.Array:
		l2 := reflect.ValueOf(list)

		l := l2.Len()
		if l == 0 {
			return nil, nil
		}

		var start, end int
		if len(indices) > 0 {
			start = toInt(indices[0])
		}
		if len(indices) < 2 {
			end = l
		} else {
			end = toInt(indices[1])
		}

		return l2.Slice(start, end).Interface(), nil
	default:
		return nil, fmt.Errorf("list should be type of slice or array but %s", tp)
	}
}

func concat(lists ...interface{}) interface{} {
	var res []interface{}
	for _, list := range lists {
		tp := reflect.TypeOf(list).Kind()
		switch tp {
		case reflect.Slice, reflect.Array:
			l2 := reflect.ValueOf(list)
			for i := 0; i < l2.Len(); i++ {
				res = append(res, l2.Index(i).Interface())
			}
		default:
			panic(fmt.Sprintf("Cannot concat type %s as list", tp))
		}
	}
	return res
}
12 vendor/github.com/go-task/slim-sprig/network.go generated vendored Normal file
@@ -0,0 +1,12 @@
package sprig

import (
	"math/rand"
	"net"
)

func getHostByName(name string) string {
	addrs, _ := net.LookupHost(name)
	//TODO: add error handing when release v3 comes out
	return addrs[rand.Intn(len(addrs))]
}
228 vendor/github.com/go-task/slim-sprig/numeric.go generated vendored Normal file
@@ -0,0 +1,228 @@
package sprig

import (
	"fmt"
	"math"
	"reflect"
	"strconv"
	"strings"
)

// toFloat64 converts 64-bit floats
func toFloat64(v interface{}) float64 {
	if str, ok := v.(string); ok {
		iv, err := strconv.ParseFloat(str, 64)
		if err != nil {
			return 0
		}
		return iv
	}

	val := reflect.Indirect(reflect.ValueOf(v))
	switch val.Kind() {
	case reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64, reflect.Int:
		return float64(val.Int())
	case reflect.Uint8, reflect.Uint16, reflect.Uint32:
		return float64(val.Uint())
	case reflect.Uint, reflect.Uint64:
		return float64(val.Uint())
	case reflect.Float32, reflect.Float64:
		return val.Float()
	case reflect.Bool:
		if val.Bool() {
			return 1
		}
		return 0
	default:
		return 0
	}
}

func toInt(v interface{}) int {
	//It's not optimal. Bud I don't want duplicate toInt64 code.
	return int(toInt64(v))
}

// toInt64 converts integer types to 64-bit integers
func toInt64(v interface{}) int64 {
	if str, ok := v.(string); ok {
		iv, err := strconv.ParseInt(str, 10, 64)
		if err != nil {
			return 0
		}
		return iv
	}

	val := reflect.Indirect(reflect.ValueOf(v))
	switch val.Kind() {
	case reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64, reflect.Int:
		return val.Int()
	case reflect.Uint8, reflect.Uint16, reflect.Uint32:
		return int64(val.Uint())
	case reflect.Uint, reflect.Uint64:
		tv := val.Uint()
		if tv <= math.MaxInt64 {
			return int64(tv)
		}
		// TODO: What is the sensible thing to do here?
		return math.MaxInt64
	case reflect.Float32, reflect.Float64:
		return int64(val.Float())
	case reflect.Bool:
		if val.Bool() {
			return 1
		}
		return 0
	default:
		return 0
	}
}

func max(a interface{}, i ...interface{}) int64 {
	aa := toInt64(a)
	for _, b := range i {
		bb := toInt64(b)
		if bb > aa {
			aa = bb
		}
	}
	return aa
}

func maxf(a interface{}, i ...interface{}) float64 {
	aa := toFloat64(a)
	for _, b := range i {
		bb := toFloat64(b)
		aa = math.Max(aa, bb)
	}
	return aa
}

func min(a interface{}, i ...interface{}) int64 {
	aa := toInt64(a)
	for _, b := range i {
		bb := toInt64(b)
		if bb < aa {
			aa = bb
		}
	}
	return aa
}

func minf(a interface{}, i ...interface{}) float64 {
	aa := toFloat64(a)
	for _, b := range i {
		bb := toFloat64(b)
		aa = math.Min(aa, bb)
	}
	return aa
}

func until(count int) []int {
	step := 1
	if count < 0 {
		step = -1
	}
	return untilStep(0, count, step)
}

func untilStep(start, stop, step int) []int {
	v := []int{}

	if stop < start {
		if step >= 0 {
			return v
		}
		for i := start; i > stop; i += step {
			v = append(v, i)
		}
		return v
	}

	if step <= 0 {
		return v
	}
	for i := start; i < stop; i += step {
		v = append(v, i)
	}
	return v
}

func floor(a interface{}) float64 {
	aa := toFloat64(a)
	return math.Floor(aa)
}

func ceil(a interface{}) float64 {
	aa := toFloat64(a)
	return math.Ceil(aa)
}

func round(a interface{}, p int, rOpt ...float64) float64 {
	roundOn := .5
	if len(rOpt) > 0 {
		roundOn = rOpt[0]
	}
	val := toFloat64(a)
	places := toFloat64(p)

	var round float64
	pow := math.Pow(10, places)
	digit := pow * val
	_, div := math.Modf(digit)
	if div >= roundOn {
		round = math.Ceil(digit)
	} else {
		round = math.Floor(digit)
	}
	return round / pow
}

// converts unix octal to decimal
func toDecimal(v interface{}) int64 {
	result, err := strconv.ParseInt(fmt.Sprint(v), 8, 64)
	if err != nil {
		return 0
	}
	return result
}

func seq(params ...int) string {
	increment := 1
	switch len(params) {
	case 0:
		return ""
	case 1:
		start := 1
		end := params[0]
		if end < start {
			increment = -1
		}
		return intArrayToString(untilStep(start, end+increment, increment), " ")
	case 3:
		start := params[0]
		end := params[2]
		step := params[1]
		if end < start {
			increment = -1
			if step > 0 {
				return ""
			}
		}
		return intArrayToString(untilStep(start, end+increment, step), " ")
	case 2:
		start := params[0]
		end := params[1]
		step := 1
		if end < start {
			step = -1
		}
		return intArrayToString(untilStep(start, end+step, step), " ")
	default:
		return ""
	}
}

func intArrayToString(slice []int, delimeter string) string {
	return strings.Trim(strings.Join(strings.Fields(fmt.Sprint(slice)), delimeter), "[]")
}
28 vendor/github.com/go-task/slim-sprig/reflect.go generated vendored Normal file
@@ -0,0 +1,28 @@
package sprig

import (
	"fmt"
	"reflect"
)

// typeIs returns true if the src is the type named in target.
func typeIs(target string, src interface{}) bool {
	return target == typeOf(src)
}

func typeIsLike(target string, src interface{}) bool {
	t := typeOf(src)
	return target == t || "*"+target == t
}

func typeOf(src interface{}) string {
	return fmt.Sprintf("%T", src)
}

func kindIs(target string, src interface{}) bool {
	return target == kindOf(src)
}

func kindOf(src interface{}) string {
	return reflect.ValueOf(src).Kind().String()
}
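The distinction these helpers draw is between a value's concrete type name (what `%T` prints) and its reflect *kind* (the coarser classification such as `ptr` or `slice`). A minimal sketch, with `widget` as an illustrative type:

```go
package main

import (
	"fmt"
	"reflect"
)

// typeOf reports the concrete Go type name, as in the file above.
func typeOf(src interface{}) string { return fmt.Sprintf("%T", src) }

// kindOf reports reflect's coarser kind classification.
func kindOf(src interface{}) string { return reflect.ValueOf(src).Kind().String() }

type widget struct{}

func main() {
	fmt.Println(typeOf(&widget{})) // *main.widget
	fmt.Println(kindOf(&widget{})) // ptr
	fmt.Println(kindOf([]int{}))   // slice
}
```

This is why `typeIsLike` also checks `"*"+target`: a pointer to a named type has a different type string but often the same intent in templates.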
83
vendor/github.com/go-task/slim-sprig/regex.go
generated
vendored
Normal file
@@ -0,0 +1,83 @@
package sprig

import (
	"regexp"
)

func regexMatch(regex string, s string) bool {
	match, _ := regexp.MatchString(regex, s)
	return match
}

func mustRegexMatch(regex string, s string) (bool, error) {
	return regexp.MatchString(regex, s)
}

func regexFindAll(regex string, s string, n int) []string {
	r := regexp.MustCompile(regex)
	return r.FindAllString(s, n)
}

func mustRegexFindAll(regex string, s string, n int) ([]string, error) {
	r, err := regexp.Compile(regex)
	if err != nil {
		return []string{}, err
	}
	return r.FindAllString(s, n), nil
}

func regexFind(regex string, s string) string {
	r := regexp.MustCompile(regex)
	return r.FindString(s)
}

func mustRegexFind(regex string, s string) (string, error) {
	r, err := regexp.Compile(regex)
	if err != nil {
		return "", err
	}
	return r.FindString(s), nil
}

func regexReplaceAll(regex string, s string, repl string) string {
	r := regexp.MustCompile(regex)
	return r.ReplaceAllString(s, repl)
}

func mustRegexReplaceAll(regex string, s string, repl string) (string, error) {
	r, err := regexp.Compile(regex)
	if err != nil {
		return "", err
	}
	return r.ReplaceAllString(s, repl), nil
}

func regexReplaceAllLiteral(regex string, s string, repl string) string {
	r := regexp.MustCompile(regex)
	return r.ReplaceAllLiteralString(s, repl)
}

func mustRegexReplaceAllLiteral(regex string, s string, repl string) (string, error) {
	r, err := regexp.Compile(regex)
	if err != nil {
		return "", err
	}
	return r.ReplaceAllLiteralString(s, repl), nil
}

func regexSplit(regex string, s string, n int) []string {
	r := regexp.MustCompile(regex)
	return r.Split(s, n)
}

func mustRegexSplit(regex string, s string, n int) ([]string, error) {
	r, err := regexp.Compile(regex)
	if err != nil {
		return []string{}, err
	}
	return r.Split(s, n), nil
}

func regexQuoteMeta(s string) string {
	return regexp.QuoteMeta(s)
}
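The file follows slim-sprig's paired-function convention: the plain variant swallows (or panics on) compile errors, while the `must` variant returns them. A standalone sketch of the `regexMatch` / `mustRegexMatch` pair, showing how an invalid pattern is silently reported as "no match" by the plain variant:

```go
package main

import (
	"fmt"
	"regexp"
)

// regexMatch, as in the file above: the compile error from an invalid
// pattern is discarded, so the call simply reports false.
func regexMatch(regex string, s string) bool {
	match, _ := regexp.MatchString(regex, s)
	return match
}

// mustRegexMatch surfaces the error instead of dropping it.
func mustRegexMatch(regex string, s string) (bool, error) {
	return regexp.MatchString(regex, s)
}

func main() {
	fmt.Println(regexMatch(`\d+`, "v1.2")) // true
	fmt.Println(regexMatch(`[`, "v1.2"))   // false: invalid pattern, error dropped
	_, err := mustRegexMatch(`[`, "v1.2")
	fmt.Println(err != nil) // true: the must variant reports the bad pattern
}
```

In templates the plain variants keep rendering going; the `must` variants let a caller fail loudly on a bad pattern.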
189
vendor/github.com/go-task/slim-sprig/strings.go
generated
vendored
Normal file
@@ -0,0 +1,189 @@
package sprig

import (
	"encoding/base32"
	"encoding/base64"
	"fmt"
	"reflect"
	"strconv"
	"strings"
)

func base64encode(v string) string {
	return base64.StdEncoding.EncodeToString([]byte(v))
}

func base64decode(v string) string {
	data, err := base64.StdEncoding.DecodeString(v)
	if err != nil {
		return err.Error()
	}
	return string(data)
}

func base32encode(v string) string {
	return base32.StdEncoding.EncodeToString([]byte(v))
}

func base32decode(v string) string {
	data, err := base32.StdEncoding.DecodeString(v)
	if err != nil {
		return err.Error()
	}
	return string(data)
}

func quote(str ...interface{}) string {
	out := make([]string, 0, len(str))
	for _, s := range str {
		if s != nil {
			out = append(out, fmt.Sprintf("%q", strval(s)))
		}
	}
	return strings.Join(out, " ")
}

func squote(str ...interface{}) string {
	out := make([]string, 0, len(str))
	for _, s := range str {
		if s != nil {
			out = append(out, fmt.Sprintf("'%v'", s))
		}
	}
	return strings.Join(out, " ")
}

func cat(v ...interface{}) string {
	v = removeNilElements(v)
	r := strings.TrimSpace(strings.Repeat("%v ", len(v)))
	return fmt.Sprintf(r, v...)
}

func indent(spaces int, v string) string {
	pad := strings.Repeat(" ", spaces)
	return pad + strings.Replace(v, "\n", "\n"+pad, -1)
}

func nindent(spaces int, v string) string {
	return "\n" + indent(spaces, v)
}

func replace(old, new, src string) string {
	return strings.Replace(src, old, new, -1)
}

func plural(one, many string, count int) string {
	if count == 1 {
		return one
	}
	return many
}

func strslice(v interface{}) []string {
	switch v := v.(type) {
	case []string:
		return v
	case []interface{}:
		b := make([]string, 0, len(v))
		for _, s := range v {
			if s != nil {
				b = append(b, strval(s))
			}
		}
		return b
	default:
		val := reflect.ValueOf(v)
		switch val.Kind() {
		case reflect.Array, reflect.Slice:
			l := val.Len()
			b := make([]string, 0, l)
			for i := 0; i < l; i++ {
				value := val.Index(i).Interface()
				if value != nil {
					b = append(b, strval(value))
				}
			}
			return b
		default:
			if v == nil {
				return []string{}
			}

			return []string{strval(v)}
		}
	}
}

func removeNilElements(v []interface{}) []interface{} {
	newSlice := make([]interface{}, 0, len(v))
	for _, i := range v {
		if i != nil {
			newSlice = append(newSlice, i)
		}
	}
	return newSlice
}

func strval(v interface{}) string {
	switch v := v.(type) {
	case string:
		return v
	case []byte:
		return string(v)
	case error:
		return v.Error()
	case fmt.Stringer:
		return v.String()
	default:
		return fmt.Sprintf("%v", v)
	}
}

func trunc(c int, s string) string {
	if c < 0 && len(s)+c > 0 {
		return s[len(s)+c:]
	}
	if c >= 0 && len(s) > c {
		return s[:c]
	}
	return s
}

func join(sep string, v interface{}) string {
	return strings.Join(strslice(v), sep)
}

func split(sep, orig string) map[string]string {
	parts := strings.Split(orig, sep)
	res := make(map[string]string, len(parts))
	for i, v := range parts {
		res["_"+strconv.Itoa(i)] = v
	}
	return res
}

func splitn(sep string, n int, orig string) map[string]string {
	parts := strings.SplitN(orig, sep, n)
	res := make(map[string]string, len(parts))
	for i, v := range parts {
		res["_"+strconv.Itoa(i)] = v
	}
	return res
}

// substring creates a substring of the given string.
//
// If start is < 0, this calls string[:end].
//
// If start is >= 0 and end < 0 or end bigger than s length, this calls string[start:]
//
// Otherwise, this calls string[start:end].
func substring(start, end int, s string) string {
	if start < 0 {
		return s[:end]
	}
	if end < 0 || end > len(s) {
		return s[start:]
	}
	return s[start:end]
}
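`trunc` above supports both directions of truncation: a non-negative count keeps the first `c` bytes, and a negative count keeps the last `-c` bytes. A standalone sketch of those edge cases (not part of the vendored file):

```go
package main

import "fmt"

// trunc, as in the file above: a negative count keeps the last -c bytes,
// a non-negative count keeps the first c bytes, and an out-of-range count
// returns the string unchanged.
func trunc(c int, s string) string {
	if c < 0 && len(s)+c > 0 {
		return s[len(s)+c:]
	}
	if c >= 0 && len(s) > c {
		return s[:c]
	}
	return s
}

func main() {
	fmt.Println(trunc(5, "hello world"))  // hello
	fmt.Println(trunc(-5, "hello world")) // world
	fmt.Println(trunc(20, "short"))       // short (count exceeds length)
}
```

Note the helper slices bytes, not runes, so truncating mid-codepoint in a multi-byte UTF-8 string can produce invalid output.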
66
vendor/github.com/go-task/slim-sprig/url.go
generated
vendored
Normal file
@@ -0,0 +1,66 @@
package sprig

import (
	"fmt"
	"net/url"
	"reflect"
)

func dictGetOrEmpty(dict map[string]interface{}, key string) string {
	value, ok := dict[key]
	if !ok {
		return ""
	}
	tp := reflect.TypeOf(value).Kind()
	if tp != reflect.String {
		panic(fmt.Sprintf("unable to parse %s key, must be of type string, but %s found", key, tp.String()))
	}
	return reflect.ValueOf(value).String()
}

// urlParse parses the given URL and returns its components as a dict.
func urlParse(v string) map[string]interface{} {
	dict := map[string]interface{}{}
	parsedURL, err := url.Parse(v)
	if err != nil {
		panic(fmt.Sprintf("unable to parse url: %s", err))
	}
	dict["scheme"] = parsedURL.Scheme
	dict["host"] = parsedURL.Host
	dict["hostname"] = parsedURL.Hostname()
	dict["path"] = parsedURL.Path
	dict["query"] = parsedURL.RawQuery
	dict["opaque"] = parsedURL.Opaque
	dict["fragment"] = parsedURL.Fragment
	if parsedURL.User != nil {
		dict["userinfo"] = parsedURL.User.String()
	} else {
		dict["userinfo"] = ""
	}

	return dict
}

// urlJoin joins the given dict back into a URL string.
func urlJoin(d map[string]interface{}) string {
	resURL := url.URL{
		Scheme:   dictGetOrEmpty(d, "scheme"),
		Host:     dictGetOrEmpty(d, "host"),
		Path:     dictGetOrEmpty(d, "path"),
		RawQuery: dictGetOrEmpty(d, "query"),
		Opaque:   dictGetOrEmpty(d, "opaque"),
		Fragment: dictGetOrEmpty(d, "fragment"),
	}
	userinfo := dictGetOrEmpty(d, "userinfo")
	var user *url.Userinfo
	if userinfo != "" {
		tempURL, err := url.Parse(fmt.Sprintf("proto://%s@host", userinfo))
		if err != nil {
			panic(fmt.Sprintf("unable to parse userinfo in dict: %s", err))
		}
		user = tempURL.User
	}

	resURL.User = user
	return resURL.String()
}
7
vendor/github.com/google/pprof/AUTHORS
generated
vendored
Normal file
@@ -0,0 +1,7 @@
# This is the official list of pprof authors for copyright purposes.
# This file is distinct from the CONTRIBUTORS files.
# See the latter for an explanation.
# Names should be added to this file as:
# Name or Organization <email address>
# The email address is not required for organizations.
Google Inc.
16
vendor/github.com/google/pprof/CONTRIBUTORS
generated
vendored
Normal file
@@ -0,0 +1,16 @@
# People who have agreed to one of the CLAs and can contribute patches.
# The AUTHORS file lists the copyright holders; this file
# lists people. For example, Google employees are listed here
# but not in AUTHORS, because Google holds the copyright.
#
# https://developers.google.com/open-source/cla/individual
# https://developers.google.com/open-source/cla/corporate
#
# Names should be added to this file as:
# Name <email address>
Raul Silvera <rsilvera@google.com>
Tipp Moseley <tipp@google.com>
Hyoun Kyu Cho <netforce@google.com>
Martin Spier <spiermar@gmail.com>
Taco de Wolff <tacodewolff@gmail.com>
Andrew Hunter <andrewhhunter@gmail.com>
202
vendor/github.com/google/pprof/LICENSE
generated
vendored
Normal file
@@ -0,0 +1,202 @@
                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "[]"
      replaced with your own identifying information. (Don't include
      the brackets!) The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright [yyyy] [name of copyright owner]

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
567
vendor/github.com/google/pprof/profile/encode.go
generated
vendored
Normal file
@@ -0,0 +1,567 @@
// Copyright 2014 Google Inc. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package profile

import (
	"errors"
	"sort"
)

func (p *Profile) decoder() []decoder {
	return profileDecoder
}

// preEncode populates the unexported fields to be used by encode
// (with suffix X) from the corresponding exported fields. The
// exported fields are cleared up to facilitate testing.
func (p *Profile) preEncode() {
	strings := make(map[string]int)
	addString(strings, "")

	for _, st := range p.SampleType {
		st.typeX = addString(strings, st.Type)
		st.unitX = addString(strings, st.Unit)
	}

	for _, s := range p.Sample {
		s.labelX = nil
		var keys []string
		for k := range s.Label {
			keys = append(keys, k)
		}
		sort.Strings(keys)
		for _, k := range keys {
			vs := s.Label[k]
			for _, v := range vs {
				s.labelX = append(s.labelX,
					label{
						keyX: addString(strings, k),
						strX: addString(strings, v),
					},
				)
			}
		}
		var numKeys []string
		for k := range s.NumLabel {
			numKeys = append(numKeys, k)
		}
		sort.Strings(numKeys)
		for _, k := range numKeys {
			keyX := addString(strings, k)
			vs := s.NumLabel[k]
			units := s.NumUnit[k]
			for i, v := range vs {
				var unitX int64
				if len(units) != 0 {
					unitX = addString(strings, units[i])
				}
				s.labelX = append(s.labelX,
					label{
						keyX:  keyX,
						numX:  v,
						unitX: unitX,
					},
				)
			}
		}
		s.locationIDX = make([]uint64, len(s.Location))
		for i, loc := range s.Location {
			s.locationIDX[i] = loc.ID
		}
	}

	for _, m := range p.Mapping {
		m.fileX = addString(strings, m.File)
		m.buildIDX = addString(strings, m.BuildID)
	}

	for _, l := range p.Location {
		for i, ln := range l.Line {
			if ln.Function != nil {
				l.Line[i].functionIDX = ln.Function.ID
			} else {
				l.Line[i].functionIDX = 0
			}
		}
		if l.Mapping != nil {
			l.mappingIDX = l.Mapping.ID
		} else {
			l.mappingIDX = 0
		}
	}
	for _, f := range p.Function {
		f.nameX = addString(strings, f.Name)
		f.systemNameX = addString(strings, f.SystemName)
		f.filenameX = addString(strings, f.Filename)
	}

	p.dropFramesX = addString(strings, p.DropFrames)
	p.keepFramesX = addString(strings, p.KeepFrames)

	if pt := p.PeriodType; pt != nil {
		pt.typeX = addString(strings, pt.Type)
		pt.unitX = addString(strings, pt.Unit)
	}

	p.commentX = nil
	for _, c := range p.Comments {
		p.commentX = append(p.commentX, addString(strings, c))
	}

	p.defaultSampleTypeX = addString(strings, p.DefaultSampleType)

	p.stringTable = make([]string, len(strings))
	for s, i := range strings {
		p.stringTable[i] = s
	}
}

func (p *Profile) encode(b *buffer) {
	for _, x := range p.SampleType {
		encodeMessage(b, 1, x)
	}
	for _, x := range p.Sample {
		encodeMessage(b, 2, x)
	}
	for _, x := range p.Mapping {
		encodeMessage(b, 3, x)
	}
	for _, x := range p.Location {
		encodeMessage(b, 4, x)
	}
	for _, x := range p.Function {
		encodeMessage(b, 5, x)
	}
	encodeStrings(b, 6, p.stringTable)
	encodeInt64Opt(b, 7, p.dropFramesX)
	encodeInt64Opt(b, 8, p.keepFramesX)
	encodeInt64Opt(b, 9, p.TimeNanos)
	encodeInt64Opt(b, 10, p.DurationNanos)
	if pt := p.PeriodType; pt != nil && (pt.typeX != 0 || pt.unitX != 0) {
		encodeMessage(b, 11, p.PeriodType)
	}
	encodeInt64Opt(b, 12, p.Period)
	encodeInt64s(b, 13, p.commentX)
	encodeInt64(b, 14, p.defaultSampleTypeX)
}

var profileDecoder = []decoder{
	nil, // 0
	// repeated ValueType sample_type = 1
	func(b *buffer, m message) error {
		x := new(ValueType)
		pp := m.(*Profile)
		pp.SampleType = append(pp.SampleType, x)
		return decodeMessage(b, x)
	},
	// repeated Sample sample = 2
	func(b *buffer, m message) error {
		x := new(Sample)
		pp := m.(*Profile)
		pp.Sample = append(pp.Sample, x)
		return decodeMessage(b, x)
	},
	// repeated Mapping mapping = 3
	func(b *buffer, m message) error {
		x := new(Mapping)
		pp := m.(*Profile)
		pp.Mapping = append(pp.Mapping, x)
		return decodeMessage(b, x)
	},
	// repeated Location location = 4
	func(b *buffer, m message) error {
		x := new(Location)
		x.Line = make([]Line, 0, 8) // Pre-allocate Line buffer
		pp := m.(*Profile)
		pp.Location = append(pp.Location, x)
		err := decodeMessage(b, x)
		var tmp []Line
		x.Line = append(tmp, x.Line...) // Shrink to allocated size
		return err
	},
	// repeated Function function = 5
	func(b *buffer, m message) error {
		x := new(Function)
		pp := m.(*Profile)
		pp.Function = append(pp.Function, x)
		return decodeMessage(b, x)
	},
	// repeated string string_table = 6
	func(b *buffer, m message) error {
		err := decodeStrings(b, &m.(*Profile).stringTable)
		if err != nil {
			return err
		}
		if m.(*Profile).stringTable[0] != "" {
			return errors.New("string_table[0] must be ''")
		}
		return nil
	},
	// int64 drop_frames = 7
	func(b *buffer, m message) error { return decodeInt64(b, &m.(*Profile).dropFramesX) },
	// int64 keep_frames = 8
	func(b *buffer, m message) error { return decodeInt64(b, &m.(*Profile).keepFramesX) },
	// int64 time_nanos = 9
	func(b *buffer, m message) error {
		if m.(*Profile).TimeNanos != 0 {
			return errConcatProfile
		}
		return decodeInt64(b, &m.(*Profile).TimeNanos)
	},
	// int64 duration_nanos = 10
	func(b *buffer, m message) error { return decodeInt64(b, &m.(*Profile).DurationNanos) },
	// ValueType period_type = 11
	func(b *buffer, m message) error {
		x := new(ValueType)
		pp := m.(*Profile)
		pp.PeriodType = x
		return decodeMessage(b, x)
	},
	// int64 period = 12
	func(b *buffer, m message) error { return decodeInt64(b, &m.(*Profile).Period) },
	// repeated int64 comment = 13
	func(b *buffer, m message) error { return decodeInt64s(b, &m.(*Profile).commentX) },
|
||||||
|
// int64 defaultSampleType = 14
|
||||||
|
func(b *buffer, m message) error { return decodeInt64(b, &m.(*Profile).defaultSampleTypeX) },
|
||||||
|
}
|
||||||
|
|
||||||
|
// postDecode takes the unexported fields populated by decode (with
// suffix X) and populates the corresponding exported fields.
// The unexported fields are cleared up to facilitate testing.
func (p *Profile) postDecode() error {
	var err error
	mappings := make(map[uint64]*Mapping, len(p.Mapping))
	mappingIds := make([]*Mapping, len(p.Mapping)+1)
	for _, m := range p.Mapping {
		m.File, err = getString(p.stringTable, &m.fileX, err)
		m.BuildID, err = getString(p.stringTable, &m.buildIDX, err)
		if m.ID < uint64(len(mappingIds)) {
			mappingIds[m.ID] = m
		} else {
			mappings[m.ID] = m
		}
	}

	functions := make(map[uint64]*Function, len(p.Function))
	functionIds := make([]*Function, len(p.Function)+1)
	for _, f := range p.Function {
		f.Name, err = getString(p.stringTable, &f.nameX, err)
		f.SystemName, err = getString(p.stringTable, &f.systemNameX, err)
		f.Filename, err = getString(p.stringTable, &f.filenameX, err)
		if f.ID < uint64(len(functionIds)) {
			functionIds[f.ID] = f
		} else {
			functions[f.ID] = f
		}
	}

	locations := make(map[uint64]*Location, len(p.Location))
	locationIds := make([]*Location, len(p.Location)+1)
	for _, l := range p.Location {
		if id := l.mappingIDX; id < uint64(len(mappingIds)) {
			l.Mapping = mappingIds[id]
		} else {
			l.Mapping = mappings[id]
		}
		l.mappingIDX = 0
		for i, ln := range l.Line {
			if id := ln.functionIDX; id != 0 {
				l.Line[i].functionIDX = 0
				if id < uint64(len(functionIds)) {
					l.Line[i].Function = functionIds[id]
				} else {
					l.Line[i].Function = functions[id]
				}
			}
		}
		if l.ID < uint64(len(locationIds)) {
			locationIds[l.ID] = l
		} else {
			locations[l.ID] = l
		}
	}

	for _, st := range p.SampleType {
		st.Type, err = getString(p.stringTable, &st.typeX, err)
		st.Unit, err = getString(p.stringTable, &st.unitX, err)
	}

	for _, s := range p.Sample {
		labels := make(map[string][]string, len(s.labelX))
		numLabels := make(map[string][]int64, len(s.labelX))
		numUnits := make(map[string][]string, len(s.labelX))
		for _, l := range s.labelX {
			var key, value string
			key, err = getString(p.stringTable, &l.keyX, err)
			if l.strX != 0 {
				value, err = getString(p.stringTable, &l.strX, err)
				labels[key] = append(labels[key], value)
			} else if l.numX != 0 || l.unitX != 0 {
				numValues := numLabels[key]
				units := numUnits[key]
				if l.unitX != 0 {
					var unit string
					unit, err = getString(p.stringTable, &l.unitX, err)
					units = padStringArray(units, len(numValues))
					numUnits[key] = append(units, unit)
				}
				numLabels[key] = append(numLabels[key], l.numX)
			}
		}
		if len(labels) > 0 {
			s.Label = labels
		}
		if len(numLabels) > 0 {
			s.NumLabel = numLabels
			for key, units := range numUnits {
				if len(units) > 0 {
					numUnits[key] = padStringArray(units, len(numLabels[key]))
				}
			}
			s.NumUnit = numUnits
		}
		s.Location = make([]*Location, len(s.locationIDX))
		for i, lid := range s.locationIDX {
			if lid < uint64(len(locationIds)) {
				s.Location[i] = locationIds[lid]
			} else {
				s.Location[i] = locations[lid]
			}
		}
		s.locationIDX = nil
	}

	p.DropFrames, err = getString(p.stringTable, &p.dropFramesX, err)
	p.KeepFrames, err = getString(p.stringTable, &p.keepFramesX, err)

	if pt := p.PeriodType; pt == nil {
		p.PeriodType = &ValueType{}
	}

	if pt := p.PeriodType; pt != nil {
		pt.Type, err = getString(p.stringTable, &pt.typeX, err)
		pt.Unit, err = getString(p.stringTable, &pt.unitX, err)
	}

	for _, i := range p.commentX {
		var c string
		c, err = getString(p.stringTable, &i, err)
		p.Comments = append(p.Comments, c)
	}

	p.commentX = nil
	p.DefaultSampleType, err = getString(p.stringTable, &p.defaultSampleTypeX, err)
	p.stringTable = nil
	return err
}

// padStringArray pads arr with enough empty strings to make arr
// length l when arr's length is less than l.
func padStringArray(arr []string, l int) []string {
	if l <= len(arr) {
		return arr
	}
	return append(arr, make([]string, l-len(arr))...)
}
func (p *ValueType) decoder() []decoder {
	return valueTypeDecoder
}

func (p *ValueType) encode(b *buffer) {
	encodeInt64Opt(b, 1, p.typeX)
	encodeInt64Opt(b, 2, p.unitX)
}

var valueTypeDecoder = []decoder{
	nil, // 0
	// optional int64 type = 1
	func(b *buffer, m message) error { return decodeInt64(b, &m.(*ValueType).typeX) },
	// optional int64 unit = 2
	func(b *buffer, m message) error { return decodeInt64(b, &m.(*ValueType).unitX) },
}

func (p *Sample) decoder() []decoder {
	return sampleDecoder
}

func (p *Sample) encode(b *buffer) {
	encodeUint64s(b, 1, p.locationIDX)
	encodeInt64s(b, 2, p.Value)
	for _, x := range p.labelX {
		encodeMessage(b, 3, x)
	}
}

var sampleDecoder = []decoder{
	nil, // 0
	// repeated uint64 location = 1
	func(b *buffer, m message) error { return decodeUint64s(b, &m.(*Sample).locationIDX) },
	// repeated int64 value = 2
	func(b *buffer, m message) error { return decodeInt64s(b, &m.(*Sample).Value) },
	// repeated Label label = 3
	func(b *buffer, m message) error {
		s := m.(*Sample)
		n := len(s.labelX)
		s.labelX = append(s.labelX, label{})
		return decodeMessage(b, &s.labelX[n])
	},
}

func (p label) decoder() []decoder {
	return labelDecoder
}

func (p label) encode(b *buffer) {
	encodeInt64Opt(b, 1, p.keyX)
	encodeInt64Opt(b, 2, p.strX)
	encodeInt64Opt(b, 3, p.numX)
	encodeInt64Opt(b, 4, p.unitX)
}

var labelDecoder = []decoder{
	nil, // 0
	// optional int64 key = 1
	func(b *buffer, m message) error { return decodeInt64(b, &m.(*label).keyX) },
	// optional int64 str = 2
	func(b *buffer, m message) error { return decodeInt64(b, &m.(*label).strX) },
	// optional int64 num = 3
	func(b *buffer, m message) error { return decodeInt64(b, &m.(*label).numX) },
	// optional int64 num_unit = 4
	func(b *buffer, m message) error { return decodeInt64(b, &m.(*label).unitX) },
}

func (p *Mapping) decoder() []decoder {
	return mappingDecoder
}

func (p *Mapping) encode(b *buffer) {
	encodeUint64Opt(b, 1, p.ID)
	encodeUint64Opt(b, 2, p.Start)
	encodeUint64Opt(b, 3, p.Limit)
	encodeUint64Opt(b, 4, p.Offset)
	encodeInt64Opt(b, 5, p.fileX)
	encodeInt64Opt(b, 6, p.buildIDX)
	encodeBoolOpt(b, 7, p.HasFunctions)
	encodeBoolOpt(b, 8, p.HasFilenames)
	encodeBoolOpt(b, 9, p.HasLineNumbers)
	encodeBoolOpt(b, 10, p.HasInlineFrames)
}

var mappingDecoder = []decoder{
	nil, // 0
	func(b *buffer, m message) error { return decodeUint64(b, &m.(*Mapping).ID) },            // optional uint64 id = 1
	func(b *buffer, m message) error { return decodeUint64(b, &m.(*Mapping).Start) },         // optional uint64 memory_offset = 2
	func(b *buffer, m message) error { return decodeUint64(b, &m.(*Mapping).Limit) },         // optional uint64 memory_limit = 3
	func(b *buffer, m message) error { return decodeUint64(b, &m.(*Mapping).Offset) },        // optional uint64 file_offset = 4
	func(b *buffer, m message) error { return decodeInt64(b, &m.(*Mapping).fileX) },          // optional int64 filename = 5
	func(b *buffer, m message) error { return decodeInt64(b, &m.(*Mapping).buildIDX) },       // optional int64 build_id = 6
	func(b *buffer, m message) error { return decodeBool(b, &m.(*Mapping).HasFunctions) },    // optional bool has_functions = 7
	func(b *buffer, m message) error { return decodeBool(b, &m.(*Mapping).HasFilenames) },    // optional bool has_filenames = 8
	func(b *buffer, m message) error { return decodeBool(b, &m.(*Mapping).HasLineNumbers) },  // optional bool has_line_numbers = 9
	func(b *buffer, m message) error { return decodeBool(b, &m.(*Mapping).HasInlineFrames) }, // optional bool has_inline_frames = 10
}

func (p *Location) decoder() []decoder {
	return locationDecoder
}

func (p *Location) encode(b *buffer) {
	encodeUint64Opt(b, 1, p.ID)
	encodeUint64Opt(b, 2, p.mappingIDX)
	encodeUint64Opt(b, 3, p.Address)
	for i := range p.Line {
		encodeMessage(b, 4, &p.Line[i])
	}
	encodeBoolOpt(b, 5, p.IsFolded)
}

var locationDecoder = []decoder{
	nil, // 0
	func(b *buffer, m message) error { return decodeUint64(b, &m.(*Location).ID) },         // optional uint64 id = 1;
	func(b *buffer, m message) error { return decodeUint64(b, &m.(*Location).mappingIDX) }, // optional uint64 mapping_id = 2;
	func(b *buffer, m message) error { return decodeUint64(b, &m.(*Location).Address) },    // optional uint64 address = 3;
	func(b *buffer, m message) error { // repeated Line line = 4
		pp := m.(*Location)
		n := len(pp.Line)
		pp.Line = append(pp.Line, Line{})
		return decodeMessage(b, &pp.Line[n])
	},
	func(b *buffer, m message) error { return decodeBool(b, &m.(*Location).IsFolded) }, // optional bool is_folded = 5;
}

func (p *Line) decoder() []decoder {
	return lineDecoder
}

func (p *Line) encode(b *buffer) {
	encodeUint64Opt(b, 1, p.functionIDX)
	encodeInt64Opt(b, 2, p.Line)
}

var lineDecoder = []decoder{
	nil, // 0
	// optional uint64 function_id = 1
	func(b *buffer, m message) error { return decodeUint64(b, &m.(*Line).functionIDX) },
	// optional int64 line = 2
	func(b *buffer, m message) error { return decodeInt64(b, &m.(*Line).Line) },
}

func (p *Function) decoder() []decoder {
	return functionDecoder
}

func (p *Function) encode(b *buffer) {
	encodeUint64Opt(b, 1, p.ID)
	encodeInt64Opt(b, 2, p.nameX)
	encodeInt64Opt(b, 3, p.systemNameX)
	encodeInt64Opt(b, 4, p.filenameX)
	encodeInt64Opt(b, 5, p.StartLine)
}

var functionDecoder = []decoder{
	nil, // 0
	// optional uint64 id = 1
	func(b *buffer, m message) error { return decodeUint64(b, &m.(*Function).ID) },
	// optional int64 function_name = 2
	func(b *buffer, m message) error { return decodeInt64(b, &m.(*Function).nameX) },
	// optional int64 function_system_name = 3
	func(b *buffer, m message) error { return decodeInt64(b, &m.(*Function).systemNameX) },
	// optional int64 filename = 4
	func(b *buffer, m message) error { return decodeInt64(b, &m.(*Function).filenameX) },
	// optional int64 start_line = 5
	func(b *buffer, m message) error { return decodeInt64(b, &m.(*Function).StartLine) },
}

func addString(strings map[string]int, s string) int64 {
	i, ok := strings[s]
	if !ok {
		i = len(strings)
		strings[s] = i
	}
	return int64(i)
}

func getString(strings []string, strng *int64, err error) (string, error) {
	if err != nil {
		return "", err
	}
	s := int(*strng)
	if s < 0 || s >= len(strings) {
		return "", errMalformed
	}
	*strng = 0
	return strings[s], nil
}
270
vendor/github.com/google/pprof/profile/filter.go
generated
vendored
Normal file
@@ -0,0 +1,270 @@
// Copyright 2014 Google Inc. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package profile

// Implements methods to filter samples from profiles.

import "regexp"

// FilterSamplesByName filters the samples in a profile and only keeps
// samples where at least one frame matches focus but none match ignore.
// Returns true if the corresponding regexp matched at least one sample.
func (p *Profile) FilterSamplesByName(focus, ignore, hide, show *regexp.Regexp) (fm, im, hm, hnm bool) {
	focusOrIgnore := make(map[uint64]bool)
	hidden := make(map[uint64]bool)
	for _, l := range p.Location {
		if ignore != nil && l.matchesName(ignore) {
			im = true
			focusOrIgnore[l.ID] = false
		} else if focus == nil || l.matchesName(focus) {
			fm = true
			focusOrIgnore[l.ID] = true
		}

		if hide != nil && l.matchesName(hide) {
			hm = true
			l.Line = l.unmatchedLines(hide)
			if len(l.Line) == 0 {
				hidden[l.ID] = true
			}
		}
		if show != nil {
			l.Line = l.matchedLines(show)
			if len(l.Line) == 0 {
				hidden[l.ID] = true
			} else {
				hnm = true
			}
		}
	}

	s := make([]*Sample, 0, len(p.Sample))
	for _, sample := range p.Sample {
		if focusedAndNotIgnored(sample.Location, focusOrIgnore) {
			if len(hidden) > 0 {
				var locs []*Location
				for _, loc := range sample.Location {
					if !hidden[loc.ID] {
						locs = append(locs, loc)
					}
				}
				if len(locs) == 0 {
					// Remove sample with no locations (by not adding it to s).
					continue
				}
				sample.Location = locs
			}
			s = append(s, sample)
		}
	}
	p.Sample = s

	return
}
// ShowFrom drops all stack frames above the highest matching frame and returns
// whether a match was found. If showFrom is nil it returns false and does not
// modify the profile.
//
// Example: consider a sample with frames [A, B, C, B], where A is the root.
// ShowFrom(nil) returns false and has frames [A, B, C, B].
// ShowFrom(A) returns true and has frames [A, B, C, B].
// ShowFrom(B) returns true and has frames [B, C, B].
// ShowFrom(C) returns true and has frames [C, B].
// ShowFrom(D) returns false and drops the sample because no frames remain.
func (p *Profile) ShowFrom(showFrom *regexp.Regexp) (matched bool) {
	if showFrom == nil {
		return false
	}
	// showFromLocs stores location IDs that matched ShowFrom.
	showFromLocs := make(map[uint64]bool)
	// Apply to locations.
	for _, loc := range p.Location {
		if filterShowFromLocation(loc, showFrom) {
			showFromLocs[loc.ID] = true
			matched = true
		}
	}
	// For all samples, strip locations after the highest matching one.
	s := make([]*Sample, 0, len(p.Sample))
	for _, sample := range p.Sample {
		for i := len(sample.Location) - 1; i >= 0; i-- {
			if showFromLocs[sample.Location[i].ID] {
				sample.Location = sample.Location[:i+1]
				s = append(s, sample)
				break
			}
		}
	}
	p.Sample = s
	return matched
}

// filterShowFromLocation tests a showFrom regex against a location, removes
// lines after the last match and returns whether a match was found. If the
// mapping is matched, then all lines are kept.
func filterShowFromLocation(loc *Location, showFrom *regexp.Regexp) bool {
	if m := loc.Mapping; m != nil && showFrom.MatchString(m.File) {
		return true
	}
	if i := loc.lastMatchedLineIndex(showFrom); i >= 0 {
		loc.Line = loc.Line[:i+1]
		return true
	}
	return false
}

// lastMatchedLineIndex returns the index of the last line that matches a regex,
// or -1 if no match is found.
func (loc *Location) lastMatchedLineIndex(re *regexp.Regexp) int {
	for i := len(loc.Line) - 1; i >= 0; i-- {
		if fn := loc.Line[i].Function; fn != nil {
			if re.MatchString(fn.Name) || re.MatchString(fn.Filename) {
				return i
			}
		}
	}
	return -1
}

// FilterTagsByName filters the tags in a profile and only keeps
// tags that match show and not hide.
func (p *Profile) FilterTagsByName(show, hide *regexp.Regexp) (sm, hm bool) {
	matchRemove := func(name string) bool {
		matchShow := show == nil || show.MatchString(name)
		matchHide := hide != nil && hide.MatchString(name)

		if matchShow {
			sm = true
		}
		if matchHide {
			hm = true
		}
		return !matchShow || matchHide
	}
	for _, s := range p.Sample {
		for lab := range s.Label {
			if matchRemove(lab) {
				delete(s.Label, lab)
			}
		}
		for lab := range s.NumLabel {
			if matchRemove(lab) {
				delete(s.NumLabel, lab)
			}
		}
	}
	return
}
// matchesName returns whether the location matches the regular
// expression. It checks any available function names, file names, and
// mapping object filename.
func (loc *Location) matchesName(re *regexp.Regexp) bool {
	for _, ln := range loc.Line {
		if fn := ln.Function; fn != nil {
			if re.MatchString(fn.Name) || re.MatchString(fn.Filename) {
				return true
			}
		}
	}
	if m := loc.Mapping; m != nil && re.MatchString(m.File) {
		return true
	}
	return false
}

// unmatchedLines returns the lines in the location that do not match
// the regular expression.
func (loc *Location) unmatchedLines(re *regexp.Regexp) []Line {
	if m := loc.Mapping; m != nil && re.MatchString(m.File) {
		return nil
	}
	var lines []Line
	for _, ln := range loc.Line {
		if fn := ln.Function; fn != nil {
			if re.MatchString(fn.Name) || re.MatchString(fn.Filename) {
				continue
			}
		}
		lines = append(lines, ln)
	}
	return lines
}

// matchedLines returns the lines in the location that match
// the regular expression.
func (loc *Location) matchedLines(re *regexp.Regexp) []Line {
	if m := loc.Mapping; m != nil && re.MatchString(m.File) {
		return loc.Line
	}
	var lines []Line
	for _, ln := range loc.Line {
		if fn := ln.Function; fn != nil {
			if !re.MatchString(fn.Name) && !re.MatchString(fn.Filename) {
				continue
			}
		}
		lines = append(lines, ln)
	}
	return lines
}

// focusedAndNotIgnored looks up a slice of ids against a map of
// focused/ignored locations. The map only contains locations that are
// explicitly focused or ignored. Returns whether there is at least
// one focused location but no ignored locations.
func focusedAndNotIgnored(locs []*Location, m map[uint64]bool) bool {
	var f bool
	for _, loc := range locs {
		if focus, focusOrIgnore := m[loc.ID]; focusOrIgnore {
			if focus {
				// Found focused location. Must keep searching in case there
				// is an ignored one as well.
				f = true
			} else {
				// Found ignored location. Can return false right away.
				return false
			}
		}
	}
	return f
}

// TagMatch selects tags for filtering
type TagMatch func(s *Sample) bool

// FilterSamplesByTag removes all samples from the profile, except
// those that match focus and do not match the ignore regular
// expression.
func (p *Profile) FilterSamplesByTag(focus, ignore TagMatch) (fm, im bool) {
	samples := make([]*Sample, 0, len(p.Sample))
	for _, s := range p.Sample {
		focused, ignored := true, false
		if focus != nil {
			focused = focus(s)
		}
		if ignore != nil {
			ignored = ignore(s)
		}
		fm = fm || focused
		im = im || ignored
		if focused && !ignored {
			samples = append(samples, s)
		}
	}
	p.Sample = samples
	return
}
64
vendor/github.com/google/pprof/profile/index.go
generated
vendored
Normal file
@@ -0,0 +1,64 @@
// Copyright 2016 Google Inc. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package profile

import (
	"fmt"
	"strconv"
	"strings"
)

// SampleIndexByName returns the appropriate index for a value of sample index.
// If numeric, it returns the number, otherwise it looks up the text in the
// profile sample types.
func (p *Profile) SampleIndexByName(sampleIndex string) (int, error) {
	if sampleIndex == "" {
		if dst := p.DefaultSampleType; dst != "" {
			for i, t := range sampleTypes(p) {
				if t == dst {
					return i, nil
				}
			}
		}
		// By default select the last sample value
		return len(p.SampleType) - 1, nil
	}
	if i, err := strconv.Atoi(sampleIndex); err == nil {
		if i < 0 || i >= len(p.SampleType) {
			return 0, fmt.Errorf("sample_index %s is outside the range [0..%d]", sampleIndex, len(p.SampleType)-1)
		}
		return i, nil
	}

	// Remove the inuse_ prefix to support legacy pprof options
	// "inuse_space" and "inuse_objects" for profiles containing types
	// "space" and "objects".
	noInuse := strings.TrimPrefix(sampleIndex, "inuse_")
	for i, t := range p.SampleType {
		if t.Type == sampleIndex || t.Type == noInuse {
			return i, nil
		}
	}

	return 0, fmt.Errorf("sample_index %q must be one of: %v", sampleIndex, sampleTypes(p))
}

func sampleTypes(p *Profile) []string {
	types := make([]string, len(p.SampleType))
	for i, t := range p.SampleType {
		types[i] = t.Type
	}
	return types
}
315
vendor/github.com/google/pprof/profile/legacy_java_profile.go
generated
vendored
Normal file
@@ -0,0 +1,315 @@
// Copyright 2014 Google Inc. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

// This file implements parsers to convert java legacy profiles into
// the profile.proto format.

package profile

import (
	"bytes"
	"fmt"
	"io"
	"path/filepath"
	"regexp"
	"strconv"
	"strings"
)

var (
	attributeRx            = regexp.MustCompile(`([\w ]+)=([\w ]+)`)
	javaSampleRx           = regexp.MustCompile(` *(\d+) +(\d+) +@ +([ x0-9a-f]*)`)
	javaLocationRx         = regexp.MustCompile(`^\s*0x([[:xdigit:]]+)\s+(.*)\s*$`)
	javaLocationFileLineRx = regexp.MustCompile(`^(.*)\s+\((.+):(-?[[:digit:]]+)\)$`)
	javaLocationPathRx     = regexp.MustCompile(`^(.*)\s+\((.*)\)$`)
)

// javaCPUProfile returns a new Profile from profilez data.
// b is the profile bytes after the header, period is the profiling
// period, and parse is a function to parse 8-byte chunks from the
// profile in its native endianness.
func javaCPUProfile(b []byte, period int64, parse func(b []byte) (uint64, []byte)) (*Profile, error) {
	p := &Profile{
		Period:     period * 1000,
		PeriodType: &ValueType{Type: "cpu", Unit: "nanoseconds"},
		SampleType: []*ValueType{{Type: "samples", Unit: "count"}, {Type: "cpu", Unit: "nanoseconds"}},
	}
	var err error
	var locs map[uint64]*Location
	if b, locs, err = parseCPUSamples(b, parse, false, p); err != nil {
		return nil, err
	}

	if err = parseJavaLocations(b, locs, p); err != nil {
		return nil, err
	}

	// Strip out addresses for better merge.
	if err = p.Aggregate(true, true, true, true, false); err != nil {
		return nil, err
	}

	return p, nil
}

// parseJavaProfile returns a new profile from heapz or contentionz
// data. b is the profile bytes after the header.
func parseJavaProfile(b []byte) (*Profile, error) {
	h := bytes.SplitAfterN(b, []byte("\n"), 2)
	if len(h) < 2 {
		return nil, errUnrecognized
	}

	p := &Profile{
		PeriodType: &ValueType{},
	}
	header := string(bytes.TrimSpace(h[0]))

	var err error
	var pType string
	switch header {
	case "--- heapz 1 ---":
		pType = "heap"
	case "--- contentionz 1 ---":
		pType = "contention"
	default:
		return nil, errUnrecognized
	}

	if b, err = parseJavaHeader(pType, h[1], p); err != nil {
		return nil, err
	}
	var locs map[uint64]*Location
	if b, locs, err = parseJavaSamples(pType, b, p); err != nil {
		return nil, err
	}
	if err = parseJavaLocations(b, locs, p); err != nil {
		return nil, err
	}

	// Strip out addresses for better merge.
	if err = p.Aggregate(true, true, true, true, false); err != nil {
		return nil, err
	}

	return p, nil
}

// parseJavaHeader parses the attribute section on a java profile and
// populates a profile. Returns the remainder of the buffer after all
// attributes.
func parseJavaHeader(pType string, b []byte, p *Profile) ([]byte, error) {
	nextNewLine := bytes.IndexByte(b, byte('\n'))
	for nextNewLine != -1 {
		line := string(bytes.TrimSpace(b[0:nextNewLine]))
		if line != "" {
			h := attributeRx.FindStringSubmatch(line)
			if h == nil {
				// Not a valid attribute, exit.
				return b, nil
			}

			attribute, value := strings.TrimSpace(h[1]), strings.TrimSpace(h[2])
			var err error
			switch pType + "/" + attribute {
			case "heap/format", "cpu/format", "contention/format":
				if value != "java" {
					return nil, errUnrecognized
				}
			case "heap/resolution":
|
||||||
|
p.SampleType = []*ValueType{
|
||||||
|
{Type: "inuse_objects", Unit: "count"},
|
||||||
|
{Type: "inuse_space", Unit: value},
|
||||||
|
}
|
||||||
|
case "contention/resolution":
|
||||||
|
p.SampleType = []*ValueType{
|
||||||
|
{Type: "contentions", Unit: "count"},
|
||||||
|
{Type: "delay", Unit: value},
|
||||||
|
}
|
||||||
|
case "contention/sampling period":
|
||||||
|
p.PeriodType = &ValueType{
|
||||||
|
Type: "contentions", Unit: "count",
|
||||||
|
}
|
||||||
|
if p.Period, err = strconv.ParseInt(value, 0, 64); err != nil {
|
||||||
|
return nil, fmt.Errorf("failed to parse attribute %s: %v", line, err)
|
||||||
|
}
|
||||||
|
case "contention/ms since reset":
|
||||||
|
millis, err := strconv.ParseInt(value, 0, 64)
|
||||||
|
if err != nil {
|
||||||
|
return nil, fmt.Errorf("failed to parse attribute %s: %v", line, err)
|
||||||
|
}
|
||||||
|
p.DurationNanos = millis * 1000 * 1000
|
||||||
|
default:
|
||||||
|
return nil, errUnrecognized
|
||||||
|
}
|
||||||
|
}
|
||||||
|
// Grab next line.
|
||||||
|
b = b[nextNewLine+1:]
|
||||||
|
nextNewLine = bytes.IndexByte(b, byte('\n'))
|
||||||
|
}
|
||||||
|
return b, nil
|
||||||
|
}
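Editorial note: `parseJavaHeader` above recognizes one `attribute=value` pair per line via `attributeRx`, then dispatches on `pType + "/" + attribute`. The following standalone sketch (not part of the vendored file) isolates just the regex step so the matching behavior can be checked in isolation; the `splitAttribute` helper name is invented for illustration.

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// attributeRx mirrors the pattern used by parseJavaHeader above.
var attributeRx = regexp.MustCompile(`([\w ]+)=([\w ]+)`)

// splitAttribute extracts the attribute name and value from one header
// line, the way parseJavaHeader does before dispatching on
// pType + "/" + attribute. ok is false for non-attribute lines.
func splitAttribute(line string) (attribute, value string, ok bool) {
	h := attributeRx.FindStringSubmatch(line)
	if h == nil {
		return "", "", false
	}
	return strings.TrimSpace(h[1]), strings.TrimSpace(h[2]), true
}

func main() {
	for _, line := range []string{"format=java", "resolution=bytes", "--- heapz 1 ---"} {
		if a, v, ok := splitAttribute(line); ok {
			fmt.Printf("%s -> %s\n", a, v)
		}
	}
}
```

Note that multi-word attribute names such as `sampling period` are matched too, because the pattern's `[\w ]+` class includes spaces.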

// parseJavaSamples parses the samples from a java profile and
// populates the Samples in a profile. Returns the remainder of the
// buffer after the samples.
func parseJavaSamples(pType string, b []byte, p *Profile) ([]byte, map[uint64]*Location, error) {
	nextNewLine := bytes.IndexByte(b, byte('\n'))
	locs := make(map[uint64]*Location)
	for nextNewLine != -1 {
		line := string(bytes.TrimSpace(b[0:nextNewLine]))
		if line != "" {
			sample := javaSampleRx.FindStringSubmatch(line)
			if sample == nil {
				// Not a valid sample, exit.
				return b, locs, nil
			}

			// Java profiles have data/fields inverted compared to other
			// profile types.
			var err error
			value1, value2, value3 := sample[2], sample[1], sample[3]
			addrs, err := parseHexAddresses(value3)
			if err != nil {
				return nil, nil, fmt.Errorf("malformed sample: %s: %v", line, err)
			}

			var sloc []*Location
			for _, addr := range addrs {
				loc := locs[addr]
				if locs[addr] == nil {
					loc = &Location{
						Address: addr,
					}
					p.Location = append(p.Location, loc)
					locs[addr] = loc
				}
				sloc = append(sloc, loc)
			}
			s := &Sample{
				Value:    make([]int64, 2),
				Location: sloc,
			}

			if s.Value[0], err = strconv.ParseInt(value1, 0, 64); err != nil {
				return nil, nil, fmt.Errorf("parsing sample %s: %v", line, err)
			}
			if s.Value[1], err = strconv.ParseInt(value2, 0, 64); err != nil {
				return nil, nil, fmt.Errorf("parsing sample %s: %v", line, err)
			}

			switch pType {
			case "heap":
				const javaHeapzSamplingRate = 524288 // 512K
				if s.Value[0] == 0 {
					return nil, nil, fmt.Errorf("parsing sample %s: second value must be non-zero", line)
				}
				s.NumLabel = map[string][]int64{"bytes": {s.Value[1] / s.Value[0]}}
				s.Value[0], s.Value[1] = scaleHeapSample(s.Value[0], s.Value[1], javaHeapzSamplingRate)
			case "contention":
				if period := p.Period; period != 0 {
					s.Value[0] = s.Value[0] * p.Period
					s.Value[1] = s.Value[1] * p.Period
				}
			}
			p.Sample = append(p.Sample, s)
		}
		// Grab next line.
		b = b[nextNewLine+1:]
		nextNewLine = bytes.IndexByte(b, byte('\n'))
	}
	return b, locs, nil
}

// parseJavaLocations parses the location information in a java
// profile and populates the Locations in a profile. It uses the
// location addresses from the profile as the IDs of the locations.
func parseJavaLocations(b []byte, locs map[uint64]*Location, p *Profile) error {
	r := bytes.NewBuffer(b)
	fns := make(map[string]*Function)
	for {
		line, err := r.ReadString('\n')
		if err != nil {
			if err != io.EOF {
				return err
			}
			if line == "" {
				break
			}
		}

		if line = strings.TrimSpace(line); line == "" {
			continue
		}

		jloc := javaLocationRx.FindStringSubmatch(line)
		if len(jloc) != 3 {
			continue
		}
		addr, err := strconv.ParseUint(jloc[1], 16, 64)
		if err != nil {
			return fmt.Errorf("parsing sample %s: %v", line, err)
		}
		loc := locs[addr]
		if loc == nil {
			// Unused/unseen
			continue
		}
		var lineFunc, lineFile string
		var lineNo int64

		if fileLine := javaLocationFileLineRx.FindStringSubmatch(jloc[2]); len(fileLine) == 4 {
			// Found a line of the form: "function (file:line)"
			lineFunc, lineFile = fileLine[1], fileLine[2]
			if n, err := strconv.ParseInt(fileLine[3], 10, 64); err == nil && n > 0 {
				lineNo = n
			}
		} else if filePath := javaLocationPathRx.FindStringSubmatch(jloc[2]); len(filePath) == 3 {
			// If there's not a file:line, it's a shared library path.
			// The path isn't interesting, so just give the .so.
			lineFunc, lineFile = filePath[1], filepath.Base(filePath[2])
		} else if strings.Contains(jloc[2], "generated stub/JIT") {
			lineFunc = "STUB"
		} else {
			// Treat whole line as the function name. This is used by the
			// java agent for internal states such as "GC" or "VM".
			lineFunc = jloc[2]
		}
		fn := fns[lineFunc]

		if fn == nil {
			fn = &Function{
				Name:       lineFunc,
				SystemName: lineFunc,
				Filename:   lineFile,
			}
			fns[lineFunc] = fn
			p.Function = append(p.Function, fn)
		}
		loc.Line = []Line{
			{
				Function: fn,
				Line:     lineNo,
			},
		}
		loc.Address = 0
	}

	p.remapLocationIDs()
	p.remapFunctionIDs()
	p.remapMappingIDs()

	return nil
}
1225
vendor/github.com/google/pprof/profile/legacy_profile.go
generated
vendored
Normal file
File diff suppressed because it is too large
Load Diff
481
vendor/github.com/google/pprof/profile/merge.go
generated
vendored
Normal file
@@ -0,0 +1,481 @@
// Copyright 2014 Google Inc. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package profile

import (
	"fmt"
	"sort"
	"strconv"
	"strings"
)

// Compact performs garbage collection on a profile to remove any
// unreferenced fields. This is useful to reduce the size of a profile
// after samples or locations have been removed.
func (p *Profile) Compact() *Profile {
	p, _ = Merge([]*Profile{p})
	return p
}

// Merge merges all the profiles in srcs into a single Profile.
// Returns a new profile independent of the input profiles. The merged
// profile is compacted to eliminate unused samples, locations,
// functions and mappings. Profiles must have identical profile sample
// and period types or the merge will fail. profile.Period of the
// resulting profile will be the maximum of all profiles, and
// profile.TimeNanos will be the earliest nonzero one. Merges are
// associative with the caveat of the first profile having some
// specialization in how headers are combined. There may be other
// subtleties now or in the future regarding associativity.
func Merge(srcs []*Profile) (*Profile, error) {
	if len(srcs) == 0 {
		return nil, fmt.Errorf("no profiles to merge")
	}
	p, err := combineHeaders(srcs)
	if err != nil {
		return nil, err
	}

	pm := &profileMerger{
		p:         p,
		samples:   make(map[sampleKey]*Sample, len(srcs[0].Sample)),
		locations: make(map[locationKey]*Location, len(srcs[0].Location)),
		functions: make(map[functionKey]*Function, len(srcs[0].Function)),
		mappings:  make(map[mappingKey]*Mapping, len(srcs[0].Mapping)),
	}

	for _, src := range srcs {
		// Clear the profile-specific hash tables
		pm.locationsByID = make(map[uint64]*Location, len(src.Location))
		pm.functionsByID = make(map[uint64]*Function, len(src.Function))
		pm.mappingsByID = make(map[uint64]mapInfo, len(src.Mapping))

		if len(pm.mappings) == 0 && len(src.Mapping) > 0 {
			// The Mapping list has the property that the first mapping
			// represents the main binary. Take the first Mapping we see,
			// otherwise the operations below will add mappings in an
			// arbitrary order.
			pm.mapMapping(src.Mapping[0])
		}

		for _, s := range src.Sample {
			if !isZeroSample(s) {
				pm.mapSample(s)
			}
		}
	}

	for _, s := range p.Sample {
		if isZeroSample(s) {
			// If there are any zero samples, re-merge the profile to GC
			// them.
			return Merge([]*Profile{p})
		}
	}

	return p, nil
}

// Normalize normalizes the source profile by multiplying each value in profile by the
// ratio of the sum of the base profile's values of that sample type to the sum of the
// source profile's value of that sample type.
func (p *Profile) Normalize(pb *Profile) error {

	if err := p.compatible(pb); err != nil {
		return err
	}

	baseVals := make([]int64, len(p.SampleType))
	for _, s := range pb.Sample {
		for i, v := range s.Value {
			baseVals[i] += v
		}
	}

	srcVals := make([]int64, len(p.SampleType))
	for _, s := range p.Sample {
		for i, v := range s.Value {
			srcVals[i] += v
		}
	}

	normScale := make([]float64, len(baseVals))
	for i := range baseVals {
		if srcVals[i] == 0 {
			normScale[i] = 0.0
		} else {
			normScale[i] = float64(baseVals[i]) / float64(srcVals[i])
		}
	}
	p.ScaleN(normScale)
	return nil
}
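Editorial note: the scaling factor computed by `Normalize` above is simply base-total over source-total, per sample type, with zero used when the source total is zero. A standalone sketch (helper name `normScale` is invented for illustration, separate from the vendored code):

```go
package main

import "fmt"

// normScale reproduces the per-sample-type scaling factors computed by
// Normalize above: baseVals[i]/srcVals[i], with 0 when the source
// total for that sample type is zero.
func normScale(baseVals, srcVals []int64) []float64 {
	scale := make([]float64, len(baseVals))
	for i := range baseVals {
		if srcVals[i] == 0 {
			scale[i] = 0.0
		} else {
			scale[i] = float64(baseVals[i]) / float64(srcVals[i])
		}
	}
	return scale
}

func main() {
	// e.g. base profile totals 200 samples / 4096 bytes,
	// source profile totals 100 samples / 1024 bytes.
	fmt.Println(normScale([]int64{200, 4096}, []int64{100, 1024}))
}
```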

func isZeroSample(s *Sample) bool {
	for _, v := range s.Value {
		if v != 0 {
			return false
		}
	}
	return true
}

type profileMerger struct {
	p *Profile

	// Memoization tables within a profile.
	locationsByID map[uint64]*Location
	functionsByID map[uint64]*Function
	mappingsByID  map[uint64]mapInfo

	// Memoization tables for profile entities.
	samples   map[sampleKey]*Sample
	locations map[locationKey]*Location
	functions map[functionKey]*Function
	mappings  map[mappingKey]*Mapping
}

type mapInfo struct {
	m      *Mapping
	offset int64
}

func (pm *profileMerger) mapSample(src *Sample) *Sample {
	s := &Sample{
		Location: make([]*Location, len(src.Location)),
		Value:    make([]int64, len(src.Value)),
		Label:    make(map[string][]string, len(src.Label)),
		NumLabel: make(map[string][]int64, len(src.NumLabel)),
		NumUnit:  make(map[string][]string, len(src.NumLabel)),
	}
	for i, l := range src.Location {
		s.Location[i] = pm.mapLocation(l)
	}
	for k, v := range src.Label {
		vv := make([]string, len(v))
		copy(vv, v)
		s.Label[k] = vv
	}
	for k, v := range src.NumLabel {
		u := src.NumUnit[k]
		vv := make([]int64, len(v))
		uu := make([]string, len(u))
		copy(vv, v)
		copy(uu, u)
		s.NumLabel[k] = vv
		s.NumUnit[k] = uu
	}
	// Check memoization table. Must be done on the remapped location to
	// account for the remapped mapping. Add current values to the
	// existing sample.
	k := s.key()
	if ss, ok := pm.samples[k]; ok {
		for i, v := range src.Value {
			ss.Value[i] += v
		}
		return ss
	}
	copy(s.Value, src.Value)
	pm.samples[k] = s
	pm.p.Sample = append(pm.p.Sample, s)
	return s
}

// key generates sampleKey to be used as a key for maps.
func (sample *Sample) key() sampleKey {
	ids := make([]string, len(sample.Location))
	for i, l := range sample.Location {
		ids[i] = strconv.FormatUint(l.ID, 16)
	}

	labels := make([]string, 0, len(sample.Label))
	for k, v := range sample.Label {
		labels = append(labels, fmt.Sprintf("%q%q", k, v))
	}
	sort.Strings(labels)

	numlabels := make([]string, 0, len(sample.NumLabel))
	for k, v := range sample.NumLabel {
		numlabels = append(numlabels, fmt.Sprintf("%q%x%x", k, v, sample.NumUnit[k]))
	}
	sort.Strings(numlabels)

	return sampleKey{
		strings.Join(ids, "|"),
		strings.Join(labels, ""),
		strings.Join(numlabels, ""),
	}
}
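Editorial note: `Sample.key` above flattens a sample's identity into three strings: the hex-encoded location IDs joined with `|`, plus the sorted, quoted string labels and numeric labels. A standalone sketch of the two encodings (the `locationIDsKey`/`labelsKey` helper names are invented for illustration):

```go
package main

import (
	"fmt"
	"sort"
	"strconv"
	"strings"
)

// locationIDsKey mirrors how Sample.key above encodes the location
// chain: each location ID rendered in hex, joined with "|".
func locationIDsKey(ids []uint64) string {
	parts := make([]string, len(ids))
	for i, id := range ids {
		parts[i] = strconv.FormatUint(id, 16)
	}
	return strings.Join(parts, "|")
}

// labelsKey mirrors the sorted, quoted label encoding; sorting makes
// the key independent of map iteration order.
func labelsKey(labels map[string][]string) string {
	parts := make([]string, 0, len(labels))
	for k, v := range labels {
		parts = append(parts, fmt.Sprintf("%q%q", k, v))
	}
	sort.Strings(parts)
	return strings.Join(parts, "")
}

func main() {
	fmt.Println(locationIDsKey([]uint64{1, 10, 255})) // 1|a|ff
	fmt.Println(labelsKey(map[string][]string{"thread": {"main"}}))
}
```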

type sampleKey struct {
	locations string
	labels    string
	numlabels string
}

func (pm *profileMerger) mapLocation(src *Location) *Location {
	if src == nil {
		return nil
	}

	if l, ok := pm.locationsByID[src.ID]; ok {
		return l
	}

	mi := pm.mapMapping(src.Mapping)
	l := &Location{
		ID:       uint64(len(pm.p.Location) + 1),
		Mapping:  mi.m,
		Address:  uint64(int64(src.Address) + mi.offset),
		Line:     make([]Line, len(src.Line)),
		IsFolded: src.IsFolded,
	}
	for i, ln := range src.Line {
		l.Line[i] = pm.mapLine(ln)
	}
	// Check memoization table. Must be done on the remapped location to
	// account for the remapped mapping ID.
	k := l.key()
	if ll, ok := pm.locations[k]; ok {
		pm.locationsByID[src.ID] = ll
		return ll
	}
	pm.locationsByID[src.ID] = l
	pm.locations[k] = l
	pm.p.Location = append(pm.p.Location, l)
	return l
}

// key generates locationKey to be used as a key for maps.
func (l *Location) key() locationKey {
	key := locationKey{
		addr:     l.Address,
		isFolded: l.IsFolded,
	}
	if l.Mapping != nil {
		// Normalizes address to handle address space randomization.
		key.addr -= l.Mapping.Start
		key.mappingID = l.Mapping.ID
	}
	lines := make([]string, len(l.Line)*2)
	for i, line := range l.Line {
		if line.Function != nil {
			lines[i*2] = strconv.FormatUint(line.Function.ID, 16)
		}
		lines[i*2+1] = strconv.FormatInt(line.Line, 16)
	}
	key.lines = strings.Join(lines, "|")
	return key
}

type locationKey struct {
	addr, mappingID uint64
	lines           string
	isFolded        bool
}

func (pm *profileMerger) mapMapping(src *Mapping) mapInfo {
	if src == nil {
		return mapInfo{}
	}

	if mi, ok := pm.mappingsByID[src.ID]; ok {
		return mi
	}

	// Check memoization tables.
	mk := src.key()
	if m, ok := pm.mappings[mk]; ok {
		mi := mapInfo{m, int64(m.Start) - int64(src.Start)}
		pm.mappingsByID[src.ID] = mi
		return mi
	}
	m := &Mapping{
		ID:              uint64(len(pm.p.Mapping) + 1),
		Start:           src.Start,
		Limit:           src.Limit,
		Offset:          src.Offset,
		File:            src.File,
		BuildID:         src.BuildID,
		HasFunctions:    src.HasFunctions,
		HasFilenames:    src.HasFilenames,
		HasLineNumbers:  src.HasLineNumbers,
		HasInlineFrames: src.HasInlineFrames,
	}
	pm.p.Mapping = append(pm.p.Mapping, m)

	// Update memoization tables.
	pm.mappings[mk] = m
	mi := mapInfo{m, 0}
	pm.mappingsByID[src.ID] = mi
	return mi
}

// key generates encoded strings of Mapping to be used as a key for
// maps.
func (m *Mapping) key() mappingKey {
	// Normalize addresses to handle address space randomization.
	// Round up to next 4K boundary to avoid minor discrepancies.
	const mapsizeRounding = 0x1000

	size := m.Limit - m.Start
	size = size + mapsizeRounding - 1
	size = size - (size % mapsizeRounding)
	key := mappingKey{
		size:   size,
		offset: m.Offset,
	}

	switch {
	case m.BuildID != "":
		key.buildIDOrFile = m.BuildID
	case m.File != "":
		key.buildIDOrFile = m.File
	default:
		// A mapping containing neither build ID nor file name is a fake mapping. A
		// key with empty buildIDOrFile is used for fake mappings so that they are
		// treated as the same mapping during merging.
	}
	return key
}
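Editorial note: the size normalization in `Mapping.key` above rounds the mapping's byte length up to the next 4K (`0x1000`) boundary, so mappings whose sizes differ only slightly still hash to the same key. A standalone sketch of just that arithmetic (the `roundedSize` helper name is invented for illustration):

```go
package main

import "fmt"

// roundedSize reproduces the 4K size rounding in Mapping.key above:
// add mapsizeRounding-1, then strip the remainder, which rounds the
// size up to the next 0x1000 boundary (exact multiples stay put).
func roundedSize(start, limit uint64) uint64 {
	const mapsizeRounding = 0x1000
	size := limit - start
	size = size + mapsizeRounding - 1
	return size - (size % mapsizeRounding)
}

func main() {
	// 0x1234 bytes rounds up to 0x2000.
	fmt.Printf("%#x\n", roundedSize(0x400000, 0x401234))
}
```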

type mappingKey struct {
	size, offset  uint64
	buildIDOrFile string
}

func (pm *profileMerger) mapLine(src Line) Line {
	ln := Line{
		Function: pm.mapFunction(src.Function),
		Line:     src.Line,
	}
	return ln
}

func (pm *profileMerger) mapFunction(src *Function) *Function {
	if src == nil {
		return nil
	}
	if f, ok := pm.functionsByID[src.ID]; ok {
		return f
	}
	k := src.key()
	if f, ok := pm.functions[k]; ok {
		pm.functionsByID[src.ID] = f
		return f
	}
	f := &Function{
		ID:         uint64(len(pm.p.Function) + 1),
		Name:       src.Name,
		SystemName: src.SystemName,
		Filename:   src.Filename,
		StartLine:  src.StartLine,
	}
	pm.functions[k] = f
	pm.functionsByID[src.ID] = f
	pm.p.Function = append(pm.p.Function, f)
	return f
}

// key generates a struct to be used as a key for maps.
func (f *Function) key() functionKey {
	return functionKey{
		f.StartLine,
		f.Name,
		f.SystemName,
		f.Filename,
	}
}

type functionKey struct {
	startLine                  int64
	name, systemName, fileName string
}

// combineHeaders checks that all profiles can be merged and returns
// their combined profile.
func combineHeaders(srcs []*Profile) (*Profile, error) {
	for _, s := range srcs[1:] {
		if err := srcs[0].compatible(s); err != nil {
			return nil, err
		}
	}

	var timeNanos, durationNanos, period int64
	var comments []string
	seenComments := map[string]bool{}
	var defaultSampleType string
	for _, s := range srcs {
		if timeNanos == 0 || s.TimeNanos < timeNanos {
			timeNanos = s.TimeNanos
		}
		durationNanos += s.DurationNanos
		if period == 0 || period < s.Period {
			period = s.Period
		}
		for _, c := range s.Comments {
			if seen := seenComments[c]; !seen {
				comments = append(comments, c)
				seenComments[c] = true
			}
		}
		if defaultSampleType == "" {
			defaultSampleType = s.DefaultSampleType
		}
	}

	p := &Profile{
		SampleType: make([]*ValueType, len(srcs[0].SampleType)),

		DropFrames: srcs[0].DropFrames,
		KeepFrames: srcs[0].KeepFrames,

		TimeNanos:     timeNanos,
		DurationNanos: durationNanos,
		PeriodType:    srcs[0].PeriodType,
		Period:        period,

		Comments:          comments,
		DefaultSampleType: defaultSampleType,
	}
	copy(p.SampleType, srcs[0].SampleType)
	return p, nil
}

// compatible determines if two profiles can be compared/merged.
// returns nil if the profiles are compatible; otherwise an error with
// details on the incompatibility.
func (p *Profile) compatible(pb *Profile) error {
	if !equalValueType(p.PeriodType, pb.PeriodType) {
		return fmt.Errorf("incompatible period types %v and %v", p.PeriodType, pb.PeriodType)
	}

	if len(p.SampleType) != len(pb.SampleType) {
		return fmt.Errorf("incompatible sample types %v and %v", p.SampleType, pb.SampleType)
	}

	for i := range p.SampleType {
		if !equalValueType(p.SampleType[i], pb.SampleType[i]) {
			return fmt.Errorf("incompatible sample types %v and %v", p.SampleType, pb.SampleType)
		}
	}
	return nil
}

// equalValueType returns true if the two value types are semantically
// equal. It ignores the internal fields used during encode/decode.
func equalValueType(st1, st2 *ValueType) bool {
	return st1.Type == st2.Type && st1.Unit == st2.Unit
}
805
vendor/github.com/google/pprof/profile/profile.go
generated
vendored
Normal file
@@ -0,0 +1,805 @@
// Copyright 2014 Google Inc. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

// Package profile provides a representation of profile.proto and
// methods to encode/decode profiles in this format.
package profile

import (
	"bytes"
	"compress/gzip"
	"fmt"
	"io"
	"io/ioutil"
	"math"
	"path/filepath"
	"regexp"
	"sort"
	"strings"
	"sync"
	"time"
)

// Profile is an in-memory representation of profile.proto.
type Profile struct {
	SampleType        []*ValueType
	DefaultSampleType string
	Sample            []*Sample
	Mapping           []*Mapping
	Location          []*Location
	Function          []*Function
	Comments          []string

	DropFrames string
	KeepFrames string

	TimeNanos     int64
	DurationNanos int64
	PeriodType    *ValueType
	Period        int64

	// The following fields are modified during encoding and copying,
	// so are protected by a Mutex.
	encodeMu sync.Mutex

	commentX           []int64
	dropFramesX        int64
	keepFramesX        int64
	stringTable        []string
	defaultSampleTypeX int64
}

// ValueType corresponds to Profile.ValueType
type ValueType struct {
	Type string // cpu, wall, inuse_space, etc
	Unit string // seconds, nanoseconds, bytes, etc

	typeX int64
	unitX int64
}

// Sample corresponds to Profile.Sample
type Sample struct {
	Location []*Location
	Value    []int64
	Label    map[string][]string
	NumLabel map[string][]int64
	NumUnit  map[string][]string

	locationIDX []uint64
	labelX      []label
}

// label corresponds to Profile.Label
type label struct {
	keyX int64
	// Exactly one of the two following values must be set
	strX int64
	numX int64 // Integer value for this label
	// can be set if numX has value
	unitX int64
}

// Mapping corresponds to Profile.Mapping
type Mapping struct {
	ID              uint64
	Start           uint64
	Limit           uint64
	Offset          uint64
	File            string
	BuildID         string
	HasFunctions    bool
	HasFilenames    bool
	HasLineNumbers  bool
	HasInlineFrames bool

	fileX    int64
	buildIDX int64
}

// Location corresponds to Profile.Location
type Location struct {
	ID       uint64
	Mapping  *Mapping
	Address  uint64
	Line     []Line
	IsFolded bool

	mappingIDX uint64
}

// Line corresponds to Profile.Line
type Line struct {
	Function *Function
	Line     int64

	functionIDX uint64
}

// Function corresponds to Profile.Function
type Function struct {
	ID         uint64
	Name       string
	SystemName string
	Filename   string
	StartLine  int64

	nameX       int64
	systemNameX int64
	filenameX   int64
}

// Parse parses a profile and checks for its validity. The input
// may be a gzip-compressed encoded protobuf or one of many legacy
// profile formats which may be unsupported in the future.
func Parse(r io.Reader) (*Profile, error) {
	data, err := ioutil.ReadAll(r)
	if err != nil {
		return nil, err
	}
	return ParseData(data)
}

// ParseData parses a profile from a buffer and checks for its
// validity.
func ParseData(data []byte) (*Profile, error) {
	var p *Profile
	var err error
	if len(data) >= 2 && data[0] == 0x1f && data[1] == 0x8b {
		gz, err := gzip.NewReader(bytes.NewBuffer(data))
		if err == nil {
			data, err = ioutil.ReadAll(gz)
		}
		if err != nil {
			return nil, fmt.Errorf("decompressing profile: %v", err)
		}
	}
	if p, err = ParseUncompressed(data); err != nil && err != errNoData && err != errConcatProfile {
		p, err = parseLegacy(data)
	}

	if err != nil {
		return nil, fmt.Errorf("parsing profile: %v", err)
	}

	if err := p.CheckValid(); err != nil {
		return nil, fmt.Errorf("malformed profile: %v", err)
	}
	return p, nil
}
|
||||||
|
|
||||||
|
var errUnrecognized = fmt.Errorf("unrecognized profile format")
|
||||||
|
var errMalformed = fmt.Errorf("malformed profile format")
|
||||||
|
var errNoData = fmt.Errorf("empty input file")
|
||||||
|
var errConcatProfile = fmt.Errorf("concatenated profiles detected")
|
||||||
|
|
||||||
|
func parseLegacy(data []byte) (*Profile, error) {
|
||||||
|
parsers := []func([]byte) (*Profile, error){
|
||||||
|
parseCPU,
|
||||||
|
parseHeap,
|
||||||
|
parseGoCount, // goroutine, threadcreate
|
||||||
|
parseThread,
|
||||||
|
parseContention,
|
||||||
|
parseJavaProfile,
|
||||||
|
}
|
||||||
|
|
||||||
|
for _, parser := range parsers {
|
||||||
|
p, err := parser(data)
|
||||||
|
if err == nil {
|
||||||
|
p.addLegacyFrameInfo()
|
||||||
|
return p, nil
|
||||||
|
}
|
||||||
|
if err != errUnrecognized {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
}
|
||||||
|
return nil, errUnrecognized
|
||||||
|
}
|
||||||
|
|
||||||
|
// ParseUncompressed parses an uncompressed protobuf into a profile.
|
||||||
|
func ParseUncompressed(data []byte) (*Profile, error) {
|
||||||
|
if len(data) == 0 {
|
||||||
|
return nil, errNoData
|
||||||
|
}
|
||||||
|
p := &Profile{}
|
||||||
|
if err := unmarshal(data, p); err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
|
||||||
|
if err := p.postDecode(); err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
|
||||||
|
return p, nil
|
||||||
|
}

var libRx = regexp.MustCompile(`([.]so$|[.]so[._][0-9]+)`)

// massageMappings applies heuristic-based changes to the profile
// mappings to account for quirks of some environments.
func (p *Profile) massageMappings() {
	// Merge adjacent regions with matching names, checking that the offsets match
	if len(p.Mapping) > 1 {
		mappings := []*Mapping{p.Mapping[0]}
		for _, m := range p.Mapping[1:] {
			lm := mappings[len(mappings)-1]
			if adjacent(lm, m) {
				lm.Limit = m.Limit
				if m.File != "" {
					lm.File = m.File
				}
				if m.BuildID != "" {
					lm.BuildID = m.BuildID
				}
				p.updateLocationMapping(m, lm)
				continue
			}
			mappings = append(mappings, m)
		}
		p.Mapping = mappings
	}

	// Use heuristics to identify main binary and move it to the top of the list of mappings
	for i, m := range p.Mapping {
		file := strings.TrimSpace(strings.Replace(m.File, "(deleted)", "", -1))
		if len(file) == 0 {
			continue
		}
		if len(libRx.FindStringSubmatch(file)) > 0 {
			continue
		}
		if file[0] == '[' {
			continue
		}
		// Swap what we guess is main to position 0.
		p.Mapping[0], p.Mapping[i] = p.Mapping[i], p.Mapping[0]
		break
	}

	// Keep the mapping IDs neatly sorted
	for i, m := range p.Mapping {
		m.ID = uint64(i + 1)
	}
}

// adjacent returns whether two mapping entries represent the same
// mapping that has been split into two. Check that their addresses are adjacent,
// and if the offsets match, if they are available.
func adjacent(m1, m2 *Mapping) bool {
	if m1.File != "" && m2.File != "" {
		if m1.File != m2.File {
			return false
		}
	}
	if m1.BuildID != "" && m2.BuildID != "" {
		if m1.BuildID != m2.BuildID {
			return false
		}
	}
	if m1.Limit != m2.Start {
		return false
	}
	if m1.Offset != 0 && m2.Offset != 0 {
		offset := m1.Offset + (m1.Limit - m1.Start)
		if offset != m2.Offset {
			return false
		}
	}
	return true
}
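The adjacency heuristic combines name, address, and file-offset checks: the second half of a split mapping must start exactly where the first half's address range and file range end. A minimal standalone sketch (the trimmed-down `mapping` type and `adjacentTo` are illustrative, not the pprof types):

```go
package main

import "fmt"

// mapping is a trimmed-down stand-in for pprof's Mapping, carrying just
// the fields the adjacency check reads.
type mapping struct {
	Start, Limit, Offset uint64
	File, BuildID        string
}

// adjacentTo mirrors the adjacent() heuristic: same file/build ID when
// both are known, contiguous addresses, and consistent file offsets.
func adjacentTo(m1, m2 *mapping) bool {
	if m1.File != "" && m2.File != "" && m1.File != m2.File {
		return false
	}
	if m1.BuildID != "" && m2.BuildID != "" && m1.BuildID != m2.BuildID {
		return false
	}
	if m1.Limit != m2.Start {
		return false
	}
	if m1.Offset != 0 && m2.Offset != 0 {
		// The second half must start where the first half's file range ends.
		if m1.Offset+(m1.Limit-m1.Start) != m2.Offset {
			return false
		}
	}
	return true
}

func main() {
	a := &mapping{Start: 0x1000, Limit: 0x2000, Offset: 0x400, File: "libfoo.so"}
	b := &mapping{Start: 0x2000, Limit: 0x3000, Offset: 0x1400, File: "libfoo.so"}
	fmt.Println(adjacentTo(a, b)) // contiguous addresses, consistent offsets
}
```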

func (p *Profile) updateLocationMapping(from, to *Mapping) {
	for _, l := range p.Location {
		if l.Mapping == from {
			l.Mapping = to
		}
	}
}

func serialize(p *Profile) []byte {
	p.encodeMu.Lock()
	p.preEncode()
	b := marshal(p)
	p.encodeMu.Unlock()
	return b
}

// Write writes the profile as a gzip-compressed marshaled protobuf.
func (p *Profile) Write(w io.Writer) error {
	zw := gzip.NewWriter(w)
	defer zw.Close()
	_, err := zw.Write(serialize(p))
	return err
}

// WriteUncompressed writes the profile as a marshaled protobuf.
func (p *Profile) WriteUncompressed(w io.Writer) error {
	_, err := w.Write(serialize(p))
	return err
}

// CheckValid tests whether the profile is valid. Checks include, but are
// not limited to:
//   - len(Profile.Sample[n].value) == len(Profile.value_unit)
//   - Sample.id has a corresponding Profile.Location
func (p *Profile) CheckValid() error {
	// Check that sample values are consistent
	sampleLen := len(p.SampleType)
	if sampleLen == 0 && len(p.Sample) != 0 {
		return fmt.Errorf("missing sample type information")
	}
	for _, s := range p.Sample {
		if s == nil {
			return fmt.Errorf("profile has nil sample")
		}
		if len(s.Value) != sampleLen {
			return fmt.Errorf("mismatch: sample has %d values vs. %d types", len(s.Value), len(p.SampleType))
		}
		for _, l := range s.Location {
			if l == nil {
				return fmt.Errorf("sample has nil location")
			}
		}
	}

	// Check that all mappings/locations/functions are in the tables
	// Check that there are no duplicate ids
	mappings := make(map[uint64]*Mapping, len(p.Mapping))
	for _, m := range p.Mapping {
		if m == nil {
			return fmt.Errorf("profile has nil mapping")
		}
		if m.ID == 0 {
			return fmt.Errorf("found mapping with reserved ID=0")
		}
		if mappings[m.ID] != nil {
			return fmt.Errorf("multiple mappings with same id: %d", m.ID)
		}
		mappings[m.ID] = m
	}
	functions := make(map[uint64]*Function, len(p.Function))
	for _, f := range p.Function {
		if f == nil {
			return fmt.Errorf("profile has nil function")
		}
		if f.ID == 0 {
			return fmt.Errorf("found function with reserved ID=0")
		}
		if functions[f.ID] != nil {
			return fmt.Errorf("multiple functions with same id: %d", f.ID)
		}
		functions[f.ID] = f
	}
	locations := make(map[uint64]*Location, len(p.Location))
	for _, l := range p.Location {
		if l == nil {
			return fmt.Errorf("profile has nil location")
		}
		if l.ID == 0 {
			return fmt.Errorf("found location with reserved id=0")
		}
		if locations[l.ID] != nil {
			return fmt.Errorf("multiple locations with same id: %d", l.ID)
		}
		locations[l.ID] = l
		if m := l.Mapping; m != nil {
			if m.ID == 0 || mappings[m.ID] != m {
				return fmt.Errorf("inconsistent mapping %p: %d", m, m.ID)
			}
		}
		for _, ln := range l.Line {
			f := ln.Function
			if f == nil {
				return fmt.Errorf("location id: %d has a line with nil function", l.ID)
			}
			if f.ID == 0 || functions[f.ID] != f {
				return fmt.Errorf("inconsistent function %p: %d", f, f.ID)
			}
		}
	}
	return nil
}

// Aggregate merges the locations in the profile into equivalence
// classes preserving the request attributes. It also updates the
// samples to point to the merged locations.
func (p *Profile) Aggregate(inlineFrame, function, filename, linenumber, address bool) error {
	for _, m := range p.Mapping {
		m.HasInlineFrames = m.HasInlineFrames && inlineFrame
		m.HasFunctions = m.HasFunctions && function
		m.HasFilenames = m.HasFilenames && filename
		m.HasLineNumbers = m.HasLineNumbers && linenumber
	}

	// Aggregate functions
	if !function || !filename {
		for _, f := range p.Function {
			if !function {
				f.Name = ""
				f.SystemName = ""
			}
			if !filename {
				f.Filename = ""
			}
		}
	}

	// Aggregate locations
	if !inlineFrame || !address || !linenumber {
		for _, l := range p.Location {
			if !inlineFrame && len(l.Line) > 1 {
				l.Line = l.Line[len(l.Line)-1:]
			}
			if !linenumber {
				for i := range l.Line {
					l.Line[i].Line = 0
				}
			}
			if !address {
				l.Address = 0
			}
		}
	}

	return p.CheckValid()
}

// NumLabelUnits returns a map of numeric label keys to the units
// associated with those keys and a map of those keys to any units
// that were encountered but not used.
// Unit for a given key is the first encountered unit for that key. If multiple
// units are encountered for values paired with a particular key, then the first
// unit encountered is used and all other units are returned in sorted order
// in map of ignored units.
// If no units are encountered for a particular key, the unit is then inferred
// based on the key.
func (p *Profile) NumLabelUnits() (map[string]string, map[string][]string) {
	numLabelUnits := map[string]string{}
	ignoredUnits := map[string]map[string]bool{}
	encounteredKeys := map[string]bool{}

	// Determine units based on numeric tags for each sample.
	for _, s := range p.Sample {
		for k := range s.NumLabel {
			encounteredKeys[k] = true
			for _, unit := range s.NumUnit[k] {
				if unit == "" {
					continue
				}
				if wantUnit, ok := numLabelUnits[k]; !ok {
					numLabelUnits[k] = unit
				} else if wantUnit != unit {
					if v, ok := ignoredUnits[k]; ok {
						v[unit] = true
					} else {
						ignoredUnits[k] = map[string]bool{unit: true}
					}
				}
			}
		}
	}
	// Infer units for keys without any units associated with
	// numeric tag values.
	for key := range encounteredKeys {
		unit := numLabelUnits[key]
		if unit == "" {
			switch key {
			case "alignment", "request":
				numLabelUnits[key] = "bytes"
			default:
				numLabelUnits[key] = key
			}
		}
	}

	// Copy ignored units into more readable format
	unitsIgnored := make(map[string][]string, len(ignoredUnits))
	for key, values := range ignoredUnits {
		units := make([]string, len(values))
		i := 0
		for unit := range values {
			units[i] = unit
			i++
		}
		sort.Strings(units)
		unitsIgnored[key] = units
	}

	return numLabelUnits, unitsIgnored
}

// String dumps a text representation of a profile. Intended mainly
// for debugging purposes.
func (p *Profile) String() string {
	ss := make([]string, 0, len(p.Comments)+len(p.Sample)+len(p.Mapping)+len(p.Location))
	for _, c := range p.Comments {
		ss = append(ss, "Comment: "+c)
	}
	if pt := p.PeriodType; pt != nil {
		ss = append(ss, fmt.Sprintf("PeriodType: %s %s", pt.Type, pt.Unit))
	}
	ss = append(ss, fmt.Sprintf("Period: %d", p.Period))
	if p.TimeNanos != 0 {
		ss = append(ss, fmt.Sprintf("Time: %v", time.Unix(0, p.TimeNanos)))
	}
	if p.DurationNanos != 0 {
		ss = append(ss, fmt.Sprintf("Duration: %.4v", time.Duration(p.DurationNanos)))
	}

	ss = append(ss, "Samples:")
	var sh1 string
	for _, s := range p.SampleType {
		dflt := ""
		if s.Type == p.DefaultSampleType {
			dflt = "[dflt]"
		}
		sh1 = sh1 + fmt.Sprintf("%s/%s%s ", s.Type, s.Unit, dflt)
	}
	ss = append(ss, strings.TrimSpace(sh1))
	for _, s := range p.Sample {
		ss = append(ss, s.string())
	}

	ss = append(ss, "Locations")
	for _, l := range p.Location {
		ss = append(ss, l.string())
	}

	ss = append(ss, "Mappings")
	for _, m := range p.Mapping {
		ss = append(ss, m.string())
	}

	return strings.Join(ss, "\n") + "\n"
}

// string dumps a text representation of a mapping. Intended mainly
// for debugging purposes.
func (m *Mapping) string() string {
	bits := ""
	if m.HasFunctions {
		bits = bits + "[FN]"
	}
	if m.HasFilenames {
		bits = bits + "[FL]"
	}
	if m.HasLineNumbers {
		bits = bits + "[LN]"
	}
	if m.HasInlineFrames {
		bits = bits + "[IN]"
	}
	return fmt.Sprintf("%d: %#x/%#x/%#x %s %s %s",
		m.ID,
		m.Start, m.Limit, m.Offset,
		m.File,
		m.BuildID,
		bits)
}

// string dumps a text representation of a location. Intended mainly
// for debugging purposes.
func (l *Location) string() string {
	ss := []string{}
	locStr := fmt.Sprintf("%6d: %#x ", l.ID, l.Address)
	if m := l.Mapping; m != nil {
		locStr = locStr + fmt.Sprintf("M=%d ", m.ID)
	}
	if l.IsFolded {
		locStr = locStr + "[F] "
	}
	if len(l.Line) == 0 {
		ss = append(ss, locStr)
	}
	for li := range l.Line {
		lnStr := "??"
		if fn := l.Line[li].Function; fn != nil {
			lnStr = fmt.Sprintf("%s %s:%d s=%d",
				fn.Name,
				fn.Filename,
				l.Line[li].Line,
				fn.StartLine)
			if fn.Name != fn.SystemName {
				lnStr = lnStr + "(" + fn.SystemName + ")"
			}
		}
		ss = append(ss, locStr+lnStr)
		// Do not print location details past the first line
		locStr = "             "
	}
	return strings.Join(ss, "\n")
}

// string dumps a text representation of a sample. Intended mainly
// for debugging purposes.
func (s *Sample) string() string {
	ss := []string{}
	var sv string
	for _, v := range s.Value {
		sv = fmt.Sprintf("%s %10d", sv, v)
	}
	sv = sv + ": "
	for _, l := range s.Location {
		sv = sv + fmt.Sprintf("%d ", l.ID)
	}
	ss = append(ss, sv)
	const labelHeader = "                "
	if len(s.Label) > 0 {
		ss = append(ss, labelHeader+labelsToString(s.Label))
	}
	if len(s.NumLabel) > 0 {
		ss = append(ss, labelHeader+numLabelsToString(s.NumLabel, s.NumUnit))
	}
	return strings.Join(ss, "\n")
}

// labelsToString returns a string representation of a
// map representing labels.
func labelsToString(labels map[string][]string) string {
	ls := []string{}
	for k, v := range labels {
		ls = append(ls, fmt.Sprintf("%s:%v", k, v))
	}
	sort.Strings(ls)
	return strings.Join(ls, " ")
}

// numLabelsToString returns a string representation of a map
// representing numeric labels.
func numLabelsToString(numLabels map[string][]int64, numUnits map[string][]string) string {
	ls := []string{}
	for k, v := range numLabels {
		units := numUnits[k]
		var labelString string
		if len(units) == len(v) {
			values := make([]string, len(v))
			for i, vv := range v {
				values[i] = fmt.Sprintf("%d %s", vv, units[i])
			}
			labelString = fmt.Sprintf("%s:%v", k, values)
		} else {
			labelString = fmt.Sprintf("%s:%v", k, v)
		}
		ls = append(ls, labelString)
	}
	sort.Strings(ls)
	return strings.Join(ls, " ")
}

// SetLabel sets the specified key to the specified value for all samples in the
// profile.
func (p *Profile) SetLabel(key string, value []string) {
	for _, sample := range p.Sample {
		if sample.Label == nil {
			sample.Label = map[string][]string{key: value}
		} else {
			sample.Label[key] = value
		}
	}
}

// RemoveLabel removes all labels associated with the specified key for all
// samples in the profile.
func (p *Profile) RemoveLabel(key string) {
	for _, sample := range p.Sample {
		delete(sample.Label, key)
	}
}

// HasLabel returns true if a sample has a label with indicated key and value.
func (s *Sample) HasLabel(key, value string) bool {
	for _, v := range s.Label[key] {
		if v == value {
			return true
		}
	}
	return false
}

// DiffBaseSample returns true if a sample belongs to the diff base and false
// otherwise.
func (s *Sample) DiffBaseSample() bool {
	return s.HasLabel("pprof::base", "true")
}

// Scale multiplies all sample values in a profile by a constant and keeps
// only samples that have at least one non-zero value.
func (p *Profile) Scale(ratio float64) {
	if ratio == 1 {
		return
	}
	ratios := make([]float64, len(p.SampleType))
	for i := range p.SampleType {
		ratios[i] = ratio
	}
	p.ScaleN(ratios)
}

// ScaleN multiplies each sample values in a sample by a different amount
// and keeps only samples that have at least one non-zero value.
func (p *Profile) ScaleN(ratios []float64) error {
	if len(p.SampleType) != len(ratios) {
		return fmt.Errorf("mismatched scale ratios, got %d, want %d", len(ratios), len(p.SampleType))
	}
	allOnes := true
	for _, r := range ratios {
		if r != 1 {
			allOnes = false
			break
		}
	}
	if allOnes {
		return nil
	}
	fillIdx := 0
	for _, s := range p.Sample {
		keepSample := false
		for i, v := range s.Value {
			if ratios[i] != 1 {
				val := int64(math.Round(float64(v) * ratios[i]))
				s.Value[i] = val
				keepSample = keepSample || val != 0
			}
		}
		if keepSample {
			p.Sample[fillIdx] = s
			fillIdx++
		}
	}
	p.Sample = p.Sample[:fillIdx]
	return nil
}
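ScaleN compacts the sample slice in place: it scales each value row, then reuses the backing array via a fill index, keeping only rows with a non-zero entry. A standalone sketch of that compaction pattern over plain slices (illustrative names, not the pprof API):

```go
package main

import (
	"fmt"
	"math"
)

// scaleValues scales every value row by per-column ratios, then keeps
// only rows with at least one non-zero entry, compacting in place with
// a fill index the way ScaleN does.
func scaleValues(samples [][]int64, ratios []float64) [][]int64 {
	fillIdx := 0
	for _, vals := range samples {
		keep := false
		for i, v := range vals {
			vals[i] = int64(math.Round(float64(v) * ratios[i]))
			if vals[i] != 0 {
				keep = true
			}
		}
		if keep {
			samples[fillIdx] = vals
			fillIdx++
		}
	}
	return samples[:fillIdx]
}

func main() {
	samples := [][]int64{{2, 0}, {0, 0}, {4, 6}}
	// Halving both columns drops the all-zero row.
	fmt.Println(scaleValues(samples, []float64{0.5, 0.5}))
}
```

The in-place compaction avoids allocating a second slice while filtering, which matters when profiles carry many samples.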

// HasFunctions determines if all locations in this profile have
// symbolized function information.
func (p *Profile) HasFunctions() bool {
	for _, l := range p.Location {
		if l.Mapping != nil && !l.Mapping.HasFunctions {
			return false
		}
	}
	return true
}

// HasFileLines determines if all locations in this profile have
// symbolized file and line number information.
func (p *Profile) HasFileLines() bool {
	for _, l := range p.Location {
		if l.Mapping != nil && (!l.Mapping.HasFilenames || !l.Mapping.HasLineNumbers) {
			return false
		}
	}
	return true
}

// Unsymbolizable returns true if a mapping points to a binary for which
// locations can't be symbolized in principle, at least now. Examples are
// "[vdso]", [vsyscall]" and some others, see the code.
func (m *Mapping) Unsymbolizable() bool {
	name := filepath.Base(m.File)
	return strings.HasPrefix(name, "[") || strings.HasPrefix(name, "linux-vdso") || strings.HasPrefix(m.File, "/dev/dri/")
}

// Copy makes a fully independent copy of a profile.
func (p *Profile) Copy() *Profile {
	pp := &Profile{}
	if err := unmarshal(serialize(p), pp); err != nil {
		panic(err)
	}
	if err := pp.postDecode(); err != nil {
		panic(err)
	}

	return pp
}
370 vendor/github.com/google/pprof/profile/proto.go generated vendored Normal file
@@ -0,0 +1,370 @@
// Copyright 2014 Google Inc. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

// This file is a simple protocol buffer encoder and decoder.
// The format is described at
// https://developers.google.com/protocol-buffers/docs/encoding
//
// A protocol message must implement the message interface:
//   decoder() []decoder
//   encode(*buffer)
//
// The decode method returns a slice indexed by field number that gives the
// function to decode that field.
// The encode method encodes its receiver into the given buffer.
//
// The two methods are simple enough to be implemented by hand rather than
// by using a protocol compiler.
//
// See profile.go for examples of messages implementing this interface.
//
// There is no support for groups, message sets, or "has" bits.

package profile

import (
	"errors"
	"fmt"
)

type buffer struct {
	field int // field tag
	typ   int // proto wire type code for field
	u64   uint64
	data  []byte
	tmp   [16]byte
}

type decoder func(*buffer, message) error

type message interface {
	decoder() []decoder
	encode(*buffer)
}

func marshal(m message) []byte {
	var b buffer
	m.encode(&b)
	return b.data
}

func encodeVarint(b *buffer, x uint64) {
	for x >= 128 {
		b.data = append(b.data, byte(x)|0x80)
		x >>= 7
	}
	b.data = append(b.data, byte(x))
}

func encodeLength(b *buffer, tag int, len int) {
	encodeVarint(b, uint64(tag)<<3|2)
	encodeVarint(b, uint64(len))
}

func encodeUint64(b *buffer, tag int, x uint64) {
	// append varint to b.data
	encodeVarint(b, uint64(tag)<<3)
	encodeVarint(b, x)
}

func encodeUint64s(b *buffer, tag int, x []uint64) {
	if len(x) > 2 {
		// Use packed encoding
		n1 := len(b.data)
		for _, u := range x {
			encodeVarint(b, u)
		}
		n2 := len(b.data)
		encodeLength(b, tag, n2-n1)
		n3 := len(b.data)
		copy(b.tmp[:], b.data[n2:n3])
		copy(b.data[n1+(n3-n2):], b.data[n1:n2])
		copy(b.data[n1:], b.tmp[:n3-n2])
		return
	}
	for _, u := range x {
		encodeUint64(b, tag, u)
	}
}

func encodeUint64Opt(b *buffer, tag int, x uint64) {
	if x == 0 {
		return
	}
	encodeUint64(b, tag, x)
}

func encodeInt64(b *buffer, tag int, x int64) {
	u := uint64(x)
	encodeUint64(b, tag, u)
}

func encodeInt64s(b *buffer, tag int, x []int64) {
	if len(x) > 2 {
		// Use packed encoding
		n1 := len(b.data)
		for _, u := range x {
			encodeVarint(b, uint64(u))
		}
		n2 := len(b.data)
		encodeLength(b, tag, n2-n1)
		n3 := len(b.data)
		copy(b.tmp[:], b.data[n2:n3])
		copy(b.data[n1+(n3-n2):], b.data[n1:n2])
		copy(b.data[n1:], b.tmp[:n3-n2])
		return
	}
	for _, u := range x {
		encodeInt64(b, tag, u)
	}
}

func encodeInt64Opt(b *buffer, tag int, x int64) {
	if x == 0 {
		return
	}
	encodeInt64(b, tag, x)
}

func encodeString(b *buffer, tag int, x string) {
	encodeLength(b, tag, len(x))
	b.data = append(b.data, x...)
}

func encodeStrings(b *buffer, tag int, x []string) {
	for _, s := range x {
		encodeString(b, tag, s)
	}
}

func encodeBool(b *buffer, tag int, x bool) {
	if x {
		encodeUint64(b, tag, 1)
	} else {
		encodeUint64(b, tag, 0)
	}
}

func encodeBoolOpt(b *buffer, tag int, x bool) {
	if x {
		encodeBool(b, tag, x)
	}
}

func encodeMessage(b *buffer, tag int, m message) {
	n1 := len(b.data)
	m.encode(b)
	n2 := len(b.data)
	encodeLength(b, tag, n2-n1)
	n3 := len(b.data)
	copy(b.tmp[:], b.data[n2:n3])
	copy(b.data[n1+(n3-n2):], b.data[n1:n2])
	copy(b.data[n1:], b.tmp[:n3-n2])
}

func unmarshal(data []byte, m message) (err error) {
	b := buffer{data: data, typ: 2}
	return decodeMessage(&b, m)
}

func le64(p []byte) uint64 {
	return uint64(p[0]) | uint64(p[1])<<8 | uint64(p[2])<<16 | uint64(p[3])<<24 | uint64(p[4])<<32 | uint64(p[5])<<40 | uint64(p[6])<<48 | uint64(p[7])<<56
}

func le32(p []byte) uint32 {
	return uint32(p[0]) | uint32(p[1])<<8 | uint32(p[2])<<16 | uint32(p[3])<<24
}

func decodeVarint(data []byte) (uint64, []byte, error) {
	var u uint64
	for i := 0; ; i++ {
		if i >= 10 || i >= len(data) {
			return 0, nil, errors.New("bad varint")
		}
		u |= uint64(data[i]&0x7F) << uint(7*i)
		if data[i]&0x80 == 0 {
			return u, data[i+1:], nil
		}
	}
}
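encodeVarint and decodeVarint implement the standard protobuf varint: 7 bits per byte, least-significant group first, with the high bit set on every byte except the last. A standalone round-trip sketch of the same wire format (putVarint/getVarint are illustrative names):

```go
package main

import (
	"errors"
	"fmt"
)

// putVarint appends x in protobuf varint form: 7 bits per byte, low
// group first, continuation bit 0x80 on all but the final byte.
func putVarint(data []byte, x uint64) []byte {
	for x >= 128 {
		data = append(data, byte(x)|0x80)
		x >>= 7
	}
	return append(data, byte(x))
}

// getVarint decodes one varint and returns the remaining bytes,
// rejecting inputs longer than 10 bytes (the max for a uint64).
func getVarint(data []byte) (uint64, []byte, error) {
	var u uint64
	for i := 0; ; i++ {
		if i >= 10 || i >= len(data) {
			return 0, nil, errors.New("bad varint")
		}
		u |= uint64(data[i]&0x7F) << uint(7*i)
		if data[i]&0x80 == 0 {
			return u, data[i+1:], nil
		}
	}
}

func main() {
	buf := putVarint(nil, 300) // 300 = 0b10_0101100 encodes as 0xAC 0x02
	fmt.Printf("% x\n", buf)
	v, rest, err := getVarint(buf)
	fmt.Println(v, len(rest), err)
}
```

The 10-byte cap bounds the loop for hostile input: a uint64 never needs more than ceil(64/7) = 10 varint bytes.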

func decodeField(b *buffer, data []byte) ([]byte, error) {
	x, data, err := decodeVarint(data)
	if err != nil {
		return nil, err
	}
	b.field = int(x >> 3)
	b.typ = int(x & 7)
	b.data = nil
	b.u64 = 0
	switch b.typ {
	case 0:
		b.u64, data, err = decodeVarint(data)
		if err != nil {
			return nil, err
		}
	case 1:
		if len(data) < 8 {
			return nil, errors.New("not enough data")
		}
		b.u64 = le64(data[:8])
		data = data[8:]
	case 2:
		var n uint64
		n, data, err = decodeVarint(data)
		if err != nil {
			return nil, err
		}
		if n > uint64(len(data)) {
			return nil, errors.New("too much data")
		}
		b.data = data[:n]
		data = data[n:]
	case 5:
		if len(data) < 4 {
			return nil, errors.New("not enough data")
		}
		b.u64 = uint64(le32(data[:4]))
		data = data[4:]
	default:
		return nil, fmt.Errorf("unknown wire type: %d", b.typ)
	}

	return data, nil
}

func checkType(b *buffer, typ int) error {
	if b.typ != typ {
		return errors.New("type mismatch")
	}
	return nil
}

func decodeMessage(b *buffer, m message) error {
	if err := checkType(b, 2); err != nil {
		return err
	}
	dec := m.decoder()
	data := b.data
	for len(data) > 0 {
		// pull varint field# + type
		var err error
		data, err = decodeField(b, data)
		if err != nil {
			return err
		}
		if b.field >= len(dec) || dec[b.field] == nil {
			continue
		}
		if err := dec[b.field](b, m); err != nil {
			return err
		}
	}
	return nil
}
||||||
|
|
||||||
|
func decodeInt64(b *buffer, x *int64) error {
|
||||||
|
if err := checkType(b, 0); err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
*x = int64(b.u64)
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
func decodeInt64s(b *buffer, x *[]int64) error {
|
||||||
|
if b.typ == 2 {
|
||||||
|
// Packed encoding
|
||||||
|
data := b.data
|
||||||
|
tmp := make([]int64, 0, len(data)) // Maximally sized
|
||||||
|
for len(data) > 0 {
|
||||||
|
var u uint64
|
||||||
|
var err error
|
||||||
|
|
||||||
|
if u, data, err = decodeVarint(data); err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
tmp = append(tmp, int64(u))
|
||||||
|
}
|
||||||
|
*x = append(*x, tmp...)
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
var i int64
|
||||||
|
if err := decodeInt64(b, &i); err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
*x = append(*x, i)
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
func decodeUint64(b *buffer, x *uint64) error {
|
||||||
|
if err := checkType(b, 0); err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
*x = b.u64
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
func decodeUint64s(b *buffer, x *[]uint64) error {
|
||||||
|
if b.typ == 2 {
|
||||||
|
data := b.data
|
||||||
|
// Packed encoding
|
||||||
|
tmp := make([]uint64, 0, len(data)) // Maximally sized
|
||||||
|
for len(data) > 0 {
|
||||||
|
var u uint64
|
||||||
|
var err error
|
||||||
|
|
||||||
|
if u, data, err = decodeVarint(data); err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
tmp = append(tmp, u)
|
||||||
|
}
|
||||||
|
*x = append(*x, tmp...)
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
var u uint64
|
||||||
|
if err := decodeUint64(b, &u); err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
*x = append(*x, u)
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
func decodeString(b *buffer, x *string) error {
|
||||||
|
if err := checkType(b, 2); err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
*x = string(b.data)
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
func decodeStrings(b *buffer, x *[]string) error {
|
||||||
|
var s string
|
||||||
|
if err := decodeString(b, &s); err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
*x = append(*x, s)
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
func decodeBool(b *buffer, x *bool) error {
|
||||||
|
if err := checkType(b, 0); err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
if int64(b.u64) == 0 {
|
||||||
|
*x = false
|
||||||
|
} else {
|
||||||
|
*x = true
|
||||||
|
}
|
||||||
|
return nil
|
||||||
|
}
|
||||||
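The packed decoders above lean on `decodeVarint`, which is defined elsewhere in proto.go. A minimal standalone sketch of protobuf base-128 varint decoding (the `varint` name and signature here are hypothetical, not the vendored implementation):

```go
package main

import (
	"errors"
	"fmt"
)

// varint decodes a protobuf base-128 varint from the front of data,
// returning the value and the remaining bytes. Each byte carries 7
// payload bits; the high bit signals that more bytes follow.
func varint(data []byte) (uint64, []byte, error) {
	var u uint64
	for i, b := range data {
		if i >= 10 {
			break // a uint64 varint never needs more than 10 bytes
		}
		u |= uint64(b&0x7f) << (7 * uint(i))
		if b&0x80 == 0 {
			return u, data[i+1:], nil
		}
	}
	return 0, nil, errors.New("truncated or oversized varint")
}

func main() {
	// 300 = 0b10_0101100 encodes as [0xAC, 0x02].
	u, rest, err := varint([]byte{0xAC, 0x02, 0xFF})
	fmt.Println(u, len(rest), err) // 300 1 <nil>
}
```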
178
vendor/github.com/google/pprof/profile/prune.go
generated
vendored
Normal file
@@ -0,0 +1,178 @@
// Copyright 2014 Google Inc. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

// Implements methods to remove frames from profiles.

package profile

import (
	"fmt"
	"regexp"
	"strings"
)

var (
	reservedNames = []string{"(anonymous namespace)", "operator()"}
	bracketRx     = func() *regexp.Regexp {
		var quotedNames []string
		for _, name := range append(reservedNames, "(") {
			quotedNames = append(quotedNames, regexp.QuoteMeta(name))
		}
		return regexp.MustCompile(strings.Join(quotedNames, "|"))
	}()
)

// simplifyFunc does some primitive simplification of function names.
func simplifyFunc(f string) string {
	// Account for leading '.' on the PPC ELF v1 ABI.
	funcName := strings.TrimPrefix(f, ".")
	// Account for unsimplified names -- try to remove the argument list by trimming
	// starting from the first '(', but skipping reserved names that have '('.
	for _, ind := range bracketRx.FindAllStringSubmatchIndex(funcName, -1) {
		foundReserved := false
		for _, res := range reservedNames {
			if funcName[ind[0]:ind[1]] == res {
				foundReserved = true
				break
			}
		}
		if !foundReserved {
			funcName = funcName[:ind[0]]
			break
		}
	}
	return funcName
}

// Prune removes all nodes beneath a node matching dropRx, and not
// matching keepRx. If the root node of a Sample matches, the sample
// will have an empty stack.
func (p *Profile) Prune(dropRx, keepRx *regexp.Regexp) {
	prune := make(map[uint64]bool)
	pruneBeneath := make(map[uint64]bool)

	for _, loc := range p.Location {
		var i int
		for i = len(loc.Line) - 1; i >= 0; i-- {
			if fn := loc.Line[i].Function; fn != nil && fn.Name != "" {
				funcName := simplifyFunc(fn.Name)
				if dropRx.MatchString(funcName) {
					if keepRx == nil || !keepRx.MatchString(funcName) {
						break
					}
				}
			}
		}

		if i >= 0 {
			// Found matching entry to prune.
			pruneBeneath[loc.ID] = true

			// Remove the matching location.
			if i == len(loc.Line)-1 {
				// Matched the top entry: prune the whole location.
				prune[loc.ID] = true
			} else {
				loc.Line = loc.Line[i+1:]
			}
		}
	}

	// Prune locs from each Sample
	for _, sample := range p.Sample {
		// Scan from the root to the leaves to find the prune location.
		// Do not prune frames before the first user frame, to avoid
		// pruning everything.
		foundUser := false
		for i := len(sample.Location) - 1; i >= 0; i-- {
			id := sample.Location[i].ID
			if !prune[id] && !pruneBeneath[id] {
				foundUser = true
				continue
			}
			if !foundUser {
				continue
			}
			if prune[id] {
				sample.Location = sample.Location[i+1:]
				break
			}
			if pruneBeneath[id] {
				sample.Location = sample.Location[i:]
				break
			}
		}
	}
}

// RemoveUninteresting prunes and elides profiles using built-in
// tables of uninteresting function names.
func (p *Profile) RemoveUninteresting() error {
	var keep, drop *regexp.Regexp
	var err error

	if p.DropFrames != "" {
		if drop, err = regexp.Compile("^(" + p.DropFrames + ")$"); err != nil {
			return fmt.Errorf("failed to compile regexp %s: %v", p.DropFrames, err)
		}
		if p.KeepFrames != "" {
			if keep, err = regexp.Compile("^(" + p.KeepFrames + ")$"); err != nil {
				return fmt.Errorf("failed to compile regexp %s: %v", p.KeepFrames, err)
			}
		}
		p.Prune(drop, keep)
	}
	return nil
}

// PruneFrom removes all nodes beneath the lowest node matching dropRx, not including itself.
//
// Please see the example below to understand this method as well as
// the difference from Prune method.
//
// A sample contains Location of [A,B,C,B,D] where D is the top frame and there's no inline.
//
// PruneFrom(A) returns [A,B,C,B,D] because there's no node beneath A.
// Prune(A, nil) returns [B,C,B,D] by removing A itself.
//
// PruneFrom(B) returns [B,C,B,D] by removing all nodes beneath the first B when scanning from the bottom.
// Prune(B, nil) returns [D] because a matching node is found by scanning from the root.
func (p *Profile) PruneFrom(dropRx *regexp.Regexp) {
	pruneBeneath := make(map[uint64]bool)

	for _, loc := range p.Location {
		for i := 0; i < len(loc.Line); i++ {
			if fn := loc.Line[i].Function; fn != nil && fn.Name != "" {
				funcName := simplifyFunc(fn.Name)
				if dropRx.MatchString(funcName) {
					// Found matching entry to prune.
					pruneBeneath[loc.ID] = true
					loc.Line = loc.Line[i:]
					break
				}
			}
		}
	}

	// Prune locs from each Sample
	for _, sample := range p.Sample {
		// Scan from the bottom leaf to the root to find the prune location.
		for i, loc := range sample.Location {
			if pruneBeneath[loc.ID] {
				sample.Location = sample.Location[i:]
				break
			}
		}
	}
}
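simplifyFunc's argument-list trimming can be exercised in isolation. A standalone sketch that copies the function and its regexp verbatim from prune.go above, showing how reserved C++ names containing '(' survive while trailing argument lists are cut:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

var reservedNames = []string{"(anonymous namespace)", "operator()"}

var bracketRx = func() *regexp.Regexp {
	var quotedNames []string
	for _, name := range append(reservedNames, "(") {
		quotedNames = append(quotedNames, regexp.QuoteMeta(name))
	}
	return regexp.MustCompile(strings.Join(quotedNames, "|"))
}()

// simplifyFunc is copied from prune.go: trim a leading '.' (PPC ELF v1
// ABI), then truncate at the first '(' that is not part of a reserved name.
func simplifyFunc(f string) string {
	funcName := strings.TrimPrefix(f, ".")
	for _, ind := range bracketRx.FindAllStringSubmatchIndex(funcName, -1) {
		foundReserved := false
		for _, res := range reservedNames {
			if funcName[ind[0]:ind[1]] == res {
				foundReserved = true
				break
			}
		}
		if !foundReserved {
			funcName = funcName[:ind[0]]
			break
		}
	}
	return funcName
}

func main() {
	fmt.Println(simplifyFunc(".foo::bar(int, char*)"))              // foo::bar
	fmt.Println(simplifyFunc("ns::(anonymous namespace)::fn(int)")) // ns::(anonymous namespace)::fn
}
```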
61
vendor/github.com/onsi/ginkgo/v2/ginkgo/build/build_command.go
generated
vendored
Normal file
@@ -0,0 +1,61 @@
package build

import (
	"fmt"

	"github.com/onsi/ginkgo/v2/ginkgo/command"
	"github.com/onsi/ginkgo/v2/ginkgo/internal"
	"github.com/onsi/ginkgo/v2/types"
)

func BuildBuildCommand() command.Command {
	var cliConfig = types.NewDefaultCLIConfig()
	var goFlagsConfig = types.NewDefaultGoFlagsConfig()

	flags, err := types.BuildBuildCommandFlagSet(&cliConfig, &goFlagsConfig)
	if err != nil {
		panic(err)
	}

	return command.Command{
		Name:     "build",
		Flags:    flags,
		Usage:    "ginkgo build <FLAGS> <PACKAGES>",
		ShortDoc: "Build the passed in <PACKAGES> (or the package in the current directory if left blank).",
		DocLink:  "precompiling-suites",
		Command: func(args []string, _ []string) {
			var errors []error
			cliConfig, goFlagsConfig, errors = types.VetAndInitializeCLIAndGoConfig(cliConfig, goFlagsConfig)
			command.AbortIfErrors("Ginkgo detected configuration issues:", errors)

			buildSpecs(args, cliConfig, goFlagsConfig)
		},
	}
}

func buildSpecs(args []string, cliConfig types.CLIConfig, goFlagsConfig types.GoFlagsConfig) {
	suites := internal.FindSuites(args, cliConfig, false).WithoutState(internal.TestSuiteStateSkippedByFilter)
	if len(suites) == 0 {
		command.AbortWith("Found no test suites")
	}

	opc := internal.NewOrderedParallelCompiler(cliConfig.ComputedNumCompilers())
	opc.StartCompiling(suites, goFlagsConfig)

	for {
		suiteIdx, suite := opc.Next()
		if suiteIdx >= len(suites) {
			break
		}
		suites[suiteIdx] = suite
		if suite.State.Is(internal.TestSuiteStateFailedToCompile) {
			fmt.Println(suite.CompilationError.Error())
		} else {
			fmt.Printf("Compiled %s.test\n", suite.PackageName)
		}
	}

	if suites.CountWithState(internal.TestSuiteStateFailedToCompile) > 0 {
		command.AbortWith("Failed to compile all tests")
	}
}
61
vendor/github.com/onsi/ginkgo/v2/ginkgo/command/abort.go
generated
vendored
Normal file
@@ -0,0 +1,61 @@
package command

import "fmt"

type AbortDetails struct {
	ExitCode  int
	Error     error
	EmitUsage bool
}

func Abort(details AbortDetails) {
	panic(details)
}

func AbortGracefullyWith(format string, args ...interface{}) {
	Abort(AbortDetails{
		ExitCode:  0,
		Error:     fmt.Errorf(format, args...),
		EmitUsage: false,
	})
}

func AbortWith(format string, args ...interface{}) {
	Abort(AbortDetails{
		ExitCode:  1,
		Error:     fmt.Errorf(format, args...),
		EmitUsage: false,
	})
}

func AbortWithUsage(format string, args ...interface{}) {
	Abort(AbortDetails{
		ExitCode:  1,
		Error:     fmt.Errorf(format, args...),
		EmitUsage: true,
	})
}

func AbortIfError(preamble string, err error) {
	if err != nil {
		Abort(AbortDetails{
			ExitCode:  1,
			Error:     fmt.Errorf("%s\n%s", preamble, err.Error()),
			EmitUsage: false,
		})
	}
}

func AbortIfErrors(preamble string, errors []error) {
	if len(errors) > 0 {
		out := ""
		for _, err := range errors {
			out += err.Error()
		}
		Abort(AbortDetails{
			ExitCode:  1,
			Error:     fmt.Errorf("%s\n%s", preamble, out),
			EmitUsage: false,
		})
	}
}
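These Abort helpers work by panicking with an AbortDetails value that a top-level recover (the deferred function in program.go's RunAndExit) later converts into an exit code. A trimmed-down sketch of that round trip, using hypothetical `abortWith`/`run` names rather than the vendored API:

```go
package main

import "fmt"

// AbortDetails mirrors the struct defined above.
type AbortDetails struct {
	ExitCode  int
	Error     error
	EmitUsage bool
}

func abortWith(format string, args ...interface{}) {
	panic(AbortDetails{ExitCode: 1, Error: fmt.Errorf(format, args...)})
}

// run invokes f and translates an AbortDetails panic into an exit code,
// re-panicking on anything that is not an AbortDetails.
func run(f func()) (exitCode int) {
	defer func() {
		if r := recover(); r != nil {
			details, ok := r.(AbortDetails)
			if !ok {
				panic(r) // not one of ours: propagate
			}
			exitCode = details.ExitCode
		}
	}()
	f()
	return 0
}

func main() {
	fmt.Println(run(func() { abortWith("found no test suites") })) // 1
	fmt.Println(run(func() {}))                                    // 0
}
```

Using panic/recover here keeps every command free of explicit error plumbing: any helper can bail out from arbitrarily deep in the call stack and still produce a clean exit code.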
50
vendor/github.com/onsi/ginkgo/v2/ginkgo/command/command.go
generated
vendored
Normal file
@@ -0,0 +1,50 @@
package command

import (
	"fmt"
	"io"
	"strings"

	"github.com/onsi/ginkgo/v2/formatter"
	"github.com/onsi/ginkgo/v2/types"
)

type Command struct {
	Name          string
	Flags         types.GinkgoFlagSet
	Usage         string
	ShortDoc      string
	Documentation string
	DocLink       string
	Command       func(args []string, additionalArgs []string)
}

func (c Command) Run(args []string, additionalArgs []string) {
	args, err := c.Flags.Parse(args)
	if err != nil {
		AbortWithUsage(err.Error())
	}

	c.Command(args, additionalArgs)
}

func (c Command) EmitUsage(writer io.Writer) {
	fmt.Fprintln(writer, formatter.F("{{bold}}"+c.Usage+"{{/}}"))
	fmt.Fprintln(writer, formatter.F("{{gray}}%s{{/}}", strings.Repeat("-", len(c.Usage))))
	if c.ShortDoc != "" {
		fmt.Fprintln(writer, formatter.Fiw(0, formatter.COLS, c.ShortDoc))
		fmt.Fprintln(writer, "")
	}
	if c.Documentation != "" {
		fmt.Fprintln(writer, formatter.Fiw(0, formatter.COLS, c.Documentation))
		fmt.Fprintln(writer, "")
	}
	if c.DocLink != "" {
		fmt.Fprintln(writer, formatter.Fi(0, "{{bold}}Learn more at:{{/}} {{cyan}}{{underline}}http://onsi.github.io/ginkgo/#%s{{/}}", c.DocLink))
		fmt.Fprintln(writer, "")
	}
	flagUsage := c.Flags.Usage()
	if flagUsage != "" {
		fmt.Fprintf(writer, formatter.F(flagUsage))
	}
}
182
vendor/github.com/onsi/ginkgo/v2/ginkgo/command/program.go
generated
vendored
Normal file
@@ -0,0 +1,182 @@
package command

import (
	"fmt"
	"io"
	"os"
	"strings"

	"github.com/onsi/ginkgo/v2/formatter"
	"github.com/onsi/ginkgo/v2/types"
)

type Program struct {
	Name               string
	Heading            string
	Commands           []Command
	DefaultCommand     Command
	DeprecatedCommands []DeprecatedCommand

	//For testing - leave as nil in production
	OutWriter io.Writer
	ErrWriter io.Writer
	Exiter    func(code int)
}

type DeprecatedCommand struct {
	Name        string
	Deprecation types.Deprecation
}

func (p Program) RunAndExit(osArgs []string) {
	var command Command
	deprecationTracker := types.NewDeprecationTracker()
	if p.Exiter == nil {
		p.Exiter = os.Exit
	}
	if p.OutWriter == nil {
		p.OutWriter = formatter.ColorableStdOut
	}
	if p.ErrWriter == nil {
		p.ErrWriter = formatter.ColorableStdErr
	}

	defer func() {
		exitCode := 0

		if r := recover(); r != nil {
			details, ok := r.(AbortDetails)
			if !ok {
				panic(r)
			}

			if details.Error != nil {
				fmt.Fprintln(p.ErrWriter, formatter.F("{{red}}{{bold}}%s %s{{/}} {{red}}failed{{/}}", p.Name, command.Name))
				fmt.Fprintln(p.ErrWriter, formatter.Fi(1, details.Error.Error()))
			}
			if details.EmitUsage {
				if details.Error != nil {
					fmt.Fprintln(p.ErrWriter, "")
				}
				command.EmitUsage(p.ErrWriter)
			}
			exitCode = details.ExitCode
		}

		command.Flags.ValidateDeprecations(deprecationTracker)
		if deprecationTracker.DidTrackDeprecations() {
			fmt.Fprintln(p.ErrWriter, deprecationTracker.DeprecationsReport())
		}
		p.Exiter(exitCode)
		return
	}()

	args, additionalArgs := []string{}, []string{}

	foundDelimiter := false
	for _, arg := range osArgs[1:] {
		if !foundDelimiter {
			if arg == "--" {
				foundDelimiter = true
				continue
			}
		}

		if foundDelimiter {
			additionalArgs = append(additionalArgs, arg)
		} else {
			args = append(args, arg)
		}
	}

	command = p.DefaultCommand
	if len(args) > 0 {
		p.handleHelpRequestsAndExit(p.OutWriter, args)
		if command.Name == args[0] {
			args = args[1:]
		} else {
			for _, deprecatedCommand := range p.DeprecatedCommands {
				if deprecatedCommand.Name == args[0] {
					deprecationTracker.TrackDeprecation(deprecatedCommand.Deprecation)
					return
				}
			}
			for _, tryCommand := range p.Commands {
				if tryCommand.Name == args[0] {
					command, args = tryCommand, args[1:]
					break
				}
			}
		}
	}

	command.Run(args, additionalArgs)
}

func (p Program) handleHelpRequestsAndExit(writer io.Writer, args []string) {
	if len(args) == 0 {
		return
	}

	matchesHelpFlag := func(args ...string) bool {
		for _, arg := range args {
			if arg == "--help" || arg == "-help" || arg == "-h" || arg == "--h" {
				return true
			}
		}
		return false
	}
	if len(args) == 1 {
		if args[0] == "help" || matchesHelpFlag(args[0]) {
			p.EmitUsage(writer)
			Abort(AbortDetails{})
		}
	} else {
		var name string
		if args[0] == "help" || matchesHelpFlag(args[0]) {
			name = args[1]
		} else if matchesHelpFlag(args[1:]...) {
			name = args[0]
		} else {
			return
		}

		if p.DefaultCommand.Name == name || p.Name == name {
			p.DefaultCommand.EmitUsage(writer)
			Abort(AbortDetails{})
		}
		for _, command := range p.Commands {
			if command.Name == name {
				command.EmitUsage(writer)
				Abort(AbortDetails{})
			}
		}

		fmt.Fprintln(writer, formatter.F("{{red}}Unknown Command: {{bold}}%s{{/}}", name))
		fmt.Fprintln(writer, "")
		p.EmitUsage(writer)
		Abort(AbortDetails{ExitCode: 1})
	}
	return
}

func (p Program) EmitUsage(writer io.Writer) {
	fmt.Fprintln(writer, formatter.F(p.Heading))
	fmt.Fprintln(writer, formatter.F("{{gray}}%s{{/}}", strings.Repeat("-", len(p.Heading))))
	fmt.Fprintln(writer, formatter.F("For usage information for a command, run {{bold}}%s help COMMAND{{/}}.", p.Name))
	fmt.Fprintln(writer, formatter.F("For usage information for the default command, run {{bold}}%s help %s{{/}} or {{bold}}%s help %s{{/}}.", p.Name, p.Name, p.Name, p.DefaultCommand.Name))
	fmt.Fprintln(writer, "")
	fmt.Fprintln(writer, formatter.F("The following commands are available:"))

	fmt.Fprintln(writer, formatter.Fi(1, "{{bold}}%s{{/}} or %s {{bold}}%s{{/}} - {{gray}}%s{{/}}", p.Name, p.Name, p.DefaultCommand.Name, p.DefaultCommand.Usage))
	if p.DefaultCommand.ShortDoc != "" {
		fmt.Fprintln(writer, formatter.Fi(2, p.DefaultCommand.ShortDoc))
	}

	for _, command := range p.Commands {
		fmt.Fprintln(writer, formatter.Fi(1, "{{bold}}%s{{/}} - {{gray}}%s{{/}}", command.Name, command.Usage))
		if command.ShortDoc != "" {
			fmt.Fprintln(writer, formatter.Fi(2, command.ShortDoc))
		}
	}
}
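RunAndExit's argument loop splits os.Args at the first "--": everything before it is parsed by ginkgo, everything after is forwarded verbatim to the compiled test binary. A self-contained sketch of that splitting logic (the `splitArgs` helper is illustrative, not part of the vendored package):

```go
package main

import "fmt"

// splitArgs mirrors the delimiter loop in RunAndExit: arguments before
// the first "--" are for the CLI itself, arguments after it are passed
// through untouched. osArgs[0] is the program name and is skipped.
func splitArgs(osArgs []string) (args, additionalArgs []string) {
	args, additionalArgs = []string{}, []string{}
	foundDelimiter := false
	for _, arg := range osArgs[1:] {
		if !foundDelimiter && arg == "--" {
			foundDelimiter = true
			continue
		}
		if foundDelimiter {
			additionalArgs = append(additionalArgs, arg)
		} else {
			args = append(args, arg)
		}
	}
	return args, additionalArgs
}

func main() {
	a, b := splitArgs([]string{"ginkgo", "run", "-p", "--", "-count=2"})
	fmt.Println(a, b) // [run -p] [-count=2]
}
```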
48
vendor/github.com/onsi/ginkgo/v2/ginkgo/generators/boostrap_templates.go
generated
vendored
Normal file
@@ -0,0 +1,48 @@
package generators

var bootstrapText = `package {{.Package}}

import (
	"testing"

	{{.GinkgoImport}}
	{{.GomegaImport}}
)

func Test{{.FormattedName}}(t *testing.T) {
	{{.GomegaPackage}}RegisterFailHandler({{.GinkgoPackage}}Fail)
	{{.GinkgoPackage}}RunSpecs(t, "{{.FormattedName}} Suite")
}
`

var agoutiBootstrapText = `package {{.Package}}

import (
	"testing"

	{{.GinkgoImport}}
	{{.GomegaImport}}
	"github.com/sclevine/agouti"
)

func Test{{.FormattedName}}(t *testing.T) {
	{{.GomegaPackage}}RegisterFailHandler({{.GinkgoPackage}}Fail)
	{{.GinkgoPackage}}RunSpecs(t, "{{.FormattedName}} Suite")
}

var agoutiDriver *agouti.WebDriver

var _ = {{.GinkgoPackage}}BeforeSuite(func() {
	// Choose a WebDriver:

	agoutiDriver = agouti.PhantomJS()
	// agoutiDriver = agouti.Selenium()
	// agoutiDriver = agouti.ChromeDriver()

	{{.GomegaPackage}}Expect(agoutiDriver.Start()).To({{.GomegaPackage}}Succeed())
})

var _ = {{.GinkgoPackage}}AfterSuite(func() {
	{{.GomegaPackage}}Expect(agoutiDriver.Stop()).To({{.GomegaPackage}}Succeed())
})
`
113
vendor/github.com/onsi/ginkgo/v2/ginkgo/generators/bootstrap_command.go
generated
vendored
Normal file
@@ -0,0 +1,113 @@
package generators

import (
	"bytes"
	"fmt"
	"os"
	"text/template"

	sprig "github.com/go-task/slim-sprig"
	"github.com/onsi/ginkgo/v2/ginkgo/command"
	"github.com/onsi/ginkgo/v2/ginkgo/internal"
	"github.com/onsi/ginkgo/v2/types"
)

func BuildBootstrapCommand() command.Command {
	conf := GeneratorsConfig{}
	flags, err := types.NewGinkgoFlagSet(
		types.GinkgoFlags{
			{Name: "agouti", KeyPath: "Agouti",
				Usage: "If set, bootstrap will generate a bootstrap file for writing Agouti tests"},
			{Name: "nodot", KeyPath: "NoDot",
				Usage: "If set, bootstrap will generate a bootstrap test file that does not dot-import ginkgo and gomega"},
			{Name: "internal", KeyPath: "Internal",
				Usage: "If set, bootstrap will generate a bootstrap test file that uses the regular package name (i.e. `package X`, not `package X_test`)"},
			{Name: "template", KeyPath: "CustomTemplate",
				UsageArgument: "template-file",
				Usage:         "If specified, generate will use the contents of the file passed as the bootstrap template"},
		},
		&conf,
		types.GinkgoFlagSections{},
	)

	if err != nil {
		panic(err)
	}

	return command.Command{
		Name:     "bootstrap",
		Usage:    "ginkgo bootstrap",
		ShortDoc: "Bootstrap a test suite for the current package",
		Documentation: `Tests written in Ginkgo and Gomega require a small amount of boilerplate to hook into Go's testing infrastructure.

{{bold}}ginkgo bootstrap{{/}} generates this boilerplate for you in a file named X_suite_test.go where X is the name of the package under test.`,
		DocLink: "generators",
		Flags:   flags,
		Command: func(_ []string, _ []string) {
			generateBootstrap(conf)
		},
	}
}

type bootstrapData struct {
	Package       string
	FormattedName string

	GinkgoImport  string
	GomegaImport  string
	GinkgoPackage string
	GomegaPackage string
}

func generateBootstrap(conf GeneratorsConfig) {
	packageName, bootstrapFilePrefix, formattedName := getPackageAndFormattedName()

	data := bootstrapData{
		Package:       determinePackageName(packageName, conf.Internal),
		FormattedName: formattedName,

		GinkgoImport:  `. "github.com/onsi/ginkgo/v2"`,
		GomegaImport:  `. "github.com/onsi/gomega"`,
		GinkgoPackage: "",
		GomegaPackage: "",
	}

	if conf.NoDot {
		data.GinkgoImport = `"github.com/onsi/ginkgo/v2"`
		data.GomegaImport = `"github.com/onsi/gomega"`
		data.GinkgoPackage = `ginkgo.`
		data.GomegaPackage = `gomega.`
	}

	targetFile := fmt.Sprintf("%s_suite_test.go", bootstrapFilePrefix)
	if internal.FileExists(targetFile) {
		command.AbortWith("{{bold}}%s{{/}} already exists", targetFile)
	} else {
		fmt.Printf("Generating ginkgo test suite bootstrap for %s in:\n\t%s\n", packageName, targetFile)
	}

	f, err := os.Create(targetFile)
	command.AbortIfError("Failed to create file:", err)
	defer f.Close()

	var templateText string
	if conf.CustomTemplate != "" {
		tpl, err := os.ReadFile(conf.CustomTemplate)
		command.AbortIfError("Failed to read custom bootstrap file:", err)
		templateText = string(tpl)
	} else if conf.Agouti {
		templateText = agoutiBootstrapText
	} else {
		templateText = bootstrapText
	}

	bootstrapTemplate, err := template.New("bootstrap").Funcs(sprig.TxtFuncMap()).Parse(templateText)
	command.AbortIfError("Failed to parse bootstrap template:", err)

	buf := &bytes.Buffer{}
	bootstrapTemplate.Execute(buf, data)

	buf.WriteTo(f)

	internal.GoFmt(targetFile)
}
239
vendor/github.com/onsi/ginkgo/v2/ginkgo/generators/generate_command.go
generated
vendored
Normal file
@@ -0,0 +1,239 @@
package generators

import (
	"bytes"
	"fmt"
	"os"
	"path/filepath"
	"strconv"
	"strings"
	"text/template"

	sprig "github.com/go-task/slim-sprig"
	"github.com/onsi/ginkgo/v2/ginkgo/command"
	"github.com/onsi/ginkgo/v2/ginkgo/internal"
	"github.com/onsi/ginkgo/v2/types"
)

func BuildGenerateCommand() command.Command {
	conf := GeneratorsConfig{}
	flags, err := types.NewGinkgoFlagSet(
		types.GinkgoFlags{
			{Name: "agouti", KeyPath: "Agouti",
				Usage: "If set, generate will create a test file for writing Agouti tests"},
			{Name: "nodot", KeyPath: "NoDot",
				Usage: "If set, generate will create a test file that does not dot-import ginkgo and gomega"},
			{Name: "internal", KeyPath: "Internal",
				Usage: "If set, generate will create a test file that uses the regular package name (i.e. `package X`, not `package X_test`)"},
			{Name: "template", KeyPath: "CustomTemplate",
				UsageArgument: "template-file",
				Usage:         "If specified, generate will use the contents of the file passed as the test file template"},
		},
		&conf,
		types.GinkgoFlagSections{},
	)

	if err != nil {
		panic(err)
	}

	return command.Command{
		Name:     "generate",
		Usage:    "ginkgo generate <filename(s)>",
		ShortDoc: "Generate a test file named <filename>_test.go",
		Documentation: `If the optional <filename> argument is omitted, a file named after the package in the current directory will be created.

You can pass multiple <filename(s)> to generate multiple files simultaneously. The resulting files are named <filename>_test.go.

You can also pass a <filename> of the form "file.go" and generate will emit "file_test.go".`,
		DocLink: "generators",
		Flags:   flags,
		Command: func(args []string, _ []string) {
			generateTestFiles(conf, args)
		},
	}
}

type specData struct {
	Package           string
	Subject           string
	PackageImportPath string
	ImportPackage     bool

	GinkgoImport  string
	GomegaImport  string
	GinkgoPackage string
	GomegaPackage string
}

func generateTestFiles(conf GeneratorsConfig, args []string) {
	subjects := args
	if len(subjects) == 0 {
		subjects = []string{""}
	}
	for _, subject := range subjects {
		generateTestFileForSubject(subject, conf)
	}
}

func generateTestFileForSubject(subject string, conf GeneratorsConfig) {
	packageName, specFilePrefix, formattedName := getPackageAndFormattedName()
	if subject != "" {
		specFilePrefix = formatSubject(subject)
		formattedName = prettifyName(specFilePrefix)
	}

	if conf.Internal {
		specFilePrefix = specFilePrefix + "_internal"
	}

	data := specData{
		Package:           determinePackageName(packageName, conf.Internal),
		Subject:           formattedName,
		PackageImportPath: getPackageImportPath(),
		ImportPackage:     !conf.Internal,

		GinkgoImport:  `. "github.com/onsi/ginkgo/v2"`,
		GomegaImport:  `. "github.com/onsi/gomega"`,
		GinkgoPackage: "",
		GomegaPackage: "",
	}

	if conf.NoDot {
		data.GinkgoImport = `"github.com/onsi/ginkgo/v2"`
		data.GomegaImport = `"github.com/onsi/gomega"`
		data.GinkgoPackage = `ginkgo.`
		data.GomegaPackage = `gomega.`
	}

	targetFile := fmt.Sprintf("%s_test.go", specFilePrefix)
	if internal.FileExists(targetFile) {
		command.AbortWith("{{bold}}%s{{/}} already exists", targetFile)
	} else {
		fmt.Printf("Generating ginkgo test for %s in:\n  %s\n", data.Subject, targetFile)
	}

	f, err := os.Create(targetFile)
	command.AbortIfError("Failed to create test file:", err)
	defer f.Close()

	var templateText string
	if conf.CustomTemplate != "" {
		tpl, err := os.ReadFile(conf.CustomTemplate)
		command.AbortIfError("Failed to read custom template file:", err)
		templateText = string(tpl)
	} else if conf.Agouti {
		templateText = agoutiSpecText
	} else {
		templateText = specText
	}

	specTemplate, err := template.New("spec").Funcs(sprig.TxtFuncMap()).Parse(templateText)
	command.AbortIfError("Failed to read parse test template:", err)

	specTemplate.Execute(f, data)
	internal.GoFmt(targetFile)
}

func formatSubject(name string) string {
	name = strings.Replace(name, "-", "_", -1)
	name = strings.Replace(name, " ", "_", -1)
	name = strings.Split(name, ".go")[0]
	name = strings.Split(name, "_test")[0]
	return name
}

// moduleName returns module name from go.mod from given module root directory
func moduleName(modRoot string) string {
	modFile, err := os.Open(filepath.Join(modRoot, "go.mod"))
	if err != nil {
		return ""
	}

	mod := make([]byte, 128)
	_, err = modFile.Read(mod)
	if err != nil {
		return ""
	}
||||||
|
|
||||||
|
slashSlash := []byte("//")
|
||||||
|
moduleStr := []byte("module")
|
||||||
|
|
||||||
|
for len(mod) > 0 {
|
||||||
|
line := mod
|
||||||
|
mod = nil
|
||||||
|
if i := bytes.IndexByte(line, '\n'); i >= 0 {
|
||||||
|
line, mod = line[:i], line[i+1:]
|
||||||
|
}
|
||||||
|
if i := bytes.Index(line, slashSlash); i >= 0 {
|
||||||
|
line = line[:i]
|
||||||
|
}
|
||||||
|
line = bytes.TrimSpace(line)
|
||||||
|
if !bytes.HasPrefix(line, moduleStr) {
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
line = line[len(moduleStr):]
|
||||||
|
n := len(line)
|
||||||
|
line = bytes.TrimSpace(line)
|
||||||
|
if len(line) == n || len(line) == 0 {
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
|
||||||
|
if line[0] == '"' || line[0] == '`' {
|
||||||
|
p, err := strconv.Unquote(string(line))
|
||||||
|
if err != nil {
|
||||||
|
return "" // malformed quoted string or multiline module path
|
||||||
|
}
|
||||||
|
return p
|
||||||
|
}
|
||||||
|
|
||||||
|
return string(line)
|
||||||
|
}
|
||||||
|
|
||||||
|
return "" // missing module path
|
||||||
|
}
|
||||||
|
|
||||||
|
func findModuleRoot(dir string) (root string) {
|
||||||
|
dir = filepath.Clean(dir)
|
||||||
|
|
||||||
|
// Look for enclosing go.mod.
|
||||||
|
for {
|
||||||
|
if fi, err := os.Stat(filepath.Join(dir, "go.mod")); err == nil && !fi.IsDir() {
|
||||||
|
return dir
|
||||||
|
}
|
||||||
|
d := filepath.Dir(dir)
|
||||||
|
if d == dir {
|
||||||
|
break
|
||||||
|
}
|
||||||
|
dir = d
|
||||||
|
}
|
||||||
|
return ""
|
||||||
|
}
|
||||||
|
|
||||||
|
func getPackageImportPath() string {
|
||||||
|
workingDir, err := os.Getwd()
|
||||||
|
if err != nil {
|
||||||
|
panic(err.Error())
|
||||||
|
}
|
||||||
|
|
||||||
|
sep := string(filepath.Separator)
|
||||||
|
|
||||||
|
// Try go.mod file first
|
||||||
|
modRoot := findModuleRoot(workingDir)
|
||||||
|
if modRoot != "" {
|
||||||
|
modName := moduleName(modRoot)
|
||||||
|
if modName != "" {
|
||||||
|
cd := strings.Replace(workingDir, modRoot, "", -1)
|
||||||
|
cd = strings.ReplaceAll(cd, sep, "/")
|
||||||
|
return modName + cd
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// Fallback to GOPATH structure
|
||||||
|
paths := strings.Split(workingDir, sep+"src"+sep)
|
||||||
|
if len(paths) == 1 {
|
||||||
|
fmt.Printf("\nCouldn't identify package import path.\n\n\tginkgo generate\n\nMust be run within a package directory under $GOPATH/src/...\nYou're going to have to change UNKNOWN_PACKAGE_PATH in the generated file...\n\n")
|
||||||
|
return "UNKNOWN_PACKAGE_PATH"
|
||||||
|
}
|
||||||
|
return filepath.ToSlash(paths[len(paths)-1])
|
||||||
|
}
|
||||||
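The `moduleName` function above parses the `module` directive out of a go.mod file byte slice. A minimal standalone sketch of the same line-level logic, rewritten here on strings for illustration (the function name `parseModuleLine` is our own, not part of ginkgo):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseModuleLine extracts the module path from a single go.mod line,
// mirroring moduleName: strip a trailing // comment, trim whitespace,
// require a "module" prefix followed by whitespace, and unquote the
// path when it is quoted.
func parseModuleLine(line string) string {
	if i := strings.Index(line, "//"); i >= 0 {
		line = line[:i]
	}
	line = strings.TrimSpace(line)
	rest := strings.TrimPrefix(line, "module")
	if rest == line { // no "module" prefix at all
		return ""
	}
	n := len(rest)
	rest = strings.TrimSpace(rest)
	if len(rest) == n || len(rest) == 0 { // "modulex..." or an empty path
		return ""
	}
	if rest[0] == '"' || rest[0] == '`' {
		p, err := strconv.Unquote(rest)
		if err != nil {
			return "" // malformed quoted module path
		}
		return p
	}
	return rest
}

func main() {
	fmt.Println(parseModuleLine("module example.com/foo // the module path")) // example.com/foo
	fmt.Println(parseModuleLine(`module "example.com/bar"`))                  // example.com/bar
}
```

The `len(rest) == n` check is the same trick as in the original: if trimming removed nothing, "module" was not followed by whitespace (e.g. a package named `modulex`), so the line is not a module directive.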
41
vendor/github.com/onsi/ginkgo/v2/ginkgo/generators/generate_templates.go
generated
vendored
Normal file
@@ -0,0 +1,41 @@
package generators

var specText = `package {{.Package}}

import (
	{{.GinkgoImport}}
	{{.GomegaImport}}

	{{if .ImportPackage}}"{{.PackageImportPath}}"{{end}}
)

var _ = {{.GinkgoPackage}}Describe("{{.Subject}}", func() {

})
`

var agoutiSpecText = `package {{.Package}}

import (
	{{.GinkgoImport}}
	{{.GomegaImport}}
	"github.com/sclevine/agouti"
	. "github.com/sclevine/agouti/matchers"

	{{if .ImportPackage}}"{{.PackageImportPath}}"{{end}}
)

var _ = {{.GinkgoPackage}}Describe("{{.Subject}}", func() {
	var page *agouti.Page

	{{.GinkgoPackage}}BeforeEach(func() {
		var err error
		page, err = agoutiDriver.NewPage()
		{{.GomegaPackage}}Expect(err).NotTo({{.GomegaPackage}}HaveOccurred())
	})

	{{.GinkgoPackage}}AfterEach(func() {
		{{.GomegaPackage}}Expect(page.Destroy()).To({{.GomegaPackage}}Succeed())
	})
})
`
63
vendor/github.com/onsi/ginkgo/v2/ginkgo/generators/generators_common.go
generated
vendored
Normal file
@@ -0,0 +1,63 @@
package generators

import (
	"go/build"
	"os"
	"path/filepath"
	"strconv"
	"strings"

	"github.com/onsi/ginkgo/v2/ginkgo/command"
)

type GeneratorsConfig struct {
	Agouti, NoDot, Internal bool
	CustomTemplate          string
}

func getPackageAndFormattedName() (string, string, string) {
	path, err := os.Getwd()
	command.AbortIfError("Could not get current working directory:", err)

	dirName := strings.Replace(filepath.Base(path), "-", "_", -1)
	dirName = strings.Replace(dirName, " ", "_", -1)

	pkg, err := build.ImportDir(path, 0)
	packageName := pkg.Name
	if err != nil {
		packageName = ensureLegalPackageName(dirName)
	}

	formattedName := prettifyName(filepath.Base(path))
	return packageName, dirName, formattedName
}

func ensureLegalPackageName(name string) string {
	if name == "_" {
		return "underscore"
	}
	if len(name) == 0 {
		return "empty"
	}
	n, isDigitErr := strconv.Atoi(string(name[0]))
	if isDigitErr == nil {
		return []string{"zero", "one", "two", "three", "four", "five", "six", "seven", "eight", "nine"}[n] + name[1:]
	}
	return name
}

func prettifyName(name string) string {
	name = strings.Replace(name, "-", " ", -1)
	name = strings.Replace(name, "_", " ", -1)
	name = strings.Title(name)
	name = strings.Replace(name, " ", "", -1)
	return name
}

func determinePackageName(name string, internal bool) string {
	if internal {
		return name
	}

	return name + "_test"
}
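The helpers above sanitize a directory name into a legal Go package identifier and a CamelCase spec subject. A small self-contained sketch of the same two transformations (the names `legalize` and `prettify` are ours, chosen to avoid clashing with the vendored functions):

```go
package main

import (
	"fmt"
	"strings"
	"unicode"
)

// legalize mirrors ensureLegalPackageName: a directory name that starts
// with a digit is not a legal Go package name, so the leading digit is
// spelled out; "_" and "" get placeholder names.
func legalize(name string) string {
	digits := []string{"zero", "one", "two", "three", "four", "five", "six", "seven", "eight", "nine"}
	switch {
	case name == "_":
		return "underscore"
	case len(name) == 0:
		return "empty"
	case unicode.IsDigit(rune(name[0])):
		return digits[name[0]-'0'] + name[1:]
	}
	return name
}

// prettify mirrors prettifyName: "my-cool_package" -> "MyCoolPackage".
func prettify(name string) string {
	name = strings.ReplaceAll(name, "-", " ")
	name = strings.ReplaceAll(name, "_", " ")
	name = strings.Title(name) // title-case each word, as the original does
	return strings.ReplaceAll(name, " ", "")
}

func main() {
	fmt.Println(legalize("3dmodels"))        // threedmodels
	fmt.Println(prettify("my-cool_package")) // MyCoolPackage
}
```

Note the original uses `strconv.Atoi` on the first byte to detect a digit; `unicode.IsDigit` is an equivalent check for ASCII digits.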
152
vendor/github.com/onsi/ginkgo/v2/ginkgo/internal/compile.go
generated
vendored
Normal file
@@ -0,0 +1,152 @@
package internal

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
	"sync"

	"github.com/onsi/ginkgo/v2/types"
)

func CompileSuite(suite TestSuite, goFlagsConfig types.GoFlagsConfig) TestSuite {
	if suite.PathToCompiledTest != "" {
		return suite
	}

	suite.CompilationError = nil

	path, err := filepath.Abs(filepath.Join(suite.Path, suite.PackageName+".test"))
	if err != nil {
		suite.State = TestSuiteStateFailedToCompile
		suite.CompilationError = fmt.Errorf("Failed to compute compilation target path:\n%s", err.Error())
		return suite
	}

	args, err := types.GenerateGoTestCompileArgs(goFlagsConfig, path, "./")
	if err != nil {
		suite.State = TestSuiteStateFailedToCompile
		suite.CompilationError = fmt.Errorf("Failed to generate go test compile flags:\n%s", err.Error())
		return suite
	}

	cmd := exec.Command("go", args...)
	cmd.Dir = suite.Path
	output, err := cmd.CombinedOutput()
	if err != nil {
		if len(output) > 0 {
			suite.State = TestSuiteStateFailedToCompile
			suite.CompilationError = fmt.Errorf("Failed to compile %s:\n\n%s", suite.PackageName, output)
		} else {
			suite.State = TestSuiteStateFailedToCompile
			suite.CompilationError = fmt.Errorf("Failed to compile %s\n%s", suite.PackageName, err.Error())
		}
		return suite
	}

	if strings.Contains(string(output), "[no test files]") {
		suite.State = TestSuiteStateSkippedDueToEmptyCompilation
		return suite
	}

	if len(output) > 0 {
		fmt.Println(string(output))
	}

	if !FileExists(path) {
		suite.State = TestSuiteStateFailedToCompile
		suite.CompilationError = fmt.Errorf("Failed to compile %s:\nOutput file %s could not be found", suite.PackageName, path)
		return suite
	}

	suite.State = TestSuiteStateCompiled
	suite.PathToCompiledTest = path
	return suite
}

func Cleanup(goFlagsConfig types.GoFlagsConfig, suites ...TestSuite) {
	if goFlagsConfig.BinaryMustBePreserved() {
		return
	}
	for _, suite := range suites {
		if !suite.Precompiled {
			os.Remove(suite.PathToCompiledTest)
		}
	}
}

type parallelSuiteBundle struct {
	suite    TestSuite
	compiled chan TestSuite
}

type OrderedParallelCompiler struct {
	mutex        *sync.Mutex
	stopped      bool
	numCompilers int

	idx                int
	numSuites          int
	completionChannels []chan TestSuite
}

func NewOrderedParallelCompiler(numCompilers int) *OrderedParallelCompiler {
	return &OrderedParallelCompiler{
		mutex:        &sync.Mutex{},
		numCompilers: numCompilers,
	}
}

func (opc *OrderedParallelCompiler) StartCompiling(suites TestSuites, goFlagsConfig types.GoFlagsConfig) {
	opc.stopped = false
	opc.idx = 0
	opc.numSuites = len(suites)
	opc.completionChannels = make([]chan TestSuite, opc.numSuites)

	toCompile := make(chan parallelSuiteBundle, opc.numCompilers)
	for compiler := 0; compiler < opc.numCompilers; compiler++ {
		go func() {
			for bundle := range toCompile {
				c, suite := bundle.compiled, bundle.suite
				opc.mutex.Lock()
				stopped := opc.stopped
				opc.mutex.Unlock()
				if !stopped {
					suite = CompileSuite(suite, goFlagsConfig)
				}
				c <- suite
			}
		}()
	}

	for idx, suite := range suites {
		opc.completionChannels[idx] = make(chan TestSuite, 1)
		toCompile <- parallelSuiteBundle{suite, opc.completionChannels[idx]}
		if idx == 0 { // compile the first suite serially
			suite = <-opc.completionChannels[0]
			opc.completionChannels[0] <- suite
		}
	}

	close(toCompile)
}

func (opc *OrderedParallelCompiler) Next() (int, TestSuite) {
	if opc.idx >= opc.numSuites {
		return opc.numSuites, TestSuite{}
	}

	idx := opc.idx
	suite := <-opc.completionChannels[idx]
	opc.idx = opc.idx + 1

	return idx, suite
}

func (opc *OrderedParallelCompiler) StopAndDrain() {
	opc.mutex.Lock()
	opc.stopped = true
	opc.mutex.Unlock()
}
237
vendor/github.com/onsi/ginkgo/v2/ginkgo/internal/profiles_and_reports.go
generated
vendored
Normal file
@@ -0,0 +1,237 @@
package internal

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"regexp"
	"strconv"

	"github.com/google/pprof/profile"
	"github.com/onsi/ginkgo/v2/reporters"
	"github.com/onsi/ginkgo/v2/types"
)

func AbsPathForGeneratedAsset(assetName string, suite TestSuite, cliConfig types.CLIConfig, process int) string {
	suffix := ""
	if process != 0 {
		suffix = fmt.Sprintf(".%d", process)
	}
	if cliConfig.OutputDir == "" {
		return filepath.Join(suite.AbsPath(), assetName+suffix)
	}
	outputDir, _ := filepath.Abs(cliConfig.OutputDir)
	return filepath.Join(outputDir, suite.NamespacedName()+"_"+assetName+suffix)
}

func FinalizeProfilesAndReportsForSuites(suites TestSuites, cliConfig types.CLIConfig, suiteConfig types.SuiteConfig, reporterConfig types.ReporterConfig, goFlagsConfig types.GoFlagsConfig) ([]string, error) {
	messages := []string{}
	suitesWithProfiles := suites.WithState(TestSuiteStatePassed, TestSuiteStateFailed) // anything else won't have actually run and generated a profile

	// merge cover profiles if need be
	if goFlagsConfig.Cover && !cliConfig.KeepSeparateCoverprofiles {
		coverProfiles := []string{}
		for _, suite := range suitesWithProfiles {
			if !suite.HasProgrammaticFocus {
				coverProfiles = append(coverProfiles, AbsPathForGeneratedAsset(goFlagsConfig.CoverProfile, suite, cliConfig, 0))
			}
		}

		if len(coverProfiles) > 0 {
			dst := goFlagsConfig.CoverProfile
			if cliConfig.OutputDir != "" {
				dst = filepath.Join(cliConfig.OutputDir, goFlagsConfig.CoverProfile)
			}
			err := MergeAndCleanupCoverProfiles(coverProfiles, dst)
			if err != nil {
				return messages, err
			}
			coverage, err := GetCoverageFromCoverProfile(dst)
			if err != nil {
				return messages, err
			}
			if coverage == 0 {
				messages = append(messages, "composite coverage: [no statements]")
			} else if suitesWithProfiles.AnyHaveProgrammaticFocus() {
				messages = append(messages, fmt.Sprintf("composite coverage: %.1f%% of statements however some suites did not contribute because they included programmatically focused specs", coverage))
			} else {
				messages = append(messages, fmt.Sprintf("composite coverage: %.1f%% of statements", coverage))
			}
		} else {
			messages = append(messages, "no composite coverage computed: all suites included programmatically focused specs")
		}
	}

	// copy binaries if need be
	for _, suite := range suitesWithProfiles {
		if goFlagsConfig.BinaryMustBePreserved() && cliConfig.OutputDir != "" {
			src := suite.PathToCompiledTest
			dst := filepath.Join(cliConfig.OutputDir, suite.NamespacedName()+".test")
			if suite.Precompiled {
				if err := CopyFile(src, dst); err != nil {
					return messages, err
				}
			} else {
				if err := os.Rename(src, dst); err != nil {
					return messages, err
				}
			}
		}
	}

	type reportFormat struct {
		ReportName   string
		GenerateFunc func(types.Report, string) error
		MergeFunc    func([]string, string) ([]string, error)
	}
	reportFormats := []reportFormat{}
	if reporterConfig.JSONReport != "" {
		reportFormats = append(reportFormats, reportFormat{ReportName: reporterConfig.JSONReport, GenerateFunc: reporters.GenerateJSONReport, MergeFunc: reporters.MergeAndCleanupJSONReports})
	}
	if reporterConfig.JUnitReport != "" {
		reportFormats = append(reportFormats, reportFormat{ReportName: reporterConfig.JUnitReport, GenerateFunc: reporters.GenerateJUnitReport, MergeFunc: reporters.MergeAndCleanupJUnitReports})
	}
	if reporterConfig.TeamcityReport != "" {
		reportFormats = append(reportFormats, reportFormat{ReportName: reporterConfig.TeamcityReport, GenerateFunc: reporters.GenerateTeamcityReport, MergeFunc: reporters.MergeAndCleanupTeamcityReports})
	}

	// Generate reports for suites that failed to run
	reportableSuites := suites.ThatAreGinkgoSuites()
	for _, suite := range reportableSuites.WithState(TestSuiteStateFailedToCompile, TestSuiteStateFailedDueToTimeout, TestSuiteStateSkippedDueToPriorFailures, TestSuiteStateSkippedDueToEmptyCompilation) {
		report := types.Report{
			SuitePath:      suite.AbsPath(),
			SuiteConfig:    suiteConfig,
			SuiteSucceeded: false,
		}
		switch suite.State {
		case TestSuiteStateFailedToCompile:
			report.SpecialSuiteFailureReasons = append(report.SpecialSuiteFailureReasons, suite.CompilationError.Error())
		case TestSuiteStateFailedDueToTimeout:
			report.SpecialSuiteFailureReasons = append(report.SpecialSuiteFailureReasons, TIMEOUT_ELAPSED_FAILURE_REASON)
		case TestSuiteStateSkippedDueToPriorFailures:
			report.SpecialSuiteFailureReasons = append(report.SpecialSuiteFailureReasons, PRIOR_FAILURES_FAILURE_REASON)
		case TestSuiteStateSkippedDueToEmptyCompilation:
			report.SpecialSuiteFailureReasons = append(report.SpecialSuiteFailureReasons, EMPTY_SKIP_FAILURE_REASON)
			report.SuiteSucceeded = true
		}

		for _, format := range reportFormats {
			format.GenerateFunc(report, AbsPathForGeneratedAsset(format.ReportName, suite, cliConfig, 0))
		}
	}

	// Merge reports unless we've been asked to keep them separate
	if !cliConfig.KeepSeparateReports {
		for _, format := range reportFormats {
			reports := []string{}
			for _, suite := range reportableSuites {
				reports = append(reports, AbsPathForGeneratedAsset(format.ReportName, suite, cliConfig, 0))
			}
			dst := format.ReportName
			if cliConfig.OutputDir != "" {
				dst = filepath.Join(cliConfig.OutputDir, format.ReportName)
			}
			mergeMessages, err := format.MergeFunc(reports, dst)
			messages = append(messages, mergeMessages...)
			if err != nil {
				return messages, err
			}
		}
	}

	return messages, nil
}

// MergeAndCleanupCoverProfiles loads each cover profile, combines them, deletes the originals, and writes the result to destination
func MergeAndCleanupCoverProfiles(profiles []string, destination string) error {
	combined := &bytes.Buffer{}
	modeRegex := regexp.MustCompile(`^mode: .*\n`)
	for i, profile := range profiles {
		contents, err := os.ReadFile(profile)
		if err != nil {
			return fmt.Errorf("Unable to read coverage file %s:\n%s", profile, err.Error())
		}
		os.Remove(profile)

		// remove the cover mode line from every file except the first one
		if i > 0 {
			contents = modeRegex.ReplaceAll(contents, []byte{})
		}

		_, err = combined.Write(contents)

		// Add a newline to the end of every file if missing.
		if err == nil && len(contents) > 0 && contents[len(contents)-1] != '\n' {
			_, err = combined.Write([]byte("\n"))
		}

		if err != nil {
			return fmt.Errorf("Unable to append to coverprofile:\n%s", err.Error())
		}
	}

	err := os.WriteFile(destination, combined.Bytes(), 0666)
	if err != nil {
		return fmt.Errorf("Unable to create combined cover profile:\n%s", err.Error())
	}
	return nil
}

func GetCoverageFromCoverProfile(profile string) (float64, error) {
	cmd := exec.Command("go", "tool", "cover", "-func", profile)
	output, err := cmd.CombinedOutput()
	if err != nil {
		return 0, fmt.Errorf("Could not process Coverprofile %s: %s", profile, err.Error())
	}
	re := regexp.MustCompile(`total:\s*\(statements\)\s*(\d*\.\d*)\%`)
	matches := re.FindStringSubmatch(string(output))
	if matches == nil {
		return 0, fmt.Errorf("Could not parse Coverprofile to compute coverage percentage")
	}
	coverageString := matches[1]
	coverage, err := strconv.ParseFloat(coverageString, 64)
	if err != nil {
		return 0, fmt.Errorf("Could not parse Coverprofile to compute coverage percentage: %s", err.Error())
	}

	return coverage, nil
}

func MergeProfiles(profilePaths []string, destination string) error {
	profiles := []*profile.Profile{}
	for _, profilePath := range profilePaths {
		proFile, err := os.Open(profilePath)
		if err != nil {
			return fmt.Errorf("Could not open profile: %s\n%s", profilePath, err.Error())
		}
		prof, err := profile.Parse(proFile)
		if err != nil {
			return fmt.Errorf("Could not parse profile: %s\n%s", profilePath, err.Error())
		}
		profiles = append(profiles, prof)
		os.Remove(profilePath)
	}

	mergedProfile, err := profile.Merge(profiles)
	if err != nil {
		return fmt.Errorf("Could not merge profiles:\n%s", err.Error())
	}

	outFile, err := os.Create(destination)
	if err != nil {
		return fmt.Errorf("Could not create merged profile %s:\n%s", destination, err.Error())
	}
	err = mergedProfile.Write(outFile)
	if err != nil {
		return fmt.Errorf("Could not write merged profile %s:\n%s", destination, err.Error())
	}
	err = outFile.Close()
	if err != nil {
		return fmt.Errorf("Could not close merged profile %s:\n%s", destination, err.Error())
	}

	return nil
}
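`MergeAndCleanupCoverProfiles` above relies on the shape of Go cover profiles: each file starts with a single `mode:` line, and a concatenation is only valid if every `mode:` line after the first is dropped. A minimal in-memory sketch of that merge logic (the function name `mergeCoverProfiles` is ours; the real function also reads, deletes, and writes files):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// mergeCoverProfiles concatenates cover-profile contents, stripping the
// leading "mode:" line from every profile after the first and ensuring
// each chunk ends with a newline, as the vendored implementation does.
func mergeCoverProfiles(profiles []string) string {
	modeRegex := regexp.MustCompile(`^mode: .*\n`) // anchored: only the leading mode line
	combined := &strings.Builder{}
	for i, contents := range profiles {
		if i > 0 {
			contents = modeRegex.ReplaceAllString(contents, "")
		}
		combined.WriteString(contents)
		if len(contents) > 0 && !strings.HasSuffix(contents, "\n") {
			combined.WriteString("\n")
		}
	}
	return combined.String()
}

func main() {
	a := "mode: atomic\npkg/a.go:1.1,2.2 1 1\n"
	b := "mode: atomic\npkg/b.go:1.1,2.2 1 0" // missing trailing newline on purpose
	fmt.Print(mergeCoverProfiles([]string{a, b}))
}
```

Because the regexp is not in multiline mode, `^` matches only the start of the string, so `mode:` lines that happen to appear later (which would be malformed input anyway) are left alone, matching the original's behavior.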
348
vendor/github.com/onsi/ginkgo/v2/ginkgo/internal/run.go
generated
vendored
Normal file
@@ -0,0 +1,348 @@
package internal

import (
	"bytes"
	"fmt"
	"io"
	"os"
	"os/exec"
	"regexp"
	"strings"
	"syscall"
	"time"

	"github.com/onsi/ginkgo/v2/formatter"
	"github.com/onsi/ginkgo/v2/ginkgo/command"
	"github.com/onsi/ginkgo/v2/internal/parallel_support"
	"github.com/onsi/ginkgo/v2/reporters"
	"github.com/onsi/ginkgo/v2/types"
)

func RunCompiledSuite(suite TestSuite, ginkgoConfig types.SuiteConfig, reporterConfig types.ReporterConfig, cliConfig types.CLIConfig, goFlagsConfig types.GoFlagsConfig, additionalArgs []string) TestSuite {
	suite.State = TestSuiteStateFailed
	suite.HasProgrammaticFocus = false

	if suite.PathToCompiledTest == "" {
		return suite
	}

	if suite.IsGinkgo && cliConfig.ComputedProcs() > 1 {
		suite = runParallel(suite, ginkgoConfig, reporterConfig, cliConfig, goFlagsConfig, additionalArgs)
	} else if suite.IsGinkgo {
		suite = runSerial(suite, ginkgoConfig, reporterConfig, cliConfig, goFlagsConfig, additionalArgs)
	} else {
		suite = runGoTest(suite, cliConfig, goFlagsConfig)
	}
	runAfterRunHook(cliConfig.AfterRunHook, reporterConfig.NoColor, suite)
	return suite
}

func buildAndStartCommand(suite TestSuite, args []string, pipeToStdout bool) (*exec.Cmd, *bytes.Buffer) {
	buf := &bytes.Buffer{}
	cmd := exec.Command(suite.PathToCompiledTest, args...)
	cmd.Dir = suite.Path
	if pipeToStdout {
		cmd.Stderr = io.MultiWriter(os.Stdout, buf)
		cmd.Stdout = os.Stdout
	} else {
		cmd.Stderr = buf
		cmd.Stdout = buf
	}
	err := cmd.Start()
	command.AbortIfError("Failed to start test suite", err)

	return cmd, buf
}

func checkForNoTestsWarning(buf *bytes.Buffer) bool {
	if strings.Contains(buf.String(), "warning: no tests to run") {
		fmt.Fprintf(os.Stderr, `Found no test suites, did you forget to run "ginkgo bootstrap"?`)
		return true
	}
	return false
}

func runGoTest(suite TestSuite, cliConfig types.CLIConfig, goFlagsConfig types.GoFlagsConfig) TestSuite {
	args, err := types.GenerateGoTestRunArgs(goFlagsConfig)
	command.AbortIfError("Failed to generate test run arguments", err)
	cmd, buf := buildAndStartCommand(suite, args, true)

	cmd.Wait()

	exitStatus := cmd.ProcessState.Sys().(syscall.WaitStatus).ExitStatus()
	passed := (exitStatus == 0) || (exitStatus == types.GINKGO_FOCUS_EXIT_CODE)
	passed = !(checkForNoTestsWarning(buf) && cliConfig.RequireSuite) && passed
	if passed {
		suite.State = TestSuiteStatePassed
	} else {
		suite.State = TestSuiteStateFailed
	}

	return suite
}

func runSerial(suite TestSuite, ginkgoConfig types.SuiteConfig, reporterConfig types.ReporterConfig, cliConfig types.CLIConfig, goFlagsConfig types.GoFlagsConfig, additionalArgs []string) TestSuite {
	if goFlagsConfig.Cover {
		goFlagsConfig.CoverProfile = AbsPathForGeneratedAsset(goFlagsConfig.CoverProfile, suite, cliConfig, 0)
	}
	if goFlagsConfig.BlockProfile != "" {
		goFlagsConfig.BlockProfile = AbsPathForGeneratedAsset(goFlagsConfig.BlockProfile, suite, cliConfig, 0)
	}
	if goFlagsConfig.CPUProfile != "" {
		goFlagsConfig.CPUProfile = AbsPathForGeneratedAsset(goFlagsConfig.CPUProfile, suite, cliConfig, 0)
	}
	if goFlagsConfig.MemProfile != "" {
		goFlagsConfig.MemProfile = AbsPathForGeneratedAsset(goFlagsConfig.MemProfile, suite, cliConfig, 0)
	}
	if goFlagsConfig.MutexProfile != "" {
		goFlagsConfig.MutexProfile = AbsPathForGeneratedAsset(goFlagsConfig.MutexProfile, suite, cliConfig, 0)
	}
	if reporterConfig.JSONReport != "" {
		reporterConfig.JSONReport = AbsPathForGeneratedAsset(reporterConfig.JSONReport, suite, cliConfig, 0)
	}
	if reporterConfig.JUnitReport != "" {
		reporterConfig.JUnitReport = AbsPathForGeneratedAsset(reporterConfig.JUnitReport, suite, cliConfig, 0)
	}
	if reporterConfig.TeamcityReport != "" {
		reporterConfig.TeamcityReport = AbsPathForGeneratedAsset(reporterConfig.TeamcityReport, suite, cliConfig, 0)
	}

	args, err := types.GenerateGinkgoTestRunArgs(ginkgoConfig, reporterConfig, goFlagsConfig)
	command.AbortIfError("Failed to generate test run arguments", err)
	args = append([]string{"--test.timeout=0"}, args...)
	args = append(args, additionalArgs...)

	cmd, buf := buildAndStartCommand(suite, args, true)

	cmd.Wait()

	exitStatus := cmd.ProcessState.Sys().(syscall.WaitStatus).ExitStatus()
	suite.HasProgrammaticFocus = (exitStatus == types.GINKGO_FOCUS_EXIT_CODE)
	passed := (exitStatus == 0) || (exitStatus == types.GINKGO_FOCUS_EXIT_CODE)
	passed = !(checkForNoTestsWarning(buf) && cliConfig.RequireSuite) && passed
	if passed {
		suite.State = TestSuiteStatePassed
	} else {
		suite.State = TestSuiteStateFailed
	}

	if suite.HasProgrammaticFocus {
		if goFlagsConfig.Cover {
			fmt.Fprintln(os.Stdout, "coverage: no coverfile was generated because specs are programmatically focused")
		}
		if goFlagsConfig.BlockProfile != "" {
			fmt.Fprintln(os.Stdout, "no block profile was generated because specs are programmatically focused")
		}
		if goFlagsConfig.CPUProfile != "" {
			fmt.Fprintln(os.Stdout, "no cpu profile was generated because specs are programmatically focused")
		}
		if goFlagsConfig.MemProfile != "" {
			fmt.Fprintln(os.Stdout, "no mem profile was generated because specs are programmatically focused")
		}
		if goFlagsConfig.MutexProfile != "" {
			fmt.Fprintln(os.Stdout, "no mutex profile was generated because specs are programmatically focused")
		}
	}

	return suite
}

func runParallel(suite TestSuite, ginkgoConfig types.SuiteConfig, reporterConfig types.ReporterConfig, cliConfig types.CLIConfig, goFlagsConfig types.GoFlagsConfig, additionalArgs []string) TestSuite {
	type procResult struct {
		passed               bool
		hasProgrammaticFocus bool
	}

	numProcs := cliConfig.ComputedProcs()
	procOutput := make([]*bytes.Buffer, numProcs)
	coverProfiles := []string{}

	blockProfiles := []string{}
	cpuProfiles := []string{}
	memProfiles := []string{}
	mutexProfiles := []string{}

	procResults := make(chan procResult)

	server, err := parallel_support.NewServer(numProcs, reporters.NewDefaultReporter(reporterConfig, formatter.ColorableStdOut))
	command.AbortIfError("Failed to start parallel spec server", err)
	server.Start()
	defer server.Close()

	if reporterConfig.JSONReport != "" {
		reporterConfig.JSONReport = AbsPathForGeneratedAsset(reporterConfig.JSONReport, suite, cliConfig, 0)
	}
	if reporterConfig.JUnitReport != "" {
		reporterConfig.JUnitReport = AbsPathForGeneratedAsset(reporterConfig.JUnitReport, suite, cliConfig, 0)
	}
	if reporterConfig.TeamcityReport != "" {
		reporterConfig.TeamcityReport = AbsPathForGeneratedAsset(reporterConfig.TeamcityReport, suite, cliConfig, 0)
	}

	for proc := 1; proc <= numProcs; proc++ {
		procGinkgoConfig := ginkgoConfig
		procGinkgoConfig.ParallelProcess, procGinkgoConfig.ParallelTotal, procGinkgoConfig.ParallelHost = proc, numProcs, server.Address()

		procGoFlagsConfig := goFlagsConfig
		if goFlagsConfig.Cover {
			procGoFlagsConfig.CoverProfile = AbsPathForGeneratedAsset(goFlagsConfig.CoverProfile, suite, cliConfig, proc)
			coverProfiles = append(coverProfiles, procGoFlagsConfig.CoverProfile)
		}
		if goFlagsConfig.BlockProfile != "" {
			procGoFlagsConfig.BlockProfile = AbsPathForGeneratedAsset(goFlagsConfig.BlockProfile, suite, cliConfig, proc)
			blockProfiles = append(blockProfiles, procGoFlagsConfig.BlockProfile)
		}
		if goFlagsConfig.CPUProfile != "" {
|
||||||
|
procGoFlagsConfig.CPUProfile = AbsPathForGeneratedAsset(goFlagsConfig.CPUProfile, suite, cliConfig, proc)
|
||||||
|
cpuProfiles = append(cpuProfiles, procGoFlagsConfig.CPUProfile)
|
||||||
|
}
|
||||||
|
if goFlagsConfig.MemProfile != "" {
|
||||||
|
procGoFlagsConfig.MemProfile = AbsPathForGeneratedAsset(goFlagsConfig.MemProfile, suite, cliConfig, proc)
|
||||||
|
memProfiles = append(memProfiles, procGoFlagsConfig.MemProfile)
|
||||||
|
}
|
||||||
|
if goFlagsConfig.MutexProfile != "" {
|
||||||
|
procGoFlagsConfig.MutexProfile = AbsPathForGeneratedAsset(goFlagsConfig.MutexProfile, suite, cliConfig, proc)
|
||||||
|
mutexProfiles = append(mutexProfiles, procGoFlagsConfig.MutexProfile)
|
||||||
|
}
|
||||||
|
|
||||||
|
args, err := types.GenerateGinkgoTestRunArgs(procGinkgoConfig, reporterConfig, procGoFlagsConfig)
|
||||||
|
command.AbortIfError("Failed to generate test run arguments", err)
|
||||||
|
args = append([]string{"--test.timeout=0"}, args...)
|
||||||
|
args = append(args, additionalArgs...)
|
||||||
|
|
||||||
|
cmd, buf := buildAndStartCommand(suite, args, false)
|
||||||
|
procOutput[proc-1] = buf
|
||||||
|
server.RegisterAlive(proc, func() bool { return cmd.ProcessState == nil || !cmd.ProcessState.Exited() })
|
||||||
|
|
||||||
|
go func() {
|
||||||
|
cmd.Wait()
|
||||||
|
exitStatus := cmd.ProcessState.Sys().(syscall.WaitStatus).ExitStatus()
|
||||||
|
procResults <- procResult{
|
||||||
|
passed: (exitStatus == 0) || (exitStatus == types.GINKGO_FOCUS_EXIT_CODE),
|
||||||
|
hasProgrammaticFocus: exitStatus == types.GINKGO_FOCUS_EXIT_CODE,
|
||||||
|
}
|
||||||
|
}()
|
||||||
|
}
|
||||||
|
|
||||||
|
passed := true
|
||||||
|
for proc := 1; proc <= cliConfig.ComputedProcs(); proc++ {
|
||||||
|
result := <-procResults
|
||||||
|
passed = passed && result.passed
|
||||||
|
suite.HasProgrammaticFocus = suite.HasProgrammaticFocus || result.hasProgrammaticFocus
|
||||||
|
}
|
||||||
|
if passed {
|
||||||
|
suite.State = TestSuiteStatePassed
|
||||||
|
} else {
|
||||||
|
suite.State = TestSuiteStateFailed
|
||||||
|
}
|
||||||
|
|
||||||
|
select {
|
||||||
|
case <-server.GetSuiteDone():
|
||||||
|
fmt.Println("")
|
||||||
|
case <-time.After(time.Second):
|
||||||
|
//one of the nodes never finished reporting to the server. Something must have gone wrong.
|
||||||
|
fmt.Fprint(formatter.ColorableStdErr, formatter.F("\n{{bold}}{{red}}Ginkgo timed out waiting for all parallel procs to report back{{/}}\n"))
|
||||||
|
fmt.Fprint(formatter.ColorableStdErr, formatter.F("{{gray}}Test suite:{{/}} %s (%s)\n\n", suite.PackageName, suite.Path))
|
||||||
|
fmt.Fprint(formatter.ColorableStdErr, formatter.Fiw(0, formatter.COLS, "This occurs if a parallel process exits before it reports its results to the Ginkgo CLI. The CLI will now print out all the stdout/stderr output it's collected from the running processes. However you may not see anything useful in these logs because the individual test processes usually intercept output to stdout/stderr in order to capture it in the spec reports.\n\nYou may want to try rerunning your test suite with {{light-gray}}--output-interceptor-mode=none{{/}} to see additional output here and debug your suite.\n"))
|
||||||
|
fmt.Fprintln(formatter.ColorableStdErr, " ")
|
||||||
|
for proc := 1; proc <= cliConfig.ComputedProcs(); proc++ {
|
||||||
|
fmt.Fprintf(formatter.ColorableStdErr, formatter.F("{{bold}}Output from proc %d:{{/}}\n", proc))
|
||||||
|
fmt.Fprintln(os.Stderr, formatter.Fi(1, "%s", procOutput[proc-1].String()))
|
||||||
|
}
|
||||||
|
fmt.Fprintf(os.Stderr, "** End **")
|
||||||
|
}
|
||||||
|
|
||||||
|
for proc := 1; proc <= cliConfig.ComputedProcs(); proc++ {
|
||||||
|
output := procOutput[proc-1].String()
|
||||||
|
if proc == 1 && checkForNoTestsWarning(procOutput[0]) && cliConfig.RequireSuite {
|
||||||
|
suite.State = TestSuiteStateFailed
|
||||||
|
}
|
||||||
|
if strings.Contains(output, "deprecated Ginkgo functionality") {
|
||||||
|
fmt.Fprintln(os.Stderr, output)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
if len(coverProfiles) > 0 {
|
||||||
|
if suite.HasProgrammaticFocus {
|
||||||
|
fmt.Fprintln(os.Stdout, "coverage: no coverfile was generated because specs are programmatically focused")
|
||||||
|
} else {
|
||||||
|
coverProfile := AbsPathForGeneratedAsset(goFlagsConfig.CoverProfile, suite, cliConfig, 0)
|
||||||
|
err := MergeAndCleanupCoverProfiles(coverProfiles, coverProfile)
|
||||||
|
command.AbortIfError("Failed to combine cover profiles", err)
|
||||||
|
|
||||||
|
coverage, err := GetCoverageFromCoverProfile(coverProfile)
|
||||||
|
command.AbortIfError("Failed to compute coverage", err)
|
||||||
|
if coverage == 0 {
|
||||||
|
fmt.Fprintln(os.Stdout, "coverage: [no statements]")
|
||||||
|
} else {
|
||||||
|
fmt.Fprintf(os.Stdout, "coverage: %.1f%% of statements\n", coverage)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
if len(blockProfiles) > 0 {
|
||||||
|
if suite.HasProgrammaticFocus {
|
||||||
|
fmt.Fprintln(os.Stdout, "no block profile was generated because specs are programmatically focused")
|
||||||
|
} else {
|
||||||
|
blockProfile := AbsPathForGeneratedAsset(goFlagsConfig.BlockProfile, suite, cliConfig, 0)
|
||||||
|
err := MergeProfiles(blockProfiles, blockProfile)
|
||||||
|
command.AbortIfError("Failed to combine blockprofiles", err)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
if len(cpuProfiles) > 0 {
|
||||||
|
if suite.HasProgrammaticFocus {
|
||||||
|
fmt.Fprintln(os.Stdout, "no cpu profile was generated because specs are programmatically focused")
|
||||||
|
} else {
|
||||||
|
cpuProfile := AbsPathForGeneratedAsset(goFlagsConfig.CPUProfile, suite, cliConfig, 0)
|
||||||
|
err := MergeProfiles(cpuProfiles, cpuProfile)
|
||||||
|
command.AbortIfError("Failed to combine cpuprofiles", err)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
if len(memProfiles) > 0 {
|
||||||
|
if suite.HasProgrammaticFocus {
|
||||||
|
fmt.Fprintln(os.Stdout, "no mem profile was generated because specs are programmatically focused")
|
||||||
|
} else {
|
||||||
|
memProfile := AbsPathForGeneratedAsset(goFlagsConfig.MemProfile, suite, cliConfig, 0)
|
||||||
|
err := MergeProfiles(memProfiles, memProfile)
|
||||||
|
command.AbortIfError("Failed to combine memprofiles", err)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
if len(mutexProfiles) > 0 {
|
||||||
|
if suite.HasProgrammaticFocus {
|
||||||
|
fmt.Fprintln(os.Stdout, "no mutex profile was generated because specs are programmatically focused")
|
||||||
|
} else {
|
||||||
|
mutexProfile := AbsPathForGeneratedAsset(goFlagsConfig.MutexProfile, suite, cliConfig, 0)
|
||||||
|
err := MergeProfiles(mutexProfiles, mutexProfile)
|
||||||
|
command.AbortIfError("Failed to combine mutexprofiles", err)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
return suite
|
||||||
|
}
|
||||||
|
|
||||||
|
func runAfterRunHook(command string, noColor bool, suite TestSuite) {
|
||||||
|
if command == "" {
|
||||||
|
return
|
||||||
|
}
|
||||||
|
f := formatter.NewWithNoColorBool(noColor)
|
||||||
|
|
||||||
|
// Allow for string replacement to pass input to the command
|
||||||
|
passed := "[FAIL]"
|
||||||
|
if suite.State.Is(TestSuiteStatePassed) {
|
||||||
|
passed = "[PASS]"
|
||||||
|
}
|
||||||
|
command = strings.Replace(command, "(ginkgo-suite-passed)", passed, -1)
|
||||||
|
command = strings.Replace(command, "(ginkgo-suite-name)", suite.PackageName, -1)
|
||||||
|
|
||||||
|
// Must break command into parts
|
||||||
|
splitArgs := regexp.MustCompile(`'.+'|".+"|\S+`)
|
||||||
|
parts := splitArgs.FindAllString(command, -1)
|
||||||
|
|
||||||
|
output, err := exec.Command(parts[0], parts[1:]...).CombinedOutput()
|
||||||
|
if err != nil {
|
||||||
|
fmt.Fprintln(formatter.ColorableStdOut, f.Fi(0, "{{red}}{{bold}}After-run-hook failed:{{/}}"))
|
||||||
|
fmt.Fprintln(formatter.ColorableStdOut, f.Fi(1, "{{red}}%s{{/}}", output))
|
||||||
|
} else {
|
||||||
|
fmt.Fprintln(formatter.ColorableStdOut, f.Fi(0, "{{green}}{{bold}}After-run-hook succeeded:{{/}}"))
|
||||||
|
fmt.Fprintln(formatter.ColorableStdOut, f.Fi(1, "{{green}}%s{{/}}", output))
|
||||||
|
}
|
||||||
|
}
|
||||||
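The after-run-hook above substitutes `(ginkgo-suite-passed)` and `(ginkgo-suite-name)` placeholders and then tokenizes the resulting command line with the regexp `'.+'|".+"|\S+`, which keeps quoted arguments together. A minimal standalone sketch of that substitution and split (the `splitHookCommand` helper name and the sample command are illustrative, not part of Ginkgo):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// splitHookCommand mirrors the placeholder substitution and tokenization in
// runAfterRunHook: replace the placeholders, then split on the same regexp so
// that quoted arguments survive as single tokens.
func splitHookCommand(command, passed, suiteName string) []string {
	command = strings.Replace(command, "(ginkgo-suite-passed)", passed, -1)
	command = strings.Replace(command, "(ginkgo-suite-name)", suiteName, -1)
	splitArgs := regexp.MustCompile(`'.+'|".+"|\S+`)
	return splitArgs.FindAllString(command, -1)
}

func main() {
	parts := splitHookCommand(`notify "suite (ginkgo-suite-name): (ginkgo-suite-passed)"`, "[PASS]", "books")
	// The quoted argument stays intact as parts[1].
	fmt.Println(parts)
}
```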
283
vendor/github.com/onsi/ginkgo/v2/ginkgo/internal/test_suite.go
generated
vendored
Normal file
@@ -0,0 +1,283 @@
package internal

import (
	"errors"
	"math/rand"
	"os"
	"path"
	"path/filepath"
	"regexp"
	"strings"

	"github.com/onsi/ginkgo/v2/types"
)

const TIMEOUT_ELAPSED_FAILURE_REASON = "Suite did not run because the timeout elapsed"
const PRIOR_FAILURES_FAILURE_REASON = "Suite did not run because prior suites failed and --keep-going is not set"
const EMPTY_SKIP_FAILURE_REASON = "Suite did not run go test reported that no test files were found"

type TestSuiteState uint

const (
	TestSuiteStateInvalid TestSuiteState = iota

	TestSuiteStateUncompiled
	TestSuiteStateCompiled

	TestSuiteStatePassed

	TestSuiteStateSkippedDueToEmptyCompilation
	TestSuiteStateSkippedByFilter
	TestSuiteStateSkippedDueToPriorFailures

	TestSuiteStateFailed
	TestSuiteStateFailedDueToTimeout
	TestSuiteStateFailedToCompile
)

var TestSuiteStateFailureStates = []TestSuiteState{TestSuiteStateFailed, TestSuiteStateFailedDueToTimeout, TestSuiteStateFailedToCompile}

func (state TestSuiteState) Is(states ...TestSuiteState) bool {
	for _, suiteState := range states {
		if suiteState == state {
			return true
		}
	}

	return false
}

type TestSuite struct {
	Path        string
	PackageName string
	IsGinkgo    bool

	Precompiled        bool
	PathToCompiledTest string
	CompilationError   error

	HasProgrammaticFocus bool
	State                TestSuiteState
}

func (ts TestSuite) AbsPath() string {
	path, _ := filepath.Abs(ts.Path)
	return path
}

func (ts TestSuite) NamespacedName() string {
	name := relPath(ts.Path)
	name = strings.TrimLeft(name, "."+string(filepath.Separator))
	name = strings.ReplaceAll(name, string(filepath.Separator), "_")
	name = strings.ReplaceAll(name, " ", "_")
	if name == "" {
		return ts.PackageName
	}
	return name
}

type TestSuites []TestSuite

func (ts TestSuites) AnyHaveProgrammaticFocus() bool {
	for _, suite := range ts {
		if suite.HasProgrammaticFocus {
			return true
		}
	}

	return false
}

func (ts TestSuites) ThatAreGinkgoSuites() TestSuites {
	out := TestSuites{}
	for _, suite := range ts {
		if suite.IsGinkgo {
			out = append(out, suite)
		}
	}
	return out
}

func (ts TestSuites) CountWithState(states ...TestSuiteState) int {
	n := 0
	for _, suite := range ts {
		if suite.State.Is(states...) {
			n += 1
		}
	}

	return n
}

func (ts TestSuites) WithState(states ...TestSuiteState) TestSuites {
	out := TestSuites{}
	for _, suite := range ts {
		if suite.State.Is(states...) {
			out = append(out, suite)
		}
	}

	return out
}

func (ts TestSuites) WithoutState(states ...TestSuiteState) TestSuites {
	out := TestSuites{}
	for _, suite := range ts {
		if !suite.State.Is(states...) {
			out = append(out, suite)
		}
	}

	return out
}

func (ts TestSuites) ShuffledCopy(seed int64) TestSuites {
	out := make(TestSuites, len(ts))
	permutation := rand.New(rand.NewSource(seed)).Perm(len(ts))
	for i, j := range permutation {
		out[i] = ts[j]
	}
	return out
}

func FindSuites(args []string, cliConfig types.CLIConfig, allowPrecompiled bool) TestSuites {
	suites := TestSuites{}

	if len(args) > 0 {
		for _, arg := range args {
			if allowPrecompiled {
				suite, err := precompiledTestSuite(arg)
				if err == nil {
					suites = append(suites, suite)
					continue
				}
			}
			recurseForSuite := cliConfig.Recurse
			if strings.HasSuffix(arg, "/...") && arg != "/..." {
				arg = arg[:len(arg)-4]
				recurseForSuite = true
			}
			suites = append(suites, suitesInDir(arg, recurseForSuite)...)
		}
	} else {
		suites = suitesInDir(".", cliConfig.Recurse)
	}

	if cliConfig.SkipPackage != "" {
		skipFilters := strings.Split(cliConfig.SkipPackage, ",")
		for idx := range suites {
			for _, skipFilter := range skipFilters {
				if strings.Contains(suites[idx].Path, skipFilter) {
					suites[idx].State = TestSuiteStateSkippedByFilter
					break
				}
			}
		}
	}

	return suites
}

func precompiledTestSuite(path string) (TestSuite, error) {
	info, err := os.Stat(path)
	if err != nil {
		return TestSuite{}, err
	}

	if info.IsDir() {
		return TestSuite{}, errors.New("this is a directory, not a file")
	}

	if filepath.Ext(path) != ".test" && filepath.Ext(path) != ".exe" {
		return TestSuite{}, errors.New("this is not a .test binary")
	}

	if filepath.Ext(path) == ".test" && info.Mode()&0111 == 0 {
		return TestSuite{}, errors.New("this is not executable")
	}

	dir := relPath(filepath.Dir(path))
	packageName := strings.TrimSuffix(filepath.Base(path), ".exe")
	packageName = strings.TrimSuffix(packageName, ".test")

	path, err = filepath.Abs(path)
	if err != nil {
		return TestSuite{}, err
	}

	return TestSuite{
		Path:               dir,
		PackageName:        packageName,
		IsGinkgo:           true,
		Precompiled:        true,
		PathToCompiledTest: path,
		State:              TestSuiteStateCompiled,
	}, nil
}

func suitesInDir(dir string, recurse bool) TestSuites {
	suites := TestSuites{}

	if path.Base(dir) == "vendor" {
		return suites
	}

	files, _ := os.ReadDir(dir)
	re := regexp.MustCompile(`^[^._].*_test\.go$`)
	for _, file := range files {
		if !file.IsDir() && re.Match([]byte(file.Name())) {
			suite := TestSuite{
				Path:        relPath(dir),
				PackageName: packageNameForSuite(dir),
				IsGinkgo:    filesHaveGinkgoSuite(dir, files),
				State:       TestSuiteStateUncompiled,
			}
			suites = append(suites, suite)
			break
		}
	}

	if recurse {
		re = regexp.MustCompile(`^[._]`)
		for _, file := range files {
			if file.IsDir() && !re.Match([]byte(file.Name())) {
				suites = append(suites, suitesInDir(dir+"/"+file.Name(), recurse)...)
			}
		}
	}

	return suites
}

func relPath(dir string) string {
	dir, _ = filepath.Abs(dir)
	cwd, _ := os.Getwd()
	dir, _ = filepath.Rel(cwd, filepath.Clean(dir))

	if string(dir[0]) != "." {
		dir = "." + string(filepath.Separator) + dir
	}

	return dir
}

func packageNameForSuite(dir string) string {
	path, _ := filepath.Abs(dir)
	return filepath.Base(path)
}

func filesHaveGinkgoSuite(dir string, files []os.DirEntry) bool {
	reTestFile := regexp.MustCompile(`_test\.go$`)
	reGinkgo := regexp.MustCompile(`package ginkgo|\/ginkgo"|\/ginkgo\/v2"|\/ginkgo\/v2/dsl/`)

	for _, file := range files {
		if !file.IsDir() && reTestFile.Match([]byte(file.Name())) {
			contents, _ := os.ReadFile(dir + "/" + file.Name())
			if reGinkgo.Match(contents) {
				return true
			}
		}
	}

	return false
}
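`TestSuites.ShuffledCopy` above seeds `math/rand` with an explicit seed and applies the resulting permutation, so the same seed always reproduces the same suite ordering. A minimal sketch of the same idiom over plain strings (the `shuffledCopy` helper and sample names are illustrative, not Ginkgo's API):

```go
package main

import (
	"fmt"
	"math/rand"
)

// shuffledCopy mirrors TestSuites.ShuffledCopy: an explicit seed pins the
// permutation, so a "random" ordering can be replayed exactly.
func shuffledCopy(items []string, seed int64) []string {
	out := make([]string, len(items))
	permutation := rand.New(rand.NewSource(seed)).Perm(len(items))
	for i, j := range permutation {
		out[i] = items[j]
	}
	return out
}

func main() {
	suites := []string{"api", "cmd", "pkg", "test"}
	fmt.Println(shuffledCopy(suites, 1138))
	fmt.Println(shuffledCopy(suites, 1138)) // same seed, identical order
}
```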
86
vendor/github.com/onsi/ginkgo/v2/ginkgo/internal/utils.go
generated
vendored
Normal file
@@ -0,0 +1,86 @@
package internal

import (
	"fmt"
	"io"
	"os"
	"os/exec"

	"github.com/onsi/ginkgo/v2/formatter"
	"github.com/onsi/ginkgo/v2/ginkgo/command"
)

func FileExists(path string) bool {
	_, err := os.Stat(path)
	return err == nil
}

func CopyFile(src string, dest string) error {
	srcFile, err := os.Open(src)
	if err != nil {
		return err
	}

	srcStat, err := srcFile.Stat()
	if err != nil {
		return err
	}

	if _, err := os.Stat(dest); err == nil {
		os.Remove(dest)
	}

	destFile, err := os.OpenFile(dest, os.O_WRONLY|os.O_CREATE, srcStat.Mode())
	if err != nil {
		return err
	}

	_, err = io.Copy(destFile, srcFile)
	if err != nil {
		return err
	}

	if err := srcFile.Close(); err != nil {
		return err
	}
	return destFile.Close()
}

func GoFmt(path string) {
	out, err := exec.Command("go", "fmt", path).CombinedOutput()
	if err != nil {
		command.AbortIfError(fmt.Sprintf("Could not fmt:\n%s\n", string(out)), err)
	}
}

func PluralizedWord(singular, plural string, count int) string {
	if count == 1 {
		return singular
	}
	return plural
}

func FailedSuitesReport(suites TestSuites, f formatter.Formatter) string {
	out := ""
	out += "There were failures detected in the following suites:\n"

	maxPackageNameLength := 0
	for _, suite := range suites.WithState(TestSuiteStateFailureStates...) {
		if len(suite.PackageName) > maxPackageNameLength {
			maxPackageNameLength = len(suite.PackageName)
		}
	}

	packageNameFormatter := fmt.Sprintf("%%%ds", maxPackageNameLength)
	for _, suite := range suites {
		switch suite.State {
		case TestSuiteStateFailed:
			out += f.Fi(1, "{{red}}"+packageNameFormatter+" {{gray}}%s{{/}}\n", suite.PackageName, suite.Path)
		case TestSuiteStateFailedToCompile:
			out += f.Fi(1, "{{red}}"+packageNameFormatter+" {{gray}}%s {{magenta}}[Compilation failure]{{/}}\n", suite.PackageName, suite.Path)
		case TestSuiteStateFailedDueToTimeout:
			out += f.Fi(1, "{{red}}"+packageNameFormatter+" {{gray}}%s {{orange}}[%s]{{/}}\n", suite.PackageName, suite.Path, TIMEOUT_ELAPSED_FAILURE_REASON)
		}
	}
	return out
}
123
vendor/github.com/onsi/ginkgo/v2/ginkgo/labels/labels_command.go
generated
vendored
Normal file
@@ -0,0 +1,123 @@
package labels

import (
	"fmt"
	"go/ast"
	"go/parser"
	"go/token"
	"sort"
	"strconv"
	"strings"

	"github.com/onsi/ginkgo/v2/ginkgo/command"
	"github.com/onsi/ginkgo/v2/ginkgo/internal"
	"github.com/onsi/ginkgo/v2/types"
	"golang.org/x/tools/go/ast/inspector"
)

func BuildLabelsCommand() command.Command {
	var cliConfig = types.NewDefaultCLIConfig()

	flags, err := types.BuildLabelsCommandFlagSet(&cliConfig)
	if err != nil {
		panic(err)
	}

	return command.Command{
		Name:     "labels",
		Usage:    "ginkgo labels <FLAGS> <PACKAGES>",
		Flags:    flags,
		ShortDoc: "List labels detected in the passed-in packages (or the package in the current directory if left blank).",
		DocLink:  "spec-labels",
		Command: func(args []string, _ []string) {
			ListLabels(args, cliConfig)
		},
	}
}

func ListLabels(args []string, cliConfig types.CLIConfig) {
	suites := internal.FindSuites(args, cliConfig, false).WithoutState(internal.TestSuiteStateSkippedByFilter)
	if len(suites) == 0 {
		command.AbortWith("Found no test suites")
	}
	for _, suite := range suites {
		labels := fetchLabelsFromPackage(suite.Path)
		if len(labels) == 0 {
			fmt.Printf("%s: No labels found\n", suite.PackageName)
		} else {
			fmt.Printf("%s: [%s]\n", suite.PackageName, strings.Join(labels, ", "))
		}
	}
}

func fetchLabelsFromPackage(packagePath string) []string {
	fset := token.NewFileSet()
	parsedPackages, err := parser.ParseDir(fset, packagePath, nil, 0)
	command.AbortIfError("Failed to parse package source:", err)

	files := []*ast.File{}
	hasTestPackage := false
	for key, pkg := range parsedPackages {
		if strings.HasSuffix(key, "_test") {
			hasTestPackage = true
			for _, file := range pkg.Files {
				files = append(files, file)
			}
		}
	}
	if !hasTestPackage {
		for _, pkg := range parsedPackages {
			for _, file := range pkg.Files {
				files = append(files, file)
			}
		}
	}

	seen := map[string]bool{}
	labels := []string{}
	ispr := inspector.New(files)
	ispr.Preorder([]ast.Node{&ast.CallExpr{}}, func(n ast.Node) {
		potentialLabels := fetchLabels(n.(*ast.CallExpr))
		for _, label := range potentialLabels {
			if !seen[label] {
				seen[label] = true
				labels = append(labels, strconv.Quote(label))
			}
		}
	})

	sort.Strings(labels)
	return labels
}

func fetchLabels(callExpr *ast.CallExpr) []string {
	out := []string{}
	switch expr := callExpr.Fun.(type) {
	case *ast.Ident:
		if expr.Name != "Label" {
			return out
		}
	case *ast.SelectorExpr:
		if expr.Sel.Name != "Label" {
			return out
		}
	default:
		return out
	}
	for _, arg := range callExpr.Args {
		switch expr := arg.(type) {
		case *ast.BasicLit:
			if expr.Kind == token.STRING {
				unquoted, err := strconv.Unquote(expr.Value)
				if err != nil {
					unquoted = expr.Value
				}
				validated, err := types.ValidateAndCleanupLabel(unquoted, types.CodeLocation{})
				if err == nil {
					out = append(out, validated)
				}
			}
		}
	}
	return out
}
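The labels command above parses the package with `go/parser` and walks every `CallExpr` looking for `Label(...)` calls whose string-literal arguments become labels. A self-contained sketch of that walk over a single source string: it uses the standard library's `ast.Inspect` instead of `golang.org/x/tools`' inspector, and skips Ginkgo's `ValidateAndCleanupLabel` step, so it is a simplification rather than the real command.

```go
package main

import (
	"fmt"
	"go/ast"
	"go/parser"
	"go/token"
	"sort"
	"strconv"
)

// labelsInSource parses Go source text and collects the deduplicated,
// sorted string-literal arguments of every bare Label(...) call.
func labelsInSource(src string) []string {
	fset := token.NewFileSet()
	file, err := parser.ParseFile(fset, "spec_test.go", src, 0)
	if err != nil {
		panic(err)
	}

	seen := map[string]bool{}
	labels := []string{}
	ast.Inspect(file, func(n ast.Node) bool {
		callExpr, ok := n.(*ast.CallExpr)
		if !ok {
			return true
		}
		ident, ok := callExpr.Fun.(*ast.Ident)
		if !ok || ident.Name != "Label" {
			return true
		}
		for _, arg := range callExpr.Args {
			if lit, ok := arg.(*ast.BasicLit); ok && lit.Kind == token.STRING {
				if unquoted, err := strconv.Unquote(lit.Value); err == nil && !seen[unquoted] {
					seen[unquoted] = true
					labels = append(labels, unquoted)
				}
			}
		}
		return true
	})

	sort.Strings(labels)
	return labels
}

func main() {
	src := `package books_test

var _ = It("sorts", Label("slow", "integration"), func() {})
var _ = It("filters", Label("slow"), func() {})
`
	fmt.Println(labelsInSource(src)) // duplicates collapse, output is sorted
}
```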
58
vendor/github.com/onsi/ginkgo/v2/ginkgo/main.go
generated
vendored
Normal file
@@ -0,0 +1,58 @@
package main

import (
	"fmt"
	"os"

	"github.com/onsi/ginkgo/v2/ginkgo/build"
	"github.com/onsi/ginkgo/v2/ginkgo/command"
	"github.com/onsi/ginkgo/v2/ginkgo/generators"
	"github.com/onsi/ginkgo/v2/ginkgo/labels"
	"github.com/onsi/ginkgo/v2/ginkgo/outline"
	"github.com/onsi/ginkgo/v2/ginkgo/run"
	"github.com/onsi/ginkgo/v2/ginkgo/unfocus"
	"github.com/onsi/ginkgo/v2/ginkgo/watch"
	"github.com/onsi/ginkgo/v2/types"
)

var program command.Program

func GenerateCommands() []command.Command {
	return []command.Command{
		watch.BuildWatchCommand(),
		build.BuildBuildCommand(),
		generators.BuildBootstrapCommand(),
		generators.BuildGenerateCommand(),
		labels.BuildLabelsCommand(),
		outline.BuildOutlineCommand(),
		unfocus.BuildUnfocusCommand(),
		BuildVersionCommand(),
	}
}

func main() {
	program = command.Program{
		Name:           "ginkgo",
		Heading:        fmt.Sprintf("Ginkgo Version %s", types.VERSION),
		Commands:       GenerateCommands(),
		DefaultCommand: run.BuildRunCommand(),
		DeprecatedCommands: []command.DeprecatedCommand{
			{Name: "convert", Deprecation: types.Deprecations.Convert()},
			{Name: "blur", Deprecation: types.Deprecations.Blur()},
			{Name: "nodot", Deprecation: types.Deprecations.Nodot()},
		},
	}

	program.RunAndExit(os.Args)
}

func BuildVersionCommand() command.Command {
	return command.Command{
		Name:     "version",
		Usage:    "ginkgo version",
		ShortDoc: "Print Ginkgo's version",
		Command: func(_ []string, _ []string) {
			fmt.Printf("Ginkgo Version %s\n", types.VERSION)
		},
	}
}
218
vendor/github.com/onsi/ginkgo/v2/ginkgo/outline/ginkgo.go
generated
vendored
Normal file
@@ -0,0 +1,218 @@
package outline

import (
	"go/ast"
	"go/token"
	"strconv"
)

const (
	// undefinedTextAlt is used if the spec/container text cannot be derived
	undefinedTextAlt = "undefined"
)

// ginkgoMetadata holds useful bits of information for every entry in the outline
type ginkgoMetadata struct {
	// Name is the spec or container function name, e.g. `Describe` or `It`
	Name string `json:"name"`

	// Text is the `text` argument passed to specs, and some containers
	Text string `json:"text"`

	// Start is the position of first character of the spec or container block
	Start int `json:"start"`

	// End is the position of first character immediately after the spec or container block
	End int `json:"end"`

	Spec    bool `json:"spec"`
	Focused bool `json:"focused"`
	Pending bool `json:"pending"`
}

// ginkgoNode is used to construct the outline as a tree
type ginkgoNode struct {
	ginkgoMetadata
	Nodes []*ginkgoNode `json:"nodes"`
}

type walkFunc func(n *ginkgoNode)

func (n *ginkgoNode) PreOrder(f walkFunc) {
	f(n)
	for _, m := range n.Nodes {
		m.PreOrder(f)
	}
}

func (n *ginkgoNode) PostOrder(f walkFunc) {
	for _, m := range n.Nodes {
		m.PostOrder(f)
	}
	f(n)
}

func (n *ginkgoNode) Walk(pre, post walkFunc) {
	pre(n)
	for _, m := range n.Nodes {
		m.Walk(pre, post)
	}
	post(n)
}

// PropagateInheritedProperties propagates the Pending and Focused properties
// through the subtree rooted at n.
func (n *ginkgoNode) PropagateInheritedProperties() {
	n.PreOrder(func(thisNode *ginkgoNode) {
		for _, descendantNode := range thisNode.Nodes {
			if thisNode.Pending {
				descendantNode.Pending = true
				descendantNode.Focused = false
			}
			if thisNode.Focused && !descendantNode.Pending {
				descendantNode.Focused = true
			}
		}
	})
}

// BackpropagateUnfocus propagates the Focused property through the subtree
// rooted at n. It applies the rule described in the Ginkgo docs:
// > Nested programmatically focused specs follow a simple rule: if a
// > leaf-node is marked focused, any of its ancestor nodes that are marked
// > focus will be unfocused.
func (n *ginkgoNode) BackpropagateUnfocus() {
	focusedSpecInSubtreeStack := []bool{}
	n.PostOrder(func(thisNode *ginkgoNode) {
		if thisNode.Spec {
			focusedSpecInSubtreeStack = append(focusedSpecInSubtreeStack, thisNode.Focused)
			return
		}
		focusedSpecInSubtree := false
		for range thisNode.Nodes {
			focusedSpecInSubtree = focusedSpecInSubtree || focusedSpecInSubtreeStack[len(focusedSpecInSubtreeStack)-1]
			focusedSpecInSubtreeStack = focusedSpecInSubtreeStack[0 : len(focusedSpecInSubtreeStack)-1]
		}
		focusedSpecInSubtreeStack = append(focusedSpecInSubtreeStack, focusedSpecInSubtree)
		if focusedSpecInSubtree {
			thisNode.Focused = false
		}
	})
}

func packageAndIdentNamesFromCallExpr(ce *ast.CallExpr) (string, string, bool) {
	switch ex := ce.Fun.(type) {
	case *ast.Ident:
		return "", ex.Name, true
	case *ast.SelectorExpr:
		pkgID, ok := ex.X.(*ast.Ident)
		if !ok {
			return "", "", false
		}
		// A package identifier is top-level, so Obj must be nil
		if pkgID.Obj != nil {
			return "", "", false
		}
		if ex.Sel == nil {
			return "", "", false
		}
		return pkgID.Name, ex.Sel.Name, true
	default:
		return "", "", false
	}
}

// absoluteOffsetsForNode derives the absolute character offsets of the node start and
// end positions.
func absoluteOffsetsForNode(fset *token.FileSet, n ast.Node) (start, end int) {
	return fset.PositionFor(n.Pos(), false).Offset, fset.PositionFor(n.End(), false).Offset
}

// ginkgoNodeFromCallExpr derives an outline entry from a go AST subtree
|
||||||
|
// corresponding to a Ginkgo container or spec.
|
||||||
|
func ginkgoNodeFromCallExpr(fset *token.FileSet, ce *ast.CallExpr, ginkgoPackageName *string) (*ginkgoNode, bool) {
|
||||||
|
packageName, identName, ok := packageAndIdentNamesFromCallExpr(ce)
|
||||||
|
if !ok {
|
||||||
|
return nil, false
|
||||||
|
}
|
||||||
|
|
||||||
|
n := ginkgoNode{}
|
||||||
|
n.Name = identName
|
||||||
|
n.Start, n.End = absoluteOffsetsForNode(fset, ce)
|
||||||
|
n.Nodes = make([]*ginkgoNode, 0)
|
||||||
|
switch identName {
|
||||||
|
case "It", "Specify", "Entry":
|
||||||
|
n.Spec = true
|
||||||
|
n.Text = textOrAltFromCallExpr(ce, undefinedTextAlt)
|
||||||
|
return &n, ginkgoPackageName != nil && *ginkgoPackageName == packageName
|
||||||
|
case "FIt", "FSpecify", "FEntry":
|
||||||
|
n.Spec = true
|
||||||
|
n.Focused = true
|
||||||
|
n.Text = textOrAltFromCallExpr(ce, undefinedTextAlt)
|
||||||
|
return &n, ginkgoPackageName != nil && *ginkgoPackageName == packageName
|
||||||
|
case "PIt", "PSpecify", "XIt", "XSpecify", "PEntry", "XEntry":
|
||||||
|
n.Spec = true
|
||||||
|
n.Pending = true
|
||||||
|
n.Text = textOrAltFromCallExpr(ce, undefinedTextAlt)
|
||||||
|
return &n, ginkgoPackageName != nil && *ginkgoPackageName == packageName
|
||||||
|
case "Context", "Describe", "When", "DescribeTable":
|
||||||
|
n.Text = textOrAltFromCallExpr(ce, undefinedTextAlt)
|
||||||
|
return &n, ginkgoPackageName != nil && *ginkgoPackageName == packageName
|
||||||
|
case "FContext", "FDescribe", "FWhen", "FDescribeTable":
|
||||||
|
n.Focused = true
|
||||||
|
n.Text = textOrAltFromCallExpr(ce, undefinedTextAlt)
|
||||||
|
return &n, ginkgoPackageName != nil && *ginkgoPackageName == packageName
|
||||||
|
case "PContext", "PDescribe", "PWhen", "XContext", "XDescribe", "XWhen", "PDescribeTable", "XDescribeTable":
|
||||||
|
n.Pending = true
|
||||||
|
n.Text = textOrAltFromCallExpr(ce, undefinedTextAlt)
|
||||||
|
return &n, ginkgoPackageName != nil && *ginkgoPackageName == packageName
|
||||||
|
case "By":
|
||||||
|
n.Text = textOrAltFromCallExpr(ce, undefinedTextAlt)
|
||||||
|
return &n, ginkgoPackageName != nil && *ginkgoPackageName == packageName
|
||||||
|
case "AfterEach", "BeforeEach":
|
||||||
|
return &n, ginkgoPackageName != nil && *ginkgoPackageName == packageName
|
||||||
|
case "JustAfterEach", "JustBeforeEach":
|
||||||
|
return &n, ginkgoPackageName != nil && *ginkgoPackageName == packageName
|
||||||
|
case "AfterSuite", "BeforeSuite":
|
||||||
|
return &n, ginkgoPackageName != nil && *ginkgoPackageName == packageName
|
||||||
|
case "SynchronizedAfterSuite", "SynchronizedBeforeSuite":
|
||||||
|
return &n, ginkgoPackageName != nil && *ginkgoPackageName == packageName
|
||||||
|
default:
|
||||||
|
return nil, false
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// textOrAltFromCallExpr tries to derive the "text" of a Ginkgo spec or
|
||||||
|
// container. If it cannot derive it, it returns the alt text.
|
||||||
|
func textOrAltFromCallExpr(ce *ast.CallExpr, alt string) string {
|
||||||
|
text, defined := textFromCallExpr(ce)
|
||||||
|
if !defined {
|
||||||
|
return alt
|
||||||
|
}
|
||||||
|
return text
|
||||||
|
}
|
||||||
|
|
||||||
|
// textFromCallExpr tries to derive the "text" of a Ginkgo spec or container. If
|
||||||
|
// it cannot derive it, it returns false.
|
||||||
|
func textFromCallExpr(ce *ast.CallExpr) (string, bool) {
|
||||||
|
if len(ce.Args) < 1 {
|
||||||
|
return "", false
|
||||||
|
}
|
||||||
|
text, ok := ce.Args[0].(*ast.BasicLit)
|
||||||
|
if !ok {
|
||||||
|
return "", false
|
||||||
|
}
|
||||||
|
switch text.Kind {
|
||||||
|
case token.CHAR, token.STRING:
|
||||||
|
// For token.CHAR and token.STRING, Value is quoted
|
||||||
|
unquoted, err := strconv.Unquote(text.Value)
|
||||||
|
if err != nil {
|
||||||
|
// If unquoting fails, just use the raw Value
|
||||||
|
return text.Value, true
|
||||||
|
}
|
||||||
|
return unquoted, true
|
||||||
|
default:
|
||||||
|
return text.Value, true
|
||||||
|
}
|
||||||
|
}
|
||||||
65
vendor/github.com/onsi/ginkgo/v2/ginkgo/outline/import.go
generated
vendored
Normal file
@@ -0,0 +1,65 @@
// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

// Most of the required functions were available in the
// "golang.org/x/tools/go/ast/astutil" package, but not exported.
// They were copied from https://github.com/golang/tools/blob/2b0845dc783e36ae26d683f4915a5840ef01ab0f/go/ast/astutil/imports.go

package outline

import (
	"go/ast"
	"strconv"
	"strings"
)

// packageNameForImport returns the package name for the package. If the package
// is not imported, it returns nil. "Package name" refers to `pkgname` in the
// call expression `pkgname.ExportedIdentifier`. Examples:
// (import path not found) -> nil
// "import example.com/pkg/foo" -> "foo"
// "import fooalias example.com/pkg/foo" -> "fooalias"
// "import . example.com/pkg/foo" -> ""
func packageNameForImport(f *ast.File, path string) *string {
	spec := importSpec(f, path)
	if spec == nil {
		return nil
	}
	name := spec.Name.String()
	if name == "<nil>" {
		// If the package name is not explicitly specified,
		// make an educated guess. This is not guaranteed to be correct.
		lastSlash := strings.LastIndex(path, "/")
		if lastSlash == -1 {
			name = path
		} else {
			name = path[lastSlash+1:]
		}
	}
	if name == "." {
		name = ""
	}
	return &name
}

// importSpec returns the import spec if f imports path,
// or nil otherwise.
func importSpec(f *ast.File, path string) *ast.ImportSpec {
	for _, s := range f.Imports {
		if importPath(s) == path {
			return s
		}
	}
	return nil
}

// importPath returns the unquoted import path of s,
// or "" if the path is not properly quoted.
func importPath(s *ast.ImportSpec) string {
	t, err := strconv.Unquote(s.Path.Value)
	if err != nil {
		return ""
	}
	return t
}
103
vendor/github.com/onsi/ginkgo/v2/ginkgo/outline/outline.go
generated
vendored
Normal file
@@ -0,0 +1,103 @@
package outline

import (
	"encoding/json"
	"fmt"
	"go/ast"
	"go/token"
	"strings"

	"golang.org/x/tools/go/ast/inspector"
)

const (
	// ginkgoImportPath is the well-known ginkgo import path
	ginkgoImportPath = "github.com/onsi/ginkgo/v2"
)

// FromASTFile returns an outline for a Ginkgo test source file
func FromASTFile(fset *token.FileSet, src *ast.File) (*outline, error) {
	ginkgoPackageName := packageNameForImport(src, ginkgoImportPath)
	if ginkgoPackageName == nil {
		return nil, fmt.Errorf("file does not import %q", ginkgoImportPath)
	}

	root := ginkgoNode{}
	stack := []*ginkgoNode{&root}
	ispr := inspector.New([]*ast.File{src})
	ispr.Nodes([]ast.Node{(*ast.CallExpr)(nil)}, func(node ast.Node, push bool) bool {
		if push {
			// Pre-order traversal
			ce, ok := node.(*ast.CallExpr)
			if !ok {
				// Because `Nodes` calls this function only when the node is an
				// ast.CallExpr, this should never happen
				panic(fmt.Errorf("node starting at %d, ending at %d is not an *ast.CallExpr", node.Pos(), node.End()))
			}
			gn, ok := ginkgoNodeFromCallExpr(fset, ce, ginkgoPackageName)
			if !ok {
				// Node is not a Ginkgo spec or container, continue
				return true
			}
			parent := stack[len(stack)-1]
			parent.Nodes = append(parent.Nodes, gn)
			stack = append(stack, gn)
			return true
		}
		// Post-order traversal
		start, end := absoluteOffsetsForNode(fset, node)
		lastVisitedGinkgoNode := stack[len(stack)-1]
		if start != lastVisitedGinkgoNode.Start || end != lastVisitedGinkgoNode.End {
			// Node is not a Ginkgo spec or container, so it was not pushed onto the stack, continue
			return true
		}
		stack = stack[0 : len(stack)-1]
		return true
	})
	if len(root.Nodes) == 0 {
		return &outline{[]*ginkgoNode{}}, nil
	}

	// Derive the final focused property for all nodes. This must be done
	// _before_ propagating the inherited focused property.
	root.BackpropagateUnfocus()
	// Now, propagate inherited properties, including focused and pending.
	root.PropagateInheritedProperties()

	return &outline{root.Nodes}, nil
}

type outline struct {
	Nodes []*ginkgoNode `json:"nodes"`
}

func (o *outline) MarshalJSON() ([]byte, error) {
	return json.Marshal(o.Nodes)
}

// String returns a CSV-formatted outline. Spec or container are output in
// depth-first order.
func (o *outline) String() string {
	return o.StringIndent(0)
}

// StringIndent returns a CSV-formatted outline, but every line is indented by
// one 'width' of spaces for every level of nesting.
func (o *outline) StringIndent(width int) string {
	var b strings.Builder
	b.WriteString("Name,Text,Start,End,Spec,Focused,Pending\n")

	currentIndent := 0
	pre := func(n *ginkgoNode) {
		b.WriteString(fmt.Sprintf("%*s", currentIndent, ""))
		b.WriteString(fmt.Sprintf("%s,%s,%d,%d,%t,%t,%t\n", n.Name, n.Text, n.Start, n.End, n.Spec, n.Focused, n.Pending))
		currentIndent += width
	}
	post := func(n *ginkgoNode) {
		currentIndent -= width
	}
	for _, n := range o.Nodes {
		n.Walk(pre, post)
	}
	return b.String()
}
98
vendor/github.com/onsi/ginkgo/v2/ginkgo/outline/outline_command.go
generated
vendored
Normal file
@@ -0,0 +1,98 @@
package outline

import (
	"encoding/json"
	"fmt"
	"go/parser"
	"go/token"
	"os"

	"github.com/onsi/ginkgo/v2/ginkgo/command"
	"github.com/onsi/ginkgo/v2/types"
)

const (
	// indentWidth is the width used by the 'indent' output
	indentWidth = 4
	// stdinAlias is a portable alias for stdin. This convention is used in
	// other CLIs, e.g., kubectl.
	stdinAlias   = "-"
	usageCommand = "ginkgo outline <filename>"
)

type outlineConfig struct {
	Format string
}

func BuildOutlineCommand() command.Command {
	conf := outlineConfig{
		Format: "csv",
	}
	flags, err := types.NewGinkgoFlagSet(
		types.GinkgoFlags{
			{Name: "format", KeyPath: "Format",
				Usage:             "Format of outline",
				UsageArgument:     "one of 'csv', 'indent', or 'json'",
				UsageDefaultValue: conf.Format,
			},
		},
		&conf,
		types.GinkgoFlagSections{},
	)
	if err != nil {
		panic(err)
	}

	return command.Command{
		Name:          "outline",
		Usage:         "ginkgo outline <filename>",
		ShortDoc:      "Create an outline of Ginkgo symbols for a file",
		Documentation: "To read from stdin, use: `ginkgo outline -`",
		DocLink:       "creating-an-outline-of-specs",
		Flags:         flags,
		Command: func(args []string, _ []string) {
			outlineFile(args, conf.Format)
		},
	}
}

func outlineFile(args []string, format string) {
	if len(args) != 1 {
		command.AbortWithUsage("outline expects exactly one argument")
	}

	filename := args[0]
	var src *os.File
	if filename == stdinAlias {
		src = os.Stdin
	} else {
		var err error
		src, err = os.Open(filename)
		command.AbortIfError("Failed to open file:", err)
	}

	fset := token.NewFileSet()

	parsedSrc, err := parser.ParseFile(fset, filename, src, 0)
	command.AbortIfError("Failed to parse source:", err)

	o, err := FromASTFile(fset, parsedSrc)
	command.AbortIfError("Failed to create outline:", err)

	var oerr error
	switch format {
	case "csv":
		_, oerr = fmt.Print(o)
	case "indent":
		_, oerr = fmt.Print(o.StringIndent(indentWidth))
	case "json":
		b, err := json.Marshal(o)
		if err != nil {
			println(fmt.Sprintf("error marshalling to json: %s", err))
		}
		_, oerr = fmt.Println(string(b))
	default:
		command.AbortWith("Format %s not accepted", format)
	}
	command.AbortIfError("Failed to write outline:", oerr)
}
230
vendor/github.com/onsi/ginkgo/v2/ginkgo/run/run_command.go
generated
vendored
Normal file
@@ -0,0 +1,230 @@
package run

import (
	"fmt"
	"os"
	"strings"
	"time"

	"github.com/onsi/ginkgo/v2/formatter"
	"github.com/onsi/ginkgo/v2/ginkgo/command"
	"github.com/onsi/ginkgo/v2/ginkgo/internal"
	"github.com/onsi/ginkgo/v2/internal/interrupt_handler"
	"github.com/onsi/ginkgo/v2/types"
)

func BuildRunCommand() command.Command {
	var suiteConfig = types.NewDefaultSuiteConfig()
	var reporterConfig = types.NewDefaultReporterConfig()
	var cliConfig = types.NewDefaultCLIConfig()
	var goFlagsConfig = types.NewDefaultGoFlagsConfig()

	flags, err := types.BuildRunCommandFlagSet(&suiteConfig, &reporterConfig, &cliConfig, &goFlagsConfig)
	if err != nil {
		panic(err)
	}

	interruptHandler := interrupt_handler.NewInterruptHandler(0, nil)
	interrupt_handler.SwallowSigQuit()

	return command.Command{
		Name:          "run",
		Flags:         flags,
		Usage:         "ginkgo run <FLAGS> <PACKAGES> -- <PASS-THROUGHS>",
		ShortDoc:      "Run the tests in the passed in <PACKAGES> (or the package in the current directory if left blank)",
		Documentation: "Any arguments after -- will be passed to the test.",
		DocLink:       "running-tests",
		Command: func(args []string, additionalArgs []string) {
			var errors []error
			cliConfig, goFlagsConfig, errors = types.VetAndInitializeCLIAndGoConfig(cliConfig, goFlagsConfig)
			command.AbortIfErrors("Ginkgo detected configuration issues:", errors)

			runner := &SpecRunner{
				cliConfig:      cliConfig,
				goFlagsConfig:  goFlagsConfig,
				suiteConfig:    suiteConfig,
				reporterConfig: reporterConfig,
				flags:          flags,

				interruptHandler: interruptHandler,
			}

			runner.RunSpecs(args, additionalArgs)
		},
	}
}

type SpecRunner struct {
	suiteConfig    types.SuiteConfig
	reporterConfig types.ReporterConfig
	cliConfig      types.CLIConfig
	goFlagsConfig  types.GoFlagsConfig
	flags          types.GinkgoFlagSet

	interruptHandler *interrupt_handler.InterruptHandler
}

func (r *SpecRunner) RunSpecs(args []string, additionalArgs []string) {
	suites := internal.FindSuites(args, r.cliConfig, true)
	skippedSuites := suites.WithState(internal.TestSuiteStateSkippedByFilter)
	suites = suites.WithoutState(internal.TestSuiteStateSkippedByFilter)

	if len(skippedSuites) > 0 {
		fmt.Println("Will skip:")
		for _, skippedSuite := range skippedSuites {
			fmt.Println("  " + skippedSuite.Path)
		}
	}

	if len(skippedSuites) > 0 && len(suites) == 0 {
		command.AbortGracefullyWith("All tests skipped! Exiting...")
	}

	if len(suites) == 0 {
		command.AbortWith("Found no test suites")
	}

	if len(suites) > 1 && !r.flags.WasSet("succinct") && r.reporterConfig.Verbosity().LT(types.VerbosityLevelVerbose) {
		r.reporterConfig.Succinct = true
	}

	t := time.Now()
	var endTime time.Time
	if r.suiteConfig.Timeout > 0 {
		endTime = t.Add(r.suiteConfig.Timeout)
	}

	iteration := 0
OUTER_LOOP:
	for {
		if !r.flags.WasSet("seed") {
			r.suiteConfig.RandomSeed = time.Now().Unix()
		}
		if r.cliConfig.RandomizeSuites && len(suites) > 1 {
			suites = suites.ShuffledCopy(r.suiteConfig.RandomSeed)
		}

		opc := internal.NewOrderedParallelCompiler(r.cliConfig.ComputedNumCompilers())
		opc.StartCompiling(suites, r.goFlagsConfig)

	SUITE_LOOP:
		for {
			suiteIdx, suite := opc.Next()
			if suiteIdx >= len(suites) {
				break SUITE_LOOP
			}
			suites[suiteIdx] = suite

			if r.interruptHandler.Status().Interrupted {
				opc.StopAndDrain()
				break OUTER_LOOP
			}

			if suites[suiteIdx].State.Is(internal.TestSuiteStateSkippedDueToEmptyCompilation) {
				fmt.Printf("Skipping %s (no test files)\n", suite.Path)
				continue SUITE_LOOP
			}

			if suites[suiteIdx].State.Is(internal.TestSuiteStateFailedToCompile) {
				fmt.Println(suites[suiteIdx].CompilationError.Error())
				if !r.cliConfig.KeepGoing {
					opc.StopAndDrain()
				}
				continue SUITE_LOOP
			}

			if suites.CountWithState(internal.TestSuiteStateFailureStates...) > 0 && !r.cliConfig.KeepGoing {
				suites[suiteIdx].State = internal.TestSuiteStateSkippedDueToPriorFailures
				opc.StopAndDrain()
				continue SUITE_LOOP
			}

			if !endTime.IsZero() {
				r.suiteConfig.Timeout = endTime.Sub(time.Now())
				if r.suiteConfig.Timeout <= 0 {
					suites[suiteIdx].State = internal.TestSuiteStateFailedDueToTimeout
					opc.StopAndDrain()
					continue SUITE_LOOP
				}
			}

			suites[suiteIdx] = internal.RunCompiledSuite(suites[suiteIdx], r.suiteConfig, r.reporterConfig, r.cliConfig, r.goFlagsConfig, additionalArgs)
		}

		if suites.CountWithState(internal.TestSuiteStateFailureStates...) > 0 {
			if iteration > 0 {
				fmt.Printf("\nTests failed on attempt #%d\n\n", iteration+1)
			}
			break OUTER_LOOP
		}

		if r.cliConfig.UntilItFails {
			fmt.Printf("\nAll tests passed...\nWill keep running them until they fail.\nThis was attempt #%d\n%s\n", iteration+1, orcMessage(iteration+1))
		} else if r.cliConfig.Repeat > 0 && iteration < r.cliConfig.Repeat {
			fmt.Printf("\nAll tests passed...\nThis was attempt %d of %d.\n", iteration+1, r.cliConfig.Repeat+1)
		} else {
			break OUTER_LOOP
		}
		iteration += 1
	}

	internal.Cleanup(r.goFlagsConfig, suites...)

	messages, err := internal.FinalizeProfilesAndReportsForSuites(suites, r.cliConfig, r.suiteConfig, r.reporterConfig, r.goFlagsConfig)
	command.AbortIfError("could not finalize profiles:", err)
	for _, message := range messages {
		fmt.Println(message)
	}

	fmt.Printf("\nGinkgo ran %d %s in %s\n", len(suites), internal.PluralizedWord("suite", "suites", len(suites)), time.Since(t))

	if suites.CountWithState(internal.TestSuiteStateFailureStates...) == 0 {
		if suites.AnyHaveProgrammaticFocus() && strings.TrimSpace(os.Getenv("GINKGO_EDITOR_INTEGRATION")) == "" {
			fmt.Printf("Test Suite Passed\n")
			fmt.Printf("Detected Programmatic Focus - setting exit status to %d\n", types.GINKGO_FOCUS_EXIT_CODE)
			command.Abort(command.AbortDetails{ExitCode: types.GINKGO_FOCUS_EXIT_CODE})
		} else {
			fmt.Printf("Test Suite Passed\n")
			command.Abort(command.AbortDetails{})
		}
	} else {
		fmt.Fprintln(formatter.ColorableStdOut, "")
		if len(suites) > 1 && suites.CountWithState(internal.TestSuiteStateFailureStates...) > 0 {
			fmt.Fprintln(formatter.ColorableStdOut,
				internal.FailedSuitesReport(suites, formatter.NewWithNoColorBool(r.reporterConfig.NoColor)))
		}
		fmt.Printf("Test Suite Failed\n")
		command.Abort(command.AbortDetails{ExitCode: 1})
	}
}

func orcMessage(iteration int) string {
	if iteration < 10 {
		return ""
	} else if iteration < 30 {
		return []string{
			"If at first you succeed...",
			"...try, try again.",
			"Looking good!",
			"Still good...",
			"I think your tests are fine....",
			"Yep, still passing",
			"Oh boy, here I go testin' again!",
			"Even the gophers are getting bored",
			"Did you try -race?",
			"Maybe you should stop now?",
			"I'm getting tired...",
			"What if I just made you a sandwich?",
			"Hit ^C, hit ^C, please hit ^C",
			"Make it stop. Please!",
			"Come on! Enough is enough!",
			"Dave, this conversation can serve no purpose anymore. Goodbye.",
			"Just what do you think you're doing, Dave? ",
			"I, Sisyphus",
			"Insanity: doing the same thing over and over again and expecting different results. -Einstein",
			"I guess Einstein never tried to churn butter",
		}[iteration-10] + "\n"
	} else {
		return "No, seriously... you can probably stop now.\n"
	}
}
186
vendor/github.com/onsi/ginkgo/v2/ginkgo/unfocus/unfocus_command.go
generated
vendored
Normal file
@@ -0,0 +1,186 @@
package unfocus

import (
	"bytes"
	"fmt"
	"go/ast"
	"go/parser"
	"go/token"
	"io"
	"os"
	"path/filepath"
	"strings"
	"sync"

	"github.com/onsi/ginkgo/v2/ginkgo/command"
)

func BuildUnfocusCommand() command.Command {
	return command.Command{
		Name:     "unfocus",
		Usage:    "ginkgo unfocus",
		ShortDoc: "Recursively unfocus any focused tests under the current directory",
		DocLink:  "filtering-specs",
		Command: func(_ []string, _ []string) {
			unfocusSpecs()
		},
	}
}

func unfocusSpecs() {
	fmt.Println("Scanning for focus...")

	goFiles := make(chan string)
	go func() {
		unfocusDir(goFiles, ".")
		close(goFiles)
	}()

	const workers = 10
	wg := sync.WaitGroup{}
	wg.Add(workers)

	for i := 0; i < workers; i++ {
		go func() {
			for path := range goFiles {
				unfocusFile(path)
			}
			wg.Done()
		}()
	}

	wg.Wait()
}

func unfocusDir(goFiles chan string, path string) {
	files, err := os.ReadDir(path)
	if err != nil {
		fmt.Println(err.Error())
		return
	}

	for _, f := range files {
		switch {
		case f.IsDir() && shouldProcessDir(f.Name()):
			unfocusDir(goFiles, filepath.Join(path, f.Name()))
		case !f.IsDir() && shouldProcessFile(f.Name()):
			goFiles <- filepath.Join(path, f.Name())
		}
	}
}

func shouldProcessDir(basename string) bool {
	return basename != "vendor" && !strings.HasPrefix(basename, ".")
}

func shouldProcessFile(basename string) bool {
	return strings.HasSuffix(basename, ".go")
}

func unfocusFile(path string) {
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Printf("error reading file '%s': %s\n", path, err.Error())
		return
	}

	ast, err := parser.ParseFile(token.NewFileSet(), path, bytes.NewReader(data), parser.ParseComments)
	if err != nil {
		fmt.Printf("error parsing file '%s': %s\n", path, err.Error())
		return
	}

	eliminations := scanForFocus(ast)
	if len(eliminations) == 0 {
		return
	}

	fmt.Printf("...updating %s\n", path)
	backup, err := writeBackup(path, data)
	if err != nil {
		fmt.Printf("error creating backup file: %s\n", err.Error())
		return
	}

	if err := updateFile(path, data, eliminations); err != nil {
		fmt.Printf("error writing file '%s': %s\n", path, err.Error())
		return
	}

	os.Remove(backup)
}

func writeBackup(path string, data []byte) (string, error) {
	t, err := os.CreateTemp(filepath.Dir(path), filepath.Base(path))

	if err != nil {
		return "", fmt.Errorf("error creating temporary file: %w", err)
	}
	defer t.Close()

	if _, err := io.Copy(t, bytes.NewReader(data)); err != nil {
		return "", fmt.Errorf("error writing to temporary file: %w", err)
	}

	return t.Name(), nil
}

func updateFile(path string, data []byte, eliminations [][]int64) error {
	to, err := os.Create(path)
	if err != nil {
		return fmt.Errorf("error opening file for writing '%s': %w\n", path, err)
	}
	defer to.Close()

	from := bytes.NewReader(data)
	var cursor int64
	for _, eliminationRange := range eliminations {
		positionToEliminate, lengthToEliminate := eliminationRange[0]-1, eliminationRange[1]
		if _, err := io.CopyN(to, from, positionToEliminate-cursor); err != nil {
			return fmt.Errorf("error copying data: %w", err)
		}

		cursor = positionToEliminate + lengthToEliminate

		if _, err := from.Seek(lengthToEliminate, io.SeekCurrent); err != nil {
			return fmt.Errorf("error seeking to position in buffer: %w", err)
		}
	}

	if _, err := io.Copy(to, from); err != nil {
		return fmt.Errorf("error copying end data: %w", err)
	}

	return nil
}

func scanForFocus(file *ast.File) (eliminations [][]int64) {
	ast.Inspect(file, func(n ast.Node) bool {
		if c, ok := n.(*ast.CallExpr); ok {
			if i, ok := c.Fun.(*ast.Ident); ok {
				if isFocus(i.Name) {
					eliminations = append(eliminations, []int64{int64(i.Pos()), 1})
				}
			}
		}

		if i, ok := n.(*ast.Ident); ok {
			if i.Name == "Focus" {
				eliminations = append(eliminations, []int64{int64(i.Pos()), 6})
			}
		}

		return true
	})

	return eliminations
}

func isFocus(name string) bool {
	switch name {
	case "FDescribe", "FContext", "FIt", "FDescribeTable", "FEntry", "FSpecify", "FWhen":
		return true
	default:
		return false
	}
}
22
vendor/github.com/onsi/ginkgo/v2/ginkgo/watch/delta.go
generated
vendored
Normal file
@@ -0,0 +1,22 @@
package watch

import "sort"

type Delta struct {
	ModifiedPackages []string

	NewSuites      []*Suite
	RemovedSuites  []*Suite
	modifiedSuites []*Suite
}

type DescendingByDelta []*Suite

func (a DescendingByDelta) Len() int           { return len(a) }
func (a DescendingByDelta) Swap(i, j int)      { a[i], a[j] = a[j], a[i] }
func (a DescendingByDelta) Less(i, j int) bool { return a[i].Delta() > a[j].Delta() }

func (d Delta) ModifiedSuites() []*Suite {
	sort.Sort(DescendingByDelta(d.modifiedSuites))
	return d.modifiedSuites
}
75
vendor/github.com/onsi/ginkgo/v2/ginkgo/watch/delta_tracker.go
generated
vendored
Normal file
@@ -0,0 +1,75 @@
package watch

import (
	"fmt"

	"regexp"

	"github.com/onsi/ginkgo/v2/ginkgo/internal"
)

type SuiteErrors map[internal.TestSuite]error

type DeltaTracker struct {
	maxDepth      int
	watchRegExp   *regexp.Regexp
	suites        map[string]*Suite
	packageHashes *PackageHashes
}

func NewDeltaTracker(maxDepth int, watchRegExp *regexp.Regexp) *DeltaTracker {
	return &DeltaTracker{
		maxDepth:      maxDepth,
		watchRegExp:   watchRegExp,
		packageHashes: NewPackageHashes(watchRegExp),
		suites:        map[string]*Suite{},
	}
}

func (d *DeltaTracker) Delta(suites internal.TestSuites) (delta Delta, errors SuiteErrors) {
	errors = SuiteErrors{}
	delta.ModifiedPackages = d.packageHashes.CheckForChanges()

	providedSuitePaths := map[string]bool{}
	for _, suite := range suites {
		providedSuitePaths[suite.Path] = true
	}

	d.packageHashes.StartTrackingUsage()

	for _, suite := range d.suites {
		if providedSuitePaths[suite.Suite.Path] {
			if suite.Delta() > 0 {
				delta.modifiedSuites = append(delta.modifiedSuites, suite)
			}
		} else {
			delta.RemovedSuites = append(delta.RemovedSuites, suite)
		}
	}

	d.packageHashes.StopTrackingUsageAndPrune()

	for _, suite := range suites {
		_, ok := d.suites[suite.Path]
		if !ok {
			s, err := NewSuite(suite, d.maxDepth, d.packageHashes)
			if err != nil {
				errors[suite] = err
				continue
			}
			d.suites[suite.Path] = s
			delta.NewSuites = append(delta.NewSuites, s)
		}
	}

	return delta, errors
}

func (d *DeltaTracker) WillRun(suite internal.TestSuite) error {
	s, ok := d.suites[suite.Path]
	if !ok {
		return fmt.Errorf("unknown suite %s", suite.Path)
	}

	return s.MarkAsRunAndRecomputedDependencies(d.maxDepth)
}
92
vendor/github.com/onsi/ginkgo/v2/ginkgo/watch/dependencies.go
generated
vendored
Normal file
@@ -0,0 +1,92 @@
package watch

import (
	"go/build"
	"regexp"
)

var ginkgoAndGomegaFilter = regexp.MustCompile(`github\.com/onsi/ginkgo|github\.com/onsi/gomega`)
var ginkgoIntegrationTestFilter = regexp.MustCompile(`github\.com/onsi/ginkgo/integration`) //allow us to integration test this thing

type Dependencies struct {
	deps map[string]int
}

func NewDependencies(path string, maxDepth int) (Dependencies, error) {
	d := Dependencies{
		deps: map[string]int{},
	}

	if maxDepth == 0 {
		return d, nil
	}

	err := d.seedWithDepsForPackageAtPath(path)
	if err != nil {
		return d, err
	}

	for depth := 1; depth < maxDepth; depth++ {
		n := len(d.deps)
		d.addDepsForDepth(depth)
		if n == len(d.deps) {
			break
		}
	}

	return d, nil
}

func (d Dependencies) Dependencies() map[string]int {
	return d.deps
}

func (d Dependencies) seedWithDepsForPackageAtPath(path string) error {
	pkg, err := build.ImportDir(path, 0)
	if err != nil {
		return err
	}

	d.resolveAndAdd(pkg.Imports, 1)
	d.resolveAndAdd(pkg.TestImports, 1)
	d.resolveAndAdd(pkg.XTestImports, 1)

	delete(d.deps, pkg.Dir)
	return nil
}

func (d Dependencies) addDepsForDepth(depth int) {
	for dep, depDepth := range d.deps {
		if depDepth == depth {
			d.addDepsForDep(dep, depth+1)
		}
	}
}

func (d Dependencies) addDepsForDep(dep string, depth int) {
	pkg, err := build.ImportDir(dep, 0)
	if err != nil {
		println(err.Error())
		return
	}
	d.resolveAndAdd(pkg.Imports, depth)
}

func (d Dependencies) resolveAndAdd(deps []string, depth int) {
	for _, dep := range deps {
		pkg, err := build.Import(dep, ".", 0)
		if err != nil {
			continue
		}
		if !pkg.Goroot && (!ginkgoAndGomegaFilter.Match([]byte(pkg.Dir)) || ginkgoIntegrationTestFilter.Match([]byte(pkg.Dir))) {
			d.addDepIfNotPresent(pkg.Dir, depth)
		}
	}
}

func (d Dependencies) addDepIfNotPresent(dep string, depth int) {
	_, ok := d.deps[dep]
	if !ok {
		d.deps[dep] = depth
	}
}
108
vendor/github.com/onsi/ginkgo/v2/ginkgo/watch/package_hash.go
generated
vendored
Normal file
@@ -0,0 +1,108 @@
package watch

import (
	"fmt"
	"os"
	"regexp"
	"time"
)

var goTestRegExp = regexp.MustCompile(`_test\.go$`)

type PackageHash struct {
	CodeModifiedTime time.Time
	TestModifiedTime time.Time
	Deleted          bool

	path        string
	codeHash    string
	testHash    string
	watchRegExp *regexp.Regexp
}

func NewPackageHash(path string, watchRegExp *regexp.Regexp) *PackageHash {
	p := &PackageHash{
		path:        path,
		watchRegExp: watchRegExp,
	}

	p.codeHash, _, p.testHash, _, p.Deleted = p.computeHashes()

	return p
}

func (p *PackageHash) CheckForChanges() bool {
	codeHash, codeModifiedTime, testHash, testModifiedTime, deleted := p.computeHashes()

	if deleted {
		if !p.Deleted {
			t := time.Now()
			p.CodeModifiedTime = t
			p.TestModifiedTime = t
		}
		p.Deleted = true
		return true
	}

	modified := false
	p.Deleted = false

	if p.codeHash != codeHash {
		p.CodeModifiedTime = codeModifiedTime
		modified = true
	}
	if p.testHash != testHash {
		p.TestModifiedTime = testModifiedTime
		modified = true
	}

	p.codeHash = codeHash
	p.testHash = testHash
	return modified
}

func (p *PackageHash) computeHashes() (codeHash string, codeModifiedTime time.Time, testHash string, testModifiedTime time.Time, deleted bool) {
	entries, err := os.ReadDir(p.path)

	if err != nil {
		deleted = true
		return
	}

	for _, entry := range entries {
		if entry.IsDir() {
			continue
		}

		info, err := entry.Info()
		if err != nil {
			continue
		}

		if goTestRegExp.Match([]byte(info.Name())) {
			testHash += p.hashForFileInfo(info)
			if info.ModTime().After(testModifiedTime) {
				testModifiedTime = info.ModTime()
			}
			continue
		}

		if p.watchRegExp.Match([]byte(info.Name())) {
			codeHash += p.hashForFileInfo(info)
			if info.ModTime().After(codeModifiedTime) {
				codeModifiedTime = info.ModTime()
			}
		}
	}

	testHash += codeHash
	if codeModifiedTime.After(testModifiedTime) {
		testModifiedTime = codeModifiedTime
	}

	return
}

func (p *PackageHash) hashForFileInfo(info os.FileInfo) string {
	return fmt.Sprintf("%s_%d_%d", info.Name(), info.Size(), info.ModTime().UnixNano())
}
85
vendor/github.com/onsi/ginkgo/v2/ginkgo/watch/package_hashes.go
generated
vendored
Normal file
@@ -0,0 +1,85 @@
package watch

import (
	"path/filepath"
	"regexp"
	"sync"
)

type PackageHashes struct {
	PackageHashes map[string]*PackageHash
	usedPaths     map[string]bool
	watchRegExp   *regexp.Regexp
	lock          *sync.Mutex
}

func NewPackageHashes(watchRegExp *regexp.Regexp) *PackageHashes {
	return &PackageHashes{
		PackageHashes: map[string]*PackageHash{},
		usedPaths:     nil,
		watchRegExp:   watchRegExp,
		lock:          &sync.Mutex{},
	}
}

func (p *PackageHashes) CheckForChanges() []string {
	p.lock.Lock()
	defer p.lock.Unlock()

	modified := []string{}

	for _, packageHash := range p.PackageHashes {
		if packageHash.CheckForChanges() {
			modified = append(modified, packageHash.path)
		}
	}

	return modified
}

func (p *PackageHashes) Add(path string) *PackageHash {
	p.lock.Lock()
	defer p.lock.Unlock()

	path, _ = filepath.Abs(path)
	_, ok := p.PackageHashes[path]
	if !ok {
		p.PackageHashes[path] = NewPackageHash(path, p.watchRegExp)
	}

	if p.usedPaths != nil {
		p.usedPaths[path] = true
	}
	return p.PackageHashes[path]
}

func (p *PackageHashes) Get(path string) *PackageHash {
	p.lock.Lock()
	defer p.lock.Unlock()

	path, _ = filepath.Abs(path)
	if p.usedPaths != nil {
		p.usedPaths[path] = true
	}
	return p.PackageHashes[path]
}

func (p *PackageHashes) StartTrackingUsage() {
	p.lock.Lock()
	defer p.lock.Unlock()

	p.usedPaths = map[string]bool{}
}

func (p *PackageHashes) StopTrackingUsageAndPrune() {
	p.lock.Lock()
	defer p.lock.Unlock()

	for path := range p.PackageHashes {
		if !p.usedPaths[path] {
			delete(p.PackageHashes, path)
		}
	}

	p.usedPaths = nil
}
87
vendor/github.com/onsi/ginkgo/v2/ginkgo/watch/suite.go
generated
vendored
Normal file
@@ -0,0 +1,87 @@
package watch

import (
	"fmt"
	"math"
	"time"

	"github.com/onsi/ginkgo/v2/ginkgo/internal"
)

type Suite struct {
	Suite        internal.TestSuite
	RunTime      time.Time
	Dependencies Dependencies

	sharedPackageHashes *PackageHashes
}

func NewSuite(suite internal.TestSuite, maxDepth int, sharedPackageHashes *PackageHashes) (*Suite, error) {
	deps, err := NewDependencies(suite.Path, maxDepth)
	if err != nil {
		return nil, err
	}

	sharedPackageHashes.Add(suite.Path)
	for dep := range deps.Dependencies() {
		sharedPackageHashes.Add(dep)
	}

	return &Suite{
		Suite:        suite,
		Dependencies: deps,

		sharedPackageHashes: sharedPackageHashes,
	}, nil
}

func (s *Suite) Delta() float64 {
	delta := s.delta(s.Suite.Path, true, 0) * 1000
	for dep, depth := range s.Dependencies.Dependencies() {
		delta += s.delta(dep, false, depth)
	}
	return delta
}

func (s *Suite) MarkAsRunAndRecomputedDependencies(maxDepth int) error {
	s.RunTime = time.Now()

	deps, err := NewDependencies(s.Suite.Path, maxDepth)
	if err != nil {
		return err
	}

	s.sharedPackageHashes.Add(s.Suite.Path)
	for dep := range deps.Dependencies() {
		s.sharedPackageHashes.Add(dep)
	}

	s.Dependencies = deps

	return nil
}

func (s *Suite) Description() string {
	numDeps := len(s.Dependencies.Dependencies())
	pluralizer := "ies"
	if numDeps == 1 {
		pluralizer = "y"
	}
	return fmt.Sprintf("%s [%d dependenc%s]", s.Suite.Path, numDeps, pluralizer)
}

func (s *Suite) delta(packagePath string, includeTests bool, depth int) float64 {
	return math.Max(float64(s.dt(packagePath, includeTests)), 0) / float64(depth+1)
}

func (s *Suite) dt(packagePath string, includeTests bool) time.Duration {
	packageHash := s.sharedPackageHashes.Get(packagePath)
	var modifiedTime time.Time
	if includeTests {
		modifiedTime = packageHash.TestModifiedTime
	} else {
		modifiedTime = packageHash.CodeModifiedTime
	}

	return modifiedTime.Sub(s.RunTime)
}
190
vendor/github.com/onsi/ginkgo/v2/ginkgo/watch/watch_command.go
generated
vendored
Normal file
@@ -0,0 +1,190 @@
package watch

import (
	"fmt"
	"regexp"
	"time"

	"github.com/onsi/ginkgo/v2/formatter"
	"github.com/onsi/ginkgo/v2/ginkgo/command"
	"github.com/onsi/ginkgo/v2/ginkgo/internal"
	"github.com/onsi/ginkgo/v2/internal/interrupt_handler"
	"github.com/onsi/ginkgo/v2/types"
)

func BuildWatchCommand() command.Command {
	var suiteConfig = types.NewDefaultSuiteConfig()
	var reporterConfig = types.NewDefaultReporterConfig()
	var cliConfig = types.NewDefaultCLIConfig()
	var goFlagsConfig = types.NewDefaultGoFlagsConfig()

	flags, err := types.BuildWatchCommandFlagSet(&suiteConfig, &reporterConfig, &cliConfig, &goFlagsConfig)
	if err != nil {
		panic(err)
	}
	interruptHandler := interrupt_handler.NewInterruptHandler(0, nil)
	interrupt_handler.SwallowSigQuit()

	return command.Command{
		Name:          "watch",
		Flags:         flags,
		Usage:         "ginkgo watch <FLAGS> <PACKAGES> -- <PASS-THROUGHS>",
		ShortDoc:      "Watch the passed in <PACKAGES> and run their tests whenever changes occur.",
		Documentation: "Any arguments after -- will be passed to the test.",
		DocLink:       "watching-for-changes",
		Command: func(args []string, additionalArgs []string) {
			var errors []error
			cliConfig, goFlagsConfig, errors = types.VetAndInitializeCLIAndGoConfig(cliConfig, goFlagsConfig)
			command.AbortIfErrors("Ginkgo detected configuration issues:", errors)

			watcher := &SpecWatcher{
				cliConfig:      cliConfig,
				goFlagsConfig:  goFlagsConfig,
				suiteConfig:    suiteConfig,
				reporterConfig: reporterConfig,
				flags:          flags,

				interruptHandler: interruptHandler,
			}

			watcher.WatchSpecs(args, additionalArgs)
		},
	}
}

type SpecWatcher struct {
	suiteConfig    types.SuiteConfig
	reporterConfig types.ReporterConfig
	cliConfig      types.CLIConfig
	goFlagsConfig  types.GoFlagsConfig
	flags          types.GinkgoFlagSet

	interruptHandler *interrupt_handler.InterruptHandler
}

func (w *SpecWatcher) WatchSpecs(args []string, additionalArgs []string) {
	suites := internal.FindSuites(args, w.cliConfig, false).WithoutState(internal.TestSuiteStateSkippedByFilter)

	if len(suites) == 0 {
		command.AbortWith("Found no test suites")
	}

	fmt.Printf("Identified %d test %s. Locating dependencies to a depth of %d (this may take a while)...\n", len(suites), internal.PluralizedWord("suite", "suites", len(suites)), w.cliConfig.Depth)
	deltaTracker := NewDeltaTracker(w.cliConfig.Depth, regexp.MustCompile(w.cliConfig.WatchRegExp))
	delta, errors := deltaTracker.Delta(suites)

	fmt.Printf("Watching %d %s:\n", len(delta.NewSuites), internal.PluralizedWord("suite", "suites", len(delta.NewSuites)))
	for _, suite := range delta.NewSuites {
		fmt.Println("  " + suite.Description())
	}

	for suite, err := range errors {
		fmt.Printf("Failed to watch %s: %s\n", suite.PackageName, err)
	}

	if len(suites) == 1 {
		w.updateSeed()
		w.compileAndRun(suites[0], additionalArgs)
	}

	ticker := time.NewTicker(time.Second)

	for {
		select {
		case <-ticker.C:
			suites := internal.FindSuites(args, w.cliConfig, false).WithoutState(internal.TestSuiteStateSkippedByFilter)
			delta, _ := deltaTracker.Delta(suites)
			coloredStream := formatter.ColorableStdOut

			suites = internal.TestSuites{}

			if len(delta.NewSuites) > 0 {
				fmt.Fprintln(coloredStream, formatter.F("{{green}}Detected %d new %s:{{/}}", len(delta.NewSuites), internal.PluralizedWord("suite", "suites", len(delta.NewSuites))))
				for _, suite := range delta.NewSuites {
					suites = append(suites, suite.Suite)
					fmt.Fprintln(coloredStream, formatter.Fi(1, "%s", suite.Description()))
				}
			}

			modifiedSuites := delta.ModifiedSuites()
			if len(modifiedSuites) > 0 {
				fmt.Fprintln(coloredStream, formatter.F("{{green}}Detected changes in:{{/}}"))
				for _, pkg := range delta.ModifiedPackages {
					fmt.Fprintln(coloredStream, formatter.Fi(1, "%s", pkg))
				}
				fmt.Fprintln(coloredStream, formatter.F("{{green}}Will run %d %s:{{/}}", len(modifiedSuites), internal.PluralizedWord("suite", "suites", len(modifiedSuites))))
				for _, suite := range modifiedSuites {
					suites = append(suites, suite.Suite)
					fmt.Fprintln(coloredStream, formatter.Fi(1, "%s", suite.Description()))
				}
				fmt.Fprintln(coloredStream, "")
			}

			if len(suites) == 0 {
				break
			}

			w.updateSeed()
			w.computeSuccinctMode(len(suites))
			for idx := range suites {
				if w.interruptHandler.Status().Interrupted {
					return
				}
				deltaTracker.WillRun(suites[idx])
				suites[idx] = w.compileAndRun(suites[idx], additionalArgs)
			}
			color := "{{green}}"
			if suites.CountWithState(internal.TestSuiteStateFailureStates...) > 0 {
				color = "{{red}}"
			}
			fmt.Fprintln(coloredStream, formatter.F(color+"\nDone. Resuming watch...{{/}}"))

			messages, err := internal.FinalizeProfilesAndReportsForSuites(suites, w.cliConfig, w.suiteConfig, w.reporterConfig, w.goFlagsConfig)
			command.AbortIfError("could not finalize profiles:", err)
			for _, message := range messages {
				fmt.Println(message)
			}
		case <-w.interruptHandler.Status().Channel:
			return
		}
	}
}

func (w *SpecWatcher) compileAndRun(suite internal.TestSuite, additionalArgs []string) internal.TestSuite {
	suite = internal.CompileSuite(suite, w.goFlagsConfig)
	if suite.State.Is(internal.TestSuiteStateFailedToCompile) {
		fmt.Println(suite.CompilationError.Error())
		return suite
	}
	if w.interruptHandler.Status().Interrupted {
		return suite
	}
	suite = internal.RunCompiledSuite(suite, w.suiteConfig, w.reporterConfig, w.cliConfig, w.goFlagsConfig, additionalArgs)
	internal.Cleanup(w.goFlagsConfig, suite)
	return suite
}

func (w *SpecWatcher) computeSuccinctMode(numSuites int) {
	if w.reporterConfig.Verbosity().GTE(types.VerbosityLevelVerbose) {
		w.reporterConfig.Succinct = false
		return
	}

	if w.flags.WasSet("succinct") {
		return
	}

	if numSuites == 1 {
		w.reporterConfig.Succinct = false
	}

	if numSuites > 1 {
		w.reporterConfig.Succinct = true
	}
}

func (w *SpecWatcher) updateSeed() {
	if !w.flags.WasSet("seed") {
		w.suiteConfig.RandomSeed = time.Now().Unix()
	}
}
186
vendor/golang.org/x/tools/go/ast/inspector/inspector.go
generated
vendored
Normal file
@@ -0,0 +1,186 @@
// Copyright 2018 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

// Package inspector provides helper functions for traversal over the
// syntax trees of a package, including node filtering by type, and
// materialization of the traversal stack.
//
// During construction, the inspector does a complete traversal and
// builds a list of push/pop events and their node type. Subsequent
// method calls that request a traversal scan this list, rather than walk
// the AST, and perform type filtering using efficient bit sets.
//
// Experiments suggest the inspector's traversals are about 2.5x faster
// than ast.Inspect, but it may take around 5 traversals for this
// benefit to amortize the inspector's construction cost.
// If efficiency is the primary concern, do not use Inspector for
// one-off traversals.
package inspector

// There are four orthogonal features in a traversal:
//  1. type filtering
//  2. pruning
//  3. postorder calls to f
//  4. stack
// Rather than offer all of them in the API,
// only a few combinations are exposed:
//   - Preorder is the fastest and has fewest features,
//     but is the most commonly needed traversal.
//   - Nodes and WithStack both provide pruning and postorder calls,
//     even though few clients need it, because supporting two versions
//     is not justified.
// More combinations could be supported by expressing them as
// wrappers around a more generic traversal, but this was measured
// and found to degrade performance significantly (30%).

import (
	"go/ast"
)

// An Inspector provides methods for inspecting
// (traversing) the syntax trees of a package.
type Inspector struct {
	events []event
}

// New returns an Inspector for the specified syntax trees.
func New(files []*ast.File) *Inspector {
	return &Inspector{traverse(files)}
}

// An event represents a push or a pop
// of an ast.Node during a traversal.
type event struct {
	node  ast.Node
	typ   uint64 // typeOf(node)
	index int    // 1 + index of corresponding pop event, or 0 if this is a pop
}

// Preorder visits all the nodes of the files supplied to New in
// depth-first order. It calls f(n) for each node n before it visits
// n's children.
//
// The types argument, if non-empty, enables type-based filtering of
// events. The function f is called only for nodes whose type
// matches an element of the types slice.
func (in *Inspector) Preorder(types []ast.Node, f func(ast.Node)) {
	// Because it avoids postorder calls to f, and the pruning
	// check, Preorder is almost twice as fast as Nodes. The two
	// features seem to contribute similar slowdowns (~1.4x each).

	mask := maskOf(types)
	for i := 0; i < len(in.events); {
		ev := in.events[i]
		if ev.typ&mask != 0 {
			if ev.index > 0 {
				f(ev.node)
			}
		}
		i++
	}
}

// Nodes visits the nodes of the files supplied to New in depth-first
// order. It calls f(n, true) for each node n before it visits n's
// children. If f returns true, Nodes invokes f recursively for each
// of the non-nil children of the node, followed by a call of
// f(n, false).
//
// The types argument, if non-empty, enables type-based filtering of
// events. The function f is called only for nodes whose type
// matches an element of the types slice.
func (in *Inspector) Nodes(types []ast.Node, f func(n ast.Node, push bool) (proceed bool)) {
	mask := maskOf(types)
	for i := 0; i < len(in.events); {
		ev := in.events[i]
		if ev.typ&mask != 0 {
			if ev.index > 0 {
				// push
				if !f(ev.node, true) {
					i = ev.index // jump to corresponding pop + 1
					continue
				}
			} else {
				// pop
				f(ev.node, false)
			}
		}
		i++
	}
}

// WithStack visits nodes in a similar manner to Nodes, but it
// supplies each call to f an additional argument, the current
// traversal stack. The stack's first element is the outermost node,
// an *ast.File; its last is the innermost, n.
func (in *Inspector) WithStack(types []ast.Node, f func(n ast.Node, push bool, stack []ast.Node) (proceed bool)) {
	mask := maskOf(types)
	var stack []ast.Node
	for i := 0; i < len(in.events); {
		ev := in.events[i]
		if ev.index > 0 {
			// push
			stack = append(stack, ev.node)
			if ev.typ&mask != 0 {
				if !f(ev.node, true, stack) {
					i = ev.index
					stack = stack[:len(stack)-1]
					continue
				}
			}
		} else {
			// pop
			if ev.typ&mask != 0 {
				f(ev.node, false, stack)
			}
			stack = stack[:len(stack)-1]
		}
		i++
	}
}

// traverse builds the table of events representing a traversal.
func traverse(files []*ast.File) []event {
	// Preallocate approximate number of events
	// based on source file extent.
	// This makes traverse faster by 4x (!).
	var extent int
	for _, f := range files {
		extent += int(f.End() - f.Pos())
	}
	// This estimate is based on the net/http package.
	capacity := extent * 33 / 100
	if capacity > 1e6 {
		capacity = 1e6 // impose some reasonable maximum
	}
	events := make([]event, 0, capacity)

	var stack []event
	for _, f := range files {
		ast.Inspect(f, func(n ast.Node) bool {
			if n != nil {
				// push
				ev := event{
					node:  n,
					typ:   typeOf(n),
					index: len(events), // push event temporarily holds own index
				}
				stack = append(stack, ev)
				events = append(events, ev)
			} else {
				// pop
				ev := stack[len(stack)-1]
				stack = stack[:len(stack)-1]

				events[ev.index].index = len(events) + 1 // make push refer to pop

				ev.index = 0 // turn ev into a pop event
				events = append(events, ev)
			}
			return true
		})
	}

	return events
}
227
vendor/golang.org/x/tools/go/ast/inspector/typeof.go
generated
vendored
Normal file
227
vendor/golang.org/x/tools/go/ast/inspector/typeof.go
generated
vendored
Normal file
@@ -0,0 +1,227 @@
// Copyright 2018 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package inspector

// This file defines func typeOf(ast.Node) uint64.
//
// The initial map-based implementation was too slow;
// see https://go-review.googlesource.com/c/tools/+/135655/1/go/ast/inspector/inspector.go#196

import (
	"go/ast"

	"golang.org/x/tools/internal/typeparams"
)

const (
	nArrayType = iota
	nAssignStmt
	nBadDecl
	nBadExpr
	nBadStmt
	nBasicLit
	nBinaryExpr
	nBlockStmt
	nBranchStmt
	nCallExpr
	nCaseClause
	nChanType
	nCommClause
	nComment
	nCommentGroup
	nCompositeLit
	nDeclStmt
	nDeferStmt
	nEllipsis
	nEmptyStmt
	nExprStmt
	nField
	nFieldList
	nFile
	nForStmt
	nFuncDecl
	nFuncLit
	nFuncType
	nGenDecl
	nGoStmt
	nIdent
	nIfStmt
	nImportSpec
	nIncDecStmt
	nIndexExpr
	nIndexListExpr
	nInterfaceType
	nKeyValueExpr
	nLabeledStmt
	nMapType
	nPackage
	nParenExpr
	nRangeStmt
	nReturnStmt
	nSelectStmt
	nSelectorExpr
	nSendStmt
	nSliceExpr
	nStarExpr
	nStructType
	nSwitchStmt
	nTypeAssertExpr
	nTypeSpec
	nTypeSwitchStmt
	nUnaryExpr
	nValueSpec
)

// typeOf returns a distinct single-bit value that represents the type of n.
//
// Various implementations were benchmarked with BenchmarkNewInspector:
//
//	GOGC=off
//	- type switch					4.9-5.5ms	2.1ms
//	- binary search over a sorted list of types	5.5-5.9ms	2.5ms
//	- linear scan, frequency-ordered list		5.9-6.1ms	2.7ms
//	- linear scan, unordered list			6.4ms		2.7ms
//	- hash table					6.5ms		3.1ms
//
// A perfect hash seemed like overkill.
//
// The compiler's switch statement is the clear winner
// as it produces a binary tree in code,
// with constant conditions and good branch prediction.
// (Sadly it is the most verbose in source code.)
// Binary search suffered from poor branch prediction.
func typeOf(n ast.Node) uint64 {
	// Fast path: nearly half of all nodes are identifiers.
	if _, ok := n.(*ast.Ident); ok {
		return 1 << nIdent
	}

	// These cases include all nodes encountered by ast.Inspect.
	switch n.(type) {
	case *ast.ArrayType:
		return 1 << nArrayType
	case *ast.AssignStmt:
		return 1 << nAssignStmt
	case *ast.BadDecl:
		return 1 << nBadDecl
	case *ast.BadExpr:
		return 1 << nBadExpr
	case *ast.BadStmt:
		return 1 << nBadStmt
	case *ast.BasicLit:
		return 1 << nBasicLit
	case *ast.BinaryExpr:
		return 1 << nBinaryExpr
	case *ast.BlockStmt:
		return 1 << nBlockStmt
	case *ast.BranchStmt:
		return 1 << nBranchStmt
	case *ast.CallExpr:
		return 1 << nCallExpr
	case *ast.CaseClause:
		return 1 << nCaseClause
	case *ast.ChanType:
		return 1 << nChanType
	case *ast.CommClause:
		return 1 << nCommClause
	case *ast.Comment:
		return 1 << nComment
	case *ast.CommentGroup:
		return 1 << nCommentGroup
	case *ast.CompositeLit:
		return 1 << nCompositeLit
	case *ast.DeclStmt:
		return 1 << nDeclStmt
	case *ast.DeferStmt:
		return 1 << nDeferStmt
	case *ast.Ellipsis:
		return 1 << nEllipsis
	case *ast.EmptyStmt:
		return 1 << nEmptyStmt
	case *ast.ExprStmt:
		return 1 << nExprStmt
	case *ast.Field:
		return 1 << nField
	case *ast.FieldList:
		return 1 << nFieldList
	case *ast.File:
		return 1 << nFile
	case *ast.ForStmt:
		return 1 << nForStmt
	case *ast.FuncDecl:
		return 1 << nFuncDecl
	case *ast.FuncLit:
		return 1 << nFuncLit
	case *ast.FuncType:
		return 1 << nFuncType
	case *ast.GenDecl:
		return 1 << nGenDecl
	case *ast.GoStmt:
		return 1 << nGoStmt
	case *ast.Ident:
		return 1 << nIdent
	case *ast.IfStmt:
		return 1 << nIfStmt
	case *ast.ImportSpec:
		return 1 << nImportSpec
	case *ast.IncDecStmt:
		return 1 << nIncDecStmt
	case *ast.IndexExpr:
		return 1 << nIndexExpr
	case *typeparams.IndexListExpr:
		return 1 << nIndexListExpr
	case *ast.InterfaceType:
		return 1 << nInterfaceType
	case *ast.KeyValueExpr:
		return 1 << nKeyValueExpr
	case *ast.LabeledStmt:
		return 1 << nLabeledStmt
	case *ast.MapType:
		return 1 << nMapType
	case *ast.Package:
		return 1 << nPackage
	case *ast.ParenExpr:
		return 1 << nParenExpr
	case *ast.RangeStmt:
		return 1 << nRangeStmt
	case *ast.ReturnStmt:
		return 1 << nReturnStmt
	case *ast.SelectStmt:
		return 1 << nSelectStmt
	case *ast.SelectorExpr:
		return 1 << nSelectorExpr
	case *ast.SendStmt:
		return 1 << nSendStmt
	case *ast.SliceExpr:
		return 1 << nSliceExpr
	case *ast.StarExpr:
		return 1 << nStarExpr
	case *ast.StructType:
		return 1 << nStructType
	case *ast.SwitchStmt:
		return 1 << nSwitchStmt
	case *ast.TypeAssertExpr:
		return 1 << nTypeAssertExpr
	case *ast.TypeSpec:
		return 1 << nTypeSpec
	case *ast.TypeSwitchStmt:
		return 1 << nTypeSwitchStmt
	case *ast.UnaryExpr:
		return 1 << nUnaryExpr
	case *ast.ValueSpec:
		return 1 << nValueSpec
	}
	return 0
}

func maskOf(nodes []ast.Node) uint64 {
	if nodes == nil {
		return 1<<64 - 1 // match all node types
	}
	var mask uint64
	for _, n := range nodes {
		mask |= typeOf(n)
	}
	return mask
}
17	vendor/modules.txt	vendored
@@ -308,6 +308,9 @@ github.com/go-openapi/swag
 ## explicit
 github.com/go-ozzo/ozzo-validation
 github.com/go-ozzo/ozzo-validation/is
+# github.com/go-task/slim-sprig v0.0.0-20210107165309-348f09dbbbc0 => github.com/go-task/slim-sprig v0.0.0-20210107165309-348f09dbbbc0
+## explicit; go 1.13
+github.com/go-task/slim-sprig
 # github.com/godbus/dbus/v5 v5.0.6 => github.com/godbus/dbus/v5 v5.0.6
 ## explicit; go 1.12
 github.com/godbus/dbus/v5
@@ -451,6 +454,9 @@ github.com/google/go-cmp/cmp/internal/value
 # github.com/google/gofuzz v1.1.0 => github.com/google/gofuzz v1.1.0
 ## explicit; go 1.12
 github.com/google/gofuzz
+# github.com/google/pprof v0.0.0-20210407192527-94a9f03dee38 => github.com/google/pprof v0.0.0-20210407192527-94a9f03dee38
+## explicit; go 1.14
+github.com/google/pprof/profile
 # github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510 => github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510
 ## explicit; go 1.13
 github.com/google/shlex
@@ -652,6 +658,16 @@ github.com/onsi/ginkgo/types
 github.com/onsi/ginkgo/v2
 github.com/onsi/ginkgo/v2/config
 github.com/onsi/ginkgo/v2/formatter
+github.com/onsi/ginkgo/v2/ginkgo
+github.com/onsi/ginkgo/v2/ginkgo/build
+github.com/onsi/ginkgo/v2/ginkgo/command
+github.com/onsi/ginkgo/v2/ginkgo/generators
+github.com/onsi/ginkgo/v2/ginkgo/internal
+github.com/onsi/ginkgo/v2/ginkgo/labels
+github.com/onsi/ginkgo/v2/ginkgo/outline
+github.com/onsi/ginkgo/v2/ginkgo/run
+github.com/onsi/ginkgo/v2/ginkgo/unfocus
+github.com/onsi/ginkgo/v2/ginkgo/watch
 github.com/onsi/ginkgo/v2/internal
 github.com/onsi/ginkgo/v2/internal/global
 github.com/onsi/ginkgo/v2/internal/interrupt_handler
@@ -1159,6 +1175,7 @@ golang.org/x/time/rate
 golang.org/x/tools/benchmark/parse
 golang.org/x/tools/container/intsets
 golang.org/x/tools/go/ast/astutil
+golang.org/x/tools/go/ast/inspector
 golang.org/x/tools/go/gcexportdata
 golang.org/x/tools/go/internal/gcimporter
 golang.org/x/tools/go/internal/packagesdriver